[ [ "Exact position distribution of a harmonically-confined run-and-tumble\n particle in two dimensions" ], [ "Abstract We consider an overdamped run-and-tumble particle in two dimensions, with self propulsion in an orientation that stochastically rotates by 90 degrees at a constant rate, clockwise or counter-clockwise with equal probabilities.", "In addition, the particle is confined by an external harmonic potential of stiffness $\\mu$, and possibly diffuses.", "We find the exact time-dependent distribution $P\\left(x,y,t\\right)$ of the particle's position, and in particular, the steady-state distribution $P_{\\text{st}}\\left(x,y\\right)$ that is reached in the long-time limit.", "We also find $P\\left(x,y,t\\right)$ for a \"free\" particle, $\\mu=0$.", "We achieve this by showing that, under a proper change of coordinates, the problem decomposes into two statistically-independent one-dimensional problems, whose exact solution has recently been obtained.", "We then extend these results in several directions, to two such run-and-tumble particles with a harmonic interaction, to analogous systems of dimension three or higher, and by allowing stochastic resetting." ], [ "Introduction", "Active particles consume energy from their environment and use it in order to generate dissipated directed motion [1], [2], [3], [5], [6], [7], [4], [8], [9].", "Examples of active matter are ubiquitous in nature, including many biological systems of living cells and/or bacteria [10], [11], [12], [13], [14], [15], flocks of birds [16], [17] fish schools [18], [19], and also physical systems such as granular matter [20], [22], [21].", "Activity breaks time-reversal symmetry, and therefore, drives the system out of thermal equilibrium.", "Active matter has attracted much interest over recent years, which led to the discovery of several remarkable collective behaviors that are very different to those observed in systems in thermal equilibrium.", "These behaviors include motility induced phase separation [23], [24], [25], clustering [26], [27], [28], and the absence of an equation of state relating pressure to the system's bulk properties [29].", "In fact, even at the level of a single particle, active particles display some nontrivial features that are not observed in their passive counterparts.", "In particular, active particles that are affected by an external potential have been recently studied, both theoretically [34], [35], [36], [37] and experimentally [38], [39], [40], [41].", "It was shown that such a particle can reach a non-Boltzmann steady state, and/or cluster near the boundaries of a spatial region in which it is confined [45], [46], [43], [44], [42], and that it develops a nonzero drift velocity even if the external potential is periodic [30].", "First-passage and relaxation properties were also studied [31], [32], [33].", "In order to make progress analytically, it is usual to focus the study on simple theoretical models.", "One such model of active particles, that has been extensively studied, is the model of the run-and-tumble particle (RTP).", "This model describes an overdamped particle whose speed $v_0$ is constant, while the orientation of its velocity changes in time randomly via sudden jumps (or `tumbles').", "In one spatial dimension (1D), this model becomes especially simple: the only possible velocity orientations are $\\sigma = \\pm 1$ , i.e., the particle's velocity can be $\\pm v_0$ .", "At a constant rate $\\gamma $ , the orientation flips $\\sigma \\rightarrow -\\sigma $ .", 
"One can optionally take into account an external potential $U(x)$ too.", "The position $x(t)$ of this RTP obeys the Langevin equation $\\dot{x} = f(x) + v_0 \\sigma (t) \\, .", "$ Here $f(x)= - U^{\\prime }(x)$ is the deterministic force exerted on the particle due to the external potential $U(x)$ , while the orientation $\\sigma $ plays the role of a telegraphic (dichotomous) noise.", "The statistical properties of $\\sigma $ lead to a breaking of time reversal symmetry.", "In contrast to the white (Gaussian) noise in equilibrium systems, $\\sigma (t)$ is a colored noise; its autocorrelation function is $\\langle \\sigma (t) \\sigma (t^{\\prime }) \\rangle = e^{-2 \\gamma |t-t^{\\prime }|}$ (angular brackets denote ensemble averaging), describing exponential decay with a typical timescale of $\\tau = (2\\gamma )^{-1}$ .", "Many properties of the 1D RTP can be found exactly, as we recall shortly.", "However, despite its apparent simplicity, the model displays many nontrivial features, e.g., a steady-state distribution that is non-Boltzmann [44], [31].", "One of the most fundamental quantities to study is the (time-dependent) position distribution $P(x,t)$ of the particle, given that it is initially at the origin $x(t=0) = 0$ .", "Let us assume that the initial orientation is randomly selected from the two possible values, $\\sigma (0) = \\pm 1$ , each with equal probability $1/2$ .", "For an RTP in 1D that is `free', i.e., in the absence of an external potential (so $f(x) = 0$ ), $P(x,t)$ is known exactly, but is nevertheless highly nontrivial.", "[47], [48], [49], [50], [31], [51], [44], [52], [53].", "The support of the distribution is is the interval $x\\in [-v_0\\, t, v_0\\, t]$ and for $|x|\\le v_0\\, t$ it is given by $P_{\\text{free}}\\left(x,t\\right)&=&\\frac{{\\rm e}^{-\\gamma t}}{2}\\biggl \\lbrace \\delta \\left(x-v_{0}t\\right)+\\delta \\left(x+v_{0}t\\right) \\nonumber \\\\&+&\\left.\\frac{\\gamma }{2v_{0}}\\left[I_{0}(\\rho )+\\frac{\\gamma I_{1}(\\rho )}{\\rho }\\right]\\theta \\left(v_{0}t-|x|\\right)\\right\\rbrace \\,,$ where $\\rho = \\sqrt{v_0^2 t^2 - x^2}\\,\\frac{\\gamma }{v_0}$ and $I_0(\\rho )$ and $I_1(\\rho )$ are modified Bessel functions of the first kind.", "The $\\delta $ functions at the edges of the support $x=\\pm v_0\\, t$ correspond to the cases where $\\sigma (0) = \\pm 1$ (respectively) and the noise $\\sigma (t)$ does not change its value up to time $t$ .", "At long times, the central part of the distribution approaches a Gaussian form, as one would expect since the free RTP reduces, at late times, to ordinary Brownian motion.", "The presence of a confining potential complicates the theoretical analysis considerably.", "Nevertheless, the (nonequilibrium) steady-state distribution of the RTP's position is known exactly for an arbitrary confining potential $U(x)$ .", "It is given, up to a normalization constant, by $P_{\\textrm {st}}(x)\\propto \\frac{1}{v_{0}^{2}-f^{2}(x)}\\exp \\left[2\\gamma \\int _{0}^{x}dy\\frac{f(y)}{v_{0}^{2}-f^{2}(y)}\\right]$ The result (REF ) has been known for decades, obtained originally in the context of quantum optics [54], [55], [56], [57] and later reproduced in the study of colored noise on dynamical systems [58] and of active matter [29], [44].", "In the diffusive limit, when $v_0 \\rightarrow \\infty $ , $\\gamma \\rightarrow \\infty $ but keeping the ratio $v_0^2/2\\gamma = D$ fixed, the dynamics converge to the overdamped dynamics of a particle of diffusivity $D$ in a trapping potential $U(x)$ .", "Indeed, one 
finds that in this limit the distribution (REF ) reduces to a Boltzmann distribution $P_{\\textrm {st}}(x)\\propto e^{-U\\left(x\\right)/D}$ .", "For a harmonic potential, $U(x) = \\mu x^2/2$ , a case which is of particular interest, not only theoretically but also experimentally [39], [41], the stationary distribution (REF ) simplifies to [44] $P_\\textrm {st}(x)= \\frac{2 \\mu }{4^{\\beta }B(\\beta , \\beta )v_0} \\left[1- \\left(\\frac{\\mu x}{v_0} \\right)^2 \\right]^{\\beta -1} ,$ where $\\beta = \\gamma /\\mu $ and $B(u,v)$ is the beta-function.", "The distribution is symmetric, $P_{\\textrm {st}}(x)=P_{\\textrm {st}}(-x)$ and describes a particle that is confined to the region $\\left|x\\right|\\le v_0 / \\mu $ .", "As one varies $\\beta $ , the shape of the distribution changes from a unimodal distribution centered around $x=0$ at $\\beta > 1$ , describing a `passive phase', to a bimodal distribution in which the peaks are near the edges $x = \\pm v_0 / \\mu $ at $\\beta < 1$ , describing an `active phase' (at $\\beta =1$ the distribution is uniform).", "In the strongly passive limit $\\beta \\gg 1$ , the distribution (REF ) becomes a Gaussian $P_{\\textrm {st}}(x)\\propto e^{-\\gamma \\mu x^{2}/v_{0}^{2}}$ , corresponding to a Boltzmann distribution with diffusivity $D = v_0^2/2\\gamma $ .", "In fact, for a harmonic potential, the full time-dependent position distribution $P(x,t)$ has recently been obtained exactly [44].", "It is given, in terms of its Laplace transform $\\tilde{P}\\left(x,s\\right)=\\int _{0}^{\\infty }e^{-st}P\\left(x,t\\right)dt \\, ,$ by $\\tilde{P}\\left(x,s\\right)=B\\left(s\\right)z^{\\bar{\\gamma }+\\bar{s}-1}\\,_{2}F_{1}\\left(1-\\bar{\\gamma },\\bar{\\gamma };\\bar{\\gamma }+\\bar{s};z\\right) \\, ,$ where $B\\left(s\\right)&=&2^{2\\left(\\bar{\\gamma }+\\bar{s}\\right)-3}\\frac{\\Gamma \\left(\\bar{s}/2\\right)\\Gamma \\left[\\bar{\\gamma }+\\left(1+\\bar{s}\\right)/2\\right]}{\\sqrt{\\pi }\\Gamma \\left(\\bar{\\gamma }+\\bar{s}\\right)} \\,,\\\\z&=&\\frac{1}{2}\\left(1-\\frac{\\mu |x|}{v_{0}}\\right) \\, ,$ $\\bar{s}=s/\\mu $ , $\\bar{\\gamma }=\\gamma /\\mu $ , and $\\,_{2}F_{1}(a,b;c;d)$ is a standard hypergeometric function.", "One can check that in the free case $\\mu =0$ , the result simplifies to Eq.", "(REF ).", "The principal goal of the present work is to calculate the position distribution of an RTP in higher spatial dimension, focusing mostly on two dimensions (2D), thereby extending the known 1D results.", "Such extensions are very important from the point of view of relevance to experiments.", "In contrast to 1D where there is essentially just one natural definition of an RTP whose speed is constant, in 2D different models of active particles have been introduced and studied, with growing interest over the last few years [32], [42], [59], [52], [60], [62], [61].", "In the active Brownian particle (ABP) model, the velocity can be oriented toward any direction in the plane, and the orientation changes continuously in time through angular diffusion.", "The position distribution of an ABP in 2D was studied in [32], [42] for the free case, and in [43] both the free case as well as in the presence of an external harmonic potential.", "There are different RTP models in 2D (with an orientation that changes discontinuously in time), in which the details differ: The set of possible orientations can be finite or infinite, and different possible transition rules of the orientation have been studied.", "The steady-state distribution of a RTP whose 
orientation is chosen randomly at each tumbling event, uniformly from all possible orientations, in the presence of a harmonic trap in 2D and 3D, was obtained very recently in [62].", "In the present work, we significantly advance the understanding of RTPs in 2D by finding the exact, time-dependent position distribution of an RTP in 2D whose orientation vector stochastically rotates by 90 degrees (clockwise or counter-clockwise), confined by an external harmonic potential and possibly diffusing, as well as some extensions of this model.", "Let us briefly describe the structure of the remainder of the paper.", "In section , we give the precise definition of the 2D RTP model that we study.", "In section , we solve the model exactly, and calculate the distribution of the position of the particle and related quantities.", "In section , we present several generalizations of the model (to more than one RTP, to higher dimensions etc) and briefly describe how to extend our results to cover those cases too.", "In section , we summarize and discuss our main findings.", "Some of the technical details of the calculations are given in the Appendices." ], [ "Model", "The 2D RTP model that we study was originally introduced in [59].", "It consists of an overdamped particle in the 2D ($xy$ ) plane, which is is affected by an external harmonic potential $U\\left(x,y\\right)=\\mu \\left(x^{2}+y^{2}\\right)/2$ , and in addition, has an internal degree of freedom ${\\sigma }$ that is a unit vector, describing the particle's orientation.", "The dynamics of the particle's position are described by the Langevin equation $\\dot{{r}}=-\\mu {r}\\left(t\\right)+\\sqrt{2}\\,v_{0}{\\sigma }\\left(t\\right)\\,,$ where ${r}=\\left(x,y\\right)$ is the position of the particle and $\\sqrt{2} \\, v_{0}$ would be the particle's speed in the absence of external potential (the factor $\\sqrt{2}$ being included for later convenience).", "The dynamics of ${\\sigma }$ are stochastic: it rotates (“tumbles”) by 90 degrees, clockwise or counter-clockwise with equal probabilities, each of the rotations occurring at a constant rate $\\gamma $ .", "Thus, there are four possible orientations for ${\\sigma }$ , which we choose to be in the directions $\\pm \\hat{x},\\pm \\hat{y}$ (where $\\hat{x}$ and $\\hat{y}$ are unit vectors in the directions of the $x$ and $y$ axes, respectively).", "These four directions are denoted by $E,W,N,S$ respectively, see Fig.", "REF .", "Thus, the master equation that describes the dynamics of ${\\sigma }$ is $\\frac{d}{dt}\\left(\\begin{array}{c}p_{E}\\\\p_{N}\\\\p_{W}\\\\p_{S}\\end{array}\\right)=\\gamma \\left(\\begin{array}{cccc}-2 & 1 & 0 & 1\\\\1 & -2 & 1 & 0\\\\0 & 1 & -2 & 1\\\\1 & 0 & 1 & -2\\end{array}\\right)\\left(\\begin{array}{c}p_{E}\\\\p_{N}\\\\p_{W}\\\\p_{S}\\end{array}\\right) \\,,$ where $p_i(t)$ denotes the probability that at time $t$ , the orientation of the particle is $i=E,N,W,S$ .", "The particle is initially at the origin, $x\\left(t=0\\right)=y\\left(t=0\\right)=0$ , with ${\\sigma }(t=0)$ uniformly distributed over the 4 possible orientations [63].", "Figure: A schematic representation of the dynamics of the orientation vector σ(t){\\sigma } (t) in the 2D RTP model.", "σ{\\sigma } rotates by 90 degrees to the left or to the right, each at rate γ\\gamma .", "The four possible orientations are aligned with the xx and yy axes.", "As explained in the text, the key to our solution of this model is the observation that a 90-degree rotation of σ{\\sigma } corresponds to an inversion 
of exactly one of its two components σ u \\sigma _{u} and σ v \\sigma _{v}, where u ^=x ^+y ^/2\\hat{u}=\\left(\\hat{x}+\\hat{y}\\right)/\\sqrt{2} and v ^=x ^-y ^/2\\hat{v}=\\left(\\hat{x}-\\hat{y}\\right)/\\sqrt{2} are coordinates rotated by 45 degrees with respect to the x,yx,y coordinates.", "In fact, we find that 2σ u (t)\\sqrt{2}\\,\\sigma _{u}(t) and 2σ v (t)\\sqrt{2}\\,\\sigma _{v}(t) are statistically-independent telegraphic noises, leading to a complete decoupling of the problem in the u,vu,v coordinates.In [59], the exact steady-state marginal distribution along the $x$ axis was calculated for this model.", "However, the relaxation to this steady state, as described by the time dependent distributions, has not been known, and more importantly, neither has the full (two-dimensional) distribution $\\mathcal {P}\\left(x,y,t\\right)$ .", "In particular, the steady-state distribution $\\mathcal {P}_{\\text{st}}\\left(x,y\\right)$ has not been known.", "Note that the time-dependent distribution is meaningful and important also for a “free” RTP, i.e., in the absence of an external potential, $\\mu = 0$ .", "The “free” case for this model, and similar variations of it, was studied in [52] and the marginal distribution along the $x$ -axis was obtained approximately, but the joint distribution of $x$ and $y$ has been unknown.", "In this paper we resolve exactly these outstanding issues: We find the exact time-dependent distribution of the particle's position $\\mathcal {P}\\left(x,y,t\\right)$ for $\\mu \\ge 0$ , and in particular, we find the steady state distribution $\\mathcal {P}_{\\text{st}}\\left(x,y\\right)$ that is reached at long times for $\\mu > 0$ .", "We also solve related problems such as the survival and first-passage properties, and discuss several extensions of the model." ], [ "Exact solution", "The 2D RTP model becomes considerably simpler to analyze if one changes to a coordinate system $u=\\frac{x+y}{\\sqrt{2}},\\quad v=\\frac{x-y}{\\sqrt{2}},$ that is rotated by 45 degrees with respect to the $x,y$ coordinates.", "In this section, we exactly solve the 2D model using the following three key ingredients: (i) We show that the $u$ and $v$ components of the noise ${\\sigma }$ are statistically-independent telegraphic noises.", "(ii) We find that the $u$ and $v$ coordinates of the particle are also statistically-independent processes, and that each of them is mathematically equivalent to the position of a 1D harmonically-confined RTP.", "(iii) We recall the known exact results for the 1D case, and employ them to get the solution to the 2D model.", "After obtaining the exact solution, we study some of its properties such as its anisotropy and the relaxation to the steady state." 
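Before turning to the proof, here is a quick numerical illustration (ours, not part of the paper) of ingredient (i): a minimal Python sketch that simulates the four-state orientation dynamics and checks that $\sqrt{2}\,\sigma_u(t)$ and $\sqrt{2}\,\sigma_v(t)$ have the telegraphic autocorrelation $e^{-2\gamma|t-t'|}$ while their cross-correlators vanish. All variable names and parameter values below are our own choices.

```python
import numpy as np

# Minimal Monte Carlo sketch (ours, not from the paper): simulate the
# four-state orientation dynamics and check that s_u = sqrt(2)*sigma_u and
# s_v = sqrt(2)*sigma_v behave as independent telegraphic noises, i.e.
# <s_u(0) s_u(tau)> ~ exp(-2*gamma*tau) while the cross-correlators vanish.

rng = np.random.default_rng(0)
gamma, dt, tau, n_traj = 1.0, 1e-3, 1.0, 200_000
angles = np.pi / 2 * np.arange(4)              # orientations E, N, W, S

def uv_signs(state):
    th = angles[state]
    s_u = np.sign(np.cos(th) + np.sin(th))     # sqrt(2)*sigma_u = +-1
    s_v = np.sign(np.cos(th) - np.sin(th))     # sqrt(2)*sigma_v = +-1
    return s_u, s_v

state = rng.integers(0, 4, size=n_traj)        # stationary (uniform) initial orientation
su0, sv0 = uv_signs(state)
for _ in range(int(tau / dt)):
    r = rng.random(n_traj)                     # each 90-degree rotation has prob gamma*dt
    state = (state + (r < gamma * dt).astype(int)
                   - (r > 1 - gamma * dt).astype(int)) % 4
su1, sv1 = uv_signs(state)

print("<s_u(0) s_u(tau)>  :", su0 @ su1 / n_traj, " expected", np.exp(-2 * gamma * tau))
print("<s_v(0) s_v(tau)>  :", sv0 @ sv1 / n_traj, " expected", np.exp(-2 * gamma * tau))
print("<s_u(tau) s_v(tau)>:", su1 @ sv1 / n_traj, " expected 0")
print("<s_u(0) s_v(tau)>  :", su0 @ sv1 / n_traj, " expected 0")
```

With these parameters the printed correlators should match their targets to within a Monte Carlo error of a few times $10^{-3}$.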
], [ "Decoupling of the noise", "The first key step to solving this model exactly is to observe that $\\sqrt{2} \\, \\sigma _{u}\\left(t\\right)$ and $\\sqrt{2}\\, \\sigma _{v}\\left(t\\right)$ (where $\\sigma _u$ and $\\sigma _v$ are the $u$ and $v$ components of ${\\sigma }$ ) are two statistically-independent telegraphic noises, each of which takes the values $\\pm 1$ and stochastically flips its sign with rate $\\gamma $ .", "Let us begin by showing that $\\sqrt{2} \\, \\sigma _{u}\\left(t\\right)$ is a telegraphic noise.", "This is quite easy.", "One simply has $p_{+}&\\equiv &\\text{Prob}\\left(\\sqrt{2}\\sigma _{u}=1\\right)=p_{E}+p_{N} \\, ,\\\\p_{-}&\\equiv &\\text{Prob}\\left(\\sqrt{2}\\sigma _{u}=-1\\right)=p_{W}+p_{S} \\, .$ By summing the first two components and the last two components of the master equation (REF ), one then finds that the dynamics of $\\left(p_{+},p_{-}\\right)$ is governed by the master equation $\\frac{dp_{+}}{dt}&=&-\\gamma p_{+}+\\gamma p_{-} \\, ,\\\\\\frac{dp_{-}}{dt}&=&-\\gamma p_{-}+\\gamma p_{+} \\, ,$ which coincides exactly with that of a telegraphic noise.", "One similarly proves that $\\sqrt{2} \\, \\sigma _{v}\\left(t\\right)$ is a telegraphic noise.", "However, proving that the two processes $\\sqrt{2} \\, \\sigma _{u}\\left(t\\right)$ and $\\sqrt{2}\\, \\sigma _{v}\\left(t\\right)$ are statistically independent is a little more tricky.", "Let us now prove the statistical independence of the two processes $\\sqrt{2} \\, \\sigma _{u}\\left(t\\right)$ and $\\sqrt{2}\\, \\sigma _{v}\\left(t\\right)$ , which turns out to be crucial for the solution of the 2D model.", "Our strategy in the proof is to define a 2D noise ${\\Sigma }$ whose $u$ and $v$ components are statistically-independent telegraphic noises, and then, to show that ${\\Sigma }$ and ${\\sigma }$ are equivalent.", "So, let us define a 2D noise ${\\Sigma }\\left(t\\right)=\\Sigma _{u}\\left(t\\right)\\hat{u}+\\Sigma _{v}\\left(t\\right)\\hat{v} \\, ,$ where $\\Sigma _{u}\\left(t\\right)$ and $\\Sigma _{v}\\left(t\\right)$ , the $u$ and $v$ components (respectively) of ${\\Sigma }\\left(t\\right)$ , are two statistically-independent (decoupled) telegraphic noises, each taking the values $\\pm 1/\\sqrt{2}$ and switching between them at rate $\\gamma $ .", "$\\Sigma _{u}\\left(t=0\\right)$ and $\\Sigma _{v}\\left(t=0\\right)$ are each randomly and independently selected from the two possible values that each of them can take.", "Thus, ${\\Sigma }(t=0)$ takes each of the four possible values $\\pm \\hat{x}, \\pm \\hat{y}$ , each with probability $1/4$ [just like ${\\sigma }(t=0)$ ].", "Figure: The steady-state distribution () for a 2D RTP confined by a harmonic potential, for β=2/3\\beta = 2/3 (a) and β=2\\beta =2 (b).", "In this figure, units are chosen such that μ=v 0 =1\\mu = v_0 = 1.", "At β<1\\beta < 1 the particle is most likely to be accumulated away from the center, whereas at β>1\\beta > 1 it is maximal around the center of the trap.We now show the equivalence between the two processes ${\\Sigma }(t)$ and ${\\sigma }(t)$ .", "Both of them are stationary Markov processes: For ${\\Sigma }(t)$ , this property is inherited from $\\Sigma _{u}\\left(t\\right)$ and $\\Sigma _{v}\\left(t\\right)$ .", "Next, we notice that they are both unit vectors, taking one of the four values $\\pm \\hat{u},\\pm \\hat{v}$ .", "In order to show the equivalence between ${\\Sigma }(t)$ and ${\\sigma }(t)$ , it thus remains to show that the transition rates between these possible values are identical for 
the two processes, or equivalently, that their dynamics are governed by the same master equation.", "Indeed, one finds that the master equation for ${\\sigma }$ , Eq.", "(REF ), describes the dynamics of ${\\Sigma }(t)$ too.", "A change of sign of $\\Sigma _{u}\\left(t\\right)$ corresponds to one of the transitions $E \\leftrightarrow S$ and $W \\leftrightarrow N$ , while a change of sign of $\\Sigma _{v}\\left(t\\right)$ corresponds to one of the transitions $E \\leftrightarrow N$ and $W \\leftrightarrow S$ , and each of these transitions occurs at rate $\\gamma $ , leading to the master equation (REF ).", "This completes the proof of the equivalence between ${\\Sigma }(t)$ and ${\\sigma }(t)$ .", "It follows that $\\sqrt{2} \\, \\sigma _{u}\\left(t\\right)$ and $\\sqrt{2}\\, \\sigma _{v}\\left(t\\right)$ are two statistically-independent telegraphic noises, each of which takes the values $\\pm 1$ and stochastically switches sign at rate $\\gamma $ ." ], [ "Decoupling of the particle's position", "Now that we have seen that the noise decouples in the $u,v$ coordinates, it is reasonable to expect the analysis of the 2D RTP model to simplify when studied in these coordinates.", "Indeed, writing the Langevin dynamics (REF ) explicitly in the $u,v$ coordinates, we have $\\dot{u}&=&-\\mu u\\left(t\\right)+\\sqrt{2} \\, v_{0}\\sigma _{u}\\left(t\\right) \\, ,\\\\\\dot{v}&=&-\\mu v\\left(t\\right)+\\sqrt{2} \\, v_{0}\\sigma _{v}\\left(t\\right) \\, .$ One immediately observes that Eqs.", "(REF ) and () are decoupled, which makes the solution far simpler.", "Note that no coupling enters through the noise terms, since we have already shown that $\\sqrt{2}\\,\\sigma _{u}\\left(t\\right)$ and $\\sqrt{2}\\,\\sigma _{v}\\left(t\\right)$ are statistically-independent telegraphic noises.", "Moreover, Eqs.", "(REF ) and () are mathematically equivalent to the equations that describe two noninteracting, harmonically-confined RTP's in 1D.", "As a result, $u(t)$ and $v(t)$ are two statistically-independent processes, each of which corresponds to a 1D RTP whose free speed is given by $v_{0}$ .", "This decoupling enables us to immediately solve the 2D model exactly, as we now explain.", "Consider the Green's function $\\mathcal {P}_{{\\sigma }{\\sigma }^{\\prime }}\\left({r},t\\;|\\;{r}^{\\prime },t^{\\prime }\\right)$ that gives the joint distribution of the RTP's position ${r}$ and orientation ${\\sigma }$ at time $t$ , conditioned on their values at time $t^{\\prime } < t$ .", "We find that the Green's function decomposes (in the $u,v$ coordinates) as $\\mathcal {P}_{{\\sigma }{\\sigma }^{\\prime }}\\left({r},t\\;|\\;{r}^{\\prime },t^{\\prime }\\right)=P_{\\sigma _{u}\\sigma _{u}^{\\prime }}\\left(u,t\\;|\\;u^{\\prime },t^{\\prime }\\right)P_{\\sigma _{v}\\sigma _{v}^{\\prime }}\\left(v,t\\;|\\;v^{\\prime },t^{\\prime }\\right) \\,,$ where $P_{\\sigma \\sigma ^{\\prime }}\\left(x,t\\;|\\;x^{\\prime },t^{\\prime }\\right)$ is the Green's function for an RTP in 1D.", "Eq.", "(REF ) follows immediately from the arguments given above.", "However, as an alternative approach, we also recover Eq.", "(REF ) by analyzing the joint Fokker-Planck equation for the position and orientation of the particle in Appendix , providing a useful check of this result.", "Similarly, the time-dependent position distribution decomposes as $\\mathcal {P}\\left({r},t\\right)=P\\left(u,t\\right)P\\left(v,t\\right) \\, ,$ where $P(\\dots ,t)$ is the position distribution of a 1D harmonically-confined RTP, and is given above.", "In particular, the 
steady-state distribution is given by $\\mathcal {P}_{\\text{st}}\\left(u,v\\right)=P_{\\text{st}}\\left(u\\right)P_{\\text{st}}\\left(v\\right)$ where $P_{\\text{st}}\\left(\\dots \\right)$ is given by Eq.", "(REF ).", "Explicitly, the steady-state distribution is $&&\\mathcal {P}_{\\text{st}}\\left(u,v\\right)=\\frac{4\\mu ^{2}}{2^{4\\beta }\\left[B\\left(\\beta ,\\beta \\right)v_{0}\\right]^{2}} \\nonumber \\\\&&\\qquad \\times \\left[\\left(1-\\left(\\frac{\\mu u}{v_{0}}\\right)^{2}\\right)\\left(1-\\left(\\frac{\\mu v}{v_{0}}\\right)^{2}\\right)\\right]^{\\beta -1} \\, ,$ where we recall that $\\beta =\\gamma /\\mu $ , see Fig.", "REF .", "The support of the distribution $\\mathcal {P}_{\\text{st}}\\left(u,v\\right)$ is the square $\\left|u\\right|,\\left|v\\right|<v_{0}/\\mu $ .", "In the active phase $\\beta < 1$ , the position of the particle accumulates near the edges of the support, the distribution becoming localized around the corners of the square $\\left|u\\right|,\\left|v\\right|<v_{0}/\\mu $ in the limit $\\beta \\ll 1$ .", "In the passive phase $\\beta > 1$ the distribution is maximal near the center of the trap, and in the diffusive limit $\\beta \\gg 1$ , typical fluctuations are described by an isotropic Gaussian distribution around the origin.", "This corresponds to the passive limit in which the noise can be approximated as white.", "For a free particle ($\\mu =0$ ), the distribution never reaches a steady state.", "However, the time-dependent distribution simplifies, since Eq.", "(REF ) reduces to $\\mathcal {P}_{\\text{free}}\\left(u,v,t\\right)=P_{\\text{free}}\\left(u,t\\right)P_{\\text{free}}\\left(v,t\\right) \\, ,$ where $P_{\\text{free}}$ is given by (REF ).", "As a useful check of these results, one can calculate the marginal distribution of $x=\\left(u+v\\right)/\\sqrt{2}$ and compare it to the previously-known results [59], [52].", "In Appendix we perform this check explicitly for the stationary distribution (REF ) for $\\beta \\in \\left\\lbrace 1,2\\right\\rbrace $ , and for the time-dependent distribution (REF ) for a free particle, and find perfect agreement." 
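As a quick numerical consistency check (ours, not from the paper), one can integrate the 2D Langevin dynamics directly, histogram $u=(x+y)/\sqrt{2}$, and compare with the 1D law $P_{\text{st}}(u)\propto[1-(\mu u/v_0)^2]^{\beta-1}$ quoted above; the independence of $u$ and $v$ can be probed through the covariance of $u^2$ and $v^2$. The parameter values and variable names in the sketch below are our own choices.

```python
import numpy as np
from scipy.special import beta as beta_fn

# Sketch (ours): sample the steady state of the 2D Langevin dynamics by
# direct Euler integration, then compare the empirical distribution of
# u = (x+y)/sqrt(2) with the 1D law P_st(u), and check that u and v are
# uncorrelated at the level of u^2 and v^2.

rng = np.random.default_rng(1)
mu, v0, gamma = 1.0, 1.0, 2.0                  # beta = gamma/mu = 2 (passive phase)
beta = gamma / mu
dt, n_traj, n_steps = 1e-2, 50_000, 2_000      # runs up to t = 20 (well relaxed)
angles = np.pi / 2 * np.arange(4)              # orientations E, N, W, S

state = rng.integers(0, 4, size=n_traj)
x = np.zeros(n_traj); y = np.zeros(n_traj)
for _ in range(n_steps):
    th = angles[state]
    x += (-mu * x + np.sqrt(2) * v0 * np.cos(th)) * dt
    y += (-mu * y + np.sqrt(2) * v0 * np.sin(th)) * dt
    r = rng.random(n_traj)                     # each 90-degree tumble has prob gamma*dt
    state = (state + (r < gamma * dt).astype(int)
                   - (r > 1 - gamma * dt).astype(int)) % 4

u = (x + y) / np.sqrt(2); v = (x - y) / np.sqrt(2)

def p_st_1d(z):
    """1D steady state quoted in the text, proportional to [1-(mu z/v0)^2]^(beta-1)."""
    return 2 * mu / (4**beta * beta_fn(beta, beta) * v0) * (1 - (mu * z / v0)**2)**(beta - 1)

hist, edges = np.histogram(u, bins=40, range=(-v0 / mu, v0 / mu), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print("max |histogram - P_st(u)| :", np.max(np.abs(hist - p_st_1d(centers))))  # small (a few %)
print("cov(u^2, v^2)             :", np.mean(u**2 * v**2) - np.mean(u**2) * np.mean(v**2))  # ~ 0
```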
], [ "Anisotropy of the distribution", "In order to quantify the anisotropy of the distribution, one can consider the marginal distribution of the polar coordinate $\\theta $ of the particle's position in the $xy$ plane.", "In the steady state, this distribution is given by $&& p_{\\text{marginal,st}}\\left(\\theta \\right)\\nonumber \\\\&&=\\int _{0}^{\\infty }\\mathcal {P}_{\\text{st}}\\left(u=r\\cos \\left(\\theta +\\frac{\\pi }{4}\\right),v=-r\\sin \\left(\\theta +\\frac{\\pi }{4}\\right)\\right)rdr \\nonumber \\\\&& = \\int _{0}^{M\\left(\\phi \\right)}\\frac{4\\left[\\left(1-\\left(r\\cos \\phi \\right)^{2}\\right)\\left(1-\\left(r\\sin \\phi \\right)^{2}\\right)\\right]^{\\beta -1}}{2^{4\\beta }\\left[B\\left(\\beta ,\\beta \\right)\\right]^{2}}rdr \\, , \\nonumber \\\\$ where $M\\left(\\phi \\right)\\equiv \\min \\left\\lbrace \\left|\\frac{1}{\\cos \\phi }\\right|,\\left|\\frac{1}{\\sin \\phi }\\right|\\right\\rbrace \\, ,$ and $\\phi = \\theta + \\pi /4$ .", "It is independent of $\\mu $ and $v_0$ (as one could expect from dimensional analysis).", "For certain values of $\\beta $ , the integral can be solved, yielding for instance $&& \\left.p_{\\text{marginal,st}}\\left(\\theta \\right)\\right|_{\\beta =1}=\\frac{M^{2}\\left(\\phi \\right)}{8},\\\\&& \\left.p_{\\text{marginal,st}}\\left(\\theta \\right)\\right|_{\\beta =2} \\nonumber \\\\&&=\\frac{3M^{2}\\left(\\phi \\right)}{256}\\left[M^{4}\\left(\\phi \\right)\\left(1-\\cos \\left(4\\phi \\right)\\right)-12M^{2}\\left(\\phi \\right)+24\\right] .", "\\nonumber \\\\$ $p_{\\text{marginal,st}}\\left(\\theta \\right)$ is plotted in Fig.", "REF for $\\beta = 2/3$ and $\\beta =2$ .", "$p_{\\text{marginal,st}}\\left(\\theta \\right)$ is maximal (as a function of $\\theta $ ) in the directions of the possible orientations of the noise, $\\theta \\in \\left\\lbrace 0, \\pi /2,\\pi , 3\\pi /2\\right\\rbrace $ , and minimal in the directions of the $u$ and $v$ axes.", "In the active limit $\\beta \\ll 1$ , the anisotropy becomes very pronounced, because the distribution is localized around the corners of the square $\\left|x\\right|,\\left|y\\right|<v_{0}/\\mu $ .", "In the opposite (diffusive) limit, $\\beta \\gg 1$ , the anisotropy becomes very weak (as explained above), i.e., $p_{\\text{marginal,st}}\\left(\\theta \\right)$ is nearly uniform on the interval $0<\\theta <2\\pi $ .", "These limiting behaviors are not shown in the figure.", "Figure: The marginal steady-state distribution () of the polar angle θ\\theta of the RTP's position for β=2/3\\beta = 2/3 (solid line) and β=2\\beta = 2 (dashed line)." 
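The $\beta=1$ expression above is simple enough to check numerically: it should integrate to unity over $0\le\theta<2\pi$, and its anisotropy ratio between the $x$ direction ($\theta=0$, where $M=\sqrt{2}$) and the $u$ direction ($\theta=\pi/4$, where $M=1$) equals 2. A short sketch of this check (ours, not from the paper):

```python
import numpy as np

# Quick check (ours) of the beta = 1 angular marginal quoted above:
# p(theta) = M(phi)^2 / 8 with phi = theta + pi/4 and
# M(phi) = min(1/|cos(phi)|, 1/|sin(phi)|). It should integrate to 1 over
# 0 <= theta < 2*pi and be twice as large along the x,y axes as along u,v.

def p_theta(theta):
    phi = theta + np.pi / 4
    M = np.minimum(1.0 / np.abs(np.cos(phi)), 1.0 / np.abs(np.sin(phi)))
    return M**2 / 8.0

N = 400_000
theta = (np.arange(N) + 0.5) * 2 * np.pi / N       # midpoint rule on [0, 2*pi)
print("normalization  :", p_theta(theta).mean() * 2 * np.pi)   # -> 1.0
print("p(0) / p(pi/4) :", p_theta(0.0) / p_theta(np.pi / 4))   # -> 2.0
```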
], [ "Relaxation to the steady state for a generic initial orientation distribution", "One can easily extend the discussion to the slightly more general problem of a 2D harmonically-confined RTP initially at the origin $x\\left(t=0\\right)=y\\left(t=0\\right)=0$ whose initial orientation is given by some distribution $\\left(p_{E}^{\\left(0\\right)},p_{N}^{\\left(0\\right)},p_{W}^{\\left(0\\right)},p_{S}^{\\left(0\\right)}\\right)$ where $p_{i}^{\\left(0\\right)}=p_{i}\\left(t=0\\right)$ is the probability that the initial orientation is in the direction $i$ .", "For a deterministic initial orientation, the position distribution $\\mathcal {P}\\left({r},t\\right)$ will decompose in the $u,v$ coordinates (this can be seen, for instance, by summing Eq.", "(REF ) with ${r}^{\\prime } = t^{\\prime } = 0$ over the four possible values of ${\\sigma }$ ).", "This result can be easily extended to a general initial orientation distribution (REF ) by using the superposition principle, and the result is simply $&& \\mathcal {P}\\left(u,v,t\\right)=\\sum _{\\sigma _{1}=\\pm }\\sum _{\\sigma _{2}=\\pm }p_{\\sigma _{1},\\sigma _{2}}^{\\left(0\\right)} \\nonumber \\\\&& \\quad \\times \\, P\\left(u,t\\,|\\,\\sigma _{u}\\left(t=0\\right)=\\sigma _{1}\\right)P\\left(v,t\\,|\\,\\sigma _{v}\\left(t=0\\right)=\\sigma _{2}\\right) \\,, \\nonumber \\\\$ where, in the coefficient $p_{\\sigma _{1},\\sigma _{2}}^{\\left(0\\right)}$ , we identify the four possible orientations of the noise with the corresponding signs $\\left(\\sigma _{1},\\sigma _{2}\\right)$ of its $u$ and $v$ components, i.e., $E\\equiv \\left(+,+\\right),\\;\\;\\; N\\equiv \\left(+,-\\right),\\;\\;\\; W\\equiv \\left(-,-\\right),\\;\\;\\; S\\equiv \\left(-,+\\right) ,$ and where $P\\left(u,t\\,|\\,\\sigma \\left(t=0\\right)=\\sigma _{0}\\right)$ is the position distribution of a 1D harmonically-confined RTP whose initial orientation is $\\sigma _0$ .", "Eq.", "(REF ) describes the relaxation of the position distribution to the steady state (REF ), which is reached in the long-time limit $t \\rightarrow \\infty $ for any initial condition." 
], [ "First-passage and survival properties", "The statistical independence of $u(t)$ and $v(t)$ has additional important consequences, beyond the decomposition (REF ) of their joint distribution.", "One such consequence is that survival and exit probabilities for a 2D RTP are related to the corresponding ones in 1D, for certain geometries.", "In 1D, such problems have been studied quite extensively [64], [65], [31], [51], [66], [33], [30], [68], [69], [67], [44].", "For instance, the first-passage time $t_{q}$ of the 2D RTP ${r}(t)$ out of the quadrant $\\left\\lbrace u>0,v>0\\right\\rbrace $ is defined as the first time at which the particle exits the quadrant.", "Clearly, $t_{q} = \\min \\left\\lbrace t_{u},t_{v}\\right\\rbrace $ where $t_u$ and $t_v$ are the first-passage times of $u(t)$ and $v(t)$ out of the half lines $u>0$ and $v>0$ respectively.", "Now, since $u(t)$ and $v(t)$ are statistically independent, so are $t_u$ and $t_v$ .", "As a result, the survival probability $\\text{Prob}\\left(t_{q}>\\tau \\right)$ , i.e., the probability that the particle remains inside the quadrant up to time $\\tau $ , is given by the product of the probabilities that $u(t)$ and $v(t)$ remain positive up to time $\\tau $ , i.e., $\\text{Prob}\\left(t_{q}>\\tau \\right)&=&\\text{Prob}\\left(t_{u}>\\tau \\right)\\text{Prob}\\left(t_{v}>\\tau \\right)\\nonumber \\\\&=&\\left[\\text{Prob}\\left(t_{u}>\\tau \\right)\\right]^{2} \\, ,$ where in the second equality we used the fact that $t_u$ and $t_v$ are identically distributed; the cumulative distribution function of $t_q$ then follows as $\\text{Prob}\\left(t_{q}<\\tau \\right)=1-\\left[\\text{Prob}\\left(t_{u}>\\tau \\right)\\right]^{2}$ .", "The distribution of $t_u$ is exactly known for the free case $\\mu =0$ , see Refs.", "[31], [66], and using this result together with Eq.", "(REF ), one obtains the distribution of $t_q$ for the free case." ], [ "Extensions", "In this section we briefly outline some extensions of these results in several directions." 
], [ "Two interacting RTP's", "It is fairly straightforward to extend our results to two 2D RTPs with a harmonic interaction (and possibly confined by an external harmonic potential), as long as their possible orientation vectors are the same.", "The Langevin equations describing the time evolution of the positions ${r}_{A}$ and ${r}_{B}$ of the two particles are $\\!\\!\\!\\!\\!\\!\\!\\!\\dot{{r}}_{A}&=&-\\mu {r}_{A}\\left(t\\right)-\\lambda \\left({r}_{A}\\left(t\\right)-{r}_{B}\\left(t\\right)\\right)+\\sqrt{2}\\,v_{0}{\\sigma }_{A}\\left(t\\right), \\\\\\!\\!\\!\\!\\!\\!\\!\\!\\dot{{r}}_{B}&=&-\\mu {r}_{B}\\left(t\\right)-\\lambda \\left({r}_{B}\\left(t\\right)-{r}_{A}\\left(t\\right)\\right)+\\sqrt{2}\\,v_{0}{\\sigma }_{B}\\left(t\\right),$ where $\\lambda $ is the strength of the harmonic interaction between the particles, and ${\\sigma }_{A}\\left(t\\right)$ and ${\\sigma }_{B}\\left(t\\right)$ are two statistically-independent noises, each of which is defined as in the single-particle model (REF ).", "By rewriting these equations in the $u,v$ coordinates, one simply finds that the problem decouples into two independent problems in the directions $u$ and $v$ , each of which consists of two 1D RTPs with a harmonic interaction, so the joint distribution of the positions of the two RTPs is given by $\\mathcal {P}\\left(u_{A},v_{A},u_{B},v_{B},t\\right)=P\\left(u_{A},u_{B},t\\right)P\\left(v_{A},v_{B},t\\right)\\,,$ where $u_A(t)$ and $v_A(t)$ are the $u$ and $v$ coordinate of the first particle (and similarly for the second particle).", "The corresponding 1D problem was studied in Ref.", "[70], and the steady state was obtained exactly $P\\left(x_{1},x_{2},t\\rightarrow \\infty \\right)$ for a general attractive interaction in the case $\\mu =0$ (however, our extension to 2D only works if the interaction is harmonic)." ], [ "General damping strength:", "One can extend this model to an RTP that is not (strongly) overdamped, by taking into account an additional inertial term $m \\ddot{{r}}$ in Eq.", "(REF ).", "The decomposition in the $u$ and $v$ coordinates will still work, i.e., Eq.", "(REF ) will still hold.", "The 1D distribution $P(x,t)$ is not exactly known in the presence of an inertial term.", "It is, however, known in the limit of zero damping, for the free case $\\mu = 0$ [53].", "Moreover, for $\\mu > 0$ the corresponding steady-state distribution $P_{\\text{st}}\\left(x\\right)$ is approximately known in various limits, such as the rapidly-tumbling limit $\\gamma \\rightarrow \\infty $ , for the RTP and similar models of active particles [72], [67], [73], [71]." 
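Returning to the two-particle extension above, the decoupling into $u$ and $v$ sectors can again be checked by brute force. The sketch below (ours, not from the paper; all parameter values arbitrary) integrates the coupled Langevin equations for the two particles and verifies that cross-sector correlators such as $\langle u_A v_B\rangle$ vanish in the steady state, while the same-sector correlator $\langle u_A u_B\rangle$ is nonzero because of the harmonic coupling.

```python
import numpy as np

# Sketch (ours) for the two-particle extension: integrate the coupled Langevin
# equations for r_A and r_B and check that u-sector and v-sector observables
# decouple in the steady state, e.g. <u_A v_B> ~ 0, while the harmonic
# coupling makes the same-sector correlator <u_A u_B> nonzero.

rng = np.random.default_rng(2)
mu, lam, v0, gamma = 1.0, 0.5, 1.0, 1.0
dt, n_traj, n_steps = 1e-2, 50_000, 2_000
angles = np.pi / 2 * np.arange(4)

def tumble(state):
    r = rng.random(state.size)                 # each 90-degree rotation has prob gamma*dt
    return (state + (r < gamma * dt).astype(int)
                  - (r > 1 - gamma * dt).astype(int)) % 4

sA, sB = rng.integers(0, 4, n_traj), rng.integers(0, 4, n_traj)
xA = np.zeros(n_traj); yA = np.zeros(n_traj); xB = np.zeros(n_traj); yB = np.zeros(n_traj)
for _ in range(n_steps):
    thA, thB = angles[sA], angles[sB]
    dxA = -mu * xA - lam * (xA - xB) + np.sqrt(2) * v0 * np.cos(thA)
    dyA = -mu * yA - lam * (yA - yB) + np.sqrt(2) * v0 * np.sin(thA)
    dxB = -mu * xB - lam * (xB - xA) + np.sqrt(2) * v0 * np.cos(thB)
    dyB = -mu * yB - lam * (yB - yA) + np.sqrt(2) * v0 * np.sin(thB)
    xA += dxA * dt; yA += dyA * dt; xB += dxB * dt; yB += dyB * dt
    sA, sB = tumble(sA), tumble(sB)

uA, vA = (xA + yA) / np.sqrt(2), (xA - yA) / np.sqrt(2)
uB, vB = (xB + yB) / np.sqrt(2), (xB - yB) / np.sqrt(2)
print("<u_A u_B> =", np.mean(uA * uB), " (nonzero: same sector, harmonically coupled)")
print("<u_A v_B> =", np.mean(uA * vB), " (should be ~ 0: different sectors)")
print("<v_A u_B> =", np.mean(vA * uB), " (should be ~ 0: different sectors)")
```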
], [ "Diffusion:", "One can further take into account diffusion.", "In 1D, for instance, one can consider $\\dot{x}=-\\mu x+v_{0}\\sigma \\left(t\\right)+\\xi \\left(t\\right) \\, ,$ which is Eq.", "(REF ) for the harmonic force $f(x) = -\\mu x$ , with an additional white (Gaussian) noise term $\\xi (t)$ , with zero mean $\\left\\langle \\xi \\left(t\\right)\\right\\rangle =0$ and correlation function $\\left\\langle \\xi \\left(t\\right)\\xi \\left(t^{\\prime }\\right)\\right\\rangle =2D\\delta \\left(t-t^{\\prime }\\right)$ (here $D$ is the diffusion coefficient, and angular brackets denote ensemble averaging).", "Since Eq.", "(REF ) is linear, $x(t)$ can be written as the sum of two independent stochastic processes, $x\\left(t\\right)=x_{1}\\left(t\\right)+x_{2}\\left(t\\right) \\, ,$ which each follows the original dynamics but with just a single noise term, i.e., $\\dot{x}_{1}&=&-\\mu x_{1}+v_{0}\\sigma \\left(t\\right)\\,,\\\\\\dot{x}_{2}&=&-\\mu x_{2}+\\xi \\left(t\\right)\\,.$ As a result, the distribution $P(x,t)$ is simply given by the convolution $P\\left(x,t\\right)=\\int _{-\\infty }^{\\infty }P_{1}\\left(x_{1},t\\right)P_{2}\\left(x-x_{1},t\\right)dx_{1}\\,,$ where $P_{1}\\left(x_{1},t\\right)$ and $P_{2}\\left(x_{2},t\\right)$ are the distributions that correspond to the processes $x_1(t)$ and $x_2(t)$ , respectively, and are each exactly known ($x_1(t)$ being the confined 1D RTP studied in [44] as described above, and $x_2(t)$ being an Ornstein-Uhlenbeck process).", "This decomposition can be generalized, in 1D, to the sum of any number of noise terms of any type [74], [70].", "Returning to 2D, we could consider (REF ) with an additional white-noise term, $\\dot{{r}}=-\\mu {r}\\left(t\\right)+\\sqrt{2}\\,v_{0}{\\sigma }\\left(t\\right)+{\\xi }\\left(t\\right)\\,,$ where $\\left\\langle {\\xi }\\left(t\\right)\\right\\rangle =0$ and $\\left\\langle {\\xi }_{i}\\left(t\\right){\\xi }_{j}\\left(t^{\\prime }\\right)\\right\\rangle =2D\\delta _{ij}\\delta \\left(t-t^{\\prime }\\right)$ (for $i,j\\in \\left\\lbrace x,y\\right\\rbrace $ .", "As in the case $D=0$ , one finds that the dynamics decouple in the coordinates $u$ and $v$ , and in each of these two coordinates one has to consider the 1D dynamics (REF ).", "Thus, the decomposition (REF ) is still valid, but with $P(x,t)$ now given by Eq.", "(REF ).", "For this 2D model with diffusion, Eq.", "(REF ), we can calculate the (internal) entropy production rate (EPR), as was recently done in 1D in [75].", "The EPR is defined as the Kullback–Leibler distance between forward and backward paths (a precise definition can be found in [75]).", "They found that the EPR for a 1D diffusing RTP [described by Eq.", "(REF )] is given by $\\dot{S}_{\\text{1D}}=\\frac{2\\gamma v_{0}^{2}}{D\\left(\\mu +2\\gamma \\right)} \\, ,$ assuming that the observer knows the orientation $\\sigma (t)$ .", "Since the 2D model (REF ) decouples in the $u,v$ coordinates, and since the entropy is additive, one simply finds that the EPR in the 2D model is given by $S_{\\text{2D}}=2S_{\\text{1D}}$ ." 
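The convolution structure of the diffusing case above can be probed by a simple variance check, since the variances of the two independent pieces add. In the sketch below (ours, with arbitrary parameter values), the RTP contribution $v_0^2/[\mu(\mu+2\gamma)]$ is the second moment of the Beta-like steady state quoted earlier, and $D/\mu$ is the standard Ornstein-Uhlenbeck variance.

```python
import numpy as np

# Sketch (ours): check the convolution structure for a diffusing 1D RTP in a
# harmonic trap by comparing the steady-state variance of
#   dx/dt = -mu*x + v0*sigma(t) + xi(t)
# with Var(x1) + Var(x2), where Var(x1) = v0^2/[mu*(mu+2*gamma)] is the second
# moment of the Beta-like RTP steady state quoted earlier and Var(x2) = D/mu
# is the Ornstein-Uhlenbeck value.

rng = np.random.default_rng(3)
mu, v0, gamma, D = 1.0, 1.0, 1.5, 0.3
dt, n_traj, n_steps = 1e-2, 50_000, 2_000

sigma = rng.choice([-1.0, 1.0], size=n_traj)        # telegraphic noise
x = np.zeros(n_traj)
for _ in range(n_steps):
    xi = np.sqrt(2 * D * dt) * rng.standard_normal(n_traj)
    x += (-mu * x + v0 * sigma) * dt + xi
    sigma[rng.random(n_traj) < gamma * dt] *= -1.0  # sign flips at rate gamma

var_rtp = v0**2 / (mu * (mu + 2 * gamma))           # RTP-in-trap variance
var_ou = D / mu                                     # OU variance
print("Var(x) from simulation :", x.var())
print("Var(x1) + Var(x2)      :", var_rtp + var_ou)  # should agree to within ~1%
```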
], [ "Stochastic resetting:", "Let us consider the case of a free particle, $\\mu =0$ , but with stochastic resetting of the position of the particle to the origin at rate $\\tilde{r}$ (which is not to be confused with the RTP's radial distance $r$ from the origin).", "For simplicity, let us assume that when resetting occurs, the orientation is randomly chosen from its 4 possible values with equal probabilities.", "At long times, the position distribution approaches a nonequilibrium steady state, that we now find exactly.", "We thus extend the 1D result of [51], that is given by $P_{\\text{free},\\tilde{r}}\\left(x\\right)=\\frac{\\lambda \\left(\\tilde{r}\\right)}{2}e^{-\\lambda \\left(\\tilde{r}\\right)\\left|x\\right|} \\, \\,$ where $\\lambda \\left(\\tilde{r}\\right)=\\frac{\\sqrt{\\tilde{r}\\left(\\tilde{r}+2\\gamma \\right)}}{v_{0}}\\,.$ Before performing the calculation, we notice that $u(t)$ and $v(t)$ are 1D RTP's with stochastic resetting.", "However, the resetting leads to a statistical dependence between $u$ and $v$ .", "Therefore, their joint distribution is nontrivial, since it is not given by the product of the 1D distributions.", "Denoting by $\\mathcal {P}_{\\text{free},\\tilde{r}}\\left({r},t\\right)$ the time-dependent position of the particle initially at the origin with a random orientation and with stochastic resetting, we find that [76] $&&\\!\\!\\!\\!\\!\\!\\!\\!\\mathcal {P}_{\\text{free},\\tilde{r}}\\left({r},t\\right)= \\nonumber \\\\&& e^{-\\tilde{r}t}\\mathcal {P}_{\\text{free},0}\\left({r},t\\right)+\\tilde{r}\\int _{0}^{t}e^{-\\tilde{r}\\tau }\\mathcal {P}_{\\text{free},0}\\left({r},\\tau \\right)d\\tau \\, ,$ where $\\mathcal {P}_{\\text{free},0}=\\mathcal {P}_{\\text{free}}$ is the distribution in the absence of resetting.", "Eq.", "(REF ) follows from a renewal approach, the first term on the right-hand side corresponding to the case in which no resetting events occurs on the time interval $[0,t]$ , and in the second term corresponding to the case in which at least one such event occurs, and the integral is over the time $t-\\tau $ of the last resetting event before time $t$ .", "The steady state is obtained by taking the limit $t\\rightarrow \\infty $ in (REF ), and it gives $\\mathcal {P}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left({r}\\right)=\\tilde{r}\\tilde{\\mathcal {P}}_{\\text{free}}\\left({r},\\tilde{r}\\right)$ where $\\tilde{\\mathcal {P}}_{\\text{free}}\\left({r},\\tilde{r}\\right)=\\int _{0}^{\\infty }e^{-\\tilde{r}t}\\mathcal {P}_{\\text{free}}\\left({r},t\\right)dt$ is the Laplace transform of the time dependent distribution $\\mathcal {P}_{\\text{free}}\\left({r},t\\right)$ .", "Figure: The steady-state distribution P r ˜ (x)P_{\\tilde{r}}(x) for a 1D RTP confined by a harmonic potential, for β=1/2\\beta = 1/2 (a), β=1\\beta =1 (b) and β=2\\beta = 2 (c), with resetting rates r ˜=0\\tilde{r}=0 (solid lines), r ˜=1\\tilde{r}=1 (dashed lines) and r ˜=3\\tilde{r}=3 (dotted lines), see Eq.", "().In this figure, units are chosen such that μ=v 0 =1\\mu = v_0 = 1.", "The resetting causes the particle to be localized closer to the center of the trap.It turns out to be convenient to take a Fourier transform in space, i.e., to calculate $\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left({k}\\right)&=&\\int e^{i{k}\\cdot {r}}\\mathcal {P}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left({r}\\right)d{r}=\\nonumber \\\\&=&\\int _{0}^{\\infty }\\tilde{r}e^{-\\tilde{r}t}dt\\int e^{i{k}\\cdot {r}}\\mathcal {P}_{\\text{free}}\\left({r},t\\right)d{r}$ 
Working in the $u,v$ coordinates, we find, using the decomposition (REF ) of $\\mathcal {P}_{\\text{free}}^{\\text{st}}\\left({r},t\\right)$ , that $\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}\\right)=\\int _{0}^{\\infty }\\tilde{r}e^{-\\tilde{r}t}Q_{\\text{free}}\\left(k_{u},t\\right)Q_{\\text{free}}\\left(k_{v},t\\right)dt\\,,$ where $Q_{\\text{free}}\\left(k,t\\right) = \\int _{-\\infty }^{\\infty }e^{ikx}P_{\\text{free}}\\left(x,t\\right)dx$ is the Fourier transform of the distribution (REF ) of the position of a free 1D RTP, and is exactly known [31] (in the rest of this subsection, we choose units in which $\\gamma = v_0 = 1$ ) $&&\\!\\!\\!\\!\\!\\!", "Q_{\\text{free}}\\left(k,t\\right)=e^{-t} \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!", "\\times \\left[\\cosh \\left(t\\sqrt{1-k^{2}}\\right)+\\frac{1}{\\sqrt{1-k^{2}}}\\sinh \\left(t\\sqrt{1-k^{2}}\\right)\\right].$ Plugging (REF ) into (REF ), the integral over $t$ can be performed because the integrand can be written as a sum of exponentials.", "The result is: $&&\\!\\!\\!\\!\\!\\!\\!\\!\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}\\right) =\\nonumber \\\\&& \\frac{\\tilde{r}\\left(\\tilde{r}+2\\right)\\left[k_{u}^{2}+k_{v}^{2}+\\left(\\tilde{r}+2\\right)\\left(\\tilde{r}+4\\right)\\right]}{\\left(k_{u}^{2}-k_{v}^{2}\\right)^{2}+\\left[2\\left(k_{u}^{2}+k_{v}^{2}\\right)+\\tilde{r}\\left(\\tilde{r}+4\\right)\\right]\\left(\\tilde{r}+2\\right)^{2}} \\, .$ A useful check for the result (REF ) is that, when plugging in $k_v=0$ , we obtain $\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}=0\\right)=\\frac{\\tilde{r}\\left(\\tilde{r}+2\\right)}{k_{u}^{2}+\\tilde{r}\\left(\\tilde{r}+2\\right)} \\, ,$ which indeed coincides with the Fourier transform of $P_{\\text{free},\\tilde{r}}\\left(x\\right)$ from Eq.", "(REF ).", "All of the moments of the distribution $\\mathcal {P}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left({r}\\right)$ can be read off Eq.", "(REF ), since $\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left({k}\\right)$ is the characteristic function of the distribution.", "The Taylor-series expansion of $\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}\\right)$ around $k_u = k_v = 0$ , up to quadratic order in $k_u$ and $k_v$ , is $\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}\\right)=1-\\frac{k_{u}^{2}+k_{v}^{2}}{\\tilde{r}\\left(\\tilde{r}+2\\right)}+\\frac{\\left(6\\tilde{r}+8\\right)k_{u}^{2}k_{v}^{2}}{\\tilde{r}^{2}\\left(\\tilde{r}+2\\right)^{2}\\left(\\tilde{r}+4\\right)} + \\dots .$ Of particular interest are the correlation functions that describe the statistical dependence between $u$ and $v$ .", "The first of these is their covariance $\\left\\langle uv\\right\\rangle -\\left\\langle u\\right\\rangle \\left\\langle v\\right\\rangle $ (where angular brackets denote averaging over the steady-state distribution $\\mathcal {P}_{\\text{free},\\tilde{r}}^{\\text{st}}$ ) which vanishes because the coefficients of $k_u k_v$ , $k_u$ and $k_v$ in Eq.", "(REF ) vanish.", "The lowest nonvanishing correlation function of $x$ and $y$ is therefore $&&\\left\\langle u^{2}v^{2}\\right\\rangle -\\left\\langle u^{2}\\right\\rangle \\left\\langle v^{2}\\right\\rangle =\\left.\\partial _{k_{u}}^{2}\\partial _{k_{v}}^{2}\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}\\right)\\right|_{k_{u}=k_{v}=0}\\nonumber \\\\&&-\\left.\\partial _{k_{u}}^{2}\\mathcal 
{Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}\\right)\\right|_{k_{u}=k_{v}=0}\\left.\\partial _{k_{v}}^{2}\\mathcal {Q}_{\\text{free},\\tilde{r}}^{\\text{st}}\\left(k_{u},k_{v}\\right)\\right|_{k_{u}=k_{v}=0} \\nonumber \\\\&&=\\frac{4\\left(5\\tilde{r}+4\\right)}{\\tilde{r}^{2}\\left(\\tilde{r}+2\\right)^{2}\\left(\\tilde{r}+4\\right)} \\, .$ In fact, one can also consider stochastic resetting with an additional confining harmonic potential, with $\\mu > 0$ .", "This setting has not been considered even for a 1D RTP.", "Using a renewal approach that is very similar to the one that gives Eq.", "(REF ), one can show that the steady-state distribution is given by $P_{\\tilde{r}}\\left(x\\right)=\\tilde{r}\\int _{0}^{\\infty }e^{-\\tilde{r}\\tau }P\\left(x,\\tau \\right)d\\tau =\\tilde{r}\\tilde{P}\\left(x,\\tilde{r}\\right)$ where $P\\left(x,\\tau \\right)$ is the time-dependent position distribution in the absence of resetting, and $\\tilde{P}\\left(x,\\tilde{r}\\right)$ is its Laplace transform, which is given above in Eq.", "(REF ).", "Plugging Eq.", "(REF ) into (REF ), we obtain $P_{\\tilde{r}}\\left(x\\right)=\\tilde{r}B\\left(\\tilde{r}\\right)z^{\\bar{\\gamma }+\\frac{\\tilde{r}}{\\mu }-1}\\,_{2}F_{1}\\left(1-\\bar{\\gamma },\\bar{\\gamma };\\bar{\\gamma }+\\frac{\\tilde{r}}{\\mu };z\\right)\\,,$ where we recall that $z=\\left(1-\\mu |x|/v_{0}\\right)/2$ , and $B(\\tilde{r})$ is defined in Eq.", "(REF ) above.", "$P_{\\tilde{r}}\\left(x\\right)$ is plotted in Fig.", "REF .", "As seen in the figure, the resetting causes the distribution to be localized closer to the center of the trap.", "In the limit $\\tilde{r}\\gg 1$ , the trap becomes unimportant and one recovers the result for a resetting free particle (not shown).", "It would be interesting to consider the 2D case as well, but we will not do so here." 
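As a numerical check of the trapped-and-reset result above (ours, not in the paper), one can simulate the harmonically confined 1D RTP with Poissonian resetting, re-randomizing the orientation at each reset as the renewal argument assumes, and compare the resulting histogram with the closed form $z^{\bar{\gamma}+\tilde{r}/\mu-1}\,_2F_1(1-\bar{\gamma},\bar{\gamma};\bar{\gamma}+\tilde{r}/\mu;z)$. The overall constant is fixed here by numerical normalization, so the comparison tests the functional form only; all parameter values are arbitrary.

```python
import numpy as np
from scipy.special import hyp2f1

# Sketch (ours): Monte Carlo for a 1D harmonically trapped RTP whose position
# is reset to the origin at rate r_reset, with the orientation re-randomized
# at each reset (the assumption behind the renewal formula above). The result
# is compared with the shape z**(gbar + r_reset/mu - 1) * 2F1(1-gbar, gbar;
# gbar + r_reset/mu; z), z = (1 - mu|x|/v0)/2, normalized numerically.

rng = np.random.default_rng(4)
mu, v0, gamma, r_reset = 1.0, 1.0, 2.0, 1.0
dt, n_traj, n_steps = 1e-2, 50_000, 2_000

x = np.zeros(n_traj)
sigma = rng.choice([-1.0, 1.0], size=n_traj)
for _ in range(n_steps):
    x += (-mu * x + v0 * sigma) * dt
    sigma[rng.random(n_traj) < gamma * dt] *= -1.0     # tumbles at rate gamma
    reset = rng.random(n_traj) < r_reset * dt          # Poissonian resetting
    x[reset] = 0.0
    sigma[reset] = rng.choice([-1.0, 1.0], size=int(reset.sum()))

gbar = gamma / mu
grid = np.linspace(-0.99 * v0 / mu, 0.99 * v0 / mu, 401)
z = 0.5 * (1 - mu * np.abs(grid) / v0)
shape = z**(gbar + r_reset / mu - 1) * hyp2f1(1 - gbar, gbar, gbar + r_reset / mu, z)
pdf = shape / (shape.sum() * (grid[1] - grid[0]))      # fix the constant by normalization

hist, edges = np.histogram(x, bins=60, range=(-v0 / mu, v0 / mu), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
print("max |MC - formula| :", np.max(np.abs(hist - np.interp(centers, grid, pdf))))  # small
```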
], [ "Higher dimensions and/or other geometries", "Our results can immediately be extended to an RTP in 3D, whose orientation vector points towards one of the 8 vertices of a cube centered at the origin, and stochastically jumps between adjacent vertices at a constant rate.", "One simply has to choose a coordinate system $(u,v,w)$ such that the vertices of the cube are in the directions $\\pm \\hat{u}\\pm \\hat{v}\\pm \\hat{w}$ .", "The problem then decouples into statistically-independent problems for $u$ , $v$ and $w$ , that each corresponds to a harmonically-confined RTP in 1D.", "One can similarly extend to dimensions higher than 3 as well.", "Other geometrical configurations for the orientation ${\\sigma }$ can also be considered, some of which are also exactly solvable.", "Let us introduce a planar hexagonal model, in which ${\\sigma }$ can take 8 possible values: six vertices of a regular hexagon, ${\\sigma }_{0},\\dots {\\sigma }_{5}$ where ${\\sigma }_{n} = \\left(\\cos \\frac{n\\pi }{3},\\sin \\frac{n\\pi }{3}\\right)$ , and two distinct “rest” states which we will denote by $0_0$ and $0_1$ .", "Each of the rest states sits at the origin, i.e., ${\\sigma } = 0$ , and the difference between them corresponds to an internal state of the particle.", "The dynamics of the orientation vector are as follows.", "From a nonzero orientation, the orientation can rotate by 60 degrees, i.e., ${\\sigma }_n \\rightarrow {\\sigma }_{(n\\pm 1) \\, \\text{mod} \\, 6},$ or else move to the rest state whose index has parity identical to $n$ , i.e., ${\\sigma }_n \\rightarrow 0_{n \\, \\text{mod}\\, 2}.$ From a rest state, the orientation can move to one of 3 nonzero states with the same index parities, as follows: $0_n \\rightarrow {\\sigma }_n , {\\sigma }_{n+2}, {\\sigma }_{n+4} \\, , \\quad \\text{for} \\; n\\in \\left\\lbrace 0,1\\right\\rbrace \\, .$ Each of the possible transitions in the system occurs at a constant rate which can be taken to be $\\gamma $ .", "The dynamics of ${\\sigma }$ are graphically represented in Fig.", "REF .", "Figure: A schematic representation of the dynamics of the orientation vector σ(t){\\sigma } (t) in the 2D RTP hexagonal model described in the text.", "The possible orientations are the 6 vertices of a regular hexagon σ 0 ,⋯σ 5 {\\sigma }_{0},\\dots {\\sigma }_{5}, and two distinct rest states 0 0 0_0 and 0 1 0_1 for which σ=0{\\sigma } = 0.Transitions occur between adjacent vertices of the hexagon, and between rest states and vertices whose indices have the same parity (the different parities are indicated by different colors in the figure).", "All transitions occur at the same, constant rate γ\\gamma .For clarity of the figure, the transitions between the rest state 0 1 0_1 and the states σ 1 ,σ 3 ,σ 5 {\\sigma }_{1}, {\\sigma }_{3}, {\\sigma }_{5} are not indicated, and the rest states are not placed exactly at the origin.Let us now briefly outline the solution to this hexagonal model.", "Remarkably, it turns out that the model can be written as the projection to the $xy$ plane of the 3D “cube” model described above, with $\\hat{u} \\!", "= \\!", "\\sqrt{\\frac{2}{3}}\\!\\left(\\!\\!", "\\begin{array}{c}1\\\\[1mm]0\\\\[1mm]1/\\sqrt{2}\\end{array}\\!\\!", "\\right),\\;\\hat{v}\\!", "=\\!", "\\sqrt{\\frac{2}{3}}\\!\\left(\\!\\!\\begin{array}{c}\\cos \\frac{2\\pi }{3}\\\\[1mm]\\sin \\frac{2\\pi }{3}\\\\[1mm]1/\\sqrt{2}\\end{array}\\!\\!", "\\right),\\;\\hat{w}\\!", "=\\!", "\\sqrt{\\frac{2}{3}}\\!\\left(\\!\\!", "\\begin{array}{c}\\cos \\frac{4\\pi 
}{3}\\\\[1mm]\\sin \\frac{4\\pi }{3}\\\\[1mm]1/\\sqrt{2}\\end{array}\\!\\!", "\\right)\\!.$ Indeed, one finds that the vertices of the hexagon correspond the $xy$ projections 6 of the vertices of the cube, i.e., one can identify $&{\\sigma }_{0}\\leftrightarrow \\hat{u}-\\hat{v}-\\hat{w},\\quad {\\sigma }_{1}\\leftrightarrow \\hat{u}+\\hat{v}-\\hat{w},\\\\&{\\sigma }_{2}\\leftrightarrow -\\hat{u}+\\hat{v}-\\hat{w},\\quad {\\sigma }_{3}\\leftrightarrow -\\hat{u}+\\hat{v}+\\hat{w},\\\\&{\\sigma }_{4}\\leftrightarrow -\\hat{u}-\\hat{v}+\\hat{w},\\quad {\\sigma }_{5}\\leftrightarrow \\hat{u}-\\hat{v}+\\hat{w}$ (when considering only the projections into the $xy$ plane of the 3D vectors, and up to a constant of proportionality which can be absorbed into the definition of $v_0$ ), while the two rest states can be identified with the two remaining vertices of the cube, $0_{1}\\leftrightarrow \\hat{u}+\\hat{v}+\\hat{w},\\quad 0_{0}\\leftrightarrow -\\hat{u}-\\hat{v}-\\hat{w} \\, .$ This correspondence can then be immediately exploited in order to solve the hexagonal model.", "For example, one can take the exact position distribution of the 3D model (which factorizes in the $u,v,w$ coordinates), and then, by marginalizing it along the $z$ direction, one obtains the position distribution of the hexagonal model.", "We do not present these calculations explicitly here.", "Yet another 2D model that can be solved by a decomposition into effective 1D models is that in which the orientation vector takes one of the 9 possible values $\\sigma _{x}\\hat{x}+\\sigma _{y}\\hat{y}$ , with $\\sigma _{x},\\sigma _{y}\\in \\left\\lbrace -1,0,1\\right\\rbrace $ , and where the possible transitions are those for which exactly one of the two components $\\sigma _{x},\\sigma _{y}$ changes by $\\pm 1$ .", "All possible transitions occur at the same rate $\\gamma $ .", "This model decomposes in the $x$ and $y$ coordinates, i.e., the processes $x(t)$ and $y(t)$ are statistically independent, and each of them is described by a 1D RTP model with a noise $\\sigma (t)$ , changing by $\\pm 1$ between the values $\\left\\lbrace -1,0,1\\right\\rbrace $ where all transitions that are possible occur at rate $\\gamma $ .", "To summarize, we calculated the exact time-dependent distribution $P\\left({r},t\\right)$ of the position ${r}$ of an RTP in 2D whose orientation stochastically rotates by 90 degrees, confined by an external harmonic potential.", "In particular, we found the exact steady-state distribution $P_{\\text{st}}\\left({r}\\right)$ that is reached in the long-time limit, and also $P\\left({r},t\\right)$ for a “free” RTP (in the absence of an external potential).", "We achieved this by observing that in a properly-chosen coordinate system, the 2D problem decouples into statistically-independent 1D problems, whose solution has been exactly known previously.", "We extended these results in several directions.", "In particular, we showed how to account for diffusion of the RTP, extended the results to particular RTP models in dimension higher than 2, to two harmonically-interacting RTPs in 2D, and considered stochastic resetting of the RTP's position.", "It is worth noting that a decomposition analogous to that of our $P\\left({r},t\\right)$ holds for a random walker hopping on a 2D square lattice in discrete time (but not in continuous time).", "This is a classical result that has been known for quite some time [77], [78].", "It would be interesting to further extend these results to anharmonic potentials and/or to other 
2D RTP models, whose orientation changes discontinuously in time, but not by a 90-degree rotation.", "This presents a major challenge because the equation of motion would then not decouple in the $(u,v)$ coordinates as in the case studied here.", "We hope that the theoretical insight that is gained from the exact solution of the particular case studied may shed light on the more general case.", "Another interesting direction for future research is that of systems of many RTPs.", "1D chains and gases of RTPs were studied in Refs.", "[79], [80], [81], and it would be interesting to investigate the 2D case.", "Acknowledgments: NRS thanks Oded Farago for a collaboration on related topics.", "This research was supported by ANR grant ANR-17-CE30-0027-01 RaMaTraF." ], [ "Proving the Green's function decomposition via the Fokker-Planck approach", "As described in the main text, the decomposition (REF ) of the Green's function follows immediately from the fact that Eqs.", "(REF ) and () are decoupled, together with the statistical independence of the processes $\\sigma _u(t)$ and $\\sigma _v(t)$ .", "Nevertheless, as a useful check, we recover Eq.", "(REF ) by using a Fokker-Planck (FP) approach.", "Although it is rather technical, it may be more natural to some readers, as this approach was used in several previous works.", "The joint distribution $\\mathcal {P}_i(x,y,t)$ of the position $(x,y)$ and orientation $i = E,N,W,S$ of the RTP evolves according to the FP equation [59], [63] $\\frac{\\partial }{\\partial t} \\mathcal {P}_E(x,y,t) &=& \\frac{\\partial }{\\partial x}\\left[\\left(\\mu x-\\sqrt{2}\\,v_{0}\\right)\\mathcal {P}_{E}\\right]+\\frac{\\partial }{\\partial y}\\left(\\mu y\\mathcal {P}_{E}\\right)+\\gamma \\left(\\mathcal {P}_{N}+\\mathcal {P}_{S}\\right)-2\\gamma \\mathcal {P}_{E}\\,, \\\\\\frac{\\partial }{\\partial t} \\mathcal {P}_N(x,y,t) &=& \\frac{\\partial }{\\partial x}\\left(\\mu x\\mathcal {P}_{N}\\right)+\\frac{\\partial }{\\partial y}\\left[\\left(\\mu y-\\sqrt{2}\\,v_{0}\\right)\\mathcal {P}_{N}\\right]+\\gamma \\left(\\mathcal {P}_{E}+\\mathcal {P}_{W}\\right)-2\\gamma \\mathcal {P}_{N}\\,, \\\\\\frac{\\partial }{\\partial t} \\mathcal {P}_W(x,y,t) &=& \\frac{\\partial }{\\partial x}\\left[\\left(\\mu x+\\sqrt{2}\\,v_{0}\\right)\\mathcal {P}_{W}\\right]+\\frac{\\partial }{\\partial y}\\left(\\mu y\\mathcal {P}_{W}\\right)+\\gamma \\left(\\mathcal {P}_{N}+\\mathcal {P}_{S}\\right)-2\\gamma \\mathcal {P}_{W}\\,, \\\\\\frac{\\partial }{\\partial t} \\mathcal {P}_S(x,y,t) &=& \\frac{\\partial }{\\partial x}\\left(\\mu x\\mathcal {P}_{S}\\right)+\\frac{\\partial }{\\partial y}\\left[\\left(\\mu y+\\sqrt{2}\\,v_{0}\\right)\\mathcal {P}_{S}\\right]+\\gamma \\left(\\mathcal {P}_{E}+\\mathcal {P}_{W}\\right)-2\\gamma \\mathcal {P}_{S}\\,.$ However, it is far more convenient to solve the problem in the $u,v$ coordinates (REF ).", "The FP equations that correspond to the Langevin equations (REF ) and () are $\\frac{\\partial }{\\partial t}\\mathcal {P}_{E}\\left(u,v,t\\right)&=&\\frac{\\partial }{\\partial u}\\left[\\left(\\mu u-v_{0}\\right)\\mathcal {P}_{E}\\right]+\\frac{\\partial }{\\partial v}\\left[\\left(\\mu v-v_{0}\\right)\\mathcal {P}_{E}\\right]+\\gamma \\left(\\mathcal {P}_{N}+\\mathcal {P}_{S}\\right)-2\\gamma \\mathcal {P}_{E}\\,,\\\\\\frac{\\partial }{\\partial t}\\mathcal {P}_{N}\\left(u,v,t\\right)&=&\\frac{\\partial }{\\partial u}\\left[\\left(\\mu u-v_{0}\\right)\\mathcal {P}_{N}\\right]+\\frac{\\partial }{\\partial v}\\left[\\left(\\mu v+v_{0}\\right)\\mathcal 
{P}_{N}\\right]+\\gamma \\left(\\mathcal {P}_{E}+\\mathcal {P}_{W}\\right)-2\\gamma \\mathcal {P}_{N}\\,,\\\\\\frac{\\partial }{\\partial t}\\mathcal {P}_{W}\\left(u,v,t\\right)&=&\\frac{\\partial }{\\partial u}\\left[\\left(\\mu u+v_{0}\\right)\\mathcal {P}_{W}\\right]+\\frac{\\partial }{\\partial v}\\left[\\left(\\mu v+v_{0}\\right)\\mathcal {P}_{W}\\right]+\\gamma \\left(\\mathcal {P}_{N}+\\mathcal {P}_{S}\\right)-2\\gamma \\mathcal {P}_{W}\\,,\\\\\\frac{\\partial }{\\partial t}\\mathcal {P}_{S}\\left(u,v,t\\right)&=&\\frac{\\partial }{\\partial u}\\left[\\left(\\mu u+v_{0}\\right)\\mathcal {P}_{S}\\right]+\\frac{\\partial }{\\partial v}\\left[\\left(\\mu v-v_{0}\\right)\\mathcal {P}_{S}\\right]+\\gamma \\left(\\mathcal {P}_{E}+\\mathcal {P}_{W}\\right)-2\\gamma \\mathcal {P}_{S}\\,.$ We now wish to show that the solutions to these equations decompose into solutions of the FP equations for a 1D RTP, $\\frac{\\partial }{\\partial t}P_{+}\\left(u,t\\right)&=&\\frac{\\partial }{\\partial u}\\left[\\left(\\mu u-v_{0}\\right)P_{+}\\right]+\\gamma P_{-}-\\gamma P_{+}\\,,\\\\\\frac{\\partial }{\\partial t}P_{-}\\left(u,t\\right)&=&\\frac{\\partial }{\\partial u}\\left[\\left(\\mu u+v_{0}\\right)P_{-}\\right]+\\gamma P_{+}-\\gamma P_{-}\\,.$ Indeed, one can verify directly that given any two solutions $P_{\\sigma _{u}}^{\\left(1\\right)}\\left(u,t\\right)$ and $P_{\\sigma _{v}}^{\\left(2\\right)}\\left(v,t\\right)$ to the 1D equations (REF ) and (), $\\mathcal {P}_{\\sigma _{u},\\sigma _{v}}\\left(u,v,t\\right)=P_{\\sigma _{u}}^{\\left(1\\right)}\\left(u,t\\right)P_{\\sigma _{v}}^{\\left(2\\right)}\\left(v,t\\right)$ is a solution to Eqs.", "(REF )-(), under the identification $E\\equiv \\left(+,+\\right),\\quad N\\equiv \\left(+,-\\right),\\quad W\\equiv \\left(-,-\\right),\\quad S\\equiv \\left(-,+\\right)$ between the possible orientations of ${\\sigma }$ and the corresponding signs of its components $\\sigma _u$ and $\\sigma _v$ .", "Let us demonstrate that this is indeed the case.", "Taking a time derivative of Eq.", "(REF ) for $\\sigma _u = \\sigma _v = +$ , one obtains, using Eq.", "(REF ), $\\frac{\\partial }{\\partial t}\\mathcal {P}_{E}\\left(u,v,t\\right)&=&\\frac{\\partial }{\\partial t}P_{+}^{\\left(1\\right)}\\left(u,t\\right)P_{+}^{\\left(2\\right)}\\left(v,t\\right)+P_{+}^{\\left(1\\right)}\\left(u,t\\right)\\frac{\\partial }{\\partial t}P_{+}^{\\left(2\\right)}\\left(v,t\\right) \\nonumber \\\\&=&\\left\\lbrace \\frac{\\partial }{\\partial u}\\left[\\left(\\mu u-v_{0}\\right)P_{+}^{\\left(1\\right)}\\left(u,t\\right)\\right]+\\gamma P_{-}^{\\left(1\\right)}\\left(u,t\\right)-\\gamma P_{+}^{\\left(1\\right)}\\left(u,t\\right)\\right\\rbrace P_{+}^{\\left(2\\right)}\\left(v,t\\right) \\nonumber \\\\&+& P_{+}^{\\left(1\\right)}\\left(u,t\\right)\\left\\lbrace \\frac{\\partial }{\\partial v}\\left[\\left(\\mu v-v_{0}\\right)P_{+}^{\\left(2\\right)}\\left(v,t\\right)\\right]+\\gamma P_{-}^{\\left(2\\right)}\\left(v,t\\right)-\\gamma P_{+}^{\\left(2\\right)}\\left(v,t\\right)\\right\\rbrace \\nonumber \\\\&=&\\frac{\\partial }{\\partial u}\\left[\\left(\\mu u-v_{0}\\right)\\mathcal {P}_{E}\\right]+\\gamma \\mathcal {P}_{S}-\\gamma \\mathcal {P}_{E}+\\frac{\\partial }{\\partial v}\\left[\\left(\\mu v-v_{0}\\right)\\mathcal {P}_{E}\\right]+\\gamma \\mathcal {P}_{N}-\\gamma \\mathcal {P}_{E} \\,,$ which indeed coincides with the right-hand side of Eq.", "(REF ).", "The decomposition (REF ) of the Green's function given in the main text is a particular case of Eq.", "(REF ) in which 
$P_{\\sigma _{u}}^{\\left(1\\right)}\\left(u,t=t^{\\prime }\\right)=\\delta \\left(u-u^{\\prime }\\right)\\delta _{\\sigma _{u},\\sigma ^{\\prime }_{u}}$ and similarly in the $v$ direction.", "Similarly, the decomposition (REF ) of the position distribution is a particular case of Eq.", "(REF ) in which the initial condition is $P_{\\sigma _{u}}^{\\left(1\\right)}\\left(u,t=0\\right)=\\frac{1}{2}\\delta \\left(u\\right)$ and similarly in the $v$ direction." ], [ "Marginal distribution of $x$", "In [59], the marginal steady-state distribution of $x$ was calculated for the 2D RTP.", "As a useful check of (REF ) of the main text, we can reproduce this result.", "By using $x=\\left(u+v\\right)/\\sqrt{2}$ , the marginal distribution that we predict is $P_{\\text{marginal,st}}\\left(x\\right)&=&\\int _{-\\infty }^{\\infty }du\\int _{-\\infty }^{\\infty }dv\\mathcal {P}_{\\text{st}}\\left(u,v\\right)\\delta \\left(x-\\frac{u+v}{\\sqrt{2}}\\right)\\nonumber \\\\&=& \\sqrt{2}\\int _{-\\infty }^{\\infty }du\\mathcal {P}_{\\text{st}}\\left(u,v=\\sqrt{2}x-u\\right) \\nonumber \\\\&=&\\frac{\\sqrt{2}}{Z}\\int _{\\sqrt{2}\\,\\left|x\\right|-v_{0}/\\mu }^{v_{0}/\\mu }du\\left[\\left(1-\\left(\\frac{\\mu u}{v_{0}}\\right)^{2}\\right)\\left(1-\\left(\\frac{\\mu }{v_{0}}\\right)^{2}\\left(\\sqrt{2}\\,\\left|x\\right|-u\\right)^{2}\\right)\\right]^{\\beta -1},$ where $Z^{-1}=\\frac{4\\mu ^{2}}{2^{4\\beta }\\left[B\\left(\\beta ,\\beta \\right)v_{0}\\right]^{2}}$ is a normalization factor, and we used the mirror symmetry $P_{\\text{marginal,st}}\\left(-x\\right)=P_{\\text{marginal,st}}\\left(x\\right)$ .", "The integral in (REF ) is in general not straightforward to evaluate in closed form.", "However, for $\\beta = 1$ and $\\beta = 2$ it evaluates to $\\left.P_{\\text{marginal,st}}\\left(x\\right)\\right|_{\\beta =1}=\\frac{\\mu }{2v_{0}}\\left(\\sqrt{2}-\\frac{\\mu \\left|x\\right|}{v_{0}}\\right)$ and $\\left.P_{\\text{marginal,st}}\\left(x\\right)\\right|_{\\beta =2}=\\frac{9\\mu }{8\\sqrt{2}v_{0}}\\left[\\frac{16}{15}-\\frac{8}{3}\\left(\\frac{\\mu x}{v_{0}}\\right)^{2}+\\frac{4\\sqrt{2}}{3}\\left|\\frac{\\mu x}{v_{0}}\\right|^{3}-\\frac{2}{15}\\sqrt{2}\\left|\\frac{\\mu x}{v_{0}}\\right|^{5}\\right]$ respectively, in perfect agreement with [59], [63] (note that in [59] an explicit expression for the marginal distribution, in terms of hypergeometric functions, was obtained for arbitrary $\\beta > 0$ ).", "We can perform the same check for the time-dependent marginal distribution of the $x$ coordinate of a free RTP, comparing with the exact result from Ref. 
[52].", "It turns out to be much simpler to perform the comparison in Fourier space.", "For simplicity, let us choose units in which $\\gamma = v_0 = 1$ .", "The Fourier transform of the distribution (REF ) of the position of a free 1D RTP is given by [31] Eq.", "(REF ) of the main text, which we give here again for convenience: $Q_{\\text{free}}\\left(k,t\\right)=\\int _{-\\infty }^{\\infty }e^{ikx}P_{\\text{free}}\\left(x,t\\right)dx=e^{-t}\\left[\\cosh \\left(t\\sqrt{1-k^{2}}\\right)+\\frac{1}{\\sqrt{1-k^{2}}}\\sinh \\left(t\\sqrt{1-k^{2}}\\right)\\right].$ Now, since $x=\\left(u+v\\right)/\\sqrt{2}$ , $u(t)$ and $v(t)$ both being independent and described by the same distribution (REF ), the marginal distribution $P_{\\text{marginal,free}}\\left(x,t\\right)$ is given by the convolution of the distributions of $u$ and $v$ .", "In Fourier space, the convolution becomes a product, which (taking into account the factor $\\sqrt{2}$ ) leads to $Q_{\\text{marginal,free}}\\left(k,t\\right)=\\int _{-\\infty }^{\\infty }e^{ikx}P_{\\text{marginal,free}}\\left(x,t\\right)dx=\\left[Q_{\\text{free}}\\left(\\frac{k}{\\sqrt{2}},t\\right)\\right]^{2} \\, .$ Using Eq.", "(REF ), we find $\\left[Q_{\\text{free}}\\left(k,t\\right)\\right]^{2} = \\frac{e^{-2t}}{2\\left(1-k^{2}\\right)}\\left[-k^{2}+\\left(2-k^{2}\\right)\\cosh \\left(2t\\sqrt{1-k^{2}}\\right)+2\\sqrt{1-k^{2}}\\sinh \\left(2t\\sqrt{1-k^{2}}\\right)\\right] \\, ,$ where we used the standard identities $\\cosh x\\sinh x=\\frac{\\sinh \\left(2x\\right)}{2},\\quad \\cosh ^{2}x=\\frac{\\cosh \\left(2x\\right)+1}{2},\\quad \\cosh ^{2}x-\\sinh ^{2}x=1 \\, .$ Eq.", "(REF ) is in perfect agreement with the result of [52], [63], [82].", "In fact, in [52], the Fourier transform was inverted and an expression for $P_{\\text{marginal,free}}\\left(x,t\\right)$ was obtained." ] ]
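As a quick independent check of the last identity, the squared Fourier transform can also be compared with the quoted closed form symbolically, in the same units $\gamma = v_0 = 1$. The following sympy sketch is ours and is not part of the original derivation.

```python
import sympy as sp

k, t = sp.symbols("k t", positive=True)
s = sp.sqrt(1 - k**2)

# Fourier transform of the free 1D RTP position distribution (units gamma = v0 = 1).
Q = sp.exp(-t) * (sp.cosh(t * s) + sp.sinh(t * s) / s)

# Stated closed form for Q^2, i.e. the Fourier transform of the marginal
# x-distribution of the free 2D RTP.
rhs = sp.exp(-2 * t) / (2 * (1 - k**2)) * (
    -k**2 + (2 - k**2) * sp.cosh(2 * t * s) + 2 * s * sp.sinh(2 * t * s)
)

print(sp.simplify((Q**2 - rhs).rewrite(sp.exp)))  # expected: 0
print(sp.N((Q**2 - rhs).subs({k: sp.Rational(3, 10), t: sp.Rational(7, 5)})))  # numerical spot check, ~0
```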
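The decoupling that underlies the whole appendix can also be probed numerically: simulate the original 2D dynamics with 90-degree tumbles, rotate to the $(u,v)$ frame, and check that mixed moments factorize. The Monte Carlo sketch below is ours; all parameter values are illustrative choices rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (our choice): trap stiffness, self-propulsion speed, tumbling rate.
mu, v0, gamma = 1.0, 1.0, 1.0
dt, n_steps, n_part = 2e-3, 5_000, 20_000

# Orientations E, N, W, S; the speed along the x/y axes is sqrt(2)*v0,
# matching the drift terms in the Fokker-Planck equations above.
dirs = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
state = rng.integers(0, 4, size=n_part)
pos = np.zeros((n_part, 2))

for _ in range(n_steps):
    vel = np.sqrt(2) * v0 * dirs[state]
    pos += (-mu * pos + vel) * dt                  # overdamped Langevin step
    tumble = rng.random(n_part) < 2 * gamma * dt   # total exit rate from each orientation is 2*gamma
    turn = rng.choice([-1, 1], size=n_part)        # +/- 90 degrees with equal probability
    state = np.where(tumble, (state + turn) % 4, state)

# Rotated coordinates u = (x+y)/sqrt(2), v = (x-y)/sqrt(2).
u = (pos[:, 0] + pos[:, 1]) / np.sqrt(2)
v = (pos[:, 0] - pos[:, 1]) / np.sqrt(2)

# If the problem decouples, u and v are independent in the steady state,
# so these mixed-moment correlations should be close to zero.
print("corr(u, v)     ~", np.corrcoef(u, v)[0, 1])
print("corr(u^2, v^2) ~", np.corrcoef(u**2, v**2)[0, 1])
```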
2207.10445
[ [ "A Linter for Isabelle: Implementation and Evaluation" ], [ "Abstract In interactive theorem proving, formalization quality is a key factor for maintainability and re-usability of developments and can also impact proof-checking performance.", "Commonly, anti-patterns that cause quality issues are known to experienced users.", "However, in many theorem prover systems, there are no automatic tools to check for their presence and make less experienced users aware of them.", "We attempt to fill this gap in the Isabelle environment by developing a linter as a publicly available add-on component.", "The linter offers basic configurability, extensibility, Isabelle/jEdit integration, and a standalone command-line tool.", "We uncovered 480 potential problems in Isabelle/HOL, 14016 in other formalizations of the Isabelle distribution, and an astonishing 59573 in the AFP.", "With a specific lint bundle for AFP submissions, we found that submission guidelines were violated in 1595 cases.", "We set out to alleviate problems in Isabelle/HOL and solved 168 of them so far; we found that high-severity lints corresponded to actual problems most of the time, individual users often made the same mistakes in many places, and that solving those problems retrospectively amounts to a substantial amount of work.", "In contrast, solving these problems interactively for new developments usually incurs only little overhead, as we found in a quantitative user survey with 22 participants (less than a minute for more than 60% of participants).", "We also found that a good explanation of problems is key to the users' ease of solving these problems (correlation coefficient 0.48), and their satisfaction with the end result (correlation coefficient 0.62)." ], [ "Technische Universität München, {megdiche,stevensl,huch}@in.tum.de" ] ]
2207.10424
[ [ "Semantic-Aware Fine-Grained Correspondence" ], [ "Abstract Establishing visual correspondence across images is a challenging and essential task.", "Recently, an influx of self-supervised methods have been proposed to better learn representations for visual correspondence.", "However, we find that these methods often fail to leverage semantic information and over-rely on the matching of low-level features.", "In contrast, human vision is capable of distinguishing between distinct objects as a pretext to tracking.", "Inspired by this paradigm, we propose to learn semantic-aware fine-grained correspondence.", "Firstly, we demonstrate that semantic correspondence is implicitly available through a rich set of image-level self-supervised methods.", "We further design a pixel-level self-supervised learning objective which specifically targets fine-grained correspondence.", "For downstream tasks, we fuse these two kinds of complementary correspondence representations together, demonstrating that they boost performance synergistically.", "Our method surpasses previous state-of-the-art self-supervised methods using convolutional networks on a variety of visual correspondence tasks, including video object segmentation, human pose tracking, and human part tracking." ], [ "Introduction", "Correspondence is considered one of the most fundamental problems in computer vision.", "At their core, many tasks require learning visual correspondence across space and time, such as video object segmentation [47], [14], [72], object tracking [40], [29], [4], [37], and optical flow estimation [17], [31], [53], [60], [61].", "Despite its importance, prior art in visual correspondence has largely relied on supervised learning [70], [27], [64], which requires costly human annotations that are difficult to obtain at scale.", "Other works rely on weak supervision from methods like off-the-shelf optical flow estimators, or synthetic training data, which lead to generalization issues when confronted with the long-tailed distribution of real world images.", "Recognizing these limitations, many recent works [69], [73], [38], [36], [35], [33], [78] are exploring self-supervision to learn robust representations of spatiotemporal visual correspondence.", "Aside from creatively leveraging self-supervisory signals across space and time, these works generally share a critical tenet: evaluation on label propagation as an indication of representation quality.", "Given label information, such as segmentation labels or object keypoints, within an initial frame, the goal is to propagate these labels to subsequent frames based on correspondence.", "Figure: We compare Contrastive Random Walk (CRW)  and MoCo  on three different downstream tasks.", "CRW surpasses MoCo on the label propagation task, but is dramatically outperformed by MoCo on semantic segmentation and image classification (details in Appendix ).Let us briefly consider the human visual system and how it performs tracking.", "Many works have argued that our ability to track objects is rooted in our ability to distinguish and understand differences between said objects [21], [20].", "We have rough internal models of different objects, which we adjust by attending more closely to local locations that require fine-grained matching [81], [3].", "In other words, both high-level semantic information and low-level fined-grained information play an important role in correspondence for real visual systems.", "However, when we examine current self-supervised correspondence 
learning methods, we find them lacking under this paradigm.", "These methods often overprioritize performance on the label propagation task, and fail to leverage semantic information as well as humans can.", "In particular, when representations obtained under these methods are transferred to other downstream tasks which require a deeper semantic understanding of images, performance noticeably suffers (see Figure REF ).", "We show that label propagation and tracking-style tasks rely on frame-to-frame differentiation of low-level features, a kind of “shortcut” exploited by the contrastive-based self-supervised algorithms developed so far.", "Thus, representations learned via these tasks contain limited semantic information, and underperform drastically when used in alternative tasks.", "To this end, we propose Semantic-aware Fine-grained Correspondence (SFC), which simultaneously takes into account semantic correspondence and fine-grained correspondence.", "Firstly, we find that current image-level self-supervised representation learning methods e.g.", "MoCo [25] force the mid-level convolutional features to implicitly capture correspondences between similar objects or parts.", "Second, we design an objective which learns high-fidelity representations of fine-grained correspondence (FC).", "We do this by extending prior image-level loss functions in self-supervised representation learning to a dense paradigm, thereby encouraging local feature consistency.", "Crucially, FC does not use temporal information to learn this low-level correspondence, but our ablations show that this extension alone makes our model competitive with previous methods relying on temporal signals in large video datasets for pretraining.", "Prior works [74], [77] have shown that image-level self-supervision can further facilitate the dense self-supervision in a multitask framework.", "However, we surprisingly find that our fine-grained training objective and image-level semantic training objectives are inconsistent: each of them requires the model to encode conflicting information about the image, leading to degradation in performance when used in conjunction.", "We hypothesize that it is necessary to have two independent models, and propose a late fusion operation to combine separately pretrained semantic correspondence and fine-grained correspondence feature vectors.", "Figure REF overviews the proposed method.", "Through our ablations, we categorically verify that low-level fine-grained correspondence and high-level semantic correspondence are complementary, and indeed orthogonal, in the benefits they bring to self-supervised representation learning.", "The main contributions of our work are as follows: We propose to learn semantic-aware fine-grained correspondence (SFC), while most previous works consider and improve the two kinds of correspondence separately.", "We design a simple and effective self-supervised learning method tailored for low-level fine-grained correspondence.", "Despite using static images and discarding temporal information, we outperform previous methods trained on large-scale video datasets.", "Late fusion is an effective mechanism to prevent conflicting image-level and fine-grained training objectives from interfering with each other.", "Our full model (SFC) sets the new state-of-the-art for self-supervised approaches using convolutional networks on various video label propagation tasks, including video object segmentation, human pose tracking, and human part tracking.", "Figure: Overview of 
Semantic-aware Fine-grained Correspondence learning framework.", "By maximizing agreement between positive (similar) image pairs, convolutional representations capture semantic correspondences between similar objects implicitly.", "By encouraging the spatially close local feature vectors to be consistent, model can learn fine-grained correspondence explicitly.", "For downstream task, we utilize two kinds of correspondence together to achieve complementary effects." ], [ "Related Work", "Self-Supervised Representation Learning    Self-supervised representation learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets.", "Specifically, methods using instance-level pretext tasks have recently become dominant components in self-supervised learning for computer vision  [75], [48], [28], [1], [62], [25], [11], [46], [9], [6], [19], [52].", "Instance-level discrimination aims to pull embeddings of augmented views of the same image (positive pairs) close to each other, while trying to push away embeddings from different images (negative pairs).", "Recently, some works [22], [12] have discovered that even without negative pairs, self-supervised learning can exhibit strong performance.", "To obtain better transfer performance to dense prediction tasks such as object detection and semantic segmentation, other works [50], [77], [74] explore pretext tasks at the pixel level for representation learning.", "But empirically, we find that these methods fail to leverage fine-grained information well.", "Our fine-grained correspondence network (FC) is most closely related to PixPro [77] which obtains positive pairs by extracting features from the same pixel through two asymmetric pipelines.", "Both FC and PixPro can be seen as dense versions of BYOL [22], but the two methods have completely different goals.", "FC has many design choices tailored for correspondence learning: FC preserves spatial sensitivity by avoiding entirely a pixel propagation module which introduces a certain smoothing effect.", "Furthermore, we discard color augmentation and use higher resolution feature maps, as we find both modifications are beneficial to the fine-grained correspondence task.", "Finally, FC can achieve competitive performance to predominant approaches, with compelling computational and data efficiency.", "In contrast, the transfer performance of PixPro on the correspondence task is far behind its instance-level counterpart [22] and our FC.", "We note that DINO [7], a self-supervised Vision Transformer (ViT) [16], exhibits surprisingly strong correspondence properties and competitive performance on DAVIS-2017 benchmark.", "We speculate that the success of DINO on this task is attributed to the architecture of ViT and much more computation.", "Self-Supervised Correspondence Learning    Recently, numerous approaches have also been developed for correspondence learning in a self-supervised manner [69], [73], [71], [36], [35], [38], [33], [78].", "The key idea behind a number of these methods [69], [36], [35] is to propagate the color of one frame in a video to future frames.", "TimeCycle [73] relies on a cycle-consistent tracking pretext task.", "Along this line, CRW [33] cast correspondence as pathfinding on a space-time graph, also using cycle-consistency as a self-supervisory signal.", "VFS [78] propose to learn correspondence implicitly by performing image-level similarity learning.", "Despite the success of these methods, they all rely heavily on temporal information 
from videos as the core form of self-supervision signal.", "In our work, we demonstrate that representations with good space-time correspondence can be learned even without videos.", "Moreover, our framework is an entirely alternative perspective on correspondence learning, which can be flexibly adapted with other video-based methods to further improve performance.", "Semantic Correspondence    We borrow the notion of semantic correspondence from literature [54], [55], [56], [44], [45], [41], [43], [63], which aim to establish dense correspondences across images depicting different instances of the same object categories.", "Evaluation of these methods exists solely on image datasets with keypoint annotations, which can be more forgiving and translates poorly to the real world.", "Our semantic correspondence is evaluated on video, which we argue is a much more realistic setting for correspondence.", "In addition, many supervised semantic correspondence approaches [13], [44], [30], [41] adopt a CNN pre-trained on image classification as their frozen backbone, but we explore a self-supervised pre-trained backbone as an alternative." ], [ "Method", "While our framework is compatible with a wide array of contemporary self-supervised representation methods, we demonstrate its efficacy with two recent approaches: MoCo [25] and BYOL [22], which are reviewed in Section REF .", "Next, in Section REF , we argue that image-level methods implicitly learn high-level semantic correspondence.", "In Section REF , we propose our framework to improve fine-grained correspondence learning.", "Finally, in Section REF , we show how to unify these two complementary forms of correspondence to improve performance on video label propagation tasks." ], [ "Background", "In image-level self-supervised representation learning, we seek to minimize a distance metric between two random augmentations $\\mathbf {x}_{1}$ and $\\mathbf {x}_{2}$ of a single image $\\mathbf {x}$ .", "One popular framework for doing this is contrastive learning [23].", "Formally, two augmented views $\\mathbf {x}_{1}$ and $\\mathbf {x}_{2}$ are fed into an online encoder and target encoder respectively, where each encoder consists of a backbone $\\mathbf {f}$ (e.g.", "ResNet), and a projection MLP head $\\mathbf {g}$ .", "The $l_2$ -normalized output global feature vectors for $\\mathbf {x}_{1}$ and $\\mathbf {x}_{2}$ can be represented as $\\mathbf {z}_{1} \\triangleq \\mathbf {g}_{\\theta }(\\mathbf {f}_{\\theta }(\\mathbf {x}_{1}))$ and $\\mathbf {z}_{2} \\triangleq \\mathbf {g}_{\\xi }(\\mathbf {f}_{\\xi }(\\mathbf {x}_{2}))$ , where $\\theta $ and $\\xi $ are parameters of the two respective networks.", "Let the negative features obtained from $K$ different images be represented by the set $\\mathcal {S} = \\lbrace \\mathbf {s}_{1}, \\mathbf {s}_{2}, \\dots , \\mathbf {s}_{K} \\rbrace $ .", "Then contrastive learning uses the InfoNCE [48] to pull $\\mathbf {z}_{1}$ close to $\\mathbf {z}_{2}$ while pushing it away from negative features: $\\mathcal {L}_{\\text{InfoNCE}} = -\\log \\frac{\\exp (\\mathbf {z}_{1} {\\cdot } \\mathbf {z}_{2} / \\tau )}{\\exp (\\mathbf {z}_{1} {\\cdot } \\mathbf {z}_{2} / \\tau ) + \\sum _{k = 1}^K \\exp (\\mathbf {z}_{1} {\\cdot } \\mathbf {s}_{k} / \\tau )}$ where $\\tau $ is the temperature hyperparameter.", "While numerous methods have been explored to construct the set of negative samples $\\mathcal {S}$ , we choose MoCo [25] for obtaining semantic correspondence representations, which achieves this goal via 
a momentum-updated queue.", "In particular, the target encoder's parameters $\\xi $ are the exponential moving average of the online parameters $\\theta $ : $\\xi \\leftarrow m \\xi + (1-m)\\theta , \\qquad m \\in [0, 1]$ where $m$ is the exponential moving average parameter.", "Some recent works [22], [12] show that it is not necessary to use negative pairs to perform self-supervised representation learning.", "One such method, BYOL [22], relies on an additional prediction MLP head $\\mathbf {q}_{\\theta }$ to transform the output of online encoder $\\mathbf {p}_{1} \\triangleq \\mathbf {q}_{\\theta }(\\mathbf {z}_{1}) $ .", "The contrastive objective then reduces to simply minimizing the negative cosine distance between the predicted features $\\mathbf {p}_{1}$ and the features obtained from the target encoder $\\mathbf {z}_{2}$ ($l_2$ -normalized): $\\mathcal {L}_{\\text{global}} = - \\frac{\\left<\\mathbf {p}_1, \\mathbf {z}_2 \\right>}{\\Vert \\mathbf {p}_1\\Vert _{2}\\cdot \\Vert \\mathbf {z}_2\\Vert _{2}}$ Note again that MoCo and BYOL bear striking similarities in their formulation and training objectives.", "In the following section, we hypothesize that such similarities in frameworks lead to similarities in types of features learned.", "In particular, we claim that image-level representations in general contain information about semantic correspondences." ], [ "Semantic Correspondence Learning", "Representations learned by current self-supervised correspondence learning methods may contain limited semantic information.", "To make the representations more neurophysiologically intuitive, we add the crucially missing semantic correspondence learning into our method.", "Recent image-level self-supervised methods learn representations by imposing invariances to various data augmentations.", "Two random crops sampled from the same image, followed by strong color augmentation [9] are considered as positive pairs.", "The augmentation significantly changes the visual appearance of the image but keeps the semantic meaning unchanged.", "The model can match positive pairs by attending only to the essential part of the representation, while ignoring other non-essential variations.", "As a result, different images with similar visual concepts are grouped together, inducing a latent space with rich semantic information [65], [10], [66].", "This is evidenced by the results shown in Figure REF , where MoCo [25] achieve high performance on tasks that require a deeper semantic understanding of images.", "Moreover, previous works [42], [78] demonstrate that correspondence naturally emerges in the middle-level convolutional features.", "Thus we conclude that current self-supervised representation methods can implicitly learn semantic correspondence well.", "We utilize one approach, MoCo [25], in our downstream correspondence task.", "In particular, only the pre-trained online backbone $\\mathbf {f}_{\\theta }$ is retained, while all other parts of the network, including the online projection head $\\mathbf {g}_{\\theta }$ and target encoder $\\mathbf {f}_{\\xi }, \\mathbf {g}_{\\xi }$ , are discarded.", "We use $\\mathbf {f}_{\\theta }$ to encode each image as a semantic correspondence feature map: $\\mathbf {F} = \\mathbf {f}_{\\theta }(\\mathbf {x}) \\in \\mathbb {R}^{H \\times W \\times C_{s}} $ , where $H$ and $W$ are spatial dimensions.", "Note also that we can adjust the size of the feature map by changing the stride of residual blocks, offering additional flexibility in the scale of semantic 
information we wish to imbue our representations with.", "Finally, we comment that the emergent mid-level feature behavior extends readily to MoCo, and moreover also to other self-supervised methods like BYOL [22] and SimCLR [9], as the encoders for all such methods are based on ResNet-style architectures.", "We can thus flexibly swap out the semantic correspondence backbone for any of these image-level self-supervised representations." ], [ "Fine-grained Correspondence Learning", "Only considering semantic information is not enough for correspondence learning, which often requires analyzing low-level variables such as object edge, pose, articulation, precise location and so on.", "Like most previous self-supervised methods, we also incorporate low-level fine-grained correspondence in our approach.", "BYOL-style methods [22] learn their representations by directly maximizing the similarity of two views of one image (positive pairs) in the feature space.", "This paradigm naturally connects with our intuitive understanding of correspondence: similar objects, parts and pixels should have similar representations.", "We are thus inspired to generalize this framework to a dense paradigm to learn fine-grained correspondence specifically.", "At a high level, we learn our embedding space by pulling local feature vectors belonging to the same spatial region close together.", "Specifically, given two augmented views $\\mathbf {x}_1$ and $\\mathbf {x}_{2}$ of one image, we extract their dense feature maps $\\mathbf {F}_{1} \\triangleq \\tilde{\\mathbf {f}}_{\\theta }(\\mathbf {x}_{1}) \\in \\mathbb {R}^{H \\times W \\times C_{f}} $ and $\\mathbf {F}_{2} \\triangleq \\tilde{\\mathbf {f}}_{\\xi }(\\mathbf {x}_{2}) \\in \\mathbb {R}^{H \\times W \\times C_{f}}$ by removing the global pooling layer in the encoders.", "We adopt a ResNet-style backbone, and we can thus reduce the stride of some residual blocks in order to obtain a higher resolution feature map.", "In addition, to maintain dense 2D feature vectors, we replace the MLPs in the projection head and prediction head with $1 \\times 1$ convolution layers.", "Then we can get dense prediction feature vectors $\\mathbf {P}_{1} \\triangleq \\tilde{\\mathbf {q}}_{\\theta }(\\tilde{\\mathbf {g}}_{\\theta }(\\mathbf {F}_{1})) \\in \\mathbb {R}^{H \\times W \\times D}$ and dense projection feature vectors $\\mathbf {Z}_{2} \\triangleq \\tilde{\\mathbf {g}}_{\\xi }(\\mathbf {F}_{2}) \\in \\mathbb {R}^{H \\times W \\times D}$ .", "$\\mathbf {P}_{1}^{i}$ denotes the local feature vector at the $i$ -th position of $\\mathbf {P}_{1}$ .", "Now, a significant question remains: for a given local feature vector $\\mathbf {P}_{1}^{i}$ , how can we find its positive correspondence local feature vector in $\\mathbf {Z}_{2}$ ?", "Positive Correspondence Pairs    Note that after we apply different spatial augmentations (random crop) to the two views, the local feature vectors on the two feature maps are no longer aligned.", "An object corresponding to a local feature vector in one view may even be cropped in another view.", "Thus, we only consider feature vectors corresponding to the same cropped region (overlapped areas of two views) and define a small spatial neighborhood around each local feature vector.", "All the local feature vectors in the spatial neighborhood are designated positive samples.", "Specifically, we construct a binary positive mask $\\mathbf {M} \\in \\mathbb {R}^{H\\cdot W \\times H \\cdot W} $ by computing the spatial distance between all pairs of 
local feature vectors with: $\\mathbf {M}_{ij}= {\\left\\lbrace \\begin{array}{ll}1 & \\operatorname{dist}(\\Phi (\\mathbf {P}_{1}^{i}), \\Phi (\\mathbf {Z}_{2}^{j})) \\leqslant r \\\\0 & \\operatorname{dist}(\\Phi (\\mathbf {P}_{1}^{i}), \\Phi (\\mathbf {Z}_{2}^{j})) > r \\\\\\end{array}\\right.", "}$ $\\Phi $ denotes an operation that translates the coordinates of the local feature vector to the original image space.", "$\\operatorname{dist}$ denotes the distance between coordinates of local feature vectors $\\mathbf {P}_{1}^{i}$ and $\\mathbf {Z}_{2}^{j}$ in the original image space.", "$r$ is positive radius, which controls a notion of locality.", "As we show in the experiment, this is a very important hyperparameter.", "In summary, all 1s in the $i$ -th row of $\\mathbf {M}$ represent the local feature vectors in $\\mathbf {Z}_{2}$ which are positive samples of the $i$ -th vector in $\\mathbf {P}_{1}$ .", "This process is illustrated in Figure REF .", "Figure: For a feature vector in one view 𝐱 1 \\mathbf {x}_{1}, we designate all the feature vectors in view 𝐱 2 \\mathbf {x}_{2} which belonging to the same spatial region as positive pairs.Learning Objectives    We construct a pairwise similarity matrix $\\mathbf {S}$ , where $\\mathbf {S} \\in \\mathbb {R}^{H\\cdot W \\times H\\cdot W} $ with: $\\mathbf {S}_{ij} = \\operatorname{sim}(\\mathbf {P}_{1}^{i}, \\mathbf {Z}_{2}^{j})$ $\\operatorname{sim}(\\mathbf {u}, \\mathbf {v})=\\frac{\\left<\\mathbf {u}, \\mathbf {v}\\right>}{\\Vert \\mathbf {u}\\Vert _2\\cdot \\Vert \\mathbf {v}\\Vert _2}$ denotes the cosine similarity between two vectors.", "We multiply the similarity matrix $\\mathbf {S}$ and the positive mask $\\mathbf {M}$ to get the masked similarity matrix $\\widetilde{\\mathbf {S}}=\\mathbf {S} \\odot \\mathbf {M}$ .", "Finally, the loss function seeks to maximize each element in the masked similarity matrix $\\widetilde{\\mathbf {S}}$ : $\\mathcal {L}_{\\text{local}} = - \\frac{\\sum _{i=1}^{H\\cdot W}\\sum _{j=1}^{H\\cdot W} \\widetilde{\\mathbf {S}}_{ij}}{\\sum _{i=1}^{H\\cdot W}\\sum _{j=1}^{H\\cdot W} \\mathbf {M}_{ij}}$" ], [ "Fusion of Correspondence Signals", "To combine semantic correspondence (Sec.", "REF ) and fine-grained correspondence (Sec.", "REF ) representations, one intuitive approach is simultaneously train with both semantic-level and fine-grained level losses, like [74], [77], [2].", "However, our investigations reveal that jointly using both these objectives may not be sensible, as the representations fundamentally conflict, in two main ways.", "1)receptive fields.", "We find that fine-grained correspondence relies heavily on a higher resolution feature map (see Appendix REF ).", "But trivially increasing the feature resolution of a semantic-level method like MoCo [25] during training causes performance on the label propagation task to drop a lot.", "This is because low-level fine-grained information needs small receptive fields while relatively large receptive fields are necessary to encode global high-level semantic information.", "2)data augmentation.", "Similar to VFS [78], we find that color augmentation (e.g.", "color distortion and grayscale conversion) is harmful to learning fine-grained correspondence, since fine-grained correspondence heavily relies on low-level color and texture details.", "In contrast, image-level self-supervised learning methods learn semantic representations by imposing invariances on various data transformations.", "For example, as seen in the augmentations ablation for 
SimCLR (Fig.", "5 in [9]), removing color augmentation leads to severe performance issues.", "We conclude that an end-to-end framework utilizing multiple levels of supervision does not always work, especially when these modes of supervision have different requirements on both the model and data sides (see Sec.", "REF for experimental evidence).", "We argue it is necessary to decouple the two models, which is consistent with how humans also attend very differently when re-identifying an object’s main body versus its accurate pixel boundary.", "Inspired by Two-Stream ConvNets[58], which use a late fusion to combine two kinds of complementary information, and hypercolumns [24], which effectively leverage information across different layers of CNNs, we implement a similar mechanism to fuse our orthogonal correspondences.", "For a given image, suppose we have two networks, one which produces a semantic correspondence feature map $\\mathbf {F}_{s} \\triangleq \\mathbf {f}_{\\theta }(\\mathbf {x}) \\in \\mathbb {R}^{H \\times W \\times C_{s}} $ and one which produces a fine-grained correspondence feature map $\\mathbf {F}_{f} \\triangleq \\tilde{\\mathbf {f}}_{\\theta }(\\mathbf {x}) \\in \\mathbb {R}^{H \\times W \\times C_{f}} $ .", "Note that these two feature maps can have different channel dimensions.", "We consider channel-wise concatenation as a simple and intuitive way to fuse these feature maps: $\\mathbf {F} = [\\operatorname{L2Norm}(\\mathbf {F}_{s}), \\lambda \\cdot \\operatorname{L2Norm}(\\mathbf {F}_{f})]$ where $\\operatorname{L2Norm}$ denotes an $l_{2}$ normalization of local feature vectors in every spatial location.", "This ameliorates issues of scale, considering that the two feature maps are obtained under different training objectives which attend to features of different scales.", "$\\lambda $ is a hyperparameter to balance two feature maps.", "Note that $\\mathbf {F}$ also needs to be re-normalized when it is employed in downstream tasks, like label propagation." ], [ "Implementation Details", "Any off-the-shelf image-level self-supervised pre-trained network can serve as our semantic correspondence backbone.", "In our implementation, we use MoCo as the default network, with ResNet-50 [26] as the base architecture and pre-trained on the 1000-class ImageNet [15] training set with strong data augmentation.", "As for our fine-grained correspondence network, we use YouTube-VOS [79] as our pre-training dataset for direct comparison with previous works [35].", "It contains 3471 videos totalling 5.58 hours of playtime, much smaller than Kinetics400 [8] (800 hours).", "Although Youtube-VOS is a video dataset, we treat it as a conventional image dataset and randomly sample individual frames during training (equivalent to 95k images).", "Crucially, this discards temporal information and correspondence signals our model would otherwise be able to exploit.", "We use cropping-only augmentation.", "Following  [33], [38], [73], we adopt ResNet-18 as the backbone.", "Please see Appendix REF for augmentation, architecture and optimization details." ], [ "Experiments", "We evaluate the learned representation without fine-tuning on several challenging video propagation tasks involving objects, human pose and parts.", "We will first introduce our detailed evaluation settings and baselines, then we conduct the comparison with the state-of-the-art self-supervised algorithms.", "Finally, we perform extensive ablations on different elements for SFC." 
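To make the late-fusion rule $\mathbf {F} = [\operatorname{L2Norm}(\mathbf {F}_{s}), \lambda \cdot \operatorname{L2Norm}(\mathbf {F}_{f})]$ concrete, a minimal PyTorch-style sketch is given below; the function name, the default $\lambda $ and the tensor layout are our own assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def fuse_correspondence(feat_sem, feat_fine, lam=0.5):
    """Late fusion of semantic and fine-grained feature maps (sketch of the rule above).

    feat_sem:  (B, C_s, H, W) semantic-correspondence features.
    feat_fine: (B, C_f, H, W) fine-grained-correspondence features.
    lam:       balancing weight lambda; 0.5 is only a placeholder value.
    """
    feat_sem = F.normalize(feat_sem, p=2, dim=1)    # per-location L2 normalization
    feat_fine = F.normalize(feat_fine, p=2, dim=1)
    fused = torch.cat([feat_sem, lam * feat_fine], dim=1)
    # Re-normalize before computing affinities for label propagation.
    return F.normalize(fused, p=2, dim=1)
```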
], [ "Label propagation", "Ideally, a model with good space-time correspondence should be able to track an arbitrary user-annotated target object throughout a video.", "Previous works formulate this kind of tracking task as video label propagation [73], [38], [33], [78].", "We follow the same evaluation protocol as prior art [33] for consistency and equitable comparison.", "At a high level, we use the representation from our pre-trained model as a similarity function.", "Given the ground-truth labels in the first frame, a recurrent inference strategy is applied to propagate the labels to the rest of the frames.", "See Appendix REF for detailed description.", "We compare with state-of-the-art algorithms on DAVIS-2017 [51], a widely-used publicly-available benchmark for video object segmentation.", "To see whether our method can generalize to more visual correspondence tasks, we further evaluate our method on JHMDB benchmark [34], which involves tracking 15 human pose keypoints, and on the Video Instance Parsing (VIP) benchmark [82], which involves propagating 20 parts of the human body.", "We use the same settings as  [33], [38] and report the standard metrics, namely region-based similarity $\\mathcal {J}$ and contour-based accuracy $\\mathcal {F}$  [49] for DAVIS, probability of a correct pose (PCK) metric [80] for JHMDB and mean intersection-over-union (IoU) for VIP.", "Table: Video object segmentation results on the DAVIS-2017 val set.", "Dataset indicates dataset(s) used for pre-training, including: I=ImageNet, V=ImageNet-VID, C=COCO, D=DAVIS-2017, P=PASCAL-VOC, J=JHMDB.", "☆\\star indicates that the method uses its own label propagation algorithm." ], [ "Baselines", "We compare with the following baselines: Instance-Level Pre-Trained Representations: We consider supervised and self-supervised pre-trained models (MoCo, BYOL, SimSiam, etc.)", "on ImageNet.", "We also compare with two recent video-based self-supervised representation learning baselines: VINCE [19] and VFS [78].", "We evaluate VFS pre-trained model using our label propagation implementation (official CRW [33] evaluation code).", "Pixel-Level Pre-Trained Representations: We evaluate representations trained with pixel-level self-supervised proxy tasks: PixPro[77], DetCo[76], DenseCL[74].", "Task-Specific Temporal Correspondence Representations: There are many self-supervised methods designed specifically for visual correspondence learning and evaluated on label propagation.", "We include these for a more comprehensive analysis: Colorization [69], CorrFlow [36], MAST [35], TimeCycle [73], UVC [38], CRW [33].", "We compare our method against previous self-supervised methods in Table REF .", "In summary, our results strongly validate the design choices in our model.", "In particular, the full semantic-aware fine-grained correspondence network (SFC), achieves state-of-the-art performance on all tasks investigated.", "SFC significantly outperforms other methods that learn only semantic correspondence (MoCo, $65.9 \\rightarrow 71.2$ on DAVIS-2017) or only fine-grained correspondence (FC, $67.7 \\rightarrow 71.2$ on DAVIS-2017).", "SFC even outperforms several supervised baselines specially designed for video object segmentation and human part tracking.", "Note also that our fine-grained correspondence network (FC) can achieve comparable performance on DAVIS and JHMDB with methods like CRW, despite training with far less data and discarding temporal information.", "The performance of FC on VIP is lower, but it may be further 
improved by exploiting more inductive bias, e.g., temporal context or viewpoint changes in videos.", "We show the results on DAVIS-2017 of FC using different pre-training datasets in Appendix REF .", "FC achieves 67.9 $\\mathcal {J\\&F}_\\textrm {m}$ when pre-trained on ImageNet.", "This suggests that a larger dataset offers marginal benefits for fine-grained correspondence learning, which is largely different from learning semantic correspondence.", "When replacing YouTube-VOS pre-trained FC with ImageNet pre-trained one, SFC still achieves 71.3 $\\mathcal {J\\&F}_\\textrm {m}$ .", "This indicates that the performance gain of SFC doesn't come from the extra YouTube-VOS dataset.", "We use YouTube-VOS for faster training and fair comparisons of other correspondence learning methods.", "We also report results of SFC on semantic segmentation and ImageNet-1K linear probing in Appendix REF .", "Our SFC achieves improved results on all considered tasks, showing strong generalization ability and the flexibility of our core contribution.", "Figure: Qualitative results for label propagation.", "Given ground-truth labels in the first frame (outlined in blue), our method can propagate them to the rest of frames.", "For more results, please refer to the Appendix ." ], [ "Visualization", "Figure REF shows samples of video label propagation results.", "We further visualize the learned correspondences of our model in Figure REF , compared with its components, MoCo and FC.", "We notice that the correspondence map of MoCo tends to scatter across the entire visual object, indicating that it focuses more on object-level semantics instead of low-level fine-grained features.", "On the contrary, the correspondence map of FC is highly concentrated, but sometimes loses track of the source pixel, indicating a failure to capture high-level semantics.", "By balancing semantics and fine-grained correspondences, our proposed method SFC is able to overcome their respective drawbacks and give the most accurate correspondence." ], [ "Ablative Analysis", "In this section, we investigate our results on video object segmentation using DAVIS-2017 in more detail, and outline several ablations on important design choices throughout our model architectures and pipelines." 
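For reference, the region similarity $\mathcal {J}$ reported in these tables is the intersection-over-union between a propagated mask and the ground-truth mask; a minimal sketch of the per-mask computation follows (our own helper, with the per-object and per-frame averaging of the benchmark omitted).

```python
import numpy as np

def region_similarity(pred_mask, gt_mask):
    """Region similarity J (Jaccard index / IoU) for a single binary mask."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else inter / union
```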
], [ "Fusion Strategy", "We perform experiments by combining the FC training objective with a global image-level loss, resulting in an end-to-end multi-task framework.", "But we find the two losses fail to boost performance synergistically.", "For example, when we add a BYOL loss to FC for joint optimization (see Appendix REF for details), the performance on DAVIS-2017 drops a little (67.7 $\\rightarrow $ 67.2).", "The reason is that the two losses need different receptive fields and augmentations.", "The optimal configuration of FC model will induce a sub-optimal solution under the image-level loss, and vice versa.", "Thus, it is sensible to train two independent models and use concatenation to fuse the two different kinds of representations.", "Table: Fusion of two networks with the same kind of correspondence.", "FC denotes our fine-grained correspondence network; it achieves 67.7 $\\mathcal {J}\\&\\mathcal {F}_\\textrm {m}$ on DAVIS-2017.", "Other single model results can be found in Table .", "One may expect the concatenation operation to be some form of model ensemble.", "Does combining two arbitrary networks lead to any appreciable improvement in performance?", "To answer this, we conduct experiments on two sets of models: in the first set, all models have two semantic correspondence networks, while in the second set, all models have two fine-grained correspondence networks.", "Results are shown in Table REF .", "We observe that if two networks have the same type of correspondence, their combination leads to unremarkable increases in performance.", "In Appendix , we show that we can flexibly replace the semantic correspondence backbone (MoCo $\\rightarrow $ InstDis, SimCLR, BYOL, etc.) and still maintain strong performance on DAVIS.", "This strongly confirms our hypothesis that image-level self-supervised representations in general contain information about semantic correspondence.", "It also supports our framing of semantic correspondence and fine-grained correspondence as orthogonal sources of information.", "Next, we mainly conduct a series of ablation studies on our fine-grained network (FC)." ], [ "Crop Size and Positive Radius", "When we apply cropping to an image, a random patch is selected, with an area uniformly sampled between $\\gamma _{1}$ (lower bound) and $\\gamma _{2}$ (upper bound) of that of the original image.", "In Figure REF , we plot FC model performance on different ratios of crop size area, by varying $\\gamma _{1}$ : $\\lbrace 0, 0.08, 0.2, 0.3\\rbrace $ and fixing $\\gamma _{2}$ to 1.", "Simultaneously, for every lower bound $\\gamma _{1}$ , we investigate how different positive radii $r$ can also affect performance on correspondence learning.", "We find that as the lower bound $\\gamma _{1}$ increases, model performance worsens.", "$\\gamma _{1}=0$ yields relatively strong performance under a wide range of positive radii $r$ .", "We conjecture that using a small lower bound $\\gamma _{1}$ results in larger scale and translation variations between two views of one image, which induces strong spatial augmentation and thus allows our correspondence learning to rely on scale-invariant representations.", "We also observe that an appropriate positive radius $r$ is crucial for fine-grained correspondence learning.", "On the DAVIS dataset, we show that a large (smooth) or small (sharp) $r$ is demonstrably harmful to performance.", "Finally, for different $\\gamma _{1}$ , the optimal value of $r$ is different." 
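To make the map $\Phi $ and the positive radius $r$ concrete, one possible coordinate-mapping helper is sketched below; the normalized-coordinate convention and the function name are assumptions of ours, not necessarily what the authors implemented.

```python
import torch

def grid_to_image_coords(crop_box, grid_size):
    """Map feature-grid cell centres of one augmented view back to the original image.

    crop_box:  (x0, y0, w, h) of the random crop, in normalized [0, 1] image coordinates.
    grid_size: (H, W) spatial size of that view's feature map.
    Returns an (H*W, 2) tensor of (x, y) positions, i.e. a discrete version of Phi.
    """
    x0, y0, w, h = crop_box
    H, W = grid_size
    xs = x0 + (torch.arange(W, dtype=torch.float32) + 0.5) / W * w   # column centres
    ys = y0 + (torch.arange(H, dtype=torch.float32) + 0.5) / H * h   # row centres
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([xx.flatten(), yy.flatten()], dim=1)
```

Thresholding the pairwise distances between two such coordinate grids at $r$ then yields the binary positive mask $\mathbf {M}$.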
], [ "Data Augmentation", "VFS [78] has pointed out that color augmentation jeopardizes fine-grained correspondence learning.", "To systematically study the effects of individual data augmentations, we investigate the performance of our FC model on DAVIS when applying random cropping and another common augmentation (random flip, color jittering, etc.).", "We report the results in Table REF .", "Among all color data augmentations, the one that has the greatest negative impact on fine-grained correspondence learning is actually color dropping (grayscale conversion).", "This is in contrast to image-level self-supervised learning, where strong color augmentation [9] is crucial for learning good representations.", "We adopt random crop as the only augmentation in our best-performing models." ], [ "Conclusion and Discussions", "We have developed a novel framework to learn both semantic and fine-grained correspondence from still images alone.", "We demonstrate that these two forms of correspondence offer complementary information, thereby facilitating a simple yet intuitive fusion scheme which leads to state-of-the-art results on a number of downstream correspondence tasks.", "In this work, we mainly explore the correspondence properties of ConvNet.", "Whether ViT [16] also benefits from dense fine-grained self-supervision and combination of two kinds of correspondence is an interesting open question left to future exploration." ], [ "Acknowledgements.", "This work is supported by the Ministry of Science and Technology of the People's Republic of China, the 2030 Innovation Megaprojects “Program on New Generation Artificial Intelligence” (Grant No.", "2021AAA0150000).", "This work is also supported by a grant from the Guoqiang Institute, Tsinghua University." ], [ "FC pre-training", "The implementation details of our fine-grained correspondence network are as follows.", "Data Augmentation   We use only spatial augmentation, where two random crops with scale $[0.0, 1.0]$ from the image are generated and resized into $256\\times 256$ .", "Architectures   Following  [33], [38], [73], we adopt ResNet-18 as the backbone $\\mathbf {f}$ and reduce the stride of last two residual blocks ($\\texttt {res3}$ and $\\texttt {res4}$ ) to 1.", "The modified backbone produces a feature map with size $32 \\times 32$ (ablation in Appendix REF ).", "The dense projection and prediction head use the same architecture: a $1 \\times 1$ convolution layer with 2048 output channels followed by batch normalization and a ReLU activation, and a final $1 \\times 1$ convolution layer with output dimension 256.", "The positive radius $r$ used to control the size of spatial neighborhood is set to $0.5$ .", "Optimization   We train the model with the Adam optimizer for 60k iterations.", "The learning rate is set to $0.001$ .", "The weight decay is set to 0.", "The batch size is 96.", "For the target network, the exponential moving average parameter $\\tau $ starts from 0.99 and gradually increases to 1 under a cosine schedule, following [22].", "The whole model can be trained on a single 24GB NVIDIA 3090 GPU." 
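Putting the pre-training details above together, the following is a minimal sketch (our own code, assuming PyTorch) of the dense objective $\mathcal {L}_{\text{local}}$ and of the momentum update of the target encoder; tensor shapes, argument names and the way grid coordinates are supplied are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fc_loss(p1, z2, coords1, coords2, radius):
    """Dense BYOL-style loss L_local over positive pairs within the positive radius r.

    p1:      (B, D, H, W) online predictions for view 1.
    z2:      (B, D, H, W) target projections for view 2 (treated as constant).
    coords1: (B, H*W, 2) view-1 cell positions in original-image coordinates (the map Phi).
    coords2: (B, H*W, 2) view-2 cell positions in the same coordinate system.
    radius:  positive radius r; its scale must match the coordinate units used for Phi
             (the paper reports r = 0.5).
    """
    p1 = F.normalize(p1.flatten(2), dim=1).transpose(1, 2)    # (B, HW, D)
    z2 = F.normalize(z2.flatten(2), dim=1).detach()           # (B, D, HW), no gradient
    sim = torch.bmm(p1, z2)                                   # (B, HW, HW) cosine similarities S
    mask = (torch.cdist(coords1, coords2) <= radius).float()  # binary positive mask M
    return -(sim * mask).sum() / mask.sum().clamp(min=1.0)    # negative mean over positives

@torch.no_grad()
def ema_update(online, target, m):
    """Momentum update of the target encoder: xi <- m * xi + (1 - m) * theta."""
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(m).add_((1.0 - m) * p_o)
```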
], [ "Label Propagation", "We follow the same label propagation algorithm as in [33].", "Specifically, given the ground-truth labels in the first frame, a recurrent inference strategy is applied to propagate the labels to the rest of the frames: we calculate the similarity between the current frame and the first frame (to provide ground-truth labels) as well as the preceding $m$ frames (to provide predicted labels).", "We reduce the stride of the penultimate residual block ($\\texttt {res4}$ ) of the backbone network to 1 and use its output (stride 8) to compute a dense similarity matrix.", "To avoid ambiguous matches, we define a localized spatial neighborhood by computing the similarity between pixels that are at most ${r}^{\\prime }$ pixels away from each other.", "Finally, the labels of the top-$k$ most similar local feature vectors are selected and propagated to the current frame.", "For a single network which only learns semantic correspondence or fine-grained correspondence, the detailed test hyper-parameters for the three datasets are listed in Table REF .", "Table: Test hyper-parameters for a single network.", "Recall that in fusing the two different kinds of correspondence, we introduce a new hyper-parameter $\\lambda $ .", "We report the test hyper-parameters for combined correspondence in Table REF .", "In general, we find that more neighbors (larger top-k and propagation radius $r^{\\prime }$ ) are required for consistent performance.", "Table: Test hyper-parameters when fusing two kinds of correspondence." ], [ "Semantic Segmentation Protocol", "The backbone is kept fixed and we train a $1 \\times 1$ convolutional layer on top to predict a semantic segmentation map.", "We apply dilated convolutions in the last residual block to obtain dense predictions.", "We use PASCAL [18] $\\texttt {train\\_aug}$ and $\\texttt {val}$ splits during training and evaluation, respectively.", "We adopt mIoU as the metric.", "The $1 \\times 1$ convolutional layer training uses base $lr=0.1$ for 60 epochs, weight decay $=0.0001$ , momentum $=0.9$ , and batch size $=16$ with an SGD optimizer." ], [ "Linear Classification Protocol", "Given the pre-trained network, we train a supervised linear classifier on top of the frozen features, which are obtained from ResNet's global average pooling layer.", "We train this classifier on the ImageNet train set and report top-1 classification accuracy on the ImageNet validation set.", "Following prior work [25], the linear classifier training uses base $lr=30.0$ for 100 epochs, weight decay $=0$ , momentum $=0.9$ , and batch size $=256$ with an SGD optimizer." ], [ "Combined with Image-level Pretext Task", "We add a BYOL loss to FC for joint optimization.", "Specifically, the two loss functions share the same backbone encoder (which outputs a feature map of size 32 $\\times $ 32) and data loader (performs only spatial augmentation).", "But the projection head and prediction head are not shared.", "The projection head of BYOL is a two-layer MLP whose hidden and output dimensions are 2048 and 256.", "Note that BYOL average-pools backbone features to aggregate information from all spatial locations.", "Other implementation details follow FC.", "The two loss functions are balanced by a multiplicative factor $\\alpha $ (set to 1 by default)." 
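A stripped-down version of the recurrent propagation step described above might look as follows; this is our own simplification, and it omits the spatial-radius restriction $r^{\prime }$ and the softmax temperature used in practice.

```python
import torch
import torch.nn.functional as F

def propagate_step(feat_t, feat_mem, labels_mem, topk=10):
    """Propagate labels from memory frames to the current frame (simplified sketch).

    feat_t:     (D, N) L2-normalized features of the current frame (N = H*W locations).
    feat_mem:   (D, M) features of the first frame plus the preceding context frames.
    labels_mem: (C, M) soft labels attached to those memory features.
    """
    aff = feat_mem.t() @ feat_t                    # (M, N) cosine affinities
    val, idx = aff.topk(topk, dim=0)               # top-k memory matches per query location
    weight = F.softmax(val, dim=0)                 # normalize the k affinities
    lab = labels_mem[:, idx]                       # (C, k, N) labels of the selected neighbours
    return (lab * weight.unsqueeze(0)).sum(dim=1)  # (C, N) propagated labels
```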
], [ "FC is Robust to Different Datasets", "When pretrained on a non-object-centric dataset (e.g. COCO [39]), the performance of typical image-level self-supervised methods drops significantly [52], [57].", "At the same time, it is largely recognized that a larger dataset usually results in stronger semantic representations for these methods.", "But this may not be true for a task that requires analyzing low-level cues.", "Table REF compares different training datasets for FC.", "We can see that FC is robust to the size and nature of the dataset.", "FC can effectively learn from a relatively small dataset.", "In fact, it benefits more from datasets that contain complex scenes with several objects.", "The results on COCO even surpass those obtained with YouTube-VOS, the dataset used in the main body of the paper.", "Table: Results on DAVIS-2017 of FC using different training datasets.", "The number of images per dataset is in parentheses." ], [ "Feature Resolution", "We report the results of our fine-grained correspondence network (FC) using different feature resolutions in Table REF .", "The performance on DAVIS improves as the resolution increases.", "This is intuitive, because a higher resolution means that each local feature vector corresponds to a smaller region of the original image (a smaller receptive field), which benefits fine-grained low-level correspondence learning.", "But high-level semantics require larger receptive fields to encode more holistic information.", "Table: Effect of feature map resolution.", "Results improve as the resolution increases.", "We use 32 $\\times $ 32 by default." ], [ "Semantic Segmentation and Linear Classification", "The quantitative comparison on different downstream tasks is shown in Table REF .", "For a fair comparison, we use ResNet-18 as the MoCo backbone.", "CRW surpasses MoCo on DAVIS, but is dramatically outperformed by MoCo on semantic segmentation and image classification.", "Note that our FC model exhibits similar properties to CRW: the learned representation is suitable for the fine-grained correspondence task, but lacks high-level semantic information.", "When we add the crucial missing semantic information, our SFC achieves significant improvements on label propagation, semantic segmentation and image classification.", "Table: Comparison on label propagation, semantic segmentation and linear classification." ], [ "Semantic Correspondence Backbone", "We use MoCo as the default semantic correspondence backbone in the main experiment, but our framework is extensible to any arbitrary backbone that is capable of producing spatial feature maps.", "In Table REF , we show that we can flexibly swap out the semantic correspondence backbone for any off-the-shelf self-supervised network and maintain strong performance on DAVIS.", "Some methods such as SimCLR and BYOL even surpass MoCo.", "This strongly supports our hypothesis that image-level representations in general contain information about semantic correspondences.", "Table: Results after replacing MoCo with alternate image-level self-supervised representation learning methods." 
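As a concrete illustration of the resolution change discussed above, the strides of the last two stages of a torchvision ResNet-18 can be set to 1 so that a 256 $\times $ 256 input yields a 32 $\times $ 32 (stride-8) feature map; the exact surgery in the authors' code may differ from this sketch.

```python
import torch
from torchvision.models import resnet18

backbone = resnet18()  # random weights; in practice load the pre-trained FC weights here

# Set the strides of the first blocks of layer3 and layer4 (res3/res4) to 1,
# so the output stride drops from 32 to 8.
for layer in (backbone.layer3, backbone.layer4):
    layer[0].conv1.stride = (1, 1)
    layer[0].downsample[0].stride = (1, 1)

features = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
x = torch.randn(1, 3, 256, 256)
print(features(x).shape)  # -> torch.Size([1, 512, 32, 32])
```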
], [ "Fine-Grained Correspondence Backbone", "In Table REF , we replace our own FC network in SFC with another fine-grained correspondence network CRW.", "We find the performance generally underperforms SFC.", "FC is better than CRW on all evaluation metrics, as shown in Table REF .", "FC is also much simpler and computationally efficient.", "It takes less than a day using a single GPU, but CRW reports seven days of training.", "The results in Table REF surpass image-level self-supervised methods or CRW alone, demonstrating the benefits of considering two orthogonal correspondences and the flexibility of our framework.", "It enables us to explore more effective and efficient self-supervised learning methods for semantic or fine-grained representations separately.", "Table: Replace FC with another fine-grained correspondence model CRW." ], [ "Visualization", "We provide a more detailed visualization of our SFC model on several downstream label propagation tasks.", "In Figure REF , we show a comparison between SFC and CRW on the visual object segmentation benchmark DAVIS-2017.", "Our SFC model can generally output more accurate segmentation boundaries and reduce the amount of mistakes and failures made by CRW.", "In Figure REF and Figure REF , we provide visualizations on the human pose tracking benchmark JHMDB and the human part tracking benchmark VIP.", "Note that in all our experiments, no prior knowledge on human structure or object class is used.", "The label propagation process is solely based on feature matching.", "Figure: Comparing our SFC with CRW on DAVIS-2017.", "Within each example, the upper row is the output of CRW, and the lower row is the output of SFC.", "Blue dashed boxes indicate the main areas of difference.Figure: Visualization on JHMDB.", "Pose keypoints and their initial positions are defined on the input frame (outlined in blue), and propagated to the rest of frames.Figure: Visualization on VIP.", "The segmentation map of different body parts are defined on the input frame (outlined in blue), and propagated to the rest of frames." ] ]
2207.10456
[ [ "Estimation of Non-Crossing Quantile Regression Process with Deep ReQU\n Neural Networks" ], [ "Abstract We propose a penalized nonparametric approach to estimating the quantile regression process (QRP) in a nonseparable model using rectifier quadratic unit (ReQU) activated deep neural networks and introduce a novel penalty function to enforce non-crossing of quantile regression curves.", "We establish the non-asymptotic excess risk bounds for the estimated QRP and derive the mean integrated squared error for the estimated QRP under mild smoothness and regularity conditions.", "To establish these non-asymptotic risk and estimation error bounds, we also develop a new error bound for approximating $C^s$ smooth functions with $s >0$ and their derivatives using ReQU activated neural networks.", "This is a new approximation result for ReQU networks and is of independent interest and may be useful in other problems.", "Our numerical experiments demonstrate that the proposed method is competitive with or outperforms two existing methods, including methods using reproducing kernels and random forests, for nonparametric quantile regression." ], [ "Introduction", "Consider a nonparametric regression model $Y=f_0(X, U),$ where $Y \\in \\mathbb {R}$ is a response variable, $X \\in \\mathcal {X} \\subset \\mathbb {R}^d$ is a $d$ -dimensional vector of predictors, $U$ is an unobservable random variable following the uniform distribution on $(0,1)$ and independent of $X$ .", "The function $f_0:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ is an unknown regression function, and $f_0$ is increasing in its second argument.", "This is a non-separable quantile regression model, in which the specification $U\\sim {\\rm Unif}(0,1)$ is a normalization but not a restrictive assumption [15], [25].", "Nonseparable quantile regression models are important in empirical economics (see, e.g., [8]).", "Based on (REF ), it can be seen that for any $\\tau \\in (0,1)$ , the conditional $\\tau $ -th quantile $Q_{Y|x}(\\tau )$ of $Y$ given $X=x$ is $Q_{Y|x}(\\tau )=f_0(x,\\tau ).$ We refer to $f_0 = \\lbrace f_0(x, \\tau ): (x, \\tau ) \\in \\mathcal {X} \\times (0, 1)\\rbrace $ as a quantile regression process (QRP).", "A basic property of QRP is that it is nondecreasing with respect to $\\tau $ for any given $x$ , often referred to as the non-crossing property.", "We propose a novel penalized nonparametric method for estimating $f_0$ on a random discrete grid of quantile levels in $(0, 1)$ simultaneously, with the penalty designed to ensure the non-crossing property.", "Quantile regression [29] is an important method for modeling the relationship between a response $Y$ and a predictor $X$ .", "Different from least squares regression that estimates the conditional mean of $Y$ given $X$ , quantile regression models the conditional quantiles of $Y$ given $X$ , so it fully describes the conditional distribution of $Y$ given $X.$ The non-separable model (REF ) can be transformed into a familiar quantile regression model with an additive error.", "For any $\\tau \\in (0, 1)$ , we have $P\\lbrace Y-f_0(X,\\tau )\\le 0\\rbrace =\\tau $ under (REF ).", "If we define $\\epsilon =Y-f_0(X,\\tau )$ , then model (REF ) becomes $Y=g_0(X)+\\epsilon ,$ where $g_0(X)=f_0(X,\\tau )$ and $P(\\epsilon \\le 0\\mid X=x) =\\tau $ for any $x\\in \\mathcal {X}$ .", "An attractive feature of the nonseparable model (REF ) is that it explicitly includes the quantile level as a second argument of $f_0$ , which makes it possible to construct 
a single objective function for estimating the whole quantile process simultaneously.", "A general nonseparable quantile regression model that allows a vector random disturbance $U$ was proposed by [14].", "The model (REF ) in the presence of endogeneity was considered by [15], who gave local identification conditions for the quantile regression function $f_0$ and provided sufficient condition under which a series estimator is consistent.", "The convergence rate of the series estimator is unknown.", "The relationship between the nonseparable quantile regression model (REF ) and the usual separable quantile regression model was discussed in [25].", "A study of nonseparable bivariate quantile regression for nonparametric demand estimation using splines under shape constraints was given in [8].", "There is a large body of literature on separable linear quantile regression in the fixed-dimension setting [29], [28] and in the high-dimensional settings [6], [47], [51].", "Nonparametric estimation of separable quantile regressions has also been considered.", "Examples include the methods using shallow neural networks [48], smoothing splines [30], [23], [22] and reproducing kernels [46], [40].", "Semiparametric quantile regression has also been considered in the literature [11], [7].", "A popular semiparametric quantile regression model is $Q_{Y|x}(\\tau ) = Z(x)^\\top \\beta (\\tau ).$ where $Q_{Y|x}(\\tau )$ is defined in (REF ) and $Z(x) \\in \\mathbb {R}^m$ is usually a series representation of the predictor $x$ .", "The goal is to estimate the coefficient process $\\lbrace \\beta (\\tau ): \\tau \\in (0, 1)\\rbrace $ and derive the asymptotic distribution of the estimators.", "Such results can be used for conducting statistical inference about $\\beta (\\tau )$ .", "However, they hinge on the model assumption (REF ).", "If this assumption is not satisfied, estimation and inference results based on a misspecified model can be misleading.", "Quantile regression curves satisfy a monotonicity condition.", "At the population level, it holds that $f_0(x, \\tau _2) \\ge f_0(x; \\tau _1)$ for any $0 < \\tau _1 < \\tau _2<1$ and every $x \\in \\mathcal {X}$ .", "However, for an estimator $\\hat{f}$ of $f_0$ , there can be values of $x$ for which the quantile curves cross, that is, $\\hat{f}(x, \\tau _2) < \\hat{f} (x; \\tau _1)$ due to finite sample size and sampling variation.", "Quantile crossing makes it challenging to interpret the estimated quantile curves [21].", "Therefore, it is desirable to avoid it in practice.", "Constrained optimization methods have been used to obtain non-crossing conditional quantile estimates in linear quantile regression and nonparametric quantile regression with a scalar covariate [21], [9].", "A method proposed by [13] uses sorting to rearrange the original estimated non-monotone quantile curves into monotone curves without crossing.", "It is also possible to apply the isotonization method for qualitative constraints [36] to the original estimated quantile curves to obtain quantile curves without crossing.", "[10] proposed a deep learning algorithm for estimating conditional quantile functions that ensures quantile monotonicity.", "They first restrict the output of a deep neural network to be positive as the estimator of the derivative of the conditional quantile function, then by using truncated Chebyshev polynomial expansion, the estimated derivative is integrated and the estimator of conditional quantile function is obtained.", "Recently, there has been active 
research on nonparametric least squares regression using deep neural networks [5], [41], [12], [31], [38], [17], [26].", "These studies show that, under appropriate conditions, least squares regression with neural networks can achieve the optimal rate of convergence up to a logarithmic factor for estimating a conditional mean regression function.", "Since the quantile regression problem considered in this work is quite different from the least squares regression, different treatments are needed in the present setting.", "We propose a penalized nonparametric approach for estimating the nonseparable quantile regression model (REF ) using rectified quadratic unit (ReQU) activated deep neural networks.", "We introduce a penalty function for the derivative of the QRP with respect to the quantile level to avoid quantile crossing, which does not require numerical integration as in [10].", "Our main contributions are as follows.", "We propose a novel loss function that is the expected quantile loss function with respect to a distribution over $(0, 1)$ for the quantile level, instead of the quantile loss function at a single quantile level as in the usual quantile regression.", "An appealing feature of the proposed loss function is that it can be used to estimate quantile regression functions at an arbitrary number of quantile levels simultaneously.", "We propose a new penalty function to enforce the non-crossing property for quantile curves at different quantile levels.", "This is achieved by encouraging the derivative of the quantile regression function $f(x,\\tau )$ with respect to $\\tau $ to be positive.", "The use of ReQU activation ensures that the derivative exists.", "This penalty is easy to implement and computationally feasible for high-dimensional predictors.", "We establish non-asymptotic excess risk bounds for the estimated QRP and derive the mean integrated squared error for the estimated QRP under the assumption that the underlying quantile regression process belongs to the $C^s$ class of functions on $\\mathcal {X}\\times (0,1).$ .", "We derive novel approximation error bounds for $C^s$ smooth functions with a positive smoothness index $s$ and their derivatives using ReQU activated deep neural networks.", "The error bounds hold not only for the target function, but also its derivatives.", "This is a new approximation result for ReQU networks and is of independent interest and may be useful in other problems.", "We conduct simulation studies to evaluate the finite sample performance of the proposed QRP estimation method and demonstrate that it is competitive or outperforms two existing nonparametric quantile regression methods, including kernel based quantile regression and quantile regression forests.", "The remainder of the paper is organized as follows.", "In Section we describe the proposed method for nonparametric estimation of QRP with a novel penalty function for avoiding quantile crossing.", "In Section we state the main results of the paper, including bounds for the non-asymptotic excess risk and the mean integrated squared error for the proposed QRP estimator.", "In Section we derive the stochastic error for the QRP estimator.", "In Section we establish a novel approximation error bound for approximating $C^s$ smooth functions and their derivatives using ReQU activated neural networks.", "Section describes computational implementation of the proposed method.", "In Section we conduct numerical studies to evaluate the performance of the QRP estimator.", "Conclusion remarks 
are given Section .", "Proofs and technical details are provided in the Appendix." ], [ "Deep quantile regression process estimation with non-crossing constraints", "In this section, we describe the proposed approach for estimating the quantile regression process using deep neural networks with a novel penalty for avoiding non-crossing." ], [ "The standard quantile regression", "We first recall the standard quantile regression method with the check loss function [29].", "For a given quantile level $\\tau \\in (0,1)$ , the quantile check loss function is defined by $\\rho _\\tau (x)=x\\lbrace \\tau -I(x\\le 0)\\rbrace , \\ x \\in \\mathbb {R}.$ For any $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ and $\\tau \\in (0,1)$ , the $\\tau $ -risk of $f$ is defined by $\\mathcal {R}^\\tau (f)=\\mathbb {E}_{X,Y}\\lbrace \\rho _\\tau (Y-f(X,\\tau ))\\rbrace .$ Clearly, by the model assumption in (REF ), for each given $\\tau \\in (0,1)$ , the function $f_0(\\cdot , \\tau )$ is the minimizer of $\\mathcal {R}^\\tau (f)$ over all the measurable functions from $\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ , i.e., for $f_*^\\tau =\\arg \\min _{f} \\mathcal {R}^\\tau (f) =\\arg \\min _{f}\\mathbb {E}_{X,Y}\\lbrace \\rho _\\tau (Y-f(X,\\tau ))\\rbrace ,$ we have $f_*^\\tau \\equiv f_0(\\cdot , \\tau )$ on $\\mathcal {X}\\times \\lbrace \\tau \\rbrace $ .", "This is the basic identification result for the standard quantile regression, where only a single conditional quantile function $f_0(\\cdot , \\tau )$ at a given quantile level $\\tau $ is estimated." ], [ "Expected\ncheck loss with non-crossing constraints", "Our goal is to estimate the whole quantile regression process $\\lbrace f_0(\\cdot ,\\tau ): \\tau \\in (0, 1)\\rbrace $ .", "Of course, computationally we can only obtain an estimate of this process on a discrete grid of quantile levels.", "For this purpose, we propose an objective function and estimate the process $\\lbrace f_0(\\cdot ,\\tau ): \\tau \\in (0, 1)\\rbrace $ on a grid of random quantile levels that are increasingly dense as the sample size $n$ increases.", "We will achieve this by constructing a randomized objective function as follows.", "Let $\\xi $ be a random variable supported on $(0,1)$ with density function $\\pi _\\xi :(0,1)\\rightarrow \\mathbb {R}^+$ .", "Consider the following randomized version of the check loss function $\\rho _\\xi (x)=x\\lbrace \\xi -I(x\\le 0)\\rbrace , \\ x \\in \\mathbb {R}.$ For a measurable function $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ , define the risk of $f$ by $\\mathcal {R}(f)=\\mathbb {E}_{X,Y,\\xi }\\lbrace \\rho _\\xi (Y-f(X,\\xi ))\\rbrace =\\int _0^1 \\mathcal {R}^t(f)\\pi _\\xi (t) dt.$ At the population level, let $f^*: \\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ be a measurable function satisfying $f^* \\in \\arg \\min _{f} \\mathcal {R}(f) =\\arg \\min _{f}\\int _0^1 \\mathcal {R}^t(f)\\pi _\\xi (t) dt.$ Note that $f^*$ may not be uniquely defined if $(X,\\xi )$ has zero density on some set $A_0\\subseteq \\mathcal {X}\\times (0,1)$ with positive Lebesgue measure.", "In this case, $f^*(x,\\xi )$ can take any value for $(x,\\xi )\\in A_0$ since it does not affect the risk.", "Importantly, since the target quantile function $f_0(\\cdot , \\tau )$ defined in (REF ) minimizes the $\\tau $ -risk $\\mathcal {R}^\\tau $ for each $\\tau \\in (0,1)$ , $f_0$ is also the risk minimizer of $\\mathcal {R}$ over all measurable functions.", "Then we have $f_0\\equiv f^*$ on $\\mathcal {X}\\times 
(0,1)$ almost everywhere given that $(X,\\xi )$ has nonzero density on $\\mathcal {X}\\times (0,1)$ almost everywhere.", "In addition, the risk $\\mathcal {R}$ depends on the distribution of $\\xi $ .", "Different distributions of $\\xi $ may lead to different $\\mathcal {R}$ 's.", "However, the target quantile process $f_0$ is still the risk minimizer of $\\mathcal {R}$ over all measurable functions, regardless of the distribution of $\\xi $ .", "We state this property in the following proposition, whose proof is given in the Appendix.", "Proposition 1 For any random variable $\\xi $ supported on $(0,1)$ , the target function $f_0$ minimizes the risk $\\mathcal {R}(\\cdot )$ defined in (REF ) over all measurable functions, i.e., $f_0\\in \\arg \\min _{f} \\mathcal {R}(f)=\\arg \\min _{f}\\mathbb {E}_{X,Y,\\xi }\\lbrace \\rho _{\\xi }(Y-f(X,\\xi ))\\rbrace .$ Furthermore, if $(X,\\xi )$ has non zero density almost everywhere on $\\mathcal {X}\\times (0,1)$ and the probability measure of $(X,\\xi )$ is absolutely continuous with respect to Lebesgue measure, then $f_0$ is the unique minimizer of $\\mathcal {R}(\\cdot )$ over all measurable functions in the sense of almost everywhere(almost surely), i.e., $f_0=\\arg \\min _{f} \\mathcal {R}(f)=\\arg \\min _{f}\\mathbb {E}_{X,Y,\\xi }\\lbrace \\rho _{\\xi }(Y-f(X,\\xi ))\\rbrace ,$ up to a negligible set with respect to the probability measure of $(X,\\eta )$ on $\\mathcal {X}\\times (0,1)$ .", "The loss function in (REF ) can be viewed as a weighted quantile check loss function, where the distribution of $\\xi $ weights the importance of different quantile levels in the estimation.", "Proposition REF implies that, though different distributions of $\\xi $ may result in different estimators with finite samples, these estimators can be shown to be consistent for the target function $f_0$ under mild conditions.", "A natural and simple choice of the distribution of $\\xi $ is the uniform distribution over $(0,1)$ with density function $\\pi _\\xi (t)\\equiv 1$ for all $t\\in (0,1)$ .", "In this paper we focus on the case that $\\xi $ is uniformly distributed on $(0, 1)$ , but we emphasize that the theoretical results presented in Section 5 hold for different choices of the distribution of $\\xi $ .", "In applications, only a random sample $\\lbrace (X_i,Y_i)\\rbrace _{i=1}^n$ is available.", "Also, the integral with respect to $\\pi _{\\xi }$ in (REF ) does not have an explicit expression.", "We can approximate it using a random sample $\\lbrace \\xi _i\\rbrace _{i=1}^n$ from the uniform distribution on $(0,1).$ The empirical risk corresponding to the population risk $R(f)$ in (REF ) is $\\mathcal {R}_n(f)=\\frac{1}{n} \\sum _{i=1}^{n} \\rho _{\\xi _i}(Y_i-f(X_i,\\xi _i)).", "$ Let $\\mathcal {F}_n$ be a class of deep neural network (DNN) functions defined on $\\mathcal {X} \\times (0, 1).$ We define the QRP estimator as the empirical risk minimizer $\\hat{f}_n\\in \\arg \\min _{f\\in \\mathcal {F}_n}\\mathcal {R}_n(f).$ The estimator $\\hat{f}_n$ contains estimates of the quantile curves $\\lbrace \\hat{f}_n(x, \\xi _1), \\ldots , \\hat{f}_n(x, \\xi _n)\\rbrace $ at the quantile levels $\\xi _1, \\ldots , \\xi _n.$ An attractive feature of this approach is that it estimates all these quantile curves simultaneously.", "By the basic properties of quantiles, the underlying quantile regression function $f_0(x, \\tau )$ satisfies $f_0(x, \\xi _{(1)}) \\le \\cdots \\le f_0(x; \\xi _{(n)}), \\ x \\in \\mathcal {X},$ where $\\xi _{(1)} < \\cdots < \\xi 
_{(n)}$ are the ordered values of $\\xi _1, \\ldots , \\xi _n.$ It is desirable that the estimated quantile function also possess this monotonicity property.", "However, with finite samples and due to sampling variation, the estimated quantile function $\\hat{f}_n(x,\\tau )$ may violate this monotonicity property and cross for some values of $x$ , leading to an improper distribution for the predicted response.", "To avoid quantile crossing, constraints are required in the estimation process.", "However, it is not a simple matter to impose monotonicity constraints directly on regression quantiles.", "We use the fact that a regression quantile function $f_0(x, \\tau )$ is nondecreasing in its second argument $\\tau $ if its partial derivative with respect to $\\tau $ is non-negative.", "For a quantile regression function $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ with first order partial derivatives, we let $\\partial f/\\partial \\tau $ denote the partial derivative operator for $f$ with respect to its second argument.", "A natural way to impose the monotonicity on $f(x, \\tau )$ with respect to $\\tau $ is to constrain its partial derivative with respect to $\\tau $ to be nonnegative.", "So it is natural to consider ways to constrain the derivative of $f(x;\\tau )$ with respect to $\\tau $ to be nonnegative.", "We propose a penalty function based on the ReLU activation function, $\\sigma _1(x)=\\max \\lbrace x,0\\rbrace $ $x \\in \\mathbb {R}$ , as follows, $\\kappa (f)=\\mathbb {E}_{X, \\xi } \\sigma _1\\Big (-\\frac{\\partial }{\\partial \\tau }f(X,\\xi )\\Big )=\\mathbb {E}_{X, \\xi }\\Big [\\max \\Big \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X,\\xi ),0\\Big \\rbrace \\Big ].$ Clearly, this penalty function encourages $\\frac{\\partial }{\\partial \\tau }f(x,\\xi ) \\ge 0$ .", "The empirical version of $\\kappa $ is $\\kappa _n(f):=\\frac{1}{n}\\sum _{i=1}^n\\Big [\\max \\Big \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X_i,\\xi _i),0\\Big \\rbrace \\Big ].$ Based on the above discussion and combining (REF ) and (REF ), we propose the following population level penalized risk for the regression quantile functions $\\mathcal {R}^\\lambda (f)=\\mathbb {E}_{X,Y,\\xi }\\Big [ \\rho _{\\xi }(Y-f(X,\\xi ))+\\lambda \\max \\Big \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X,\\xi ),0\\Big \\rbrace \\Big ],$ where $\\lambda \\ge 0$ is a tuning parameter.", "Suppose that the partial derivative of the target quantile function $f_0$ with respect to its second argument exists.", "It then follows that $\\frac{\\partial }{\\partial \\tau } f_0(x,u)\\ge 0$ for any $(x,u)\\in \\mathcal {X}\\times (0,1)$ , and thus $f_0$ is also the risk minimizer of $\\mathcal {R}^\\lambda (f)$ over all measurable functions on $\\mathcal {X}\\times (0,1)$ .", "The empirical risk corresponding to (REF ) for estimating the regression quantile functions is $\\mathcal {R}^\\lambda _n(f)=\\frac{1}{n} \\sum _{i=1}^{n}\\Big [ \\rho _{\\xi _i}(Y_i-f(X_i,\\xi _i))+\\lambda \\max \\Big \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X_i,\\xi _i),0\\Big \\rbrace \\Big ].$ The penalized empirical risk minimizer over a class of functions $\\mathcal {F}_n$ is given by $\\hat{f}^\\lambda _n\\in \\arg \\min _{f\\in \\mathcal {F}_n}\\mathcal {R}^\\lambda _n(f),$ We refer to $\\hat{f}^\\lambda _n$ as a penalized deep quantile regression process (DQRP) estimator.", "The function class $\\mathcal {F}_n$ plays an important role in (REF ).", "Below we give a detailed description of $\\mathcal {F}_n.$" ], [ "ReQU 
activated neural networks", "Neural networks with nonlinear activation functions have proven to be a powerful approach for approximating multi-dimensional functions.", "Rectified linear unit (ReLU), defined as $\\sigma _1(x)=\\max \\lbrace x,0\\rbrace , x \\in \\mathbb {R}$ , is one of the most commonly used activation functions due to its attractive properties in computation and optimization.", "ReLU neural networks have received much attention in statistical machine learning [41], [5], [26] and applied mathematics [49], [50], [43], [42], [35].", "However, since partial derivatives are involved in our objective function (REF ), it is not sensible to use piecewise linear ReLU networks.", "We will use the Rectified quadratic unit (ReQU) activation, which is smooth and has a continuous first derivative.", "The ReQU activation function, denoted as $\\sigma _2$ , is simply the squared ReLU, $\\sigma _2(x)=\\sigma _1^2(x)=[\\max \\lbrace x,0\\rbrace ]^2,\\ x \\in \\mathbb {R}.$ With ReQU as activation function, the network will be smooth and differentiable.", "Thus ReQU activated networks are suitable to the case that the loss function involves derivatives of the networks as in (REF ).", "We set the function class $\\mathcal {F}_n$ in (REF ) to be $\\mathcal {F}_{\\mathcal {D},\\mathcal {W}, \\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ , a class of ReQU activated multilayer perceptrons $f: \\mathbb {R}^{d+1} \\rightarrow \\mathbb {R} $ with depth $\\mathcal {D}$ , width $\\mathcal {W}$ , size $\\mathcal {S}$ , number of neurons $\\mathcal {U}$ and $f$ satisfying $\\Vert f \\Vert _\\infty \\le \\mathcal {B}$ and $\\Vert \\frac{\\partial }{\\partial \\tau }f \\Vert _\\infty \\le \\mathcal {B}^\\prime $ for some $0 <\\mathcal {B}, \\mathcal {B}^\\prime < \\infty $ , where $\\Vert f \\Vert _\\infty $ is the sup-norm of a function $f$ .", "The network parameters may depend on the sample size $n$ , but the dependence is omitted in the notation for simplicity.", "The architecture of a multilayer perceptron can be expressed as a composition of a series of functions $f(x)=\\mathcal {L}_\\mathcal {D}\\circ \\sigma _2\\circ \\mathcal {L}_{\\mathcal {D}-1}\\circ \\sigma _2\\circ \\cdots \\circ \\sigma _2\\circ \\mathcal {L}_{1}\\circ \\sigma _2\\circ \\mathcal {L}_0(x),\\ x\\in \\mathbb {R}^{p_0},$ where $p_0=d+1$ , $\\sigma _2$ is the rectified quadratic unit (ReQU) activation function defined in (REF ) ( operating on $x$ component-wise if $x$ is a vector), and $\\mathcal {L}_i$ 's are linear functions $\\mathcal {L}_{i}(x)=W_ix+b_i,\\ x \\in \\mathbb {R}^{p_i}, i=0,1,\\ldots ,\\mathcal {D},$ with $W_i\\in \\mathbb {R}^{p_{i+1}\\times p_i}$ a weight matrix and $b_i\\in \\mathbb {R}^{p_{i+1}}$ a bias vector.", "Here $p_i$ is the width (the number of neurons or computational units) of the $i$ -th layer.", "The input data consisting of predictor values $X$ is the first layer and the output is the last layer.", "Such a network $f$ has $\\mathcal {D}$ hidden layers and $(\\mathcal {D}+2)$ layers in total.", "We use a $(\\mathcal {D}+2)$ -vector $(p_0,p_1,\\ldots ,p_\\mathcal {D},p_{\\mathcal {D}+1})^\\top $ to describe the width of each layer; particularly, $p_0=d+1$ is the dimension of the input $(X,\\xi )$ and $p_{\\mathcal {D}+1}=1$ is the dimension of the response $Y$ in model (REF ).", "The width $\\mathcal {W}$ is defined as the maximum width of hidden layers, i.e., $\\mathcal {W}=\\max \\lbrace p_1,...,p_\\mathcal {D}\\rbrace $ ; the size $\\mathcal {S}$ is defined as the total 
number of parameters in the network $f_\\phi $ , i.e., $\\mathcal {S}=\\sum _{i=0}^\\mathcal {D}\\lbrace p_{i+1}\\times (p_i+1)\\rbrace $ ; the number of neurons $\\mathcal {U}$ is defined as the number of computational units in hidden layers, i.e., $\\mathcal {U}=\\sum _{i=1}^\\mathcal {D} p_i$ .", "Note that the neurons in consecutive layers are connected to each other via linear transformation matrices $W_i$ , $i=0,1,\\ldots ,\\mathcal {D}$ .", "The network parameters can depend on the sample size $n$ , but the dependence is suppressed for notational simplicity, that is, $\\mathcal {S}=\\mathcal {S}_n$ , $\\mathcal {U}=\\mathcal {U}_n$ , $\\mathcal {D}=\\mathcal {D}_n$ , $\\mathcal {W}=\\mathcal {W}_n$ , $\\mathcal {B}=\\mathcal {B}_n$ and $\\mathcal {B}^\\prime =\\mathcal {B}_n^\\prime $ .", "This makes it possible to approximate the target regression function by neural networks as $n$ increases.", "The approximation and excess error rates will be determined in part by how these network parameters depend on $n$ .", "In this section, we state our main results on the bounds for the excess risk and estimation error of the penalized DQRP estimator.", "The excess risk of the penalized DQRP estimator is defined as $ \\mathcal {R}(\\hat{f}_n^\\lambda )-\\mathcal {R}(f_0)&=\\mathbb {E}_{X,Y,\\xi }\\lbrace \\rho _{\\xi }(Y-\\hat{f}^\\lambda _n(X,\\xi ))-\\rho _{\\xi }(Y-f_0(X,\\xi ))\\rbrace ,$ where $(X,Y,\\xi )$ is an independent copy of the random sample $\\lbrace (X_i,Y_i, \\xi _i)\\rbrace _{i=1}^n$ .", "We first state the following basic lemma for bounding the excess risk.", "Lemma 1 (Excess risk decomposition) For the penalized empirical risk minimizer $\\hat{f}^\\lambda _n$ defined in (REF ), its excess risk can be upper bounded by $\\mathbb {E}\\Big \\lbrace \\mathcal {R}(\\hat{f}_n^\\lambda )-\\mathcal {R}(f_0)\\Big \\rbrace \\le & \\mathbb {E}\\Big \\lbrace \\mathcal {R}^\\lambda (\\hat{f}_n^\\lambda )-\\mathcal {R}^\\lambda (f_0)\\Big \\rbrace \\\\\\le &\\mathbb {E}\\Big \\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\Big \\rbrace +2\\inf _{f\\in \\mathcal {F}_n}\\Big [\\mathcal {R}^\\lambda (f)-\\mathcal {R}^\\lambda (f_0)\\Big ].$ Therefore, the bound for excess risk can be decomposed into two parts: the stochastic error $\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace $ and the approximation error $\\inf _{f\\in \\mathcal {F}_n}[\\mathcal {R}^\\lambda (f)-\\mathcal {R}^\\lambda (f_0)]$ .", "Once bounds for the stochastic error and approximation error are available, we can immediately obtain an upper bound for the excess risk of the penalized DQRP estimator $\\hat{f}^\\lambda _n$ ." 
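To connect the preceding definitions to code, the following PyTorch sketch implements a ReQU activated multilayer perceptron taking $(X,\tau)$ as input, together with the penalized empirical risk $\mathcal{R}^\lambda_n$ (check loss plus non-crossing penalty), obtaining $\frac{\partial}{\partial \tau}f$ by automatic differentiation. It is a minimal sketch: the class and function names, width and depth are illustrative choices, not necessarily the architecture used in the experiments reported below.

```python
# Sketch of a ReQU network and the penalized empirical risk (illustrative names).
import torch
import torch.nn as nn

class ReQU(nn.Module):
    def forward(self, x):
        return torch.relu(x) ** 2          # sigma_2(x) = [max(x, 0)]^2

class ReQUNet(nn.Module):
    def __init__(self, d, width=256, depth=3):
        super().__init__()
        layers, p = [], d + 1              # input is (X, tau)
        for _ in range(depth):
            layers += [nn.Linear(p, width), ReQU()]
            p = width
        layers += [nn.Linear(p, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x, tau):
        return self.net(torch.cat([x, tau], dim=-1)).squeeze(-1)

def penalized_risk(f, x, y, tau, lam):
    """Empirical R^lambda(f): check loss + lambda * max(-df/dtau, 0)."""
    tau = tau.clone().requires_grad_(True)
    pred = f(x, tau)
    resid = y - pred
    check = resid * (tau.squeeze(-1) - (resid <= 0).float())   # rho_tau(y - f)
    # df/dtau via autograd; create_graph so the penalty itself is differentiable.
    dfdtau = torch.autograd.grad(pred.sum(), tau, create_graph=True)[0]
    penalty = torch.relu(-dfdtau.squeeze(-1))
    return (check + lam * penalty).mean()
```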
], [ "Non-asymptotic excess risk bounds", "We first state the conditions needed for establishing the excess risk bounds.", "Definition 1 (Multivariate differentiability classes $C^s$ ) A function $f: \\mathbb {B}\\subset \\mathbb {R}^{d}\\rightarrow \\mathbb {R}$ defined on a subset $\\mathbb {B}$ of $\\mathbb {R}^d$ is said to be in class $C^s(\\mathbb {B})$ on $\\mathbb {B}$ for a positive integer $s$ , if all partial derivatives $D^\\alpha f:=\\frac{\\partial ^\\alpha }{\\partial x_1^{\\alpha _1}\\partial x_2^{\\alpha _2}\\cdots \\partial x_d^{\\alpha _d}}f$ exist and are continuous on $\\mathbb {B}$ for all non-negative integers $\\alpha _1,\\alpha _2,\\ldots ,\\alpha _d$ such that $\\alpha :=\\alpha _1+\\alpha _2+\\cdots +\\alpha _d\\le s$ .", "In addition, we define the norm of $f$ over $\\mathbb {B}$ by $\\Vert f\\Vert _{C^s} :=\\sum _{\\vert \\alpha \\vert _1\\le s}\\sup _{\\mathbb {B}}\\vert D^\\alpha f\\vert ,$ where $\\vert \\alpha \\vert _1:=\\sum _{i=1}^d\\alpha _i$ for any vector $\\alpha =(\\alpha _1,\\alpha _2,\\ldots ,\\alpha _d)\\in \\mathbb {R}^d$ .", "We make the following smoothness assumption on the target regression quantile function $f_0$ .", "Assumption 1 The target quantile regression function $f_0: \\mathcal {X} \\times (0, 1) \\rightarrow \\mathbb {R}$ defined in (REF ) belongs to $C^s(\\mathcal {X}\\times (0, 1))$ for $s\\in \\mathbb {N}^+$ , where $\\mathbb {N}^+$ is the set of positive integers.", "Let $\\mathcal {F}_n^\\prime :=\\lbrace \\frac{\\partial }{\\partial \\tau }f:f\\in \\mathcal {F}_n\\rbrace $ denote the function class induced by $\\mathcal {F}_n$ .", "For a class $\\mathcal {F}$ of functions: $\\mathcal {X}\\rightarrow \\mathbb {R}$ , its pseudo dimension, denoted by $\\text{Pdim}(\\mathcal {F}),$ is the largest integer $m$ for which there exists $(x_1,\\ldots ,x_m,y_1,\\ldots ,y_m)\\in \\mathcal {X}^m\\times \\mathbb {R}^m$ such that for any $(b_1,\\ldots ,b_m)\\in \\lbrace 0,1\\rbrace ^m$ there exists $f\\in \\mathcal {F}$ such that $\\forall i:f(x_i)>y_i\\iff b_i=1$ [1], [4].", "Theorem 1 (Non-asymptotic excess risk bounds) Let Assumption REF hold.", "For any $N\\in \\mathbb {N}^+$ , let $\\mathcal {F}_n:=\\mathcal {F}_{\\mathcal {D},\\mathcal {W},\\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ be the ReQU activated neural networks $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ with depth $\\mathcal {D}\\le 2N-1$ , width $\\mathcal {W}\\le 12N^d$ , the number of neurons $\\mathcal {U}\\le 15N^{d+1}$ , the number of parameters $\\mathcal {S}\\le 24N^{d+1}$ .", "Suppose that $\\mathcal {B}\\ge \\Vert f_0\\Vert _{C^0}$ and $\\mathcal {B}^\\prime \\ge \\Vert f_0\\Vert _{C^1}$ .", "Then for $n\\ge \\max \\lbrace {\\rm Pdim}(\\mathcal {F}_n),{\\rm Pdim}(\\mathcal {F}^\\prime _n)\\rbrace $ , the excess risk of the penalized DQRP estimator $\\hat{f}^\\lambda _n$ defined in (REF ) satisfies $\\mathbb {E}\\lbrace \\mathcal {R}(\\hat{f}^\\lambda _n)-\\mathcal {R}(f_0)\\rbrace \\le C_0(\\mathcal {B}+\\lambda \\mathcal {B}^\\prime )\\frac{\\log n}{n}(d+1)N^{d+3}+C_{s,d,\\mathcal {X}}(1+\\lambda ) \\Vert f_0\\Vert _{C^s}N^{-(s-1)},$ where $C_0>0$ is a universal constant and $C_{s,d,\\mathcal {X}}$ is a positive constant depending only on $d,s$ and the diameter of the support $\\mathcal {X}\\times (0,1)$ .", "By Theorem REF , for each fixed sample size $n$ , one can choose a proper positive integer $N$ based on $n$ to construct such a ReQU network to achieve the upper bound (REF ).", "To achieve the optimal convergence rate with 
respect to the sample size $n$ , we set $N=\\lfloor n^{1/(d+s+2)}\\rfloor $ and $\\lambda =\\log n.$ Then from (REF ) we obtain an upper bound $\\mathbb {E}\\lbrace \\mathcal {R}(\\hat{f}^\\lambda _n)-\\mathcal {R}(f_0)\\rbrace &\\le C(\\log n)^{2} n^{-\\frac{s-1}{d+s+2}},$ where $C>0$ is a constant depending only on $\\mathcal {B},\\mathcal {B}^\\prime ,s,d,\\mathcal {X}$ and $ \\Vert f_0\\Vert _{C^s}$ .", "The convergence rate is $(\\log n)^{2} n^{-(s-1)/(d+s+2)}.$ The term $(s-1)$ in the exponent is due to the approximation of the first-order partial derivative of the target function.", "Of course, the smoothness of the target function $f_0$ is unknown in practice and how to determine the smoothness of an unknown function is a difficult problem." ], [ "Non-asymptotic estimation error bound", "The empirical risk minimization quantile estimator typically results in an estimator $\\hat{f}^\\lambda _n$ whose risk $\\mathcal {R}(\\hat{f}^\\lambda _n)$ is close to the optimal risk $\\mathcal {R}(f_0)$ in expectation or with high probability.", "However, small excess risk in general only implies in a weak sense that the penalized empirical risk minimizer $\\hat{f}^\\lambda _n$ is close to the target $f_0$ (Remark 3.18 [45]).", "We bridge the gap between the excess risk and the mean integrated error of the estimated quantile function.", "To this end, we need the following condition on the conditional distribution of $Y$ given $X$ .", "Assumption 2 There exist constants $K>0$ and $k>0$ such that for any $\\vert \\delta \\vert \\le K$ , $\\vert P_{Y|X}(f_0(x,\\tau )+\\delta \\mid x)-P_{Y|X}(f_0(x,\\tau )\\mid x)\\vert \\ge k\\vert \\delta \\vert ,$ for all $\\tau \\in (0,1)$ and $x\\in \\mathcal {X}$ up to a negligible set, where $ P_{Y|X}(\\cdot \\mid x)$ denotes the conditional distribution function of $Y$ given $X=x$ .", "Assumption REF is a mild condition on the distribution of $Y$ in the sense that, if $Y$ has a density that is bounded away from zero on any compact interval, then Assumption REF will hold.", "In particular, no moment assumptions are made on the distribution of $Y$ .", "Similar conditions are assumed by [39] in studying nonparametric quantile trend filtering for a single quantile level $\\tau \\in (0,1)$ .", "This condition is weaker than Condition 2 in [23] where the density function of response is required to be lower bounded every where by some positive constant.", "Assumption REF is also weaker than Condition D.1 in [6], which requires the conditional density of $Y$ given $X=x$ to be continuously differentiable and bounded away from zero uniformly for all quantiles in $(0,1)$ and all $x$ in the support $\\mathcal {X}$ .", "Under Assumption REF , the following self-calibration condition can be established as stated below.", "This will lead to a bound on the mean integrated error of the estimated quantile process based on a bound for the excess risk.", "Lemma 2 (Self-calibration) Suppose that Assumption REF holds.", "For any $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ , denote $\\Delta ^2(f,f_0) =\\mathbb {E}[ \\min \\lbrace \\vert f(X,\\xi )-f_0(X,\\xi )\\vert ,\\vert f(X,\\xi )-f_0(X,\\xi )\\vert ^2\\rbrace ],$ where $X$ is the predictor vector and $\\xi $ is a uniform random variable on (0,1) independent of $X$ .", "Then we have $\\Delta ^2(f,f_0)\\le c_{K,k} \\lbrace \\mathcal {R}(f)-\\mathcal {R}(f_0)\\rbrace ,$ for any $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ , where $c_{K,k}=\\max \\lbrace 2/k,4/(Kk)\\rbrace $ and $K,k>0$ are defined in 
Assumption REF .", "Theorem 2 Suppose Assumptions REF and REF hold.", "For any $N\\in \\mathbb {N}^+$ , let $\\mathcal {F}_n:=\\mathcal {F}_{\\mathcal {D},\\mathcal {W},\\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ be the class of ReQU activated neural networks $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ with depth $\\mathcal {D}\\le 2N-1$ , width $\\mathcal {W}\\le 12N^d$ , number of neurons $\\mathcal {U}\\le 15N^{d+1}$ , number of parameters $\\mathcal {S}\\le 24N^{d+1}$ and satisfying $\\mathcal {B}\\ge \\Vert f_0\\Vert _{C^0}$ and $\\mathcal {B}^\\prime \\ge \\Vert f_0\\Vert _{C^1}$ .", "Then for $n\\ge \\max \\lbrace {\\rm Pdim}(\\mathcal {F}_n),{\\rm Pdim}(\\mathcal {F}^\\prime _n)\\rbrace $ , the mean integrated error of the penalized DQRP estimator $\\hat{f}^\\lambda _n$ defined in (REF ) satisfies $\\mathbb {E}\\lbrace \\Delta ^2(\\hat{f}^\\lambda _n,f_0)\\rbrace \\le c_{K,k}\\Big [C_0(\\mathcal {B}+\\lambda \\mathcal {B}^\\prime )(d+1)N^{d+3}\\frac{\\log n}{n}+C_{s,d,\\mathcal {X}}(1+\\lambda ) \\Vert f_0\\Vert _{C^s}N^{-(s-1)}\\Big ],$ where $C_0>0$ is a universal constant, $c_{K,k}$ is defined in Lemma REF and $C_{s,d,\\mathcal {X}}$ is a positive constant depending only on $d,s$ and the diameter of the support $\\mathcal {X}\\times (0,1)$ .", "By setting $N=\\lfloor n^{1/\\lbrace (d+s+2)\\rbrace }\\rfloor $ and $\\lambda =\\log n$ in (REF ), we obtain an upper bound $\\mathbb {E}\\lbrace \\Delta ^2(\\hat{f}^\\lambda _n,f_0)\\rbrace &\\le C_1(\\log n)^{2} n^{-\\frac{s-1}{d+s+2}},$ where $C_1>0$ is a constant depending only on $\\mathcal {B},\\mathcal {B}^\\prime ,s,d,K,k,\\mathcal {X}$ and $ \\Vert f_0\\Vert _{C^s}$ .", "Without the crossing penalty in the objective function, no estimation for the derivative function is needed, thus the convergence rate can be improved.", "In this case, ReLU activated or other neural networks can be used to estimate the quantile regression process.", "For instance, [44] showed that nonparametric quantile regression based on ReLU neural networks attains a convergence rate of $n^{-s/(d+s)}$ up to a logarithmic factor.", "This rate is slightly faster than the rate $n^{-(s-1)/(d+s+2)}$ in Theorem REF when estimation of the derivative function is involved." 
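As a quick illustration of how Theorems REF and REF translate into practice, the following snippet computes the prescribed network sizes, the tuning parameter, and the resulting rate for a given sample size $n$, input dimension $d$ and smoothness $s$. It is bookkeeping of the stated bounds only; the unspecified constants are ignored.

```python
# Illustrative helper: plug the theorem's prescriptions into concrete numbers.
import math

def theoretical_config(n, d, s):
    N = int(n ** (1.0 / (d + s + 2)))        # N = floor(n^{1/(d+s+2)})
    lam = math.log(n)                         # tuning parameter lambda = log n
    config = {
        "depth (hidden layers) <=": 2 * N - 1,
        "width <=": 12 * N ** d,
        "neurons <=": 15 * N ** (d + 1),
        "parameters <=": 24 * N ** (d + 1),
        "lambda": lam,
        # rate (log n)^2 * n^{-(s-1)/(d+s+2)}, up to constants
        "rate": math.log(n) ** 2 * n ** (-(s - 1) / (d + s + 2)),
    }
    return N, config

# Example: n = 512 observations, d = 1 covariate, smoothness s = 2.
print(theoretical_config(512, d=1, s=2))
```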
], [ "Stochastic error", "Now we derive non-asymptotic upper bound for the stochastic error given in Lemma REF .", "The main difficulty here is that the term $ \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)=\\mathcal {R}(\\hat{f}^\\lambda _n)-2\\mathcal {R}_n(\\hat{f}^\\lambda _n)+\\mathcal {R}(f_0)+\\lambda \\kappa (\\hat{f}^\\lambda _n)-2\\lambda \\kappa _n(\\hat{f}^\\lambda _n)$ involves the partial derivatives of the neural network functions in $\\mathcal {F}_n.$ Thus we also need to study the properties, especially, the complexity of the partial derivatives of the neural network functions in $\\mathcal {F}_n$ .", "Let $\\mathcal {F}_n^\\prime :=\\Big \\lbrace \\frac{\\partial }{\\partial \\tau }f(x,\\tau ):f\\in \\mathcal {F}_n, (x, \\tau ) \\in \\mathcal {X} \\times (0, 1) \\Big \\rbrace .$ Note that the partial derivative operator is not a Lipschitz contraction operator, thus Talagrand's lemma [32] cannot be used to link the Rademacher complexity of $\\mathcal {F}_n$ and $\\mathcal {F}_n^\\prime $ , and to obtain an upper bound of the Rademacher complexity of $\\mathcal {F}_n^\\prime $ .", "In view of this, we consider a new class of neural network functions whose complexity is convenient to compute.", "Then the complexity of $\\mathcal {F}_n^\\prime $ can be upper bounded by the complexity of such a class of neural network functions.", "The following lemma shows that $\\mathcal {F}_n^\\prime $ is contained in the class of neural network functions with ReLU and ReQU mixed-activated multilayer perceptrons.", "In the following, we refer to the neural networks activated by the ReLU or the ReQU as ReLU-ReQU activated neural networks, i.e., the activation functions in each layer of ReLU-ReQU network can be ReLU or ReQU and the activation functions in different layers can be different.", "Lemma 3 (Network for partial derivative) Let $\\mathcal {F}_n:=\\mathcal {F}_{\\mathcal {D},\\mathcal {W}, \\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ be a class of ReQU activated neural networks $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ with depth (number of hidden layer) $\\mathcal {D}$ , width (maximum width of hidden layer) $\\mathcal {W}$ , number of neurons $\\mathcal {U}$ , number of parameters (weights and bias) $\\mathcal {S}$ and $f$ satisfying $\\Vert f \\Vert _\\infty \\le \\mathcal {B}$ and $\\Vert \\frac{\\partial }{\\partial \\tau }f \\Vert _\\infty \\le \\mathcal {B}^\\prime $ .", "Then for any $f\\in \\mathcal {F}_n$ , the partial derivative $\\frac{\\partial }{\\partial \\tau } f$ can be implemented by a ReLU-ReQU activated multilayer perceptron with depth $3\\mathcal {D}+3$ , width $10\\mathcal {W}$ , number of neurons $17\\mathcal {U}$ , number of parameters $23\\mathcal {S}$ and bound $\\mathcal {B}^\\prime $ .", "By Lemma REF , the partial derivative of a function in $\\mathcal {F}_n$ can be implemented by a function in $\\mathcal {F}^\\prime _n$ .", "Consequently, for $\\kappa $ and $\\kappa _n$ given in (REF ) and (REF ), $\\sup _{f\\in \\mathcal {F}_n}\\vert \\kappa (f)-\\kappa _n(f)\\vert \\le &\\sup _{f^\\prime \\in \\mathcal {F}^\\prime _n}\\vert \\tilde{\\kappa }(f^\\prime )-\\tilde{\\kappa }_n(f^\\prime )\\vert ,$ where $\\tilde{\\kappa }(f)=\\mathbb {E}[\\max \\lbrace -f(X,\\xi ),0\\rbrace ]$ and $\\tilde{\\kappa }_n(f)=\\sum _{i=1}^n[\\max \\lbrace -f(X_i,\\xi _i),0\\rbrace ]/n$ .", "Note that $\\tilde{\\kappa }$ and $\\tilde{\\kappa }_n$ are both 1-Lipschitz 
in $f$ , thus an upper bound for $\\sup _{f^\\prime \\in \\mathcal {F}^\\prime _n}\\vert \\tilde{\\kappa }(f^\\prime )-\\tilde{\\kappa }_n(f^\\prime )\\vert $ can be derived once the complexity of $\\mathcal {F}^\\prime _n$ is known.", "The complexity of a function class can be measured in several ways, including Rademacher complexity, covering number, VC dimension and Pseudo dimension.", "These measures depict the complexity of a function class differently but are closely related to each other in many ways (a brief description of these measures can be found in Appendix ).", "Next, we give an upper bound on the Pseudo dimension of the function class $\\mathcal {F}^\\prime _n$ , which facilities our derivation of the upper bound for the stochastic error.", "Lemma 4 (Pseudo dimension of ReLU-ReQU multilayer perceptrons) Let $\\mathcal {F}$ be a function class implemented by ReLU-ReQU activated multilayer perceptrons with depth no more than $\\tilde{\\mathcal {D}}$ , width no more than $\\tilde{\\mathcal {W}}$ , number of neurons (nodes) no more than $\\tilde{\\mathcal {U}}$ and size or number of parameters (weights and bias) no more than $\\tilde{\\mathcal {S}}$ .", "Then the Pseudo dimension of $\\mathcal {F}$ satisfies $ {\\rm Pdim}(\\mathcal {F})\\le \\min \\lbrace 7\\tilde{\\mathcal {D}}\\tilde{\\mathcal {S}}(\\tilde{\\mathcal {D}}+\\log _2\\tilde{\\mathcal {U}}),22\\tilde{\\mathcal {U}}\\tilde{\\mathcal {S}}\\rbrace .$ Theorem 3 (Stochastic error bound) Let $\\mathcal {F}_n=\\mathcal {F}_{\\mathcal {D},\\mathcal {W},\\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ be the ReQU activated multilayer perceptron and let $\\mathcal {F}^\\prime _n=\\lbrace \\frac{\\partial }{\\partial \\tau } f:f\\in \\mathcal {F}_{n}\\rbrace $ denote the class of first order partial derivatives.", "Then for $n\\ge \\max \\lbrace {\\rm Pdim}(\\mathcal {F}_n),{\\rm Pdim}(\\mathcal {F}^\\prime _n)\\rbrace $ , the stochastic error satisfies $&\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace \\le c_0\\big \\lbrace \\mathcal {B}{\\rm Pdim}(\\mathcal {F}_n)+\\lambda \\mathcal {B}^\\prime {\\rm Pdim}(\\mathcal {F}_n^\\prime )\\big \\rbrace \\frac{\\log (n)}{n},$ for some universal constant $c_0>0$ .", "Also, $&\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace \\\\&\\qquad \\qquad \\le c_1\\big (\\mathcal {B}+\\lambda \\mathcal {B}^\\prime \\big )\\min \\lbrace 5796\\mathcal {D}\\mathcal {S}(\\mathcal {D}+\\log _2\\mathcal {U}),8602\\mathcal {U}\\mathcal {S}\\rbrace \\frac{\\log (n)}{n},$ for some universal constant $c_1>0$ .", "The proofs of Lemma REF and Theorem REF are given in the Appendix." 
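The following short snippet illustrates how Lemmas REF and REF combine: given the size parameters of the ReQU class $\mathcal{F}_n$, it forms the ReLU-ReQU network realizing the partial derivative and evaluates the pseudo dimension bound that enters the stochastic error bound of Theorem REF. The example sizes are illustrative only.

```python
# Bookkeeping sketch of the complexity bounds in Lemmas 3-4 (constants from the lemmas).
import math

def pdim_bound(D, U, S):
    # Lemma 4: Pdim <= min{ 7*D*S*(D + log2 U), 22*U*S }
    return min(7 * D * S * (D + math.log2(U)), 22 * U * S)

def derivative_network_pdim(D, U, S):
    # Lemma 3: df/dtau is realized by a ReLU-ReQU network with depth 3D+3,
    # width 10W, neurons 17U and parameters 23S; only (D, U, S) enter the bound.
    return pdim_bound(3 * D + 3, 17 * U, 23 * S)

# Example with N = 3, d = 1 (sizes from Theorem 1): D = 2N-1, U = 15N^2, S = 24N^2.
D, U, S = 5, 135, 216
print("Pdim(F_n)  <=", pdim_bound(D, U, S))
print("Pdim(F_n') <=", derivative_network_pdim(D, U, S))
```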
], [ "Approximation error", "In this section, we give an upper bound on the approximation error of the ReQU network for approximating functions in $C^s$ defined in Definition REF .", "The ReQU activation function has a continuous first order derivative and its first order derivative is the popular ReLU function.", "With ReQU as the activation function, the network is smooth and differentiable.", "Therefore, ReQU is a suitable choice for our problem since derivatives are involved in the penalty function.", "An important property of ReQU is that it can represent the square function $x^2$ without error.", "In the study of ReLU network approximation properties [49], [50], [43], the analyses rely essentially on the fact that $x^2$ can be approximated by deep ReLU networks to any error tolerance as long as the network is large enough.", "With ReQU activated networks, $x^2$ can be represented exactly with one hidden layer and 2 hidden neurons.", "ReQU can be more efficient in approximating smooth functions in the sense that it requires a smaller network size to achieve the same approximation error.", "Now we state some basic approximation properties of ReQU networks.", "The analysis of the approximation power of ReQU networks in our work basically rests on the fact that given inputs $x,y\\in \\mathbb {R}$ , the powers $x,x^2$ and the product $xy$ can be exactly computed by simple ReQU networks.", "Let $\\sigma _2(x)=[\\max \\lbrace x,0\\rbrace ]^2$ denote the ReQU activation function.", "We first list the following basic properties of the ReQU approximation: (1) For any $x\\in \\mathbb {R}$ , the square function $x^2$ can be computed by a ReQU network with 1 hidden layer and 2 neurons, i.e., $x^2=&\\sigma _2(x)+\\sigma _2(-x).$ (2) For any $x,y\\in \\mathbb {R}$ , the multiplication function $xy$ can be computed by a ReQU network with 1 hidden layer and 4 neurons, i.e., $xy=\\frac{1}{4}\\lbrace \\sigma _2(x+y)+\\sigma _2&(-x-y)-\\sigma _2(x-y)-\\sigma _2(-x+y)\\rbrace .$ (3) For any $x\\in \\mathbb {R}$ , taking $y=1$ in the above equation, then the identity map $x\\mapsto x$ can be computed by a ReQU network with 1 hidden layer and 4 neurons, i.e., $x=\\frac{1}{4}\\lbrace \\sigma _2(x+1)+\\sigma _2(-x-1)-\\sigma _2(x-1)-\\sigma _2(-x+1)\\rbrace .$ (4) If both $x$ and $y$ are non-negative, the formulas for square function and multiplication can be simplified as follows: $x^2=\\sigma _2(x),\\qquad xy=\\frac{1}{4}\\lbrace \\sigma _2(x+y)-\\sigma _2(x-y)-\\sigma _2(-x+y)\\rbrace .$ The above equations can be verified using simple algebra.", "The realization of the identity map is not unique here, since for any $a\\ne 0$ , we have $x=\\lbrace (x+a)^2-x^2-a^2\\rbrace /(2a)$ which can be exactly realized by ReQU networks.", "In addition, the constant function 1 can be computed exactly by a 1-layer ReQU network with zero weight matrix and constant 1 bias vector.", "In such a case, the basis $1,x,x^2,\\ldots ,x^p$ of the degree $p\\in \\mathbb {N}_0$ polynomials in $\\mathbb {R}$ can be computed by a ReQU network with proper size.", "Therefore, any $p$ -degree polynomial can be approximated without error.", "To approximate the square function in (1) with ReLU networks on bounded regions, the idea of using “sawtooth\" functions was first raised in [49], and it achieves an error $\\mathcal {O}(2^{-L})$ with width 6 and depth $\\mathcal {O}(L)$ for positive integer $L\\in \\mathbb {N}^+$ .", "General construction of ReLU networks for approximating a square function can achieve an error $N^{-L}$ with width 
$3N$ and depth $L$ for any positive integers $N, L\\in \\mathbb {N}^+$ [35].", "Based on this basic fact, ReLU networks approximating multiplication and polynomials can be constructed correspondingly.", "However, the network complexity (cost) in terms of network size (depth and width) for a ReLU network to achieve a precise approximation can be large compared to that of a ReQU network, since a ReQU network can compute polynomials exactly with fewer layers and neurons.", "Theorem 4 (Approximation of Polynomials by ReQU networks) For any non-negative integer $N\\in \\mathbb {N}_0$ and any positive integer $d\\in \\mathbb {N}^+$ , if $f:\\mathbb {R}^d\\rightarrow \\mathbb {R}$ is a polynomial of $d$ variables with total degree $N$ , then there exists a ReQU activated neural network that can compute $f$ with no error.", "More exactly, (1) if $d=1$ and $f(x)=\\sum _{i=1}^Na_ix^i$ is a univariate polynomial with degree $N$ , then there exists a ReQU neural network with $2N-1$ hidden layers, $5N-1$ neurons, $8N$ parameters (weights and biases) and network width 4 that computes $f$ with no error.", "(2) If $d\\ge 2$ and $f(x_1,\\ldots ,x_d)=\\sum _{i_1+\\ldots +i_d=0}^Na_{i_1,\\ldots ,i_d}x_1^{i_1}\\cdots x_d^{i_d}$ is a multivariate polynomial of $d$ variables with total degree $N$ , then there exists a ReQU neural network with $2N-1$ hidden layers, $2(5N-1)N^{d-1}+(5N-1)\\sum _{j=1}^{d-2}N^j\\le 15N^d$ neurons, $16N^{d}+8N\\sum _{j=1}^{d-2}N^j\\le 24N^d$ parameters (weights and biases) and network width $8N^{d-1}+4\\sum _{j=1}^{d-2}N^j\\le 12N^{d-1}$ that computes $f$ with no error.", "Theorem REF shows that any $d$ -variate polynomial of degree $N$ on $\\mathbb {R}^d$ can be represented with no error by a ReQU network with $2N-1$ hidden layers, no more than $15N^d$ neurons, no more than $24N^d$ parameters (weights and biases) and width less than $12N^{d-1}$ .", "The approximation power of ReQU networks (and RePU networks) on polynomials is studied in [33], [34], in which the representation of a $d$ -variate polynomial of degree $N$ on $\\mathbb {R}^d$ needs a ReQU network with $d\\lfloor \\log _2N\\rfloor +d$ hidden layers, and no more than $\\mathcal {O}(\\binom{N+d}{d})$ neurons and parameters.", "Compared to the results in [34], [33], the orders of the numbers of neurons and parameters for the ReQU network constructed in Theorem REF are basically the same.", "The number of hidden layers for the ReQU network constructed here is $2N-1$ , which depends only on the degree of the target polynomial and is independent of the input dimension $d$ ; this differs from the dimension-dependent $d\\lfloor \\log _2N\\rfloor +d$ hidden layers required in [34].", "In addition, ReLU activated networks with width $\\lbrace 9(W+1)+N-1\\rbrace N^{d}=\\mathcal {O}(WN^d)$ and depth $7N^2L=\\mathcal {O}(LN^2)$ can only approximate a $d$ -variate polynomial of degree $N$ with accuracy $9N(W+1)^{-7NL}=\\mathcal {O}(NW^{-LN})$ for any positive integers $W,L\\in \\mathbb {N}^+$ .", "Note that the approximation results on polynomials using ReLU networks generally hold only on bounded regions, while ReQU networks can compute polynomials exactly on $\\mathbb {R}^d$ .", "In this sense, the approximation power of ReQU networks is generally greater than that of ReLU networks.", "Next, we leverage the approximation power of ReQU networks on multivariate polynomials to derive error bounds for approximating general multivariate smooth functions using ReQU activated neural networks.",
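Before turning to smooth functions, we note that the exact ReQU identities stated at the beginning of this section are easy to verify numerically; the short check below is a standalone sketch using PyTorch.

```python
# Numerical check of the exact ReQU identities:
#   x^2 = sigma2(x) + sigma2(-x)
#   xy  = (1/4)[sigma2(x+y) + sigma2(-x-y) - sigma2(x-y) - sigma2(-x+y)]
import torch

def sigma2(x):                      # ReQU: squared ReLU
    return torch.relu(x) ** 2

x = torch.randn(1000)
y = torch.randn(1000)

square = sigma2(x) + sigma2(-x)
product = 0.25 * (sigma2(x + y) + sigma2(-x - y) - sigma2(x - y) - sigma2(-x + y))
identity = 0.25 * (sigma2(x + 1) + sigma2(-x - 1) - sigma2(x - 1) - sigma2(-x + 1))

assert torch.allclose(square, x ** 2)
assert torch.allclose(product, x * y, atol=1e-5)
assert torch.allclose(identity, x, atol=1e-5)
print("ReQU identities verified on random inputs.")
```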
"Here we focus on the approximation of multivariate smooth functions in $C^s$ space for $s\\in \\mathbb {N}^+$ defined in Definition REF .", "Theorem 5 Let $f$ be a real-valued function defined on $\\mathcal {X}\\times (0,1)\\subset \\mathbb {R}^{d+1}$ belonging to class $C^s$ for $0\\le s<\\infty $ .", "For any $N\\in \\mathbb {N}^+$ , there exists a ReQU activated neural network $\\phi _N$ with width no more than $12N^{d}$ , hidden layers no more than $2N-1$ , number of neurons no more than $15N^{d+1}$ and parameters no more than $24N^{d+1}$ such that for each multi-index $\\alpha \\in \\mathbb {N}^d_0$ , we have $\\vert \\alpha \\vert _1\\le \\min \\lbrace s,N\\rbrace $ , $\\sup _{\\mathcal {X}\\times (0,1)}\\vert D^\\alpha (f-\\phi _N)\\vert \\le C_{s,d,\\mathcal {X}}\\, N^{-(s-\\vert \\alpha \\vert _1)}\\Vert f\\Vert _{C^s},$ where $C_{s,d,\\mathcal {X}}$ is a positive constant depending only on $d,s$ and the diameter of $\\mathcal {X}\\times (0,1)$ .", "In [33], [34], a similar rate of convergence $\\mathcal {O}(N^{-(s-\\alpha )})$ under the Jacobi-weighted $L^2$ norm was obtained for the approximation of $\\alpha $ -th derivative of a univariate target function, where $\\alpha \\le s\\le N+1$ and $s$ denotes the smoothness of the target function belonging to Jacobi-weighted Sobolev space.", "The ReQU network in [34] has a different shape from ours specified in Theorem REF .", "The results of [34] achieved a $\\mathcal {O}(N^{-(s-\\alpha )})$ rate using a ReQU network with $\\mathcal {O}(\\log _2(N))$ hidden layers, $\\mathcal {O}(N)$ neurons and nonzero weights and width $\\mathcal {O}(N)$ .", "Simultaneous approximation to the target function and its derivatives by a ReQU network was also considered in [16] for solving partial differential equations for $d$ -dimensional smooth target functions in $C^2$ .", "Now we assume that the target function $f_0:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ in our QRP estimation problem belongs to the smooth function class $C^s$ for some $s\\in \\mathbb {N}^+$ .", "The approximation error $\\inf _{f\\in \\mathcal {F}_n}\\Big [\\mathcal {R}(f)-\\mathcal {R}(f_0)+\\lambda \\lbrace \\kappa (f)-\\kappa (f_0)\\rbrace \\Big ]$ given in Lemma REF can be handled correspondingly.", "Corollary 1 (Approximation error bound) Suppose that the target function $f_0$ defined in (REF ) belongs to $C^s$ for some $s\\in \\mathbb {N}^+$ .", "For any $N\\in \\mathbb {N}^+$ , let $\\mathcal {F}_n:=\\mathcal {F}_{\\mathcal {D},\\mathcal {W},\\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ be the ReQU activated neural networks $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ with depth (number of hidden layer) $\\mathcal {D}\\le 2N-1$ , width $\\mathcal {W}\\le 12N^d$ , number of neurons $\\mathcal {U}\\le 15N^{d+1}$ , number of parameters (weights and bias) $\\mathcal {S}\\le 24N^{d+1}$ , satisfying $\\mathcal {B}\\ge \\Vert f_0\\Vert _{C^0}$ and $\\mathcal {B}^\\prime \\ge \\Vert f_0\\Vert _{C^1}$ .", "Then the approximation error given in Lemma REF satisfies $\\inf _{f\\in \\mathcal {F}_n}\\Big [\\mathcal {R}(f)-\\mathcal {R}(f_0)+\\lambda \\lbrace \\kappa (f)-\\kappa (f_0)\\rbrace \\Big ]\\le C_{s,d,\\mathcal {X}} (1+\\lambda ) N^{-(s-1)}\\Vert f_0\\Vert _{C^s},$ where $C_{s,d,\\mathcal {X}}$ is a positive constant depending only on $d,s$ and the diameter of the support $\\mathcal {X}\\times (0,1)$ ." 
], [ "Computation", "In this section, we describe the training algorithms for the proposed penalized DQRP estimator, including a generic algorithm and an improved algorithm.", "A stochastic gradient descent algorithm for the penalized DQRP estimator.", "Input: sample data $\\lbrace (X_i,Y_i)\\rbrace _{i=1}^n$ with $n\\ge 1$ and minibatch size $m\\le n$ .", "Generate $n$ random values $\\lbrace \\xi _i\\rbrace _{i=1}^n$ uniformly from $(0,1)$ .", "For the given number of training iterations: sample a minibatch of $m$ data $\\lbrace (X^{(j)},Y^{(j)},\\xi ^{(j)})\\rbrace _{j=1}^m$ from the data $\\lbrace (X_i,Y_i,\\xi _i)\\rbrace _{i=1}^n$ and update the ReQU network $f$ parametrized by $\\theta $ by descending its stochastic gradient: $\\nabla _\\theta \\frac{1}{m} \\sum _{j=1}^{m}\\Big [ \\rho _{\\xi ^{(j)}}(Y^{(j)}-f(X^{(j)},\\xi ^{(j)}))+\\lambda \\max \\Big \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X^{(j)},\\xi ^{(j)}),0\\Big \\rbrace \\Big ].$", "The gradient-based updates can use any standard gradient-based algorithm.", "We used Adam in our experiments.", "In Algorithm , the number of random values $\\lbrace \\xi _i\\rbrace _{i=1}^n$ is set to be the same as the sample size $n$ and each $\\xi _i$ is coupled with the sample $(X_i,Y_i)$ for $i=1,\\ldots ,n$ during the training process.", "This may degrade the efficiency of learning the DQRP $\\hat{f}^\\lambda _n$ , since each data point $(X_i,Y_i)$ is only used to train the ReQU network $f(\\cdot ,\\xi _i)$ at a single quantile level $\\xi _i$ .", "Hence, we propose an improved algorithm.", "An improved stochastic gradient descent algorithm for the penalized DQRP estimator.", "Input: sample data $\\lbrace (X_i,Y_i)\\rbrace _{i=1}^n$ with $n\\ge 1$ and minibatch size $m\\le n$ .", "For the given number of training iterations: sample a minibatch of $m$ data $\\lbrace (X^{(j)},Y^{(j)})\\rbrace _{j=1}^m$ from the data $\\lbrace (X_i,Y_i)\\rbrace _{i=1}^n$ , generate $m$ random values $\\lbrace \\xi _j\\rbrace _{j=1}^m$ uniformly from $(0,1)$ , and update the ReQU network $f$ parametrized by $\\theta $ by descending its stochastic gradient: $\\nabla _\\theta \\frac{1}{m} \\sum _{j=1}^{m}\\Big [ \\rho _{\\xi _j}(Y^{(j)}-f(X^{(j)},\\xi _j))+\\lambda \\max \\Big \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X^{(j)},\\xi _j),0\\Big \\rbrace \\Big ].$", "The gradient-based updates can again use any standard gradient-based algorithm; we used Adam in our experiments.", "In Algorithm , at each minibatch training iteration, $m$ random values $\\lbrace \\xi _j\\rbrace _{j=1}^m$ are generated uniformly from $(0,1)$ and coupled with the minibatch sample $\\lbrace (X^{(j)},Y^{(j)})\\rbrace _{j=1}^m$ for the gradient-based updates.", "In this case, each sample $(X_i,Y_i)$ gets involved in the training of the ReQU network $f(\\cdot ,\\xi )$ at multiple quantile levels $\\xi =\\xi ^{(1)}_i,\\ldots ,\\xi ^{(t)}_i$ , where $t$ denotes the number of minibatch iterations and $\\xi ^{(j)}_i$ , $j=1,\\ldots ,t$ , denotes the random value generated at iteration $j$ that is coupled with the sample $(X_i,Y_i)$ .", "In this way, the utilization of each sample $(X_i,Y_i)$ is greatly improved while the computational complexity does not increase compared to the generic Algorithm .", "We use an example to demonstrate the advantage of Algorithm over Algorithm .", "Figure REF displays a comparison between Algorithm and Algorithm on the same simulated dataset generated from the “Wave” model (see Section for a detailed description of the simulation model).", "The sample size is $n=512$ , and the tuning parameter is chosen as $\\lambda =\\log (n)$ .",
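Before presenting this comparison, we note that the improved scheme amounts to a very small change in the training loop; a minimal PyTorch sketch is given below. It reuses the ReQU network class and penalized risk function from the earlier sketch, and the optimizer settings (Adam with learning rate 0.01, $\beta=(0.9,0.99)$) and $\lambda=\log n$ mirror those reported in the numerical studies, while the width, depth and batch size are illustrative assumptions.

```python
# Minimal training loop in the spirit of the improved algorithm: fresh quantile
# levels xi are drawn from Unif(0,1) at every minibatch and coupled with it.
# ReQUNet and penalized_risk are the illustrative sketches given earlier.
import math
import torch

def train_dqrp(X, Y, width=256, depth=3, epochs=200, batch_size=128, lr=0.01):
    n, d = X.shape
    lam = math.log(n)                                   # lambda = log(n)
    model = ReQUNet(d, width=width, depth=depth)
    opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.99))
    for _ in range(epochs):
        perm = torch.randperm(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            xb, yb = X[idx], Y[idx]
            xi = torch.rand(len(idx), 1)                # fresh quantile levels each step
            loss = penalized_risk(model, xb, yb, xi, lam)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

# Usage: model = train_dqrp(X_train, Y_train); then model(x, tau) estimates the
# conditional tau-quantile at x for any tau in (0, 1).
```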
"Two ReQU neural networks with same architecture (width of hidden layers $(256,256,256)$ ) are trained for 200 epochs by Algorithm and Algorithm , respectively.", "The example and the simulation studies in section show that Algorithm has a better and more stable performance than Algorithm without additional computational complexity.", "Figure: A comparison of Algorithms and .", "The 512 training data generated from the “Wave\" model are depicted as black dots.", "The target quantile functions at quantile levels 0.05 (blue), 0.25 (orange), 0.5 (green), 0.75 (red), 0.95 (purple) are depicted as dashed curves, and the estimated quantile functions are the solid curves with the same color.", "In the left panel, the estimator is trained by Algorithm .", "In the right panel, the estimator is trained by the improved Algorithm .", "Both trainings stop after 200 epochs." ], [ "Numerical studies", "In this section, we compare the proposed penalized deep quantile regression with the following nonparametric quantile regression methods: Kernel-based nonparametric quantile regression [40], denoted by kernel QR.", "This is a joint quantile regression method based on a vector-valued reproducing kernel Hilbert space (RKHS), which enjoys fewer quantile crossings and enhanced performance compared to the estimation of the quantile functions separately.", "In our implementation, the radial basis function (RBF) kernel is chosen and a coordinate descent primal-dual algorithm [18] is used via the Python package qreg.", "Quantile regression forests [37], denoted by QR Forest.", "Conditional quantiles can be estimated using quantile regression forests, a method based on random forests.", "Quantile regression forests can nonparametrically estimate quantile regression functions with high-dimensional predictor variables.", "This method is shown to be consistent in [37].", "Penalized DQRP estimator as described in Section , denoted by DQRP.", "We implement it in Python via Pytorch and use Adam [27] as the optimization algorithm with default learning rate 0.01 and default $\\beta =(0.9,0.99)$ (coefficients used for computing running averages of gradients and their squares)." 
], [ "Estimation and Evaluation", "For the proposed penalized DQRP estimator, we set the tuning parameter $\\lambda =\\log (n)$ across the simulations.", "Since the Kernel QR and QR Forest can only estimate the curves at a given quantile level, we consider using Kernel QR and QR Forest to estimate the quantile curves at 5 different levels for each simulated model, i.e., we estimate quantile curves for $\\tau \\in \\lbrace 0.05,0.25,0.5,0.75,0.95\\rbrace $ .", "For each target $f_0$ , according to model (REF ) we generate the training data $(X_i^{train},Y_i^{train})_{i=1}^n$ with sample size $n$ to train the empirical risk minimizer at $\\tau \\in \\lbrace 0.05,0.25,0.5,0.75,0.95\\rbrace $ using Kernel QR and QR Forest, i.e.", "$\\hat{f}^\\tau _n\\in \\arg \\min _{f\\in \\mathcal {F}} \\frac{1}{n}\\sum _{i=1}^n\\rho _\\tau (Y_i^{train}-f(X_i^{train})),$ where $\\mathcal {F}$ is the class of RKHS, the class of functions for QR forest, or the class of ReQU neural network functions.", "For each $f_0$ , we also generate the testing data $(X_t^{test},Y_t^{test})_{t=1}^T$ with sample size $T$ from the same distribution of the training data.", "For the proposed method and for each obtained estimate $\\hat{f}_n$ , we denote $\\hat{f}_n^\\tau (\\cdot )=\\hat{f}_n(\\cdot ,\\tau )$ for notational simplicity.", "For DQRP, Kernel QR and QR Forest, we calculate the testing error on $(X_t^{test},Y_t^{test})_{t=1}^T$ at different quantile levels $\\tau $ .", "For quantile level $\\tau \\in (0,1)$ , we calculate the $L_1$ distance between $\\hat{f}_n^\\tau $ and the corresponding risk minimizer $f_0^\\tau (\\cdot ):=f_0(\\cdot ,\\tau )$ by $\\Vert \\hat{f}_n^\\tau -f_0^\\tau \\Vert _{L^1(\\nu )}=\\frac{1}{T}\\sum _{t=1}^T \\Big \\vert \\hat{f}_n(X_t^{test},\\tau )-f_0^\\tau (X_t^{test},\\tau )\\Big \\vert ,$ and we also calculate the $L_2^2$ distance between $\\hat{f}^\\tau _n$ and the $f_0^\\tau $ , i.e.", "$\\Vert \\hat{f}^\\tau _n-f_0^\\tau \\Vert ^2_{L^2(\\nu )}=\\frac{1}{T}\\sum _{t=1}^T \\Big \\vert \\hat{f}_n(X_t^{test},\\tau )-f_0^\\tau (X_t^{test},\\tau )\\Big \\vert ^2.$ The specific forms of $f_0$ are given in the data generation models below.", "In the simulation studies, the size of testing data $T=10^5$ for each data generation model.", "We report the mean and standard deviation of the $L_1$ and $L^2_2$ distances over $R = 100$ replications under different scenarios." 
], [ "Univariate models", "We consider three basic univariate models, including “Linear”, “Wave” and “Triangle”, which corresponds to different specifications of the target function $f_0$ .", "The formulae are given below.", "Linear: $f_0(x,\\tau )=2x+F_t^{-1}(\\tau ),$ Wave: $f_0(x,\\tau )=2x\\sin (4\\pi x) +\\vert \\sin (\\pi x)\\vert \\Phi ^{-1}(\\tau ), $ Triangle: $f_0(x,\\tau )=4(1-\\vert x-0.5\\vert )+\\exp (4x-2) \\Phi ^{-1}(\\tau ), $ where where $F_t(\\cdot )$ is the cumulative distribution function of the standard Student's t random variable, $\\Phi (\\cdot )$ is the cumulative distribution function of the standard normal random variable.", "We use the linear model as a baseline model in our simulations and expect all the methods perform well under the linear model.", "The “Wave” is a nonlinear smooth model and the “Triangle” is a nonlinear, continuous but non-differentiable model.", "These models are chosen so that we can evaluate the performance of DQRP, kernel QR and QR Forest under different types of models.", "Figure: The target quantiles curves.From the left to the right, each column corresponds a data generation model, “Linear”, “Wave” and “Triangle”.", "The sample data with size n=512n=512 is depicted as grey dots.The target quantile functions at the quantile levels τ=\\tau =0.05 (blue), 0.25 (orange), 0.5 (green), 0.75 (red), 0.95 (purple) are depicted as solid curves.For these models, we generate $X$ uniformly from the unit interval $[0,1]$ .", "The $\\tau $ -th conditional quantile of the response $Y$ given $X=x$ can be calculated directly based on the expression of $f_0(x,\\tau )$ .", "Figure REF shows all these univariate data generation models and their corresponding conditional quantile curves at $\\tau =0.05,0.25, 0.50, 0.75,0.95$ .", "Figures REF to REF show an instance of the estimated quantile curves for the “Wave” and “Triangle” models.", "The plot for the “Linear” model is included in the Appendix.", "In these plots, the training data is depicted as grey dots.", "The target quantile functions at the quantile levels $\\tau =$ 0.05 (blue), 0.25 (orange), 0.5 (green), 0.75 (red), 0.95 (purple) are depicted as dashed curves, and the estimated quantile functions are represented by solid curves with the same color.", "For each figure, from the top to the bottom, the rows correspond to the sample size $n=512, 2048$ .", "From the left to the right, the columns correspond to the methods DQRP, kernel QR and QR Forest.", "Figure: The fitted quantile curvesunder the univariate “Wave\" model.The training data is depicted as grey dots.The target quantile functions at the quantile levels τ=\\tau =0.05 (blue), 0.25 (orange), 0.5 (green), 0.75 (red), 0.95 (purple) are depicted as dashed curves, and the estimated quantile functions are represented by solid curves with the same color.", "From the top to the bottom, the rows correspond to the sample size n=512,2048n=512,2048.", "From the left to the right, the columns correspond to the methods DQRP, kernel QR and QR Forest.Figure: The fitted quantile curvesunder the univariate “Triangle\" model.The training data is depicted as grey dots.The target quantile functions at the quantile levels τ=\\tau =0.05 (blue), 0.25 (orange), 0.5 (green), 0.75 (red), 0.95 (purple) are depicted as dashed curves, and the estimated quantile functions are represented by solid curves with the same color.", "From the top to the bottom, the rows correspond to the sample sizes n=512,2048n=512,2048.", "From the left to the right, the columns correspond 
"Table: Data is generated from the “Wave\" model with training sample size $n= 512, 2048$ and the number of replications $R = 100$ .", "The averaged $L_1$ and $L_2^2$ test errors with the corresponding standard deviation (in parentheses) are reported for the estimators trained by different methods.", "Table: Data is generated from the “Triangle\" model with training sample size $n= 512, 2048$ and the number of replications $R = 100$ .", "The averaged $L_1$ and $L_2^2$ test errors with the corresponding standard deviation (in parentheses) are reported for the estimators trained by different methods.", "Tables REF and REF summarize the results for the models “Wave” and “Triangle”, respectively.", "For Kernel QR, QR Forest and our proposed DQRP estimator, the corresponding $L_1$ and $L_2^2$ errors (standard deviation in parentheses) between the estimates and the target are reported at different quantile levels $\tau =0.05,0.25,0.50,0.75,0.95$ .", "For each column, we highlight in bold the best method, i.e., the one that produces the smallest risk among the three methods.", "For the “Wave” model, the proposed DQRP outperforms Kernel QR and QR Forest in all the scenarios.", "For the nonlinear “Triangle” model, DQRP also tends to perform better than Kernel QR and QR Forest.", "For the “Linear” model, the results from the three methods are comparable, but Kernel QR tends to have better performance.", "The results for the “Linear” model are given in Table REF in the Appendix." ], [ "Multivariate models", "We consider three basic multivariate models, a linear model (“Linear”), a single index model (“SIM”) and an additive model (“Additive”), which correspond to different specifications of the target function $f_0$ .", "The formulae are given below.", "Linear: $ f_0(x,\tau )=2A^\top x+F_t^{-1}(\tau ),$ Single index model: $f_0(x,\tau )=\exp (0.1\times A^\top x) +\vert \sin (\pi B^\top x)\vert \Phi ^{-1}(\tau ),$ Additive model: $f_0(x,\tau )= 3x_1+4(x_2-0.5)^2+2\sin (\pi x_3)-5\vert x_4-0.5\vert +\exp \lbrace 0.1(B^\top x-0.5)\rbrace \Phi ^{-1}(\tau ),$ where $F_t(\cdot )$ denotes the cumulative distribution function of the standard Student's t random variable, $\Phi (\cdot )$ denotes the cumulative distribution function of the standard normal random variable, and the parameters ($d$ -dimensional vectors) are $A=(0.409, 0.908, 0, 0, -2.061, 0.254, 3.024, 1.280)^\top $ and $B=(1.386, -0.902, 5.437, 0, 0, -0.482, 4.611, 0)^\top .$", "Table: Data is generated from the “Single index model\" with training sample size $n= 512, 2048$ and the number of replications $R = 100$ .", "The averaged $L_1$ and $L_2^2$ test errors with the corresponding standard deviation (in parentheses) are reported for the estimators trained by different methods.", "Table: Data is generated from the “Additive\" model with training sample size $n= 512, 2048$ and the number of replications $R = 100$ .", "The averaged $L_1$ and $L_2^2$ test errors with the corresponding standard deviation (in parentheses) are reported for the estimators trained by different methods.", "The simulation results under the multivariate “SIM\" and “Additive\" models are summarized in Tables REF -REF , respectively.", "For Kernel QR, QR Forest and our proposed DQRP, the corresponding $L_1$ and $L_2^2$ distances (standard deviation in parentheses) between the estimates and the target are reported at different quantile levels $\tau =0.05,0.25,0.50,0.75,0.95$ .",
"For each column, we highlight in bold the best method, i.e., the one that produces the smallest risk among the three methods." ], [ "Tuning Parameter", "In this subsection, we study the effects of the tuning parameter $\lambda $ on the proposed method.", "First, we demonstrate that the “quantile crossing\" phenomenon can be mitigated.", "We apply our method to the bone mineral density (BMD) dataset.", "This dataset was originally reported in [2] and analyzed in [46], [20].", "The data is also available from the website http://www-stat.stanford.edu/ElemStatlearn.", "The dataset collects the bone mineral density data of 485 North American adolescents aged from 9.4 to 25.55 years.", "Each response value is the difference of the bone mineral density taken on two consecutive visits, divided by the average.", "The predictor age is the average age over the two visits.", "In Figure REF , we present the estimated quantile regression processes with ($\lambda =\log (n)$ ) or without ($\lambda =0$ ) the proposed non-crossing penalty.", "With or without the penalty, we use the Adam optimizer with the same parameters (for the optimization process) to train a fixed-shape ReQU network with three hidden layers and width $(128,128,128)$ .", "The estimated quantile curves at $\tau =0.1,0.2,\ldots ,0.9$ and the observations are depicted in Figure REF .", "It can be seen that the proposed non-crossing penalty is effective in avoiding quantile crossing, even in the area outside the range of the training data.", "Figure: An example of the quantile crossing problem in the BMD data set. The estimated quantile curves at $\tau =0.1,0.2,\ldots ,0.9$ and the observations are depicted. In the left panel, the estimation is conducted without the non-crossing penalty and there are crossings at both edges of the graph. In the right panel, the estimation is conducted with the non-crossing penalty. There is no quantile crossing, even in the area outside the range of the training data.", "Second, we study how the value of the tuning parameter $\lambda $ affects the risk of the estimated quantile regression process and how it helps avoid crossing.", "Given a sample of size $n$ , we train a series of DQRP estimators at different values of the tuning parameter $\lambda $ .", "For each DQRP estimator, we record its risk and penalty values, and these values are plotted in Figures REF -REF .", "For each obtained DQRP estimator $\hat{f}^\lambda _n$ , the statistic “Risk\" is calculated according to the formula $\mathcal {R}(\hat{f}^\lambda _n)=\mathbb {E}_{X,Y,\xi }\lbrace \rho _{\xi }(Y-\hat{f}^\lambda _n(X,\xi ))\rbrace ,$ and the statistic “Penalty\" is calculated according to $\kappa (\hat{f}^\lambda _n)=\mathbb {E}_{X,\xi }[\max \lbrace -\frac{\partial }{\partial \tau }\hat{f}^\lambda _n(X,\xi ),0\rbrace ].$ In practice, we generate $T=10,000$ testing data $(X_t^{test},Y_t^{test},\xi _t^{test})_{t=1}^T$ to empirically calculate the risk and penalty values.", "In each figure, a vertical dashed line is also depicted at the value $\lambda =\log (n)$ .", "It can be seen that crossing seldom happens when we choose a tiny value of the tuning parameter $\lambda $ , and the loss caused by the penalty is negligible compared to the total risk, since the penalty values are generally of order $O(10^{-3})$ while the total risk is of order $O(10^{3})$ .", "For large values of the tuning parameter $\lambda $ , the crossing nearly disappears, which is intuitive and encouraged by the formulation of our penalty.",
"However, the risk could be very large, resulting in a poor estimation of the target function.", "As shown by the dashed vertical line in each figure, numerically the choice of $\lambda =\log (n)$ leads to a reasonable estimation of the target function with tiny risk (blue lines) and little crossing (red lines) across different model settings.", "Empirically, we choose $\lambda =\log (n)$ in general for the simulations.", "By Theorem REF , such a choice of the tuning parameter leads to a consistent estimator with a reasonably fast rate of convergence.", "Figure: The values of the risks and penalties under the univariate “Triangle\" model when $n=512, 2048$ .", "A vertical dashed line is depicted at the value $\lambda =\log (n)$ on the x-axis in each figure.", "Figure: The values of the risks and penalties under the multivariate additive model when $n=512, 2048$ and $d=8$ .", "A vertical dashed line is depicted at the value $\lambda =\log (n)$ on the x-axis in each figure." ], [ "Conclusion", "We have proposed a penalized nonparametric approach to estimating the nonseparable model (REF ) using ReQU activated deep neural networks and introduced a novel penalty function to enforce non-crossing quantile curves.", "We have established non-asymptotic excess risk bounds for the estimated QRP and derived the mean integrated squared error for the estimated QRP under mild smoothness and regularity conditions.", "We have also developed a new approximation error bound for $C^s$ smooth functions with smoothness index $s > 0$ using ReQU activated neural networks.", "Our numerical experiments demonstrate that the proposed method is competitive with or outperforms two existing methods, based on reproducing kernels and random forests, for nonparametric quantile regression.", "Therefore, the proposed approach can be a useful addition to the methods for multivariate nonparametric regression analysis.", "The results and methods of this work are expected to be useful in other settings.", "In particular, our approximation results on ReQU activated networks are of independent interest.", "It would be interesting to take advantage of the smoothness of ReQU activated networks and use them in other nonparametric estimation problems, such as the estimation of a regression function and its derivative." ], [ "Proof of Theorems, Corollaries and Lemmas", "In the appendix, we include the proofs for the results stated in Section  and the technical details needed in the proofs."
], [ "Proof of Proposition ", "For any random variable $\\xi $ supported on $(0,1)$ , the risk $\\mathcal {R}(f)=&\\mathbb {E}_{X,Y,\\xi }\\lbrace \\rho _{\\xi }(Y-f(X,\\xi ))\\rbrace \\\\=&\\int _{0}^1\\mathbb {E}_{X,Y}\\lbrace \\rho _{\\xi }(Y-f(X,\\tau ))\\rbrace \\pi _\\xi (\\tau )d\\tau $ where $\\pi _\\xi (\\cdot )\\ge 0$ is the density function of $\\xi $ .", "By the definition of $f_0$ and the property of quantile loss function, it is known $f_0$ minimizes $\\mathbb {E}_{X,Y}\\lbrace \\rho _{\\xi }(Y-f(X,\\tau ))\\rbrace $ as well as $\\mathbb {E}_{X,Y}\\lbrace \\rho _{\\xi }(Y-f(X,\\tau ))\\rbrace \\pi _\\xi (\\tau )$ for each $\\tau \\in (0,1)$ .", "Thus $f_0$ minimizes the integral or the risk $\\mathcal {R}(\\cdot )$ over measurable functions.", "Note that if $\\pi _\\xi (\\tau )=0$ for some $\\tau \\in T$ where $T$ is a subset of $(0,1)$ , then any function $\\tilde{f}_0$ defined on $\\mathcal {X}\\times (0,1)$ that is different from $f_0$ only on $\\mathcal {X}\\times T$ will also be a minimizer of $\\mathcal {R}(\\cdot )$ .", "To be exact, $\\tilde{f}_0\\in \\arg \\min _{f}\\mathcal {R}(f)\\qquad {\\rm if\\ and\\ only\\ if}\\qquad \\tilde{f}_0=f_0{\\rm \\ on\\ }\\mathcal {X}\\times T.$ Further, if $(X,\\xi )$ has non zero density almost everywhere on $\\mathcal {X}\\times (0,1)$ and the probability measure of $(X,\\xi )$ is absolutely continuous with respect to Lebesgue measure, then above defined set $\\mathcal {X}\\times T$ is measure-zero and $f_0$ is the unique minimizer of $\\mathcal {R}(\\cdot )$ over all measurable functions in the sense of almost everywhere(almost surely), i.e., $f_0=\\arg \\min _{f} \\mathcal {R}(f)=\\arg \\min _{f}\\mathbb {E}_{X,Y,\\xi }\\lbrace \\rho _{\\xi }(Y-f(X,\\xi ))\\rbrace ,$ up to a negligible set with respect to the probability measure of $(X,\\eta )$ on $\\mathcal {X}\\times (0,1)$ .", "$\\hfill \\Box $ Recall that $\\hat{f}^\\lambda _n$ is the penalized empirical risk minimizer.", "Then, for any $f\\in \\mathcal {F}_n$ we have $\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)\\le \\mathcal {R}^\\lambda _n(f).$ Besides, for any $f\\in \\mathcal {F}$ we have $\\kappa (f)\\ge 0$ and $\\kappa _n(f)\\ge 0$ since $\\kappa $ and $\\kappa _n$ are nonnegative functions.", "Note that $\\kappa (f_0)=\\kappa _n(f_0)=0$ by the assumption that $f_0$ is increasing in its second argument.", "Then, $\\mathcal {R}(\\hat{f}^\\lambda _n)-\\mathcal {R}(f_0)\\le &\\mathcal {R}(\\hat{f}^\\lambda _n)-\\mathcal {R}(f_0)+\\lambda \\lbrace \\kappa (\\hat{f}^\\lambda _n)-\\kappa (f_0)\\rbrace =\\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-\\mathcal {R}^\\lambda (f_0).$ We can then give upper bounds for the excess risk $\\mathcal {R}(\\hat{f}^\\lambda _n)-\\mathcal {R}(f_0)$ .", "For any $f\\in \\mathcal {F}_n$ , $&\\mathbb {E}\\lbrace \\mathcal {R}(\\hat{f}^\\lambda _n)-\\mathcal {R}(f_0)\\rbrace \\\\&\\le \\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-\\mathcal {R}^\\lambda (f_0)\\rbrace \\\\&\\le \\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-\\mathcal {R}^\\lambda (f_0)\\rbrace +2\\mathbb {E}\\lbrace \\mathcal {R}_n^\\lambda (f)-\\mathcal {R}_n^\\lambda (\\hat{f}^\\lambda _n)\\rbrace \\\\&=\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-\\mathcal {R}^\\lambda (f_0)\\rbrace +2\\mathbb {E}[\\lbrace \\mathcal {R}_n^\\lambda (f)-\\mathcal {R}_n^\\lambda (f_0)\\rbrace -\\lbrace \\mathcal {R}_n^\\lambda (\\hat{f}^\\lambda _n)-\\mathcal {R}_n^\\lambda (f_0)\\rbrace ]\\\\&=\\mathbb {E}\\lbrace \\mathcal 
{R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace +2\\mathbb {E}\\lbrace \\mathcal {R}_n^\\lambda (f)-\\mathcal {R}_n^\\lambda (f_0)\\rbrace $ where the second inequality holds by the the fact that $\\hat{f}^\\lambda _n$ satisfies $\\mathcal {R}_n^\\lambda (f)\\ge \\mathcal {R}_n^\\lambda (\\hat{f}^\\lambda _n)$ for any $f\\in \\mathcal {F}_n$ .", "Since the inequality holds for any $f\\in \\mathcal {F}_n$ , we have $\\mathbb {E}\\lbrace \\mathcal {R}(\\hat{f}^\\lambda _n)-\\mathcal {R}(f_0)\\rbrace &\\le \\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace +2\\inf _{f\\in \\mathcal {F}_n}\\lbrace \\mathcal {R}^\\lambda (f)-\\mathcal {R}^\\lambda (f_0)\\rbrace .$ This completes the proof.", "$\\hfill \\Box $ The proof is straightforward by consequences of Theorem REF and Corollary REF .", "For any $N\\in \\mathbb {N}^+$ , let $\\mathcal {F}_n:=\\mathcal {F}_{\\mathcal {D},\\mathcal {W},\\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ be the ReQU activated neural networks $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ with depth $\\mathcal {D}\\le 2N-1$ , width $\\mathcal {W}\\le 12N^d$ , number of neurons $\\mathcal {U}\\le 15N^{d+1}$ , number of parameters $\\mathcal {S}\\le 24N^{d+1}$ and satisfying $\\mathcal {B}\\ge \\Vert f_0\\Vert _{C^0}$ and $\\mathcal {B}^\\prime \\ge \\Vert f_0\\Vert _{C^1}$ .", "Then we would compare the stochastic error bounds $8602{\\mathcal {U}\\mathcal {S}}$ and $5796{\\mathcal {D}\\mathcal {S}(\\mathcal {D}+\\log _2U)}$ .", "By simple math it can be shown that ${\\mathcal {D}\\mathcal {S}}(\\mathcal {D}+\\log _2U)=\\mathcal {O}(dN^{d+3})$ and $\\mathcal {U}\\mathcal {S}=\\mathcal {O}(N^{2d+2})$ .", "Since $d\\ge 1$ , then we choose apply the upper bound $\\mathcal {D}\\mathcal {S}(\\mathcal {D}+\\log _2U)$ in Theorem REF to get a excess risk bound with lower order in terms of $N$ .", "This completes the proof.", "$\\hfill \\Box $ Let $\\sigma _1(x)=\\max \\lbrace 0,x\\rbrace $ and $\\sigma _2(x)=\\max \\lbrace 0,x\\rbrace ^2$ denote the ReLU and ReQU activation functions respectively.", "Let $(d_0,d_1,\\ldots ,d_{\\mathcal {D}+1})$ be vector of the width (number of neurons) of each layer in the original ReQU network where $d_0=d+1$ and $d_{\\mathcal {D}+1}=1$ in our problem.", "We let $f^{(i)}_j$ be the function (subnetwork of the ReQU network) from $\\mathcal {X}\\times (0,1)\\subset \\mathbb {R}^{d+1}$ to $\\mathbb {R}$ which takes $(X,\\xi )=(x_1,\\ldots ,x_d,x_{d+1})$ as input and outputs the $j$ -th neuron of the $i$ -th layer for $j=1,\\ldots ,d_i$ and $i=1,\\ldots ,\\mathcal {D}+1$ .", "We next construct iteratively ReLU-ReQU activated subnetworks to compute $(\\frac{\\partial }{\\partial \\tau }f^{(i)}_1,\\ldots ,f^{(i)}_{d_i})$ for $i=1,\\ldots ,\\mathcal {D}+1$ , i.e., the partial derivatives of the original ReQU subnetworks step by step.", "We illustrate the details of the construction of the ReLU-ReQU subnetworks for the first two layers ($i=1,2$ ) and the last layer $(\\i =\\mathcal {D}+1)$ and apply induction for layers $i=3,\\ldots ,\\mathcal {D}$ .", "Note that the derivative of ReQU activation function is $\\sigma ^\\prime _2(x)=2\\sigma _1(x)$ , then when $i=1$ for any $j=1,\\ldots ,d_1$ , $\\frac{\\partial }{\\partial \\tau }f^{(1)}_j=\\frac{\\partial }{\\partial \\tau }\\sigma _2\\Big (\\sum _{i=1}^{d+1}w^{(1)}_{ji}x_i+b_j^{(1)}\\Big 
)=2\\sigma _1\\Big (\\sum _{i=1}^{d+1}w^{(1)}_{ji}x_i+b_j^{(1)}\\Big )\\cdot w_{j,d+1}^{(1)},$ where we denote $w^{(1)}_{ji}$ and $b_j^{(1)}$ by the corresponding weights and bias in 1-th layer of the original ReQU network and with a little bit abuse of notation we view $x_{d+1}$ as the argument $\\tau $ and calculate its partial derivative.", "Now we intend to construct a 4 layer (2 hidden layers) ReLU-ReQU network with width $(d_0,3d_1,10d_1,2d_1)$ which takes $(X,\\xi )=(x_1,\\ldots ,x_d,x_{d+1})$ as input and outputs $(f^{(1)}_1,\\ldots ,f^{(1)}_{d_1},\\frac{\\partial }{\\partial \\tau }f^{(1)}_1,\\ldots ,\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1})\\in \\mathbb {R}^{2d_1}.$ Note that the output of such network contains all the quantities needed to calculated $(\\frac{\\partial }{\\partial \\tau }f^{(2)}_1,\\ldots ,\\frac{\\partial }{\\partial \\tau }f^{(2)}_{d_2})$ , and the process of construction can be continued iteratively and the induction proceeds.", "In the firstly hidden layer, we can obtain $3d_1$ neurons $(f^{(1)}_1,\\ldots ,f^{(1)}_{d_1},\\vert w^{(1)}_{1,d_0}\\vert ,\\ldots ,\\vert w^{(1)}_{d_1,d_0}\\vert ,\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{1i}x_i+b^{(1)}_1),\\ldots ,\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{d_1i}x_i+b^{(1)}_{d_1})),$ with weight matrix $A^{(1)}_1$ having $2d_0d_1$ parameters, bias vector $B^{(1)}_1$ and activation function vector $\\Sigma _1$ being $A^{(1)}_1=\\left[\\begin{array}{ccccc}w^{(1)}_{1,1} &w^{(1)}_{1,2} & \\cdots & \\cdots &w^{(1)}_{1,d_0} \\\\w^{(1)}_{2,1} &w^{(1)}_{2,2} &\\cdots & \\cdots &w^{(1)}_{2,d_0} \\\\\\ldots & \\ldots &\\ldots & \\ldots &\\ldots \\\\w^{(1)}_{d_1,1} &w^{(1)}_{d_1,2} &\\cdots &\\cdots &w^{(1)}_{d_1,d_0} \\\\0 & 0& 0&0 & 0\\\\\\ldots & \\ldots &\\ldots & \\ldots &\\ldots \\\\0 & 0& 0&0 & 0\\\\w^{(1)}_{1,1} &w^{(1)}_{1,2} & \\cdots & \\cdots &w^{(1)}_{1,d_0} \\\\w^{(1)}_{2,1} &w^{(1)}_{2,2} &\\cdots & \\cdots &w^{(1)}_{2,d_0} \\\\\\ldots & \\ldots &\\ldots & \\ldots &\\ldots \\\\w^{(1)}_{d_1,1} &w^{(1)}_{d_1,2} &\\cdots &\\cdots &w^{(1)}_{d_1,d_0} \\\\\\end{array}\\right]\\in \\mathbb {R}^{3d_1\\times d_0},\\quad B^{(1)}_1=\\left[\\begin{array}{c}b^{(1)}_1\\\\b^{(1)}_2\\\\\\ldots \\\\b^{(1)}_{d_1}\\\\\\vert w^{(1)}_{1,d_0}\\vert \\\\\\vert w^{(1)}_{2,d_0}\\vert \\\\\\ldots \\\\\\vert w^{(1)}_{d_1,d_0}\\vert \\\\b^{(1)}_1\\\\b^{(1)}_2\\\\\\ldots \\\\b^{(1)}_{d_1}\\\\\\end{array}\\right]\\in \\mathbb {R}^{3d_1},\\quad \\Sigma ^{(1)}_1=\\left[\\begin{array}{c}\\sigma _2\\\\\\ldots \\\\\\sigma _2\\\\\\sigma _1\\\\\\ldots \\\\\\sigma _1\\\\\\sigma _1\\\\\\ldots \\\\\\sigma _1\\\\\\end{array}\\right],$ where the first $d_1$ activation functions of $\\Sigma _1$ are chosen to be $\\sigma _2$ and others $\\sigma _1$ .", "In the second hidden layer, we can obtain $10d_1$ neurons.", "The first $2d_1$ neurons of the second hidden layer (or the third layer) are $(\\sigma _1(f^{(1)}_1),\\sigma _1(-f^{(1)}_1)),\\ldots ,\\sigma _1(f^{(1)}_{d_1}),\\sigma _1(f^{(1)}_{d_1})),$ which intends to implement identity map such that $(f^{(1)}_1,\\ldots ,f^{(1)}_{d_1})$ can be kept and outputted in the next layer since identity map can be realized by $x=\\sigma _1(x)-\\sigma _1(-x)$ .", "The first $8d_1$ neurons of the second hidden layer (or the third layer) are $\\left[\\begin{array}{c}\\sigma _2(w^{(1)}_{1,d_0}+\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{1i}x_i+b^{(1)}_{1}))\\\\\\sigma _2(w^{(1)}_{1,d_0}-\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{1i}x_i+b^{(1)}_{1}))\\\\\\sigma _2(-w^{(1)}_{1,d_0}+\\sigma _1(\\sum 
_{i=1}^{d_0}w^{(1)}_{1i}x_i+b^{(1)}_{1})\\\\\\sigma _2(-w^{(1)}_{1,d_0}-\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{1i}x_i+b^{(1)}_{1}))\\\\\\ldots \\\\\\sigma _2(w^{(1)}_{d_1,d_0}+\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{d_1i}x_i+b^{(1)}_{d_1}))\\\\\\sigma _2(w^{(1)}_{d_1,d_0}-\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{d_1i}x_i+b^{(1)}_{d_1}))\\\\\\sigma _2(-w^{(1)}_{d_1,d_0}+\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{d_1i}x_i+b^{(1)}_{d_1})\\\\\\sigma _2(-w^{(1)}_{d_1,d_0}-\\sigma _1(\\sum _{i=1}^{d_0}w^{(1)}_{d_1i}x_i+b^{(1)}_{d_1}))\\\\\\end{array}\\right]\\in \\mathbb {R}^{8d_1},$ which is ready for implementing the multiplications in (REF ) to obtain $(\\frac{\\partial }{\\partial \\tau }f^{(1)}_1,\\ldots ,\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1})\\in \\mathbb {R}^{d_1}$ since $x\\cdot y=\\frac{1}{4}\\lbrace (x+y)^2-(x-y)^2\\rbrace =\\frac{1}{4}\\lbrace \\sigma _2(x+y)+\\sigma _2(-x-y)-\\sigma _2(x-y)-\\sigma _2(-x+y)\\rbrace .$ In the second hidden layer (the third layer), the bias vector is zero $B^{(1)}_2=(0,\\ldots ,0)\\in \\mathbb {R}^{10d_1}$ , activation functions vector $\\Sigma ^{(1)}_2=(\\underbrace{\\sigma _1,\\ldots ,\\sigma _1}_{2d_1\\ {\\rm times}},\\underbrace{\\sigma _2,\\ldots ,\\sigma _2}_{8d_1\\ {\\rm times}}),$ and the corresponding weight matrix $A^{(1)}_2$ can be formulated correspondingly without difficulty which contains $2d_1+8d_1=10d_1$ non-zero parameters.", "Then in the last layer, by the identity maps and multiplication operations with weight matrix $A^{(1)}_3$ having $2d_1+4d_1=6d_1$ parameters, bias vector $B^{(1)}_3$ being zeros, we obtain $(f^{(1)}_1,\\ldots ,f^{(1)}_{d_1},\\frac{\\partial }{\\partial \\tau }f^{(1)}_1,\\ldots ,\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1})\\in \\mathbb {R}^{2d_1}.$ Such ReLU-ReQU neural network has 2 hidden layers (4 layers), $15d_1$ hidden neurons, $2d_0d_1+3d_1+10d_1+6d_1=2d_0d_1+19d_1$ parameters and its width is $(d_0,3d_1,10d_1,2d_1)$ .", "It worth noting that the ReLU-ReQU activation functions do not apply to the last layer since the construction here is for a single network.", "When we are combining two consecutive subnetworks into one long neural network, the ReLU-ReQU activation functions should apply to the last layer of the first subnetwork.", "Hence, in the construction of the whole big network, the last layer of the subnetwork here should output $4d_1$ neurons $&(\\sigma _1(f^{(1)}_1),\\sigma _1(-f^{(1)}_1)\\ldots ,\\sigma _1(f^{(1)}_{d_1}),\\sigma _1(-f^{(1)}_{d_1}),\\\\&\\qquad \\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(1)}_1),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(1)}_1)\\ldots ,\\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1}),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1}))\\in \\mathbb {R}^{4d_1},$ to keep $(f^{(1)}_1,\\ldots ,f^{(1)}_{d_1},\\frac{\\partial }{\\partial \\tau }f^{(1)}_1,\\ldots ,\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1})$ in use in the next subnetwork.", "Then for this ReLU-ReQU neural network, the weight matrix $A^{(1)}_3$ has $2d_1+8d_1=10d_1$ parameters, the bias vector $B^{(1)}_3$ is zeros and the activation functions vector $\\Sigma ^{(1)}_3$ has all $\\sigma _1$ as elements.", "And such ReLU-ReQU neural network has 2 hidden layers (4 layers), $17d_1$ hidden neurons, $2d_0d_1+3d_1+10d_1+10d_1=2d_0d_1+23d_1$ parameters and its width is $(d_0,3d_1,10d_1,4d_1)$ .", "Now we consider the second step, for any $j=1,\\ldots ,d_2$ , $\\frac{\\partial }{\\partial \\tau }f^{(2)}_j=\\frac{\\partial }{\\partial \\tau }\\sigma _2\\Big (\\sum 
_{i=1}^{d_1}w^{(2)}_{ji}f^{(1)}_{i}+b_j^{(2)}\\Big )=2\\sigma _1\\Big (\\sum _{i=1}^{d_1}w^{(2)}_{ji}f^{(1)}_i+b_j^{(2)}\\Big )\\cdot \\sum _{i=1}^{d_1}w_{j,i}^{(2)}\\frac{\\partial }{\\partial \\tau }f^{(1)}_i,$ where $w^{(2)}_{ji}$ and $b_j^{2)}$ are defined correspondingly as the weights and bias in 2-th layer of the original ReQU network.", "By the previous constructed subnetwork, we can start with its outputs $&(\\sigma _1(f^{(1)}_1),\\sigma _1(-f^{(1)}_1)\\ldots ,\\sigma _1(f^{(1)}_{d_1}),\\sigma _1(-f^{(1)}_{d_1}),\\\\&\\qquad \\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(1)}_1),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(1)}_1)\\ldots ,\\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1}),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(1)}_{d_1}))\\in \\mathbb {R}^{4d_1},$ as the inputs of the second subnetwork we are going to build.", "In the firstly hidden layer of the second subnetwork, we can obtain $3d_2$ neurons $&\\Big (f^{(2)}_1,\\ldots ,f^{(2)}_{d_2},\\vert \\sum _{i=1}^{d_1}w_{1,i}^{(2)}\\frac{\\partial }{\\partial \\tau }f^{(1)}_i\\vert ,\\ldots ,\\vert \\sum _{i=1}^{d_1}w_{d_2,i}^{(2)}\\frac{\\partial }{\\partial \\tau }f^{(1)}_i\\vert ,\\\\&\\qquad \\qquad \\sigma _1(\\sum _{i=1}^{d_1}w^{(2)}_{1i}f^{(1)}_i+b^{(1)}_1),\\ldots ,\\sigma _1(\\sum _{i=1}^{d_1}w^{(2)}_{d_2i}f^{(1)}_i+b^{(2)}_{d_2})\\Big ),$ with weight matrix $A^{(2)}_1\\in \\mathbb {R}^{4d_1\\times 3d_2}$ having $6d_1d_2$ non-zero parameters, bias vector $B^{(2)}_1\\in \\mathbb {R}^{3d_2}$ and activation functions vector $\\Sigma ^{(2)}_1=\\Sigma ^{(1)}_1$ .", "Similarly, the second hidden layer can be constructed to have $10d_2$ neurons with weight matrix $A^{(2)}_2\\in \\mathbb {R}^{3d_2\\times 10d_2}$ having $2d_2+8d_2=10d_2$ non-zero parameters, zero bias vector $B^{(2)}_1\\in \\mathbb {R}^{10d_2}$ and activation functions vector $\\Sigma ^{(2)}_2=\\Sigma ^{(1)}_2$ .", "The second hidden layer here serves exactly the same as that in the first subnetwork, which intends to implement the identity map for $(f^{(2)}_1,\\ldots ,f^{(2)}_{d_2}),$ and implement the multiplication in (REF ).", "Similarly, the last layer can also be constructed as that in the first subnetwork, which outputs $&(\\sigma _1(f^{(2)}_1),\\sigma _1(-f^{(2)}_1)\\ldots ,\\sigma _1(f^{(2)}_{d_2}),\\sigma _1(-f^{(2)}_{d_2}),\\\\&\\qquad \\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(2)}_1),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(2)}_1)\\ldots ,\\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(2)}_{d_2}),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(2)}_{d_2}))\\in \\mathbb {R}^{4d_2},$ with the weight matrix $A^{(2)}_3$ having $2d_2+8d_2=10d_2$ parameters, the bias vector $B^{(2)}_3$ being zeros and the activation functions vector $\\Sigma ^{(1)}_3$ with elements being $\\sigma _1$ .", "Then the second ReLU-ReQU subnetwork has 2 hidden layers (4 layers), $17d_2$ hidden neurons, $6d_1d_2+3d_2+10d_2+10d_2=6d_1d_2+23d_2$ parameters and its width is $(4d_1,3d_2,10d_2,4d_2)$ .", "Then we can continuing this process of construction.", "For integers $k=3,\\ldots ,\\mathcal {D}$ and for any $j=1,\\ldots ,d_{k}$ , $\\frac{\\partial }{\\partial \\tau }f^{(k)}_j&=\\frac{\\partial }{\\partial \\tau }\\sigma _2\\Big (\\sum _{i=1}^{d_{k-1}}w^{(k)}_{ji}f^{(k-1)}_{i}+b_j^{(k)}\\Big )\\\\&=2\\sigma _1\\Big (\\sum _{i=1}^{d_{k-1}}w^{(k)}_{ji}f^{(k-1)}_i+b_j^{(k)}\\Big )\\cdot \\sum _{i=1}^{d_{k-1}}w_{j,i}^{(k)}\\frac{\\partial }{\\partial \\tau }f^{(k-1)}_i,$ where $w^{(k)}_{ji}$ and $b_j^{(k)}$ are defined 
correspondingly as the weights and bias in $k$ -th layer of the original ReQU network.", "We can construct a ReLU-ReQU network taking $&(\\sigma _1(f^{(k-1)}_1),\\sigma _1(-f^{(k-1)}_1)\\ldots ,\\sigma _1(f^{(k-1)}_{d_{k-1}}),\\sigma _1(-f^{(k-1)}_{d_{k-1}}),\\\\&\\qquad \\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(k-1)}_1),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(k-1)}_1)\\ldots ,\\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(k-1)}_{d_{k-1}}),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(k-1)}_{d_{k-1}}))\\in \\mathbb {R}^{4d_{k-1}},$ as input, and it outputs $&(\\sigma _1(f^{(k)}_1),\\sigma _1(-f^{(k)}_1)\\ldots ,\\sigma _1(f^{(k)}_{d_{k}}),\\sigma _1(-f^{(k)}_{d_{k}}),\\\\&\\qquad \\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(k)}_1),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(k)}_1)\\ldots ,\\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(k)}_{d_{k}}),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(k)}_{d_{k}}))\\in \\mathbb {R}^{4d_{k}},$ with 2 hidden layers, $17d_k$ hidden neurons, $6d_{k-1}d_k+23d_k$ parameters and its width is $(4d_{k-1},3d_{k},10d_{k},4d_K)$ .", "Iterate this process until the $k=\\mathcal {D}+1$ step, where the last layer of the original ReQU network has only 1 neurons.", "That is for the ReQU activated neural network $f\\in \\mathcal {F}_n=\\mathcal {F}_{\\mathcal {D},\\mathcal {W},\\mathcal {U},\\mathcal {S},\\mathcal {B}}$ , the output of the network $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ is a scalar and the partial derivative with respect to $\\tau $ is $\\frac{\\partial }{\\partial \\tau }f=\\frac{\\partial }{\\partial \\tau }\\sum _{i=1}^{d_{\\mathcal {D}+1}}w^{(\\mathcal {D})}_{i}f^{(\\mathcal {D})}_i+b^{(\\mathcal {D})}=\\sum _{i=1}^{d_{\\mathcal {D}+1}}w^{(\\mathcal {D})}_{i} \\frac{\\partial }{\\partial \\tau }f^{(\\mathcal {D})}_i,$ where $w^{(\\mathcal {D})}_{i}$ and $b^{(\\mathcal {D})}$ are the weights and bias parameter in the last layer of the ReQU network.", "The the constructed $\\mathcal {D}+1$ -th subnetwork taking $&(\\sigma _1(f^{(\\mathcal {D})}_1),\\sigma _1(-f^{(\\mathcal {D})}_1)\\ldots ,\\sigma _1(f^{(\\mathcal {D})}_{d_{\\mathcal {D}}}),\\sigma _1(-f^{(\\mathcal {D})}_{d_{\\mathcal {D}}}),\\\\&\\qquad \\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(\\mathcal {D})}_1),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(\\mathcal {D})}_1)\\ldots ,\\sigma _1(\\frac{\\partial }{\\partial \\tau }f^{(\\mathcal {D})}_{d_{\\mathcal {D}}}),\\sigma _1(-\\frac{\\partial }{\\partial \\tau }f^{(\\mathcal {D})}_{d_{\\mathcal {D}}}))\\in \\mathbb {R}^{4d_{\\mathcal {D}}},$ as input and it outputs $\\frac{\\partial }{\\partial \\tau }f^{(\\mathcal {D}+1)}=\\frac{\\partial }{\\partial \\tau }f$ which is the partial derivative of the whole ReQU network with respect to its last argument $\\tau $ or $x_{d_0}=x_{d+1}$ here.", "The subnetwork should have 2 hidden layers width $(4d_{\\mathcal {D}},2,8,1)$ with 11 hidden neurons, $4d_{\\mathcal {D}}+2+16=4d_{\\mathcal {D}}+18$ non-zero parameters.", "Lastly, we combing all the $\\mathcal {D}+1$ subnetworks in order to form a big ReLU-ReQU network which takes $(X,\\xi )=(x_1,\\ldots ,x_{d+1})\\in \\mathbb {R}^{d+1}$ as input and outputs $\\frac{\\partial }{\\partial \\tau }f$ for $f\\in \\mathcal {F}_n=\\mathcal {F}_{\\mathcal {D},\\mathcal {W}, \\mathcal {U},\\mathcal {S},\\mathcal {B},\\mathcal {B}^\\prime }$ .", "Recall that here $\\mathcal {D},\\mathcal {W}, \\mathcal {U},\\mathcal {S}$ are the depth, width, number of neurons and number of 
parameters of the ReQU network, respectively, and we have $\mathcal {U}=\sum _{i=0}^{\mathcal {D}+1}d_i$ and $\mathcal {S}=\sum _{i=0}^{\mathcal {D}}(d_id_{i+1}+d_{i+1}).$ Then the big network has $3\mathcal {D}+3$ hidden layers (in total $3\mathcal {D}+5$ layers), $d_0+\sum _{i=1}^{\mathcal {D}}17d_{i}+11\le 17\mathcal {U}$ neurons, $2d_0d_1+23d_1+\sum _{i=1}^\mathcal {D}(6d_{i}d_{i+1}+23d_{i+1})+4d_\mathcal {D}+18\le 23\mathcal {S}$ parameters and its width is $10\max \lbrace d_1,\ldots ,d_\mathcal {D}\rbrace =10\mathcal {W}$ .", "This completes the proof.", "$\hfill \Box $ Our proof has two parts.", "In the first part, we follow the idea of the proof of Theorem 6 in [4] to prove a somewhat stronger result, where we give the upper bound of the Pseudo dimension of $\mathcal {F}$ in terms of the depth, size and number of neurons of the network.", "Instead of the VC dimension of ${\rm sign}(\mathcal {F})$ given in [4], our Pseudo dimension bound is stronger since ${\rm VCdim}({\rm sign}(\mathcal {F}))\le {\rm Pdim}(\mathcal {F})$ .", "In the second part, based on Theorem 2.2 in [19], we also follow and improve the result in Theorem 8 of [4] to give an upper bound of the Pseudo dimension of $\mathcal {F}$ in terms of the size and number of neurons of the network.", "Let $\mathcal {Z}$ denote the domain of the functions $f\in \mathcal {F}$ and let $t\in \mathbb {R}$ ; we consider a new class of functions $\tilde{\mathcal {F}}:=\lbrace \tilde{f}(z,t)={\rm sign}(f(z)-t):f\in \mathcal {F}\rbrace .$ Then it is clear that ${\rm Pdim}(\mathcal {F})\le {\rm VCdim}(\tilde{\mathcal {F}})$ and we next bound the VC dimension of $\tilde{\mathcal {F}}$ .", "Recall that the total number of parameters (weights and biases) in the neural network implementing functions in $\mathcal {F}$ is $\mathcal {S}$ ; we let $\theta \in \mathbb {R}^{\mathcal {S}}$ denote the parameter vector of the network $f(\cdot ,\theta ):\mathcal {Z}\rightarrow \mathbb {R}$ implemented in $\mathcal {F}$ .", "Here we intend to derive a bound for $K(m):=\Big \vert \lbrace ({\rm sign}(f(z_1,\theta )-t_1),\ldots ,{\rm sign}(f(z_m,\theta )-t_m)):\theta \in \mathbb {R}^{\mathcal {S}}\rbrace \Big \vert $ which holds uniformly for all choices of $\lbrace z_i\rbrace _{i=1}^m$ and $\lbrace t_i\rbrace _{i=1}^m$ .", "Note that the maximum of $K(m)$ over all choices of $\lbrace z_i\rbrace _{i=1}^m$ and $\lbrace t_i\rbrace _{i=1}^m$ is just the growth function of $\tilde{\mathcal {F}}$ .", "To give a uniform bound of $K(m)$ , we use Theorem 8.3 in [1] as the main tool in the analysis.", "Lemma 5 (Theorem 8.3 in [1]) Let $p_1,\ldots ,p_m$ be polynomials in $n$ variables of degree at most $d$ .", "If $n\le m$ , define $K:=\vert \lbrace ({\rm sign}(p_1(x)),\ldots ,{\rm sign}(p_m(x))):x\in \mathbb {R}^n\rbrace \vert ,$ i.e.", "$K$ is the number of possible sign vectors given by the polynomials.", "Then $K\le 2(2emd/n)^n$ .", "Now if we can find a partition $\mathcal {P}=\lbrace P_1,\ldots ,P_N\rbrace $ of the parameter domain $\mathbb {R}^{\mathcal {S}}$ such that within each region $P_i$ , the functions $f(z_j,\cdot )$ are all fixed polynomials of bounded degree, then $K(m)$ can be bounded via the following sum $K(m)\le \sum _{i=1}^N\Big \vert \lbrace ({\rm sign}(f(z_1,\theta )-t_1),\ldots ,{\rm sign}(f(z_m,\theta )-t_m)):\theta \in P_i\rbrace \Big \vert ,$ and each term in this sum can be bounded via Lemma 
REF .", "Next, we construct the partition follows the same way as in [4] iteratively layer by layer.", "We define the a sequence of successive refinements $\\mathcal {P}_1,\\ldots ,\\mathcal {P}_{\\mathcal {D}}$ satisfying the following properties: 1.", "The cardinality $\\vert \\mathcal {P}_1\\vert =1$ and for each $n\\in \\lbrace 1,\\ldots ,\\mathcal {D}\\rbrace $ , $\\frac{\\vert \\mathcal {P}_{n+1}\\vert }{\\vert \\mathcal {P}_{n}\\vert }\\le 2\\Big (\\frac{2emk_n(1+(n-1)2^{n-1})}{\\mathcal {S}_n}\\Big )^{\\mathcal {S}_n},$ where $k_n$ denotes the number of neurons in the $n$ -th layer and $\\mathcal {S}_n$ denotes the total number of parameters (weights and biases) at the inputs to units in all the layers up to layer $n$ .", "2.", "For each $n\\in \\lbrace 1,\\ldots ,\\mathcal {D}\\rbrace $ , each element of $P$ of $\\mathcal {P}_n$ , each $j\\in \\lbrace 1,\\ldots ,m\\rbrace $ , and each unit $u$ in the $n$ -th layer, when $\\theta $ varies in $P$ , the net input to $u$ is a fixed polynomial function in $\\mathcal {S}_n$ variables of $\\theta $ , of total degree no more than $1+(n-1)2^{n-1}$ (this polynomial may depend on $P,j$ and $u$ .)", "One can define $\\mathcal {P}_1=\\mathbb {R}^\\mathcal {S}$ , and it can be verified that $\\mathcal {P}_1$ satisfies property 2 above.", "Note that in our case, for fixed $z_j$ and $t_j$ and any subset $P\\subset \\mathbb {R}^{\\mathcal {S}}$ , $f(z_j,\\theta )-t_j$ is a polynomial with respect to $\\theta $ with degree the same as that of $f(z_j,\\theta )$ , which is no more than $1+(\\mathcal {D}-1)2^{\\mathcal {D}-1}$ .", "Then the construction of $\\mathcal {P}_1,\\ldots ,\\mathcal {P}_{\\mathcal {D}}$ and its verification for properties 1 and 2 can follow the same way in [4].", "Finally we obtain a partition $\\mathcal {P}_{\\mathcal {D}}$ of $\\mathbb {R}^\\mathcal {S}$ such that for $P\\in \\mathcal {P}_\\mathcal {D}$ , the network output in response to any $z_j$ is a fixed polynomial of $\\theta \\in P$ of degree no more than $1+(\\mathcal {D}-1)2^{\\mathcal {D}-1}$ (since the last node just outputs its input).", "Then by Lemma REF $\\Big \\vert \\lbrace ({\\rm sign}(f(z_1,\\theta )-t_1),\\ldots ,{\\rm sign}(f(z_m,\\theta )-t_m)):\\theta \\in P\\rbrace \\Big \\vert \\le 2\\Big (\\frac{2em(1+(\\mathcal {D}-1)2^{\\mathcal {D}-1})}{\\mathcal {S}_\\mathcal {D}}\\Big )^{\\mathcal {S}_\\mathcal {D}}.$ Besides, by property 1 we have $\\vert \\mathcal {P}_\\mathcal {D}\\vert &\\le \\Pi _{i=1}^\\mathcal {D-1}2\\Big (\\frac{2emk_i(1+(i-1)2^{i-1})}{\\mathcal {S}_i}\\Big )^{\\mathcal {S}_i}.$ Then using (REF ), and since the sample $z_1,\\ldots ,Z_m$ are arbitrarily chosen, we have $K(m)&\\le \\Pi _{i=1}^\\mathcal {D}2\\Big (\\frac{2emk_i(1+(i-1)2^{i-1})}{\\mathcal {S}_i}\\Big )^{\\mathcal {S}_i}\\\\&\\le 2^\\mathcal {D}\\Big (\\frac{2em\\sum k_i(1+(i-1)2^{i-1})}{\\sum \\mathcal {S}_i}\\Big )^{\\sum \\mathcal {S}_i}\\\\&\\le \\Big (\\frac{4em(1+(\\mathcal {D}-1)2^{\\mathcal {D}-1})\\sum k_i}{\\sum \\mathcal {S}_i}\\Big )^{\\sum \\mathcal {S}_i}\\\\&\\le \\Big (4em(1+(\\mathcal {D}-1)2^{\\mathcal {D}-1})\\Big )^{\\sum \\mathcal {S}_i},$ where the second inequality follows from weighted arithmetic and geometric means inequality, the third holds since $\\mathcal {D}\\le \\sum \\mathcal {S}_i$ and the last holds since $\\sum k_i\\le \\sum \\mathcal {S}_i$ .", "Since $K(m)$ is the growth function of $\\tilde{\\mathcal {F}}$ , we have $2^{{\\rm Pdim}(\\mathcal {F})}\\le 2^{{\\rm VCdim}(\\tilde{\\mathcal {F}})}\\le K({\\rm VCdim}(\\tilde{\\mathcal 
{F}}))\\le 2^\\mathcal {D}\\Big (\\frac{2emR\\cdot {\\rm VCdim}(\\tilde{\\mathcal {F}})}{\\sum \\mathcal {S}_i}\\Big )^{\\sum \\mathcal {S}_i}$ where $R:=\\sum _{i=1}^\\mathcal {D} k_i(1+(i-1)2^{i-1})\\le \\mathcal {U}+\\mathcal {U}(\\mathcal {D}-1)2^{\\mathcal {D}-1}.$ Since $\\mathcal {U}>0$ and $2eR\\ge 16$ , then by Lemma 16 in [4] we have ${\\rm Pdim}(\\mathcal {F})\\le \\mathcal {D}+(\\sum _{i=1}^n\\mathcal {S}_i)\\log _2(4eR\\log _2(2eR)).$ Note that $\\sum _{i=1}^\\mathcal {D}\\mathcal {S}_i\\le \\mathcal {D}\\mathcal {S}$ and $\\log _2(R)\\le \\log _2(\\mathcal {U}\\lbrace 1+(\\mathcal {D}-1)2^{\\mathcal {D}-1}\\rbrace )\\le \\log _2(\\mathcal {U})+2\\mathcal {D}$ , then we have ${\\rm Pdim}(\\mathcal {F})\\le \\mathcal {D}+\\mathcal {D}\\mathcal {S}(4\\mathcal {D}+2\\log _2\\mathcal {U}+6)\\le 7\\mathcal {D}\\mathcal {S}(\\mathcal {D}+\\log _2\\mathcal {U}))$ for some universal constant $c>0$ .", "We first list Theorem 2.2 in [19].", "Lemma 6 (Theorem 2.2 in [19]) Let $k,n$ be positive integers and $f:\\mathbb {R}^n\\times \\mathbb {R}^k\\rightarrow \\lbrace 0,1\\rbrace $ be a function that can be expressed as a Boolean formula containing $s$ distinct atomic predicates where each atomic predicate is a polynomial inequality or equality in $k+n$ variables of degree at most $d$ .", "Let $\\mathcal {F}=\\lbrace f(\\cdot ,w):w\\in \\mathbb {R}^k\\rbrace .$ Then ${\\rm VCdim}(\\mathcal {F})\\le 2k\\log _2(8eds)$ .", "Suppose the functions in $f\\in \\mathcal {F}$ are implemented by ReLU-REQU neural networks with $\\mathcal {S}$ parameters (weights and bias) and $\\mathcal {U}$ neurons.", "The activation function of $f\\in \\mathcal {F}$ is piecewise polynomial of degree at most 2 with 2 pieces.", "As in Part I of the proof, let $\\mathcal {Z}$ denote the domain of the functions $f\\in \\mathcal {F}$ and let $t\\in \\mathbb {R}$ , we consider the class of functions $\\tilde{\\mathcal {F}}:=\\lbrace \\tilde{f}(z,t)={\\rm sign}(f(z)-t):f\\in \\mathcal {F}\\rbrace .$ Since the outputs of functions in $\\tilde{\\mathcal {F}}$ are 0 or 1, to apply above lemma, we intend to show that the function in $\\tilde{\\mathcal {F}}$ as Boolean functions consisting of no more than $2\\cdot 3^\\mathcal {U}$ atomic predicates with each being a polynomial inequality of degree at most $3\\cdot 2^\\mathcal {U}$ .", "We topologically sort the neurons of the network since the neural network graph is acyclic.", "Let $u_i$ be the $i$ -th neuron in the topological ordering for $i=1,\\ldots ,\\mathcal {U}+1$ .", "Note that the input to each neuron $u$ comes from one of the 2 pieces of the activation function ReLU or ReQU, then we call \"$u_i$ is in the state $j$ \" if the input of $u_i$ lies in the $j$ -th piece for $i\\in \\lbrace 1,\\ldots ,\\mathcal {U}+1\\rbrace $ and $j\\in \\lbrace 1,2\\rbrace $ .", "For $u_1$ and $j\\in \\lbrace 1,2\\rbrace $ , the predicate “$u_1$ is in state$j$ ” is a single atomic predicate thus the state of $u_1$ can be expressed as a function of 2 atomic predicates.", "Given that $u_1$ is in a certain state, the state of $u_2$ can be decided by 2 atomic predicates, which are polynomial inequalities of degree at most $2\\times 2+1$ .", "Hence the state of $u_2$ can be determined using $2+2^2$ atomic predicates, each of which is a polynomial of degree no more than $2\\times 2+1$ .", "By induction, the state of $u_i$ is decided using $2(1+2)^{i-1}$ atomic predicates, each of which is a polynomial of degree at most $2^{i-1}+\\sum _{j=0}^{i-1}2^j$ .", "Then the state of all neurons can be 
decided using no more than $3^{\\mathcal {U}+1}$ atomic predicates, each of which is a polynomial of degree at most $2^{\\mathcal {U}}+\\sum _{j=0}^{\\mathcal {U}}2^j\\le 3\\cdot 2^\\mathcal {U}$ .", "Then by Lemma REF , an upper bound for ${\\rm VCdim}(\\tilde{\\mathcal {F}})$ is $2\\mathcal {S}\\log _2(8e\\cdot 3^{\\mathcal {U}+1}\\cdot 3\\cdot 2^{\\mathcal {U}})=2\\mathcal {S}[\\log _2(6^{\\mathcal {U}})+\\log _2(72e)]\\le 22\\mathcal {S}\\mathcal {U}$ .", "Since ${\\rm Pdim}(\\mathcal {F})\\le {\\rm VCdim}(\\tilde{\\mathcal {F}})$ , then the upper bounds hold also for ${\\rm Pdim}(\\mathcal {F})$ , ${\\rm Pdim}(\\mathcal {F})\\le 22\\mathcal {S}\\mathcal {U},$ which completes the proof.", "$\\hfill \\Box $ Recall that the stochastic error is $\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace $ Let $S=\\lbrace Z_i=(X_i,Y_i,\\xi _i)\\rbrace _{i=1}^n$ be the sample used to estimate $\\hat{f}^\\lambda _n$ from the distribution $Z=(X,Y,\\xi )$ .", "And let $S^\\prime =\\lbrace Z^\\prime _i=(X^\\prime _i,Y^\\prime _i,\\xi ^\\prime _i)\\rbrace _{i=1}^n$ be another sample independent of $S$ .", "Define $g_1(f,X_i)=\\mathbb {E}\\big \\lbrace \\rho _\\xi (Y_i-f(X_i,\\xi ))-\\rho _\\xi (Y_i-f_0(X_i,\\xi ))\\mid X_i\\big \\rbrace \\\\g_2(f,X_i)=\\mathbb {E}\\big [\\lambda \\max \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X_i,\\xi ),0\\rbrace -\\lambda \\max \\lbrace -\\frac{\\partial }{\\partial \\tau } f_0(X_i,\\xi ),0\\rbrace \\mid X_i\\big ]\\\\g(f,X_i)=g_1(f,X_i)+g_2(f,X_i)$ for any $f$ and sample $X_i$ .", "Note the the empirical risk minimizer $\\hat{f}^\\lambda _n$ depends on the sample $S$ , and the stochastic error is $\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace =\\mathbb {E}_S\\Big (\\frac{1}{n}\\sum _{i=1}^{n}\\bigg [\\mathbb {E}_{S^\\prime }\\big \\lbrace g(\\hat{f}^\\lambda _n,X^\\prime _i)\\big \\rbrace -2g(\\hat{f}^\\lambda _n,X_i)\\bigg ]\\Big )$ Given a positive number $\\delta >0$ and a $\\delta $ -uniform covering of $\\mathcal {F}_n$ , we denote the centers of the balls by $f_j,j=1,\\ldots ,\\mathcal {N}_{2n}$ , where $\\mathcal {N}_{2n}=\\mathcal {N}_{2n}(\\delta ,\\mathcal {F}_n,\\Vert \\cdot \\Vert _\\infty )$ is the uniform covering number with radius $\\delta $ ($\\delta \\le \\mathcal {B}$ ) under the norm $\\Vert \\cdot \\Vert _\\infty $ , where the detailed definition of $\\mathcal {N}_{2n}(\\delta ,\\mathcal {F}_n,\\Vert \\cdot \\Vert _\\infty )$ can be found in (REF ) in Appendix .", "By the definition of covering, there exists a (random) $j^*$ such that $\\Vert \\hat{f}^\\lambda _n(x,\\tau )-f_{j^*}(x,\\tau )\\Vert _\\infty \\le \\delta $ for all $(x,\\tau )\\in \\lbrace (X_1,\\xi _1),\\ldots ,(X_n,\\xi _n),(X^\\prime _1,\\xi _1^\\prime ),\\ldots ,(X^\\prime _n,\\xi _n^\\prime )\\rbrace $ .", "Recall that $g_1(f,X_i)=\\mathbb {E}\\big \\lbrace \\rho _\\xi (Y_i-f(X_i,\\xi ))-\\rho _\\xi (Y_i-f_0(X_i,\\xi ))\\mid X_i\\big \\rbrace $ and $\\rho _\\tau $ is 1-Lipschitz for any $\\tau \\in (0,1)$ , then for $i=1,\\ldots ,n$ we have $\\vert g_1(\\hat{f}^\\lambda _n,X_i)-g_1(f_{j^*},X_i)\\vert \\le \\delta \\quad {\\rm and}\\quad \\vert \\mathbb {E}_{S^\\prime }g_1(\\hat{f}^\\lambda _n,X^\\prime _i)-\\mathbb {E}_{S^\\prime }g_1(f_{j^*},X^\\prime _i)\\vert \\le \\delta .$ Let $\\mathcal {F}^\\prime _n=\\lbrace \\frac{\\partial }{\\partial \\tau } f:f\\in 
\\mathcal {F}_{n}\\rbrace $ denote the class of first order partial derivatives of functions in $\\mathcal {F}_n$ .", "Similarly, given any positive number $\\delta >0$ and a $\\delta $ -uniform covering of $\\mathcal {F}^\\prime _n$ , we denote the centers of the balls by $f^\\prime _k,k=1,\\ldots ,\\mathcal {N}^\\prime _{2n}$ , where $\\mathcal {N}^\\prime _{2n}=\\mathcal {N}_{2n}(\\delta ,\\mathcal {F}^\\prime _n,\\Vert \\cdot \\Vert _\\infty )$ .", "By the definition of covering, there exists a (random) $k^*$ such that $\\Vert \\frac{\\partial }{\\partial \\tau }\\hat{f}^\\lambda _n(x,\\tau )-f^\\prime _{k^*}(x,\\tau )\\Vert _\\infty \\le \\delta $ for all $(x,\\tau )\\in \\lbrace (X_1,\\xi _1),\\ldots ,(X_n,\\xi _n),(X^\\prime _1,\\xi _1^\\prime ),\\ldots ,(X^\\prime _n,\\xi _n^\\prime )\\rbrace $ .", "And recall $g_2(f,X_i)=\\mathbb {E}[\\lambda \\max \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X_i,\\xi ),0\\rbrace -\\lambda \\max \\lbrace -\\frac{\\partial }{\\partial \\tau } f_0(X_i,\\xi ),0\\rbrace \\mid X_i]$ and $\\max $ function is 1-Lipschitz, then for $i=1,\\ldots ,n$ we have $\\vert g_2(\\hat{f}^\\lambda _n,X_i)-g_2(f_{k^*},X_i)\\vert \\le \\lambda \\delta \\quad {\\rm and}\\quad \\vert \\mathbb {E}_{S^\\prime }g_2(\\hat{f}^\\lambda _n,X^\\prime _i)-\\mathbb {E}_{S^\\prime }g_2(f_{k^*},X^\\prime _i)\\vert \\le \\lambda \\delta .$ Combining above inequalities, we have $\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace &\\le \\mathbb {E}_S\\Big (\\frac{1}{n}\\sum _{i=1}^{n}\\big [\\mathbb {E}_{S^\\prime }\\big \\lbrace g_1(f_{j^*},X^\\prime _i)\\big \\rbrace -2g_1(f_{j^*},X_i)\\big ]\\Big )\\\\&+\\mathbb {E}_S\\Big (\\frac{1}{n}\\sum _{i=1}^{n}\\big [\\mathbb {E}_{S^\\prime }\\big \\lbrace g_2(f_{k^*},X^\\prime _i)\\big \\rbrace -2g_2(f_{k^*},X_i)\\big ]\\Big )\\\\ &+3(1+\\lambda )\\delta .$ Now we consider giving upper bounds for $\\mathbb {E}_S(\\frac{1}{n}\\sum _{i=1}^{n}[\\mathbb {E}_{S^\\prime }\\lbrace g_1(f_{j^*},X^\\prime _i)\\rbrace -2g_1(f_{j^*},X_i)])$ and $\\mathbb {E}_S(\\frac{1}{n}\\sum _{i=1}^{n}[\\mathbb {E}_{S^\\prime }\\lbrace g_2(f_{k^*},X^\\prime _i)\\rbrace -2g_2(f_{k^*},X_i)])$ respectively.", "Recall that it is assumed $\\Vert f_0\\Vert _\\infty \\le \\mathcal {B}$ and $\\Vert f\\Vert _\\infty \\le \\mathcal {B}$ for any $f\\in \\mathcal {F}_n$ , then $\\vert g_1(f,X_i)\\vert \\le \\Vert f-f_0\\Vert _\\infty \\le 2\\mathcal {B}$ and $\\sigma ^2_{g_1}(f):={\\rm Var}(g_1(f,X_i))\\le \\mathbb {E}\\lbrace g_1(f,X_i)^2\\rbrace \\le 2\\mathcal {B}\\mathbb {E}\\lbrace g_1(f,X_i)\\rbrace $ .", "Then for each $f_j$ and any $t>0$ , let $u=t/2+{\\sigma _{g_1}^2(f_j)}/(4\\mathcal {B})$ , by applying the Bernstein inequality we have $&P\\Big \\lbrace \\frac{1}{n}\\sum _{i=1}^{n}[\\mathbb {E}_{S^\\prime }\\lbrace g_1(f_{j^*},X^\\prime _i)\\rbrace -2g_1(f_{j^*},X_i)]>t\\Big \\rbrace \\\\=&P\\Big \\lbrace \\mathbb {E}_{S^\\prime } \\lbrace g_1(f_j,X_i^\\prime )\\rbrace -\\frac{2}{n}\\sum _{i=1}^ng_1(f_j,X_i)>t\\Big \\rbrace \\\\=&P\\Big \\lbrace \\mathbb {E}_{S^\\prime } \\lbrace g_1(f_j,X_i^\\prime )\\rbrace -\\frac{1}{n}\\sum _{i=1}^ng_1(f_j,X_i)>\\frac{t}{2}+\\frac{1}{2}\\mathbb {E}_{S^\\prime } \\lbrace g_1(f_j,X_i^\\prime )\\rbrace \\Big \\rbrace \\\\\\le & P\\Big \\lbrace \\mathbb {E}_{S^\\prime } \\lbrace g_1(f_j,X_i^\\prime )\\rbrace -\\frac{1}{n}\\sum _{i=1}^ng_1(f_j,X_i)>\\frac{t}{2}+\\frac{1}{2}\\frac{\\sigma _{g_1}^2(f_j)}{4\\mathcal {B}}\\Big \\rbrace 
\\\\\\le & \\exp \\Big ( -\\frac{nu^2}{2\\sigma _{g_1}^2(f_j)+8u\\mathcal {B}/3}\\Big )\\\\\\le & \\exp \\Big ( -\\frac{nu^2}{8u\\mathcal {B}+8u\\mathcal {B}/3}\\Big )\\\\\\le & \\exp \\Big ( -\\frac{1}{8+8/3}\\cdot \\frac{nu}{\\mathcal {B}}\\Big )\\\\\\le & \\exp \\Big ( -\\frac{1}{16+16/3}\\cdot \\frac{nt}{\\mathcal {B}}\\Big ).$ This leads to a tail probability bound of $\\sum _{i=1}^{n}[\\mathbb {E}_{S^\\prime }\\lbrace g_1(f_{j^*},X^\\prime _i)\\rbrace -2g_1(f_{j^*},X_i)]/n$ , which is $P\\Big \\lbrace \\frac{1}{n}\\sum _{i=1}^n\\big [\\mathbb {E}_{S^\\prime }\\lbrace g_1(f_{j^*},X^\\prime _i)\\rbrace -2g_1(f_{j^*},X_i)\\big ]>t\\Big \\rbrace \\le 2\\mathcal {N}_{2n}\\exp \\Big ( -\\frac{1}{22}\\cdot \\frac{nt}{\\mathcal {B}}\\Big ).$ Then for $a_n>0$ , $\\mathbb {E}_S\\Big [ \\frac{1}{n}\\sum _{i=1}^nG(f_{j^*},X_i)\\Big ]\\le & a_n +\\int _{a_n}^\\infty P\\Big \\lbrace \\frac{1}{n}\\sum _{i=1}^nG(f_{j^*},X_i)>t\\Big \\rbrace dt\\\\\\le & a_n+ \\int _{a_n}^\\infty 2\\mathcal {N}_{2n}\\exp \\Big ( -\\frac{1}{22}\\cdot \\frac{nt}{\\mathcal {B}}\\Big ) dt\\\\\\le & a_n+ 2\\mathcal {N}_{2n}\\exp \\Big ( -a_n\\cdot \\frac{n}{22\\mathcal {B}}\\Big )\\frac{22\\mathcal {B}}{n}.$ Choosing $a_n=\\log (2\\mathcal {N}_{2n})\\cdot {22\\mathcal {B}}/{n}$ , we have $ \\mathbb {E}_S\\Big [ \\frac{1}{n}\\sum _{i=1}^nG(f_{j^*},X_i)\\Big ]\\le \\frac{22\\mathcal {B}(\\log (2\\mathcal {N}_{2n})+1)}{n}.$ Similarly, for the derivative of the functions in $\\mathcal {F}_n$ , it is assumed $\\Vert \\frac{\\partial }{\\partial \\tau }f_0\\Vert _\\infty \\le \\mathcal {B}^\\prime $ and $\\Vert f\\Vert _\\infty \\le \\mathcal {B}^\\prime $ for any $f\\in \\mathcal {F}^\\prime _n$ , then $\\vert g_2(f,X_i)\\vert \\le \\lambda \\Vert f-\\frac{\\partial }{\\partial \\tau }f_0\\Vert _\\infty \\le 2\\lambda \\mathcal {B}^\\prime $ and $\\sigma ^2_{g_2}(f):={\\rm Var}(g_2(f,X_i))\\le \\mathbb {E}\\lbrace g_2(f,X_i)^2\\rbrace \\le 2\\lambda \\mathcal {B}^\\prime \\mathbb {E}\\lbrace g_2(f,X_i)\\rbrace $ .", "Then for each $f_k$ and any $t>0$ , let $u=t/2+{\\sigma _{g_2}^2(f_k)}/(4\\lambda \\mathcal {B}^\\prime )$ , by applying the Bernstein inequality we have $&P\\Big \\lbrace \\frac{1}{n}\\sum _{i=1}^{n}[\\mathbb {E}_{S^\\prime }\\lbrace g_2(f_{k},X^\\prime _i)\\rbrace -2g_2(f_{k},X_i)]>t\\Big \\rbrace \\le \\exp \\Big ( -\\frac{1}{22}\\cdot \\frac{nt}{\\lambda \\mathcal {B}^\\prime }\\Big ),$ which leads to $P\\Big \\lbrace \\frac{1}{n}\\sum _{i=1}^n\\big [\\mathbb {E}_{S^\\prime }\\lbrace g_2(f_{k^*},X^\\prime _i)\\rbrace -2g_2(f_{k^*},X_i)\\big ]>t\\Big \\rbrace \\le 2\\mathcal {N}^\\prime _{2n}\\exp \\Big ( -\\frac{1}{22}\\cdot \\frac{nt}{\\lambda \\mathcal {B}^\\prime }\\Big ),$ and $ \\mathbb {E}_S\\Big [\\frac{1}{n}\\sum _{i=1}^n\\big [\\mathbb {E}_{S^\\prime }\\lbrace g_2(f_{k^*},X^\\prime _i)\\rbrace -2g_2(f_{k^*},X_i)\\big ]\\Big ]\\le \\frac{22\\lambda \\mathcal {B}^\\prime (\\log (2\\mathcal {N}^\\prime _{2n})+1)}{n}.$ Setting $\\delta =1/n$ and combining (REF ), (REF ) and (REF ), we have $&\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace \\\\&\\le 69\\big [\\mathcal {B}\\log (\\mathcal {N}_{2n}(\\frac{1}{n},\\mathcal {F}_n,\\Vert \\cdot ,\\Vert _\\infty ))+\\lambda \\mathcal {B}^\\prime \\log (\\mathcal {N}^\\prime _{2n}(\\frac{1}{n},\\mathcal {F}^\\prime _n,\\Vert \\cdot ,\\Vert _\\infty ))\\big ]n^{-1}.$ Then by Lemma REF in Appendix , we can further bound the covering number by the Pseudo 
dimension.", "More exactly, for $n\\ge {\\rm Pdim}(\\mathcal {F}_n)$ and any $\\delta >0$ , we have $\\log (\\mathcal {N}_{2n}(\\delta ,\\mathcal {F}_n,\\Vert \\cdot \\Vert _\\infty ))\\le {\\rm Pdim}(\\mathcal {F}_n)\\log \\Big (\\frac{en\\mathcal {B}}{\\delta {\\rm Pdim}(\\mathcal {F}_n)}\\Big ),$ and for $n\\ge {\\rm Pdim}(\\mathcal {F}^\\prime _n)$ and any $\\delta >0$ , we have $\\log (\\mathcal {N}_{2n}(\\delta ,\\mathcal {F}^\\prime _n,\\Vert \\cdot \\Vert _\\infty ))\\le {\\rm Pdim}(\\mathcal {F}^\\prime _n)\\log \\Big (\\frac{en\\mathcal {B}^\\prime }{\\delta {\\rm Pdim}(\\mathcal {F}^\\prime _n)}\\Big ).$ Combining the upper bounds of the covering numbers, we have $&\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace \\le c_0\\frac{\\big (\\mathcal {B}{\\rm Pdim}(\\mathcal {F}_n)+\\lambda \\mathcal {B}^\\prime {\\rm Pdim}(\\mathcal {F}_n^\\prime )\\big )\\log (n)}{n},$ for $n\\ge \\max \\lbrace {\\rm Pdim}(\\mathcal {F}_n),{\\rm Pdim}(\\mathcal {F}^\\prime _n)\\rbrace $ and some universal constant $c_0>0$ .", "By Lemma REF , for the function class $\\mathcal {F}_n$ implemented by ReLU-ReQU activated multilayer perceptrons with depth no more than $\\mathcal {D}$ , width no more than $\\mathcal {W}$ , number of neurons (nodes) no more than $\\mathcal {U}$ and size or number of parameters (weights and bias) no more than $\\mathcal {S}$ , we have ${\\rm Pdim}(\\mathcal {F}_n)\\le \\min \\lbrace 7\\mathcal {D}\\mathcal {S}(\\mathcal {D}+\\log _2\\mathcal {U}),22\\mathcal {U}\\mathcal {S}\\rbrace ,$ and by Lemma REF , for any function $f\\in \\mathcal {F}_n$ , its partial derivative $\\frac{\\partial }{\\partial \\tau }f$ can be implemented by a ReLU-ReQU activated multilayer perceptron with depth $3\\mathcal {D}+3$ , width $10\\mathcal {W}$ , number of neurons $17\\mathcal {U}$ , number of parameters $23\\mathcal {S}$ and bound $\\mathcal {B}^\\prime $ .", "Then ${\\rm Pdim}(\\mathcal {F}^\\prime _n)\\le \\min \\lbrace 5796\\mathcal {D}\\mathcal {S}(\\mathcal {D}+\\log _2\\mathcal {U}),8602\\mathcal {U}\\mathcal {S}\\rbrace .$ It follows that $&\\mathbb {E}\\lbrace \\mathcal {R}^\\lambda (\\hat{f}^\\lambda _n)-2\\mathcal {R}^\\lambda _n(\\hat{f}^\\lambda _n)+\\mathcal {R}^\\lambda (f_0)\\rbrace \\le c_1\\big (\\mathcal {B}+\\lambda \\mathcal {B}^\\prime \\big )\\frac{\\min \\lbrace 5796\\mathcal {D}\\mathcal {S}(\\mathcal {D}+\\log _2\\mathcal {U}),8602\\mathcal {U}\\mathcal {S}\\rbrace \\log (n)}{n},$ for $n\\ge \\max \\lbrace {\\rm Pdim}(\\mathcal {F}_n),{\\rm Pdim}(\\mathcal {F}^\\prime _n)\\rbrace $ and some universal constant $c_1>0$ .", "This completes the proof.", "$\\hfill \\Box $ The idea of the proof is to construct a ReQU network that computes a multivariate polynomial with degree $N$ with no error.", "We begin the proof by considering the simple case, namely constructing a proper ReQU network that represents a univariate polynomial with no error.", "Recall that representing the multiplication operator with ReQU networks is simple and straightforward, so we can leverage Horner's method (also known as Qin Jiushao's algorithm) to construct such networks.", "Suppose $f(x)=a_0+a_1x+\\cdots +a_Nx^N$ is a univariate polynomial of degree $N$ ; then it can be written as $f(x)=a_0+x(a_1+x(a_2+x(a_3+\\cdots +x(a_{N-1}+xa_N)))).$ We can iteratively calculate a sequence of intermediate variables $b_1,\\ldots ,b_N$ by $b_k=\\Big \\lbrace \\begin{array}{lr}a_{N-1}+xa_N, \\qquad k=1,\\\\a_{N-k}+xb_{k-1}, \\ \\ 
k=2,\\ldots ,N.\\\\\\end{array}\\Big .$ Then we can obtain $b_N=f(x)$ .", "By the basic approximation property we know that calculating $b_1$ needs a ReQU network with 1 hidden layer and 4 hidden neurons, and calculating $b_2$ needs a ReQU network with 3 hidden layers and $2\\times 4+2-1$ hidden neurons.", "By induction, calculating $b_N=f(x)$ needs a ReQU network with $2N-1$ hidden layers, $N\\times 4+N-1=5N-1$ hidden neurons, $8N$ parameters (weights and bias), and width equal to 4.", "Apart from the construction based on Horner's method, another construction is shown in Theorem 2.2 of [34], where the constructed ReQU network has $\\lfloor \\log _2N\\rfloor +1$ hidden layers, $8N-2$ neurons and no more than $61N$ parameters (weights and bias).", "Now we consider constructing ReQU networks to compute a multivariate polynomial $f$ with total degree $N$ on $\\mathbb {R}^d$ .", "For any $d\\in \\mathbb {N}^+$ and $N\\in \\mathbb {N}_0$ , let $f^d_N(x_1,\\ldots ,x_d)=\\sum _{i_1+\\cdots +i_d=0}^N a_{i_1,i_2,\\ldots ,i_d}x_1^{i_1}x_2^{i_2}\\cdots x_d^{i_d},$ denote the polynomial of $d$ variables with total degree $N$ , where $i_1,i_2,\\ldots ,i_d$ are non-negative integers and $\\lbrace a_{i_1,i_2,\\ldots ,i_d}: i_1+\\cdots +i_d\\le N\\rbrace $ are coefficients in $\\mathbb {R}$ .", "Note that the multivariate polynomial $f^d_N$ can be written as $f^d_N(x_1,\\ldots ,x_d)=\\sum _{i_1=0}^N\\Big (\\sum _{i_2+\\cdots +i_d=0}^{N-i_1} a_{i_1,i_2,\\ldots ,i_d}x_2^{i_2}\\cdots x_d^{i_d}\\Big )x_1^{i_1},$ and we can view $f^d_N$ as a univariate polynomial of $x_1$ with degree $N$ if $x_2,\\ldots ,x_d$ are given, and for each $i_1\\in \\lbrace 0,\\ldots ,N\\rbrace $ the $(d-1)$ -variate polynomial $\\sum _{i_2+\\cdots +i_d=0}^{N-i_1} a_{i_1,i_2,\\ldots ,i_d}x_2^{i_2}\\cdots x_d^{i_d}$ with degree no more than $N$ can be computed by a proper ReQU network.", "This suggests that the construction of a ReQU network for $f^d_N$ can be implemented iteratively, by induction, via composition of $f^{1}_N,f^2_N,\\ldots ,f^{d}_N$ .", "By Horner's method we have constructed a ReQU network with $2N-1$ hidden layers, $5N-1$ hidden neurons and $8N$ parameters to exactly compute $f^1_N$ .", "Now we show that $f^2_N$ can be computed by proper ReQU networks.", "We can write $f^2_N$ as $f^2_N(x_1,x_2)=\\sum _{i+j=0}^Na_{ij}x_1^ix_2^j=\\sum _{i=0}^N\\Big (\\sum _{j=0}^{N-i}a_{ij}x_2^j\\Big )x_1^i.$ Note that for $i\\in \\lbrace 0,\\ldots ,N\\rbrace $ , the degree of the polynomial $\\sum _{j=0}^{N-i}a_{ij}x_2^j$ is $N-i$ , which is no more than $N$ .", "But we can still view it as a polynomial with degree $N$ by padding (adding zero terms) such that $\\sum _{j=0}^{N-i}a_{ij}x_2^j=\\sum _{j=0}^{N}a^*_{ij}x_2^j$ where $a^*_{ij}=a_{ij}$ if $i+j\\le N$ and $a^*_{ij}=0$ if $i+j> N$ .", "In such a way, for each $i\\in \\lbrace 0,\\ldots ,N\\rbrace $ the polynomial $\\sum _{j=0}^{N-i}a_{ij}x_2^j$ can be computed by a ReQU network with $2N-1$ hidden layers, $5N-1$ hidden neurons, $8N$ parameters and its width equal to 4.", "Besides, for each $i\\in \\lbrace 0,\\ldots ,N\\rbrace $ , the monomial $x_1^i$ can also be computed by a ReQU network with $2N-1$ hidden layers, $5N-1$ hidden neurons, $8N$ parameters and its width equal to 4, in whose implementation the identity maps are used after the $(2i-1)$ -th hidden layer.", "Now we parallel these two subnetworks to get a ReQU network which takes $x_1$ and $x_2$ as input and outputs $(\\sum _{j=0}^{N-i}a_{ij}x_2^j)x_1^i$ with width 8, hidden layers $2N-1$ , number of neurons $2\\times (5N-1)$ and size 
$2\\times 8N$ .", "Since such a paralleled ReQU network can be constructed for each $i\\in \\lbrace 0,\\ldots ,N\\rbrace $ , with straightforward paralleling of $N$ such ReQU networks we obtain a ReQU network that exactly computes $f^2_N$ with width $8N$ , hidden layers $2N-1$ , number of neurons $2\\times (5N-1)\\times N$ and number of parameters $2\\times 8N\\times N=16N^2$ .", "Similarly, for the polynomial $f^3_N$ of 3 variables, we can write $f^3_N$ as $f^3_N(x_1,x_2,x_3)=\\sum _{i+j+k=0}^Na_{ijk}x_1^ix_2^jx_3^k=\\sum _{i=0}^N\\Big (\\sum _{j+k=0}^{N-i}a_{ijk}x_2^jx_3^k\\Big )x_1^i.$ By our previous argument, for each $i\\in \\lbrace 0,\\ldots ,N\\rbrace $ , there exists a ReQU network which takes $(x_1,x_2,x_3)$ as input and outputs $\\Big (\\sum _{j+k=0}^{N-i}a_{ijk}x_2^jx_3^k\\Big )x_1^i$ with width $8N+4$ , hidden layers $2N-1$ , number of neurons $2N(5N-1)+(5N-1)$ and parameters $16N^2+8N$ .", "And by paralleling $N$ such subnetworks, we obtain a ReQU network that exactly computes $f^3_N$ with width $(8N+4)\\times N=8N^2+4N$ , hidden layers $2N-1$ , number of neurons $N(2N(5N-1)+(5N-1))=2N^2(5N-1)+N(5N-1)$ and number of parameters $16N^3+8N^2$ .", "Continuing this process, we can construct ReQU networks that exactly compute polynomials of any $d$ variables with total degree $N$ .", "With a slight abuse of notation, we let $\\mathcal {W}_k$ , $\\mathcal {D}_k$ , $\\mathcal {U}_k$ and $\\mathcal {S}_k$ denote the width, number of hidden layers, number of neurons and number of parameters (weights and bias) respectively of the ReQU network computing $f^k_N$ for $k=1,2,3,\\ldots $ .", "We have shown that $\\mathcal {D}_1=2N-1\\qquad \\mathcal {W}_1=4\\qquad \\mathcal {U}_1=5N-1\\qquad \\mathcal {S}_1=8N$ Besides, based on the iterative procedure of the network construction, by induction we can see that for $k=2,3,4,\\ldots $ the following equations hold, $\\mathcal {D}_k=&2N-1,\\\\\\mathcal {W}_k=&N\\times (\\mathcal {W}_{k-1}+\\mathcal {W}_1),\\\\\\mathcal {U}_k=&N\\times (\\mathcal {U}_{k-1}+\\mathcal {U}_1),\\\\\\mathcal {S}_k=&N\\times (\\mathcal {S}_{k-1}+\\mathcal {S}_1).$ Then based on the values of $\\mathcal {D}_1,\\mathcal {W}_1,\\mathcal {U}_1,\\mathcal {S}_1$ and the recursion formula, we have for $k=2,3,4,\\ldots $ $\\mathcal {D}_k=2N-1,\\\\\\mathcal {W}_k=8N^{k-1}+4\\frac{N^{k-1}-N}{N-1}\\le 12N^{k-1},\\\\\\mathcal {U}_k=N\\times (\\mathcal {U}_{k-1}+\\mathcal {U}_1)=2(5N-1)N^{k-1}+(5N-1)\\frac{N^{k-1}-N}{N-1}\\le 15N^{k},\\\\\\mathcal {S}_k=N\\times (\\mathcal {S}_{k-1}+\\mathcal {S}_1)=16N^{k}+8N\\frac{N^{k-1}-N}{N-1}\\le 24N^k.$ This completes our proof.", "$\\hfill \\Box $ The proof is straightforward, leveraging the approximation power of multivariate polynomials, since Theorem REF tells us that any multivariate polynomial can be represented by a proper ReQU network.", "The theory of polynomial approximation has been extensively studied on various spaces of smooth functions.", "We refer to [3] for the polynomial approximation of smooth functions used in our proof.", "Lemma 7 (Theorem 2 in [3]) Let $f$ be a function of compact support on $\\mathbb {R}^d$ of class $C^s$ where $s\\in \\mathbb {N}^+$ and let $K$ be a compact subset of $\\mathbb {R}^d$ which contains the support of $f$ .", "Then for each nonnegative integer $N$ there is a polynomial $p_N$ of degree at most $N$ on $\\mathbb {R}^d$ with the following property: for each multi-index $\\alpha $ with $\\vert \\alpha \\vert _1\\le \\min \\lbrace s,N\\rbrace $ we have $\\sup _{K}\\vert D^\\alpha (f-p_N)\\vert \\le 
\\frac{C}{N^{s-\\vert \\alpha \\vert _1}}\\sum _{\\vert \\alpha \\vert _1\\le s}\\sup _K\\vert D^\\alpha f\\vert ,$ where $C$ is a positive constant depending only on $d,s$ and $K$ .", "The proof of Lemma REF can be found in [3] based on the Whitney extension theorem (Theorem 2.3.6 in [24]).", "To use Lemma REF , we need to find a ReQU network to compute the $p_N$ for each $N\\in \\mathbb {N}^+$ .", "By Theorem REF , we know that any $p_N$ of $d+1$ variables can be computed by a ReQU network with $2N-1$ hidden layer, no more than $15N^{d+1}$ neurons, no more than $24N^{d+1}$ parameters and width no more than $12N^{d}$ .", "This completes the proof.", "By examining the proof of Theorem 1 in [3], the dependence of the constant $C$ in Lemma REF on the $d,s$ and $K$ can be detailed.", "$\\hfill \\Box $ Recall that $&\\inf _{f\\in \\mathcal {F}_n}\\Big [\\mathcal {R}(f)-\\mathcal {R}(f_0)+\\lambda \\lbrace \\kappa (f)-\\kappa (f_0)\\rbrace \\Big ]\\\\=&\\inf _{f\\in \\mathcal {F}_n}\\Bigg [\\mathbb {E}_{X,Y,\\xi }\\big \\lbrace \\rho _{\\xi }(Y-f(X,\\xi ))-\\rho _{\\xi }(Y-f_0(X,\\xi ))\\\\&\\qquad \\qquad +\\lambda (\\max \\lbrace -\\frac{\\partial }{\\partial \\tau }f(X,\\xi ),0\\rbrace -\\max \\lbrace -\\frac{\\partial }{\\partial \\tau }f_0(X,\\xi ),0\\rbrace )\\big \\rbrace \\Bigg ]\\\\\\le &\\inf _{f\\in \\mathcal {F}_n}\\Big [\\mathbb {E}_{X,Y,\\xi }\\Big \\lbrace \\vert f(X,\\xi )-f_0(X,\\xi )\\vert +\\lambda \\vert \\frac{\\partial }{\\partial \\tau }f(X,\\xi )-\\frac{\\partial }{\\partial \\tau }f_0(X,\\xi )\\vert \\Big \\rbrace \\Big ].$ By Theorem REF , for each $N\\in \\mathbb {N}^+$ , there exists a ReQU network $\\phi _N\\in \\mathcal {F}_n$ with $2N-1$ hidden layer, no more than $15N^{d+1}$ neurons, no more than $24N^{d+1}$ parameters and width no more than $12N^{d}$ such that for each multi-index $\\alpha \\in \\mathbb {N}^d_0$ with $\\vert \\alpha \\vert _1\\le \\min \\lbrace s,N\\rbrace $ we have $\\sup _{\\mathcal {X}\\times (0,1)}\\vert D^\\alpha (f-\\phi _N)\\vert \\le C(s,d,\\mathcal {X})\\times N^{-(s-\\vert \\alpha \\vert _1)}\\Vert f\\Vert _{C^s},$ where $C(s,d,\\mathcal {X})$ is a positive constant depending only on $d,s$ and the diameter of $\\mathcal {X}\\times (0,1)$ .", "This implies $\\sup _{\\mathcal {X}\\times (0,1)}\\vert f-\\phi _N\\vert \\le C(s,d,\\mathcal {X})\\times N^{-s}\\Vert f\\Vert _{C^s},$ and $\\sup _{\\mathcal {X}\\times (0,1)}\\Big \\vert \\frac{\\partial }{\\partial \\tau }(f-\\phi _N)\\Big \\vert \\le C(s,d,\\mathcal {X})\\times N^{-(s-1)}\\Vert f\\Vert _{C^s}.$ Combine above two uniform bounds, we have $&\\inf _{f\\in \\mathcal {F}_n}\\Big [\\mathcal {R}(f)-\\mathcal {R}(f_0)+\\lambda \\lbrace \\kappa (f)-\\kappa (f_0)\\rbrace \\Big ]\\\\\\le &\\Big [\\vert \\mathbb {E}_{X,\\xi }\\Big \\lbrace \\phi _N(X,\\xi )-f_0(X,\\xi )\\vert +\\lambda \\vert \\frac{\\partial }{\\partial \\tau }\\phi _N(X,\\xi )-\\frac{\\partial }{\\partial \\tau }f_0(X,\\xi )\\vert \\Big \\rbrace \\Big ]\\\\\\le &C(s,d,\\mathcal {X})\\times N^{-s}\\Vert f\\Vert _{C^s}+\\lambda C(s,d,\\mathcal {X})\\times N^{-(s-1)}\\Vert f\\Vert _{C^s}\\\\\\le & C(s,d,\\mathcal {X}) (1+\\lambda ) N^{-(s-1)}\\Vert f\\Vert _{C^s},$ which completes the proof.", "$\\hfill \\Box $ By equation (B.3) in [6], for any scalar $w,v\\in \\mathbb {R}$ and $\\tau \\in (0,1)$ we have $\\rho _\\tau (w-v)-\\rho _\\tau (w)=-v\\lbrace \\tau -I(w\\le 0)\\rbrace +\\int _0^v\\lbrace I(w\\le z)-I(w\\le 0)\\rbrace dz.$ Given any quantile level $\\tau \\in (0,1)$ , function $f$ and $X=x$ , let $w=Y-f_0(X,\\tau )$ 
, $v=f(X,\\tau )-f_0(X,\\tau )$ .", "Suppose $\\vert f(x,\\tau )-f_0(x,\\tau )\\vert \\le K$ , taking conditional expectation on above equation with respect to $Y\\mid X=x$ , we have $&\\mathbb {E}\\lbrace \\rho _\\tau (Y-f(X,\\tau ))-\\rho _\\tau (Y-f_0(X,\\tau ))\\mid X=x\\rbrace \\\\=&\\mathbb {E}\\big [-\\lbrace f(X,\\tau )-f_0(X,\\tau )\\rbrace \\lbrace \\tau -I(Y-f_0(X,\\tau )\\le 0)\\rbrace \\mid X=x\\big ]\\\\&+\\mathbb {E}\\big [\\int _0^{f(X,\\tau )-f_0(X,\\tau )}\\lbrace I(Y-f_0(X,\\tau )\\le z)-I(Y-f_0(X,\\tau )\\le 0)\\rbrace dz\\mid X=x\\big ]\\\\=&0+\\mathbb {E}\\big [\\int _0^{f(X,\\tau )-f_0(X,\\tau )}\\lbrace I(Y-f_0(X,\\tau )\\le z)-I(Y-f_0(X,\\tau )\\le 0)\\rbrace dz\\mid X=x\\big ]\\\\=&\\int _0^{f(x,\\tau )-f_0(x,\\tau )} \\lbrace P_{Y|X}(f_0(x,\\tau )+z)-P_{Y|X}(f_0(x,\\tau ))\\rbrace dz\\\\\\ge &\\int _0^{f(x,\\tau )-f_0(x,\\tau )} k \\vert z\\vert dz\\\\=&\\frac{k}{2}\\vert f(x,\\tau )-f_0(x,\\tau )\\vert ^2.$ Suppose $f(x)-f_0(x)> K$ , then similarly we have $&\\mathbb {E}\\lbrace \\rho _\\tau (Y-f(X,\\tau ))-\\rho _\\tau (Y-f_0(X,\\tau ))\\mid X=x\\rbrace \\\\=&\\int _0^{f(x,\\tau )-f_0(x,\\tau )} \\lbrace P_{Y|X}(f_0(x,\\tau )+z)-P_{Y|X}(f_0(x,\\tau ))\\rbrace dz\\\\\\ge &\\int _{K/2}^{ f(x,\\tau )-f_0(x,\\tau )} \\lbrace P_{Y|X}(f_0(x,\\tau )+K/2)-P_{Y|X}(f_0(x,\\tau ))\\rbrace dz\\\\\\ge &(f(x,\\tau )-f_0(x,\\tau )-K/2)(kK/2)\\\\\\ge &\\frac{kK}{4}\\vert f(x,\\tau )-f_0(x,\\tau )\\vert .$ The case $f(x,\\tau )-f_0(x,\\tau )\\le -K$ can be handled similarly as in [39].", "The conclusion follows combining the three different cases and taking expectation with respect to $X$ of above obtained inequality.", "Finally for any function $f:\\mathcal {X}\\times (0,1)\\rightarrow \\mathbb {R}$ , we have $\\Delta ^2(f,f_0)=&\\mathbb {E}\\min \\lbrace \\vert f(X,\\xi )-f_0(X,\\xi )\\vert ,\\vert f(X,\\xi )-f_0(X,\\xi )\\vert ^2\\rbrace \\\\\\le & \\max \\lbrace 2/k,4/(kK)\\rbrace \\mathbb {E} \\Big [\\int _{0}^{1}\\rho _\\tau (Y-f(X,\\tau ))-\\rho _\\tau (Y-f_0(X,\\tau ))d\\tau \\Big ]\\\\=& \\max \\lbrace 2/k,4/(kK)\\rbrace [\\mathcal {R}(f)-\\mathcal {R}(f_0)].$ This completes the proof.", "$\\hfill \\Box $" ], [ "Definitions", "The following definitions are used in the proofs.", "Definition 2 (Covering number) Let $\\mathcal {F}$ be a class of function from $\\mathcal {X}$ to $\\mathbb {R}$ .", "For a given sequence $x=(x_1,\\ldots ,x_n)\\in \\mathcal {X}^n,$ let $\\mathcal {F}_n|_x=\\lbrace (f(x_1),\\ldots ,f(x_n):f\\in \\mathcal {F}_n\\rbrace $ be the subset of $\\mathbb {R}^{n}$ .", "For a positive number $\\delta $ , let $\\mathcal {N}(\\delta ,\\mathcal {F}_n|_x,\\Vert \\cdot \\Vert _\\infty )$ be the covering number of $\\mathcal {F}_n|_x$ under the norm $\\Vert \\cdot \\Vert _\\infty $ with radius $\\delta $ .", "Define the uniform covering number $\\mathcal {N}_n(\\delta ,\\Vert \\cdot \\Vert _\\infty ,\\mathcal {F}_n)$ to be the maximum over all $x\\in \\mathcal {X}$ of the covering number $\\mathcal {N}(\\delta ,\\mathcal {F}_n|_x,\\Vert \\cdot \\Vert _\\infty )$ , i.e., $\\mathcal {N}_n(\\delta ,\\mathcal {F}_n,\\Vert \\cdot \\Vert _\\infty )=\\max \\lbrace \\mathcal {N}(\\delta ,\\mathcal {F}_n|_x,\\Vert \\cdot \\Vert _\\infty ):x\\in \\mathcal {X}^n\\rbrace .$ Definition 3 (Shattering) Let $\\mathcal {F}$ be a family of functions from a set $\\mathcal {Z}$ to $\\mathbb {R}$ .", "A set $\\lbrace z_1,\\ldots ,Z_n\\rbrace \\subset \\mathcal {Z}$ is said to be shattered by $\\mathcal {F}$ , if there exists $t_1,\\ldots ,t_n\\in \\mathbb {R}$ such that $\\Big 
\\vert \\Big \\lbrace \\Big [\\begin{array}{lr}{\\rm sgn}(f(z_1)-t_1)\\\\\\ldots \\\\{\\rm sgn}(f(z_n)-t_n)\\\\\\end{array}\\Big ]:f\\in \\mathcal {F}\\Big \\rbrace \\Big \\vert =2^n,$ where ${\\rm sgn}$ is the sign function, which returns $+1$ or $-1$ , and $\\vert \\cdot \\vert $ denotes the cardinality of a set.", "When they exist, the threshold values $t_1,\\ldots ,t_n$ are said to witness the shattering.", "Definition 4 (Pseudo dimension) Let $\\mathcal {F}$ be a family of functions mapping from $\\mathcal {Z}$ to $\\mathbb {R}$ .", "Then, the pseudo dimension of $\\mathcal {F}$ , denoted by ${\\rm Pdim}(\\mathcal {F})$ , is the size of the largest set shattered by $\\mathcal {F}$ .", "Definition 5 (VC dimension) Let $\\mathcal {F}$ be a family of functions mapping from $\\mathcal {Z}$ to $\\mathbb {R}$ .", "Then, the Vapnik–Chervonenkis (VC) dimension of $\\mathcal {F}$ , denoted by ${\\rm VCdim}(\\mathcal {F})$ , is the size of the largest set shattered by $\\mathcal {F}$ with all threshold values being zero, i.e., $t_1=\\ldots =t_n=0$ ." ], [ "Supporting Lemmas", "The following lemma gives an upper bound for the covering number in terms of the pseudo-dimension.", "Lemma 8 (Theorem 12.2 in [1]) Let $\\mathcal {F}$ be a set of real functions from domain $\\mathcal {Z}$ to the bounded interval $[0,B]$ .", "Let $\\delta >0$ and suppose that $\\mathcal {F}$ has finite pseudo-dimension ${\\rm Pdim}(\\mathcal {F})$ ; then $\\mathcal {N}_n(\\delta ,\\mathcal {F},\\Vert \\cdot \\Vert _\\infty )\\le \\sum _{i=1}^{{\\rm Pdim}(\\mathcal {F})}\\binom{n}{i}\\Big (\\frac{B}{\\delta }\\Big )^i,$ which is less than $\\lbrace enB/(\\delta {\\rm Pdim}(\\mathcal {F}))\\rbrace ^{{\\rm Pdim}(\\mathcal {F})}$ for $n\\ge {\\rm Pdim}(\\mathcal {F})$ ." ], [ "Additional simulation results", "In this section, we include additional simulation results for the “Linear\" model.", "Table: Data is generated from the “Linear\" model with training sample size $n=512,1024$ and the number of replications $R = 100$ .", "The averaged $L_1$ and $L_2^2$ test errors with the corresponding standard deviation (in parentheses) are reported for the estimators trained by different methods. Table: Data is generated from the multivariate “Linear\" model with training sample size $n=512,2048$ and the number of replications $R = 100$ .", "The averaged $L_1$ and $L_2^2$ test errors with the corresponding standard deviation (in parentheses) are reported for the estimators trained by different methods. Figure: The fitted quantile curves by different methods under the univariate “Linear\" model when $n=512,2048$ .", "The training data is depicted as grey dots. The target quantile functions at the quantile levels $\\tau =0.05$ (blue), 0.25 (orange), 0.5 (green), 0.75 (red), 0.95 (purple) are depicted as dashed curves, and the estimated quantile functions are represented by solid curves with the same color.", "From the top to the bottom, the rows correspond to the sample size $n=512,2048$ .", "From the left to the right, the columns correspond to the methods DQRP, kernel QR and QR Forest. Figure: The value of risks and penalties under the univariate “Wave\" model when $n=512,2048$ .", "A vertical dashed line is depicted at the value $\\lambda =\\log (n)$ on the x-axis in each
figure." ] ]
2207.10442
[ [ "Bounds on Successive Minima of Orders in Number Fields and Scrollar\n Invariants of Curves" ], [ "Abstract Let $n \\geq 2$ be any integer.", "A number field $K$ of degree $n$ with embeddings $\\sigma_1,\\dots,\\sigma_{n}$ into $\\mathbb{C}$ has a norm given by \\[ \\lvert x \\rvert = \\sqrt{\\frac{1}{n}\\sum_{i = 1}^{n}\\lvert \\sigma_i(x)\\rvert^2} \\] for $x \\in K$.", "With respect to this norm, an order $\\mathcal{O}$ in a number field of degree $n$ has successive minima $1=\\lambda_0 \\leq \\dots \\leq \\lambda_{n-1}$.", "Motivated by a conjecture of Lenstra, we aim to determine which inequalities of the form $1 \\ll_{\\mathcal{S}} \\lambda_1^{f_1}\\dots\\lambda_{n-1}^{f_{n-1}}$ for $f_1,\\dots,f_{n-1} \\in \\mathbb{R}$ hold for an infinite set $\\mathcal{S}$ of isomorphism classes of orders in degree $n$ fields.", "In many cases -- such as when $\\mathcal{S}$ is the set of isomorphism classes of orders in degree $n$ fields or $\\mathcal{S}$ is the set of isomorphism classes of orders in degree $n$ fields with fixed subfield degrees -- we provide a complete classification of which inequalities $\\lambda_k \\ll \\lambda_i\\lambda_j$ hold.", "Moreover, when $n < 18$, $n$ is a prime power, or $n$ is a product of $2$ distinct primes, we combinatorially classify the integers $1\\leq i_1,\\dots,i_t,j_1,\\dots,j_t,k_1\\dots,k_t < n$ for which there exists an infinite set $\\mathcal{S}$ of isomorphism classes of orders in degree $n$ fields such that $\\lambda_{i_s}\\lambda_{j_s} = o(\\lambda_{k_s})$ for all $1 \\leq s \\leq t$ and all $\\mathcal{O} \\in \\mathcal{S}$.", "We also prove analogous theorems about successive minima of ideals in number fields and scrollar invariants of curves." ], [ "Introduction", "Let $n \\ge 2$ be any integer.", "For any number field $K$ of degree $n$ , denote the embeddings of $K$ into $\\mathbb {C}$ by $\\sigma _1,\\dots ,\\sigma _{n}$ , and define $\\vert x \\vert \\sqrt{\\frac{1}{n}\\sum _{i = 1}^{n} \\vert \\sigma _i(x)\\vert ^2}.$ for $x \\in K$ .", "Let $\\mathcal {L}\\subset K$ be a full rank lattice, i.e.", "a subgroup of $K$ such that $\\mathcal {L}\\simeq \\mathbb {Z}^n$ .", "Let $[n]$ denote the set $\\lbrace 0,\\dots ,n-1\\rbrace $ .", "Definition 1.1 For $i \\in [n]$ , the $i$ -th successive minimum $\\lambda _{i}(\\mathcal {L})$ of $\\mathcal {L}$ is $\\lambda _{i}(\\mathcal {L}) \\min _{r \\in \\mathbb {R}_{> 0}} \\bigg \\lbrace \\dim _{\\mathbb {Q}}\\Big (\\operatorname{span}_{\\mathbb {Q}}\\lbrace x \\in \\mathcal {L}\\mid \\vert x \\vert \\le r\\rbrace \\Big ) \\ge i+1 \\bigg \\rbrace .$ When $\\mathcal {L}$ is implicit, we will refer to the $i$ -th successive minimum of $\\mathcal {L}$ by $\\lambda _i$ .", "In this paper, $\\mathcal {S}$ will refer to an infinite set of isomorphism classes of orders $\\mathcal {O}$ in degree $n$ number fields.", "For such $\\mathcal {S}$ , we wish to understand which inequalities $1 \\ll _{\\mathcal {S}} \\lambda _1^{f_1} \\dots \\lambda _{n-1}^{f_{n-1}}$ hold for $f_1,\\dots ,f_{n-1} \\in \\mathbb {R}$ and $\\mathcal {O}\\in \\mathcal {S}$ .", "More generally, how do the successive minima of $\\mathcal {O}$ behave?", "It is clear that $\\lambda _0 \\le \\lambda _1 \\le \\dots \\le \\lambda _{n-1}$ .", "Moreover, it is not too difficult to prove that $\\lambda _0 = 1$ (lambdazerolemma).", "Minkowski's second theorem shows that $\\prod _{i = 1}^{n-1} \\lambda _i \\asymp _n \\Delta ^{1/2}$ , where $\\Delta $ is the absolute value of the discriminant of $\\mathcal {O}$ (see , Lecture 10, §6)." 
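, "To fix ideas (this toy example is not used in what follows), take $K=\\mathbb {Q}(\\sqrt{D})$ for a squarefree integer $D>1$ and the order $\\mathcal {O}=\\mathbb {Z}[\\sqrt{D}]$ : the two embeddings send $a+b\\sqrt{D}$ to $a\\pm b\\sqrt{D}$ , so $\\vert a+b\\sqrt{D}\\vert ^2=\\frac{1}{2}\\big ((a+b\\sqrt{D})^2+(a-b\\sqrt{D})^2\\big )=a^2+Db^2$ ; hence $\\lambda _0=\\vert 1\\vert =1$ , while every element with $b\\ne 0$ has norm at least $\\sqrt{D}$ , so $\\lambda _1=\\vert \\sqrt{D}\\vert =\\sqrt{D}$ , and since $\\Delta =4D$ for this order, this is consistent with $\\lambda _1\\asymp _2\\Delta ^{1/2}$ ."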
], [ "Background and Preliminary Results", "The following results show that the successive minima satisfy additional constraints.", "theorembstttz(Bhargava, Shankar, Taniguchi, Thorne, Tsimerman, Zhao ) For any integers $i,j \\in [n]$ with $i + j \\ge n-1$ and any order $\\mathcal {O}$ in a degree $n$ number field, we have $\\lambda _{n-1}(\\mathcal {O}) \\le \\sqrt{n} \\lambda _{i}(\\mathcal {O})\\lambda _{j}(\\mathcal {O}).$ theoremchiche(Chiche-lapierre ) For any integer $1 \\le i < n$ and any order $\\mathcal {O}$ in a number field $K$ of degree $n$ such that $K$ does not contain a subfield of degree $d > 1$ with $d \\mid i$ , we have $\\lambda _{i}(\\mathcal {O}) \\le \\sqrt{n} \\lambda _{1}(\\mathcal {O}) \\lambda _{i-1}(\\mathcal {O}).$ If $n \\ge 5$ , then for any order $\\mathcal {O}$ in a degree $n$ number field which does contain subfields of degree 2, 3, or 4, we have $\\lambda _{4}(\\mathcal {O}) \\le \\sqrt{n} \\lambda _{2}(\\mathcal {O})^2.$ The statement of bstttz (respectively chiche) is stronger than the statement given in (respectively ); a slight modification of the proof in (resp. )", "gives this improved result.", "We will not present the modifications of the above proofs in this paper; we instead will prove both theorems are a corollary of exthm.", "We say a number field $K$ is primitive if $[K \\colon \\mathbb {Q}] > 1$ and the only subfields of $K$ are itself and $\\mathbb {Q}$ .", "Bhargava and Lenstra proved the following theorem for orders contained in primitive number fields.", "A direct proof of Bhargava and Lenstra's theorem is not given in this paper; instead, it will be proven as a corollary of exthm.", "theoremprimcorollary For any integers $0 \\le i, j \\le i + j < n$ and any order $\\mathcal {O}$ in a primitive degree $n$ number field, we have $\\lambda _{i+j}(\\mathcal {O}) \\le \\sqrt{n} \\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O}).$ For any two positive integers $i$ and $j$ with $j \\ne 0$ , let $i \\operatorname{\\%}j \\in \\lbrace 0,\\dots ,j-1\\rbrace $ denote the remainder when dividing $i$ by $j$ .", "One of our main results in this paper will be the following theorem generalizing bstttz, chiche, and primcorollary.", "theoremexthm Let $1 = k_1 < k_2 < \\dots < k_{\\ell }=n$ be positive integers such that $k_s \\mid n$ for all $1 \\le s \\le \\ell $ .", "Choose any integers $0 \\le i,j \\le i + j < n$ .", "The following statements are equivalent: There exists a number field $K$ of degree $n$ whose subfields have precisely the degrees $k_1,\\dots ,k_{\\ell }$ and there exists a constant $c_K \\in \\mathbb {R}_{>0}$ depending only on $K$ such that $\\lambda _{i+j}(\\mathcal {O}) \\le c_K\\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O})$ for every order $\\mathcal {O}\\subseteq K$ .", "There exists a number field $K$ of degree $n$ whose subfields have precisely the degrees $k_1,\\dots ,k_{\\ell }$ and there exists a constant $c_K \\in \\mathbb {R}_{>0}$ depending only on $K$ such that $\\lambda _{i+j}(I) \\le c_K\\lambda _i(I)\\lambda _j(\\mathcal {O})$ for every order $\\mathcal {O}\\subseteq K$ and every fractional ideal $I$ of $\\mathcal {O}$ .", "For every order $\\mathcal {O}$ in a degree $n$ number field such that $\\mathcal {O}\\otimes \\mathbb {Q}$ has subfields of precisely the degrees $k_1,\\dots ,k_{\\ell }$ , we have $\\lambda _{i+j}(\\mathcal {O}) \\le \\sqrt{n}\\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O}).$ For every order $\\mathcal {O}$ in a degree $n$ number field such that $\\mathcal {O}\\otimes \\mathbb {Q}$ has 
subfields of precisely the degrees $k_1,\\dots ,k_{\\ell }$ and every fractional ideal $I$ of $\\mathcal {O}$ , we have $\\lambda _{i+j}(I) \\le \\sqrt{n}\\lambda _i(I)\\lambda _j(\\mathcal {O}).$ We have $(i\\operatorname{\\%}k_s) + (j\\operatorname{\\%}k_s) = (i + j)\\operatorname{\\%}k_s$ for all $1 \\le s \\le {\\ell }$ .", "We will prove exthm in exthmproof.", "An eager reader may skip directly to the proof, as it has no logically dependencies to any other content in the introduction.", "To see that exthm implies bstttz, observe that if $i + j = n-1$ , then $(i\\operatorname{\\%}k) + (j\\operatorname{\\%}k) = (i + j)\\operatorname{\\%}k$ for all $k \\mid n$ .", "We will now see that exthm implies the first statement of chiche.", "First, choose any integer $1 \\le i < n$ and let $j = 1$ .", "Let $\\mathcal {O}$ be an order in a number field $K$ of degree $n$ such that $K$ does not contain a subfield of degree $d > 1$ with $d \\mid i$ .", "Let $1 = k_1 < \\dots < k_{\\ell }=n$ be the degrees of the subfields of $K$ , in increasing order.", "Because $k_{s} \\nmid i$ for any $1 \\le s \\le \\ell $ , we have $(i\\operatorname{\\%}k_s) + (j\\operatorname{\\%}k_s) = (i + j)\\operatorname{\\%}k_s$ for all $1 \\le s \\le \\ell $ , and thus $\\lambda _{i}(\\mathcal {O}) \\le \\sqrt{n} \\lambda _{1}(\\mathcal {O}) \\lambda _{i-1}(\\mathcal {O}).$ To see that exthm implies the second statement of chiche, let $i = j = 2$ .", "Let $\\mathcal {O}$ be an order in a number field $K$ of degree $n$ such that $K$ does not contain subfields of degree 2, 3, or 4.", "Let $1 = k_1 < \\dots < k_{\\ell }=n$ be the degrees of the subfields of $K$ , in increasing order; observe that $k_2 \\ge 5$ , so $(i\\operatorname{\\%}k_s) + (j\\operatorname{\\%}k_s) = (i + j)\\operatorname{\\%}k_s$ for all $1 \\le s \\le \\ell $ , and thus $\\lambda _{4}(\\mathcal {O}) \\le \\sqrt{n} \\lambda _{2}(\\mathcal {O})^2.$ If we set $\\ell = 2$ , so that $k_1 = 1$ and $k_2 = n$ , exthm immediately implies primcorollary.", "In fact, relations between the successive minima are closely related to multiplication in the ambient field.", "Let $\\mathcal {B}= \\lbrace v_0,v_1\\dots ,v_{n-1}\\rbrace $ be a basis of a degree $n$ number field $K$ .", "For $i \\in [n]$ and $v \\in K$ , let $\\pi _{i,\\mathcal {B}}(v)$ be the coefficient of $v_i$ in the expansion of $v$ with respect to the basis $\\mathcal {B}$ .", "We will prove inboundprop in inboundpropproof.", "propositioninboundprop Let $\\mathcal {O}$ be an order in a degree $n$ number field and let $I$ be a fractional ideal of $\\mathcal {O}$ .", "Let $\\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace \\subseteq \\mathcal {O}$ be linearly independent elements such that $\\lambda _i(\\mathcal {O}) = \\operatorname{\\vert }v_i\\operatorname{\\vert }$ , and let $\\mathcal {B}=\\lbrace w_0,\\dots ,w_{n-1}\\rbrace \\subseteq I$ be linearly independent elements such that $\\lambda _i(I) = \\operatorname{\\vert }w_i\\operatorname{\\vert }$ .", "For any $i,j,k \\in [n]$ , if $\\pi _{k,\\mathcal {B}}(v_iw_j) \\ne 0$ , then $\\lambda _{k}(I) \\le \\sqrt{n} \\lambda _i(\\mathcal {O})\\lambda _j(I).$ Moreover, for any $i,j,i+j\\in [n]$ , if $\\lambda _{i+j}(I) > \\sqrt{n} \\lambda _i(\\mathcal {O})\\lambda _j(I)$ , then there exists an integer $1 < m < n$ such that $\\mathcal {O}\\otimes \\mathbb {Q}$ has a degree $m$ subfield and $(i \\operatorname{\\%}m) + (j \\operatorname{\\%}m) \\ne (i+j) \\operatorname{\\%}m$ ." 
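, "The divisibility condition in statement $(5)$ of exthm is elementary arithmetic, and the following minimal Python sketch (the function name and the numerical check are ours, included only to illustrate the condition) verifies it for given indices $i,j$ and a list of subfield degrees; as in the discussion above, a primitive degree $n$ field has subfield degrees $\\lbrace 1,n\\rbrace $ , and the condition then holds for all $0 \\le i, j \\le i + j < n$ , matching primcorollary.", "\\begin{verbatim}
def condition_five_holds(i, j, subfield_degrees):
    # True iff (i % k) + (j % k) == (i + j) % k for every listed degree k.
    return all((i % k) + (j % k) == (i + j) % k for k in subfield_degrees)

# Primitive case: the subfield degrees are 1 and n only.
n = 7
assert all(condition_five_holds(i, j, [1, n])
           for i in range(n) for j in range(n) if i + j < n)
\\end{verbatim}"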
], [ "Lenstra's conjecture", "Hendrik Lenstra partitioned the set of isomorphism classes of orders in degree $n$ number fields in a different way into a finite set of subsets and conjectured which inequalities $\\lambda _{i+j} \\ll \\lambda _{i} \\lambda _{j}$ hold for each subset.", "Although the conjecture turns out to be false, as we will show in this paper (deg8counterex), it contains valuable insight.", "Definition 1.2 A tower type is a $t$ -tuple of integers $(n_1,\\dots ,n_t) \\in \\mathbb {Z}_{>1}^t$ for some $t \\ge 1$ .", "We say $t$ is the length of the tower type and $\\prod _{i = 1}^tn_i$ is the degree of the tower type.", "Throughout this article, the variable $\\mathfrak {T}$ will refer to a tower type of length $t$ and degree $n$ .", "We now canonically associate a tower type to a tower of field extensions.", "Suppose we have a tower of fields $\\mathbb {Q}= K_0 \\subset K_1 \\subset \\dots \\subset K_t = K$ with no trivial extensions and $t \\ge 1$ .", "The tower type of $K_0 \\subset K_1 \\subset \\dots \\subset K_t$ is $\\big ([K_1 \\colon K_0], [K_2 \\colon K_1], \\dots , [K_t \\colon K_{t-1}]\\big ).$ To an order $\\mathcal {O}$ in a number field $K$ , we canonically associate a tower of subfields $\\mathbb {Q}= K_0 \\subset K_1 \\subset \\dots \\subset K_t = K$ with no trivial extensions and $t \\ge 1$ as follows; a field $L$ belongs to the tower if and only if there exists a positive real number $r$ such that $L$ is obtained from $\\mathbb {Q}$ by adjoining all $x \\in \\mathcal {O}$ for which $\\vert x \\vert \\le r$ .", "We say the tower type of $\\mathcal {O}$ is the tower type of $K_0 \\subset K_1 \\subset \\dots \\subset K_t$ .", "Definition 1.3 Let $\\mathcal {S}(\\mathfrak {T})$ be the set of isomorphism classes of orders $\\mathcal {O}$ with tower type $\\mathfrak {T}$ .", "Write $\\mathfrak {T}= (n_1,\\dots ,n_t)$ .", "The set $\\mathcal {S}(\\mathfrak {T})$ is infinite (see prototypicalExample for a construction of infinitely many elements of $\\mathcal {S}(\\mathfrak {T})$ ).", "Mixed radix notation will be useful to concisely state arithmetic conditions throughout this paper.", "Definition 1.4 Choose a tower type $\\mathfrak {T}= (n_1,\\dots ,n_t)$ and $i \\in [n]$ .", "Writing $i$ in mixed radix notation with respect to $\\mathfrak {T}$ means writing $i = i_1 + i_2 n_1 + i_3 (n_1n_2) + \\dots + i_t(n_1\\dots n_{t-1})$ where $i_s$ is an integer such that $0 \\le i_s < n_s$ for $1\\le s \\le t$ .", "Note that the integers $i_s$ are uniquely determined.", "definitionoverflowdefn Fix a tower type $\\mathfrak {T}=(n_1,\\dots ,n_t)$ and integers $0 \\le i, j \\le i + j < n$ .", "We say the addition $i+j$ does not overflow modulo $\\mathfrak {T}$ if we write $i$ , $j$ , and $k=i+j$ in mixed radix notation with respect to $\\mathfrak {T}$ as $i = i_1 + i_2 n_1 + i_3 (n_1n_2) + \\dots + i_t(n_1\\dots n_{t-1})$ $j = j_1 + j_2 n_1 + j_3 (n_1n_2) + \\dots + j_t(n_1\\dots n_{t-1})$ $k = k_1 + k_2 n_1 + k_3 (n_1n_2) + \\dots + k_t(n_1\\dots n_{t-1})$ and $i_s + j_s = k_s$ for all $1 \\le s \\le t$ .", "Otherwise, we say the addition $i+j$ overflows modulo $\\mathfrak {T}$.", "Note that condition $(5)$ in exthm is equivalent to the condition that $i+j$ does not overflow modulo $(k_s, n/k_s)$ for all $1\\le s \\le \\ell $ .", "Hendrik Lenstra suggested the following infinite set of isomorphism classes of orders as a model for the extremal case when determining when inequalities $\\lambda _{i+j} \\ll \\lambda _{i} \\lambda _{j}$ should hold.", "Example 1.5 For a tower type 
$\\mathfrak {T}= (n_1,\\dots ,n_t)$ of degree $n$ , let $\\lbrace (\\alpha _{1,\\ell },\\dots ,\\alpha _{t,\\ell })\\rbrace _{\\ell \\in \\mathbb {Z}_{>0}}$ be a sequence of $t$ -tuples of algebraic integers such that: for all $1 \\le i \\le t$ and all $\\ell \\in \\mathbb {Z}_{>0}$ we have $[\\mathbb {Q}(\\alpha _{1,\\ell },\\dots ,\\alpha _{i,\\ell }) \\colon \\mathbb {Q}(\\alpha _{1,\\ell },\\dots ,\\alpha _{i-1,\\ell })] = n_i;$ for all $1 \\le i \\le t$ and all $\\ell \\in \\mathbb {Z}_{>0}$ we have $\\vert \\alpha _{i,\\ell } \\vert \\vert \\alpha _{j,\\ell } \\vert \\asymp \\vert \\alpha _{i,\\ell } \\alpha _{j,\\ell }\\vert ;$ for all $1 \\le i < t$ and all $\\ell \\in \\mathbb {Z}_{>0}$ we have $\\lim _{\\ell \\rightarrow \\infty } \\frac{\\vert \\alpha _{1,\\ell } \\vert ^{n_1-1} \\dots \\vert \\alpha _{i,\\ell }\\vert ^{n_i-1}}{\\vert \\alpha _{i+1,\\ell }\\vert } = 0;$ and all $\\ell \\in \\mathbb {Z}_{>0}$ we have $\\operatorname{\\vert }\\operatorname{Disc}(\\mathbb {Z}[\\alpha _{1,\\ell },\\dots ,\\alpha _{t,\\ell }]) \\operatorname{\\vert }\\asymp \\prod _{i = 1}^t \\operatorname{\\vert }\\alpha _{i,\\ell } \\operatorname{\\vert }^{n(n_i-1)/2}.$ Moreover, given a number field $K$ and elements $\\alpha _1,\\dots ,\\alpha _t \\in K$ such that $\\deg (\\mathbb {Q}(\\alpha _1,\\dots ,\\alpha _{\\ell })) = n_1\\dots n_{\\ell }$ for all $1 \\le \\ell \\le t$ , the set $\\lbrace (\\alpha _{1,\\ell },\\dots ,\\alpha _{t,\\ell })\\rbrace _{\\ell \\in \\mathbb {Z}_{>0}}$ can be chosen so that all elements $\\alpha _{i,\\ell }$ lie in $K$ ); see existenceofalpha for a proof.", "Define the order $\\mathcal {O}_{\\ell } \\mathbb {Z}[\\alpha _{1,\\ell },\\dots ,\\alpha _{t,\\ell }]$ and consider the basis $\\mathcal {B}_{\\ell } = \\lbrace 1=v_{0,\\ell },\\dots ,v_{n-1,\\ell }\\rbrace $ of $\\mathcal {O}_{\\ell }$ defined as follows.", "For $i \\in [n]$ , write $i = i_1 + i_2 (n_1) + i_3 (n_1 n_2) + \\dots + i_t(n_1\\dots n_{t-1})$ in mixed radix notation with respect to $\\mathfrak {T}$ ; then set $v_{i,\\ell } = \\alpha _{1,\\ell }^{i_1}\\dots \\alpha _{t,\\ell }^{i_{t}}$ .", "The basis $\\mathcal {B}_{\\ell }$ is essentially lexicographic; it is $\\mathcal {B}_{\\ell } = \\bigg \\lbrace 1, \\alpha _{1,\\ell }, \\alpha _{1,\\ell }^2,\\dots ,\\alpha _{1,\\ell }^{n_1-1}, \\alpha _{2,\\ell }, \\alpha _{1,\\ell }\\alpha _{2,\\ell }, \\alpha _{1,\\ell }^2\\alpha _{2,\\ell },\\dots ,\\prod _{s=1}^{t}\\alpha _{s,\\ell }^{n_s-1} \\bigg \\rbrace .$ Thus $\\lambda _{i}(\\mathcal {O}_{\\ell }) \\asymp \\vert \\alpha _{1,\\ell }\\vert ^{i_1}\\dots \\vert \\alpha _{t,\\ell }\\vert ^{i_{t}}$ .", "For any $\\ell $ and $1\\le i,j,i+j \\le n$ , observe that $i+j$ does not overflow modulo $\\mathfrak {T}$ if and only if $v_{i+j,\\ell } = v_{i,\\ell }v_{j,\\ell }$ .", "If $i+j$ does not overflow modulo $\\mathfrak {T}$ then $\\lambda _{i+j}(\\mathcal {O}_{\\ell }) \\asymp \\vert v_{i+j,\\ell } \\vert \\asymp \\vert v_{i,\\ell } \\vert \\vert v_{j,\\ell } \\vert \\asymp \\lambda _{i}(\\mathcal {O}_{\\ell }) \\lambda _{j}(\\mathcal {O}_{\\ell }).$ If $i+j$ overflows modulo $\\mathfrak {T}$ , then write $i$ , $j$ , and $k = i+j$ in mixed radix notation with respect to $\\mathfrak {T}= (n_1,\\dots ,n_t)$ as $i &= i_1 + i_2 (n_1) + i_3 (n_1 n_2) + \\dots + i_t(n_1\\dots n_{t-1}) \\\\j &= j_1 + j_2 (n_1) + j_3 (n_1 n_2) + \\dots + j_t(n_1\\dots n_{t-1}) \\\\k &= k_1 + k_2(n_1) + k_3(n_1n_2) + \\dots + k_t(n_1\\dots n_t).$ Let $s$ be the largest index such that $i_s + j_s \\ne k_s$ ; note that $s \\ne t$ and 
$k_s = i_s + j_s+1$ .", "Then $\\lim _{\\ell \\rightarrow \\infty } \\frac{\\lambda _{i}(\\mathcal {O}_{\\ell }) \\lambda _{j}(\\mathcal {O}_{\\ell })}{\\lambda _{k}(\\mathcal {O}_{\\ell })} &\\asymp \\lim _{\\ell \\rightarrow \\infty } \\frac{\\vert \\alpha _{1,\\ell } \\vert ^{i_1+j_1} \\dots \\vert \\alpha _{t,\\ell }\\vert ^{i_t+j_t}}{\\vert \\alpha _{1,\\ell } \\vert ^{k_1} \\dots \\vert \\alpha _{t,\\ell }\\vert ^{k_t}} \\\\&\\asymp \\lim _{\\ell \\rightarrow \\infty } \\bigg (\\frac{\\vert \\alpha _{1,\\ell } \\vert ^{i_1+j_1} \\dots \\vert \\alpha _{s-1,\\ell }\\vert ^{i_{s-1}+j_{s-1}}}{\\vert \\alpha _{1,\\ell } \\vert ^{k_1} \\dots \\vert \\alpha _{s-1,\\ell }\\vert ^{k_{s-1}}}\\bigg )\\bigg (\\frac{1}{\\operatorname{\\vert }\\alpha _{s,\\ell } \\operatorname{\\vert }}\\bigg ) \\\\&\\le \\lim _{\\ell \\rightarrow \\infty } \\frac{\\vert \\alpha _{1,\\ell } \\vert ^{n_1-1} \\dots \\vert \\alpha _{s-1,\\ell }\\vert ^{n_{s-1}-1}}{\\vert \\alpha _{s,\\ell }\\vert } \\\\&= 0.$ Therefore, in this example, $\\lambda _{i+j}(\\mathcal {O}_{\\ell }) \\ll \\lambda _i(\\mathcal {O}_{\\ell })\\lambda _j(\\mathcal {O}_{\\ell })$ if and only if $i+j$ does not overflow modulo $\\mathfrak {T}$ .", "The above example motivated the conjecture below.", "Conjecture 1.6 (Lenstra) Fix a tower type $\\mathfrak {T}= (n_1,\\dots ,n_t)$ and choose integers $0 \\le i, j \\le i + j < n$ .", "Then the following statements are equivalent: There exists a constant $c_{\\mathfrak {T}} \\in \\mathbb {R}_{>0}$ , depending only on $\\mathfrak {T}$ , such that $\\lambda _{i + j}(\\mathcal {O}) \\le c_{\\mathfrak {T}} \\lambda _i(\\mathcal {O}) \\lambda _j(\\mathcal {O})$ for all $\\mathcal {O}\\in \\mathcal {S}(\\mathfrak {T})$ .", "The addition $i + j$ does not overflow modulo $\\mathfrak {T}$ .", "It is not too difficult a to prove stronger version of one direction (onlyifthm) of lenstraconjoriginalstatement.", "Definition 1.7 For a number field $K$ and a tower type $\\mathfrak {T}$ of degree $n$ , let $\\mathcal {S}(\\mathfrak {T},K)$ be the set of isomorphism classes of orders in $K$ with tower type $\\mathfrak {T}$ .", "The set $\\mathcal {S}(\\mathfrak {T},K)$ is either empty or infinite, as the construction in prototypicalExample shows.", "We prove the following in onlyifthmproof.", "propositiononlyifthm For a degree $n$ number field $K$ and a tower type $\\mathfrak {T}$ of degree $n$ , if $\\mathcal {S}(\\mathfrak {T},K)$ is infinite and there exists a constant $c_K \\in \\mathbb {R}_{>0}$ , depending only on $K$ , such that $\\lambda _{i + j}(\\mathcal {O}) \\le c_{K} \\lambda _i(\\mathcal {O}) \\lambda _j(\\mathcal {O})$ for all $\\mathcal {O}\\in \\mathcal {S}(\\mathfrak {T},K)$ , then $i+j$ does not overflow modulo $\\mathfrak {T}$ .", "Remarkably, lenstraconjoriginalstatement is true when $n < 8$ , as we will prove in lenstrathmproof below.", "We exhibit a counterexample to Lenstra's conjecture when $n=8$ (deg8counterex) and then ask if a variant of Lenstra's conjecture is true in general (refinedguessintro).", "We prove this variant when $n < 18$ , $n$ is a prime power, or $n$ is a product of 2 distinct primes, and exhibit a counterexample for all other $n$ (refinedguessthmintrothm).", "theoremlenstrathm Lenstra's conjecture (lenstraconjoriginalstatement) is true in the following cases: $n < 8$ ; $n = 8$ and $\\mathfrak {T}\\ne (8)$ ; $n$ is prime; $\\mathfrak {T}= (p,\\dots ,p)$ for a prime $p$ ; $\\mathfrak {T}= (2,p)$ for a prime $p$ ; or $\\mathfrak {T}= (3,p)$ for a prime $p$ .", "We present a 
counterexample to lenstraconjoriginalstatement when $n = 8$ and $\\mathfrak {T}= (8)$ .", "Counterexample 1.8 The addition $3+3$ does not overflow modulo $\\mathfrak {T}= (8)$ .", "We will produce an infinite set of isomorphism classes of orders in degree 8 fields with tower type $\\mathfrak {T}= (8)$ such that $\\lambda _{3} = o(\\lambda _6)$ .", "Let $\\alpha \\in \\overline{\\mathbb {Q}}$ be quadratic and let $\\beta \\in \\overline{\\mathbb {Q}}$ be such that $[\\mathbb {Q}(\\beta )\\colon \\mathbb {Q}] = 8$ and $[\\mathbb {Q}(\\alpha , \\beta ) \\colon \\mathbb {Q}(\\alpha )] = 4$ .", "By constructM, there exists an infinite set of positive integers $\\mathcal {M}\\subseteq \\mathbb {Z}_{>0}$ such that for all $M \\in \\mathcal {M}$ , the lattice $\\mathcal {O}_M \\mathbb {Z}\\langle 1, M^{x_1}\\beta , M^{x_2}\\alpha , M^{x_3}\\alpha \\beta , M^{x_4}\\beta ^2, M^{x_5}\\alpha \\beta ^2, M^{x_6}\\beta ^3, M^{x_7}\\alpha \\beta ^3 \\rangle $ is an order for $(x_1,\\dots ,x_7) = (4,5,5,8,8,12,12)$ .", "Then $\\lambda _{i}(\\mathcal {O}_M) \\asymp _{\\alpha ,\\beta } M^{x_i}$ for $1 \\le i < 8$ .", "Because $\\lambda _1(\\mathcal {O}_M) \\asymp _{\\alpha ,\\beta } M^4 = o(M^5) = o(\\lambda _2(\\mathcal {O}_M)),$ the order $\\mathcal {O}_M$ has tower type $\\mathfrak {T}= (8)$ for sufficiently large $M$ .", "Observe that $\\lambda _3(\\mathcal {O}_M)^2 \\asymp _{\\alpha ,\\beta } M^{10} = o(M^{12}) = o(\\lambda _6(\\mathcal {O}_M)).$" ], [ "Restating Lenstra's conjecture using logarithms and polytopes", "deg8counterex demonstrates that Lenstra's conjecture is false.", "We instead ask if some variant of lenstraconjoriginalstatement is true; in order to state this variant, we first rephrase lenstraconjoriginalstatement in different language.", "Let $\\Delta _{\\mathcal {O}}$ be the absolute value of the discriminant of the order $\\mathcal {O}$ .", "When $\\mathcal {O}$ is implicit, we simply write $\\Delta $ .", "Definition 1.9 Let $\\mathcal {S}$ be an infinite set of isomorphism classes of orders in degree $n$ number fields.", "If $\\lim _{\\mathcal {O}\\in \\mathcal {S}}\\Big (\\log _{ \\Delta }\\lambda _{1},\\dots ,\\log _{ \\Delta }\\lambda _{n-1}\\Big )$ exists in $\\mathbb {R}^{n-1}$ , call the limit the Minkowski type of $\\mathcal {S}$ and denote it by $\\mathbf {x}_{\\mathcal {S}}$ .", "Definition 1.10 Let $\\mathcal {S}$ be an infinite set of isomorphism classes of orders in degree $n$ number fields.", "The successive minima spectrum of $\\mathcal {F}$ is $\\operatorname{Spectrum}(\\mathcal {S}) \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\mid \\exists \\; \\mathcal {S}^{\\prime }\\subseteq \\mathcal {S}\\;\\; s.t.", "\\;\\; \\mathbf {x}_{\\mathcal {S}^{\\prime }} = \\mathbf {x} \\rbrace .$ For the infinite set $\\mathcal {S}(\\mathfrak {T})$ , the set $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T}))$ captures which inequalities $\\lambda _{k} \\ll \\lambda _i\\lambda _j$ hold.", "We prove the following two theorems in lenstraeqthmproof.", "theoremlenstraeqthmintro For any tower type $\\mathfrak {T}$ , any integers $1 \\le i,j,k < n$ , and any set $\\mathcal {C}$ of isomorphism classes of degree $n$ number fields, we have $\\operatorname{Spectrum}\\Big (\\bigcup _{K \\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T}, K)\\Big ) \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\;\\; \\vert \\;\\; x_k \\le x_i + x_j\\rbrace $ if and only if there exists a constant $c_{\\mathfrak {T},\\mathcal {C}} \\in \\mathbb {R}_{>0}$ dependent only on $\\mathfrak {T}$ and 
$\\mathcal {C}$ such that $\\lambda _{k}(\\mathcal {O}) \\le c_{\\mathfrak {T},\\mathcal {C}} \\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O})$ for all $\\mathcal {O}\\in \\cup _{K \\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T}, K))$ .", "For example, one could let $\\mathcal {C}$ be the set of isomorphism classes of degree $n$ number fields with Galois group $S_n$ .", "Definition 1.11 For a number field $K$ , let $\\mathcal {S}(K)$ be the set of isomorphism classes of orders in $K$ .", "theoremkthmeq For any number field $K$ and any integers $1 \\le i,j,k < n$ , the following are equivalent: we have $\\operatorname{Spectrum}(\\mathcal {S}(K)) \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\;\\; \\vert \\;\\; x_k \\le x_i + x_j\\rbrace $ ; there exists a constant $c_{K} \\in \\mathbb {R}_{>0}$ dependent only on $K$ such that $\\lambda _{k}(\\mathcal {O}) \\le c_{K} \\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O})$ for all $\\mathcal {O}\\in \\mathcal {S}(K)$ ; $\\lambda _{k}(\\mathcal {O}) \\le \\sqrt{n} \\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O})$ for all $\\mathcal {O}\\in \\mathcal {S}(K)$ ; and $K$ has no subfield of degree $m$ such that $(i \\operatorname{\\%}m) + (j \\operatorname{\\%}m) = (i+j)\\operatorname{\\%}m$ .", "Therefore, our primary goal is to determine the sets $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T}))$ and $\\operatorname{Spectrum}(\\mathcal {S}(K))$ .", "We restate Lenstra's conjecture as follows.", "conjlenstraconjrestated For any integers $1 \\le i, j < i+j < n$ , we have $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T})) \\subseteq \\lbrace \\mathbf {x}=(x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1} \\mid x_{i+j} \\le x_i + x_j\\rbrace $ if and only if $i+j$ does not overflow modulo $\\mathfrak {T}$ .", "lenstraeqthmintro shows that lenstraconjoriginalstatement and lenstraconjrestated are equivalent.", "However, we can improve the statement of lenstraconjrestated even further.", "For a tower type $\\mathfrak {T}$ , we would like to simultaneously make reference to all linear inequalities $x_{i+j} \\le x_i + x_j$ such that $i+j$ does not overflow modulo $\\mathfrak {T}$ ; in other words, we would like to specify a polytope in $\\mathbb {R}^{n-1}$ encoding all linear half spaces $x_{i+j} \\le x_i + x_j$ such that $i+j$ does not overflow modulo $\\mathfrak {T}$ .", "Definition 1.12 The Lenstra polytope $\\operatorname{Len}(\\mathfrak {T})$ of a tower type $\\mathfrak {T}$ is the set of $\\mathbf {x} = (x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1}$ satisfying the following conditions: $\\sum _{i = 1}^{n-1}x_i = 1/2$ ; $0 \\le x_1 \\le x_2 \\le \\dots \\le x_{n-1}$ ; and $x_{i+j} \\le x_i + x_j$ for $i+j$ not overflowing modulo $\\mathfrak {T}$ .", "The conditions in LenstraPolytopeDefinition arise from constraints on the successive minima of orders.", "For example, $\\prod _{i=1}^{n-1}\\lambda _{i} \\asymp _n \\Delta ^{1/2}$ implies $\\operatorname{Spectrum}(\\mathcal {S}) \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\; \\vert \\; \\sum _{i = 1}^{n-1}x_i = 1/2\\rbrace $ for any infinite set $\\mathcal {S}$ of isomorphism classes of orders in degree $n$ number fields, which motivates condition $(1)$ .", "Similarly, condition $(2)$ arises from the fact that $1\\le \\lambda _1 \\le \\dots \\le \\lambda _{n-1}$ , and condition $(3)$ arises from the fact that Lenstra's conjecture hypothesizes $\\lambda _{i+j} \\ll _n \\lambda _{i} \\lambda _{j}$ for $i+j$ not overflowing modulo $\\mathfrak {T}$ .", 
"lenstraeqthmintro shows that lenstraconjrestated is equivalent to the statement $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T})) \\subseteq \\operatorname{Len}(\\mathfrak {T})$ .", "We will show in lenstraeqthmproof that the converse of lenstraconjrestated is true.", "theoremlenstraexistencethm For a tower type $\\mathfrak {T}$ and a number field $K$ such that $\\mathcal {S}(\\mathfrak {T},K)$ is nonempty, we have $\\operatorname{Len}(\\mathfrak {T}) \\subseteq \\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K))$ .", "We may thus rephrase lenstraconjrestated as follows.", "Conjecture 1.13 For a tower type $\\mathfrak {T}$ , we have $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T})) = \\operatorname{Len}(\\mathfrak {T})$ .", "We have discussed how lenstraconjoriginalstatement and lenstraconjrestated are equivalent, and both false.", "lenstraexistencethm shows that Lenstraconjfinal is also equivalent to lenstraconjoriginalstatement and lenstraconjrestated.", "The statement of Lenstraconjfinal will inspire a modified conjecture." ], [ "A new conjecture inspired by Lenstra's conjecture", "Definition 1.14 Let $\\mathcal {S}_n$ be the infinite set of isomorphism classes of orders in degree $n$ number fields.", "Let $\\mathfrak {T}= (8)$ .", "Although deg8counterex proves that $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T})) \\lnot \\subseteq \\operatorname{Len}(\\mathfrak {T})$ , we will prove ( refinedguessthmintrothm) that $\\operatorname{Spectrum}(\\mathcal {S}_8) = \\operatorname{Len}(2,2,2) \\cup \\operatorname{Len}(2,4)\\cup \\operatorname{Len}(4,2) \\cup \\operatorname{Len}(8).$ Thus, we weaken Lenstraconjfinal to the following.", "conjrefinedguessintro We have $\\operatorname{Spectrum}(\\mathcal {S}_n) = \\bigcup _{\\mathfrak {T}} \\operatorname{Len}(\\mathfrak {T})$ where $\\mathfrak {T}$ ranges across all tower types of degree $n$ .", "theoremrefinedguessthmintrothm refinedguessintro is true if and only if $n$ is of the following form: $n = p^k$ , with $p$ prime and $k \\ge 1$ ; $n = pq$ , with $p$ and $q$ distinct primes; or $n = 12$ .", "In particular, refinedguessintro is false when $n=18$ ." 
], [ "Expressing $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T}))$ as a union of polytopes", "We will prove that the set $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T}))$ is a finite union of polytopes.", "These polytopes are governed by the multiplicative structure of number fields; to keep track of the key datum, we make the following definition.", "definitionmulttypedefn A flag type is a function $T \\colon [n] \\times [n] \\rightarrow [n]$ such that for any $i,j \\in [n]$ , we have: $T(i,j) = T(j,i)$ ; $T(i,0) = i$ ; and $T(i,j) \\le T(i+1,j)$ for $i < n-1$ .", "Let $L/K$ be a degree $n$ field extension.", "Definition 1.15 A flag of $L/K$ is an indexed set $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ of $K$ -vector spaces contained in $L$ such that $K = F_0 \\subset F_1 \\subset \\dots \\subset F_{n-1} = L$ and $\\dim _{K} F_i = i + 1$ for all $i \\in [n]$ .", "A flag over $K$ is a flag of some degree $n$ field extension of $K$ .", "When $K$ is implicit, we say $\\mathcal {F}$ is a flag of $L$.", "Let $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ be a flag.", "Associate to $\\mathcal {F}$ the flag type $T_{\\mathcal {F}} \\colon [n]\\times [n] & \\longrightarrow [n] \\\\(i\\;\\;,\\;\\;j ) \\; & \\longmapsto \\min \\lbrace k \\in [n] \\mid F_iF_j \\subseteq F_k\\rbrace .$ Say $T_{\\mathcal {F}}$ is the flag type of $\\mathcal {F}$ .", "Analogously to the Lenstra polytope $\\operatorname{Len}(\\mathfrak {T})$ , we define a polytope encoding the data of a flag type.", "definitionsmudefn The polytope $P_{T}$ of a flag type $T$ is the set of $\\mathbf {x} = (x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1}$ satisfying the following conditions: $\\sum _{i=1}^{n-1} x_i = 1/2$ ; $0 \\le x_1 \\le \\dots \\le x_{n-1}$ ; and $x_{T(i,j)} \\le x_i + x_j$ for $1 \\le i,j < n$ .", "The polytopes $P_{T}$ are a generalization of the Lenstra polytopes.", "For example, fix a tower type $\\mathfrak {T}= (n_1,\\dots ,n_t)$ and choose elements $\\alpha _1,\\dots ,\\alpha _t\\in \\overline{\\mathbb {Q}}$ such that $[\\mathbb {Q}(\\alpha _1,\\dots ,\\alpha _t) \\colon \\mathbb {Q}] = n$ and $\\deg (\\alpha _i) = n_i$ for all $1 \\le i \\le t$ .", "Define a basis $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ of $\\mathbb {Q}(\\alpha _1,\\dots ,\\alpha _t)$ as follows: for $0 \\le i < n$ , write $i = i_1 + i_2(n_1) + i_3(n_1n_2) + \\dots + i_t(n_1\\dots n_{t-1})$ where $0 \\le i_s < n_s$ for $1 \\le s \\le t$ , and then set $v_i = \\alpha _1^{i_1}\\dots \\alpha _t^{i_t}$ .", "Then $P_{T} = \\operatorname{Len}(\\mathfrak {T})$ .", "See prototypicalExample for more details.", "To a flag $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ of a number field $K$ , we canonically associate a tower of subfields $\\mathbb {Q}= K_0 \\subset K_1 \\subset \\dots \\subset K_t = K$ with no trivial extensions as follows and $t \\ge 1$ ; a field $L$ belongs to the tower if and only if $L = \\mathbb {Q}(F_i)$ for $i \\in [n]$ .", "We say the tower type of $\\mathcal {F}$ is the tower type of $\\mathbb {Q}= K_0 \\subset K_1 \\subset \\dots \\subset K_t = K$ .", "We prove the following in lenstraeqthmproof.", "theoremsfstructurethmintro For a tower type $\\mathfrak {T}$ and a set of isomorphism classes of degree $n$ number fields $\\mathcal {C}$ , we have $\\operatorname{Spectrum}\\bigg (\\bigcup _{K \\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T},K)\\bigg ) = \\bigcup _{\\mathcal {F}}P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ ranges across all flags with tower type $\\mathfrak {T}$ of 
number fields in $\\mathcal {C}$ .", "corollarycombcorol We have: $\\operatorname{Spectrum}(\\mathcal {S}_n) = \\cup _{\\mathcal {F}}P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ ranges across all flags of degree $n$ number fields; $\\operatorname{Spectrum}(\\mathcal {S}(K)) = \\cup _{\\mathcal {F}}P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ ranges across all flags of $K$ ; $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T})) = \\cup _{\\mathcal {F}}P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ ranges across all flags of tower type $\\mathfrak {T}$ ; and for any set of isomorphism classes of degree $n$ number fields $\\mathcal {C}$ , we have $\\operatorname{Spectrum}\\bigg (\\bigcup _{K \\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T},K)\\bigg ) = \\bigcup _{K \\in \\mathcal {C}}\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K)).$" ], [ "The structure of the polytopes $P_{T_{\\mathcal {F}}}$", "sfstructurethmintro proves that $\\operatorname{Spectrum}(\\mathcal {S}_n)$ is a union of polytopes.", "We now recall several combinatorial theorems about the structure of these polytopes from earlier work of the author.", "Let $T$ be a flag type of degree $n$ .", "definitioncornerdef For any integers $0 < i,j < n$ , say $(i,j)$ is a corner of $T$ if $T(i-1,j) < T(i,j)$ and $T(i,j-1) < T(i,j)$ .", "Lemma 1.16 (Vemulapalli, ) The map from flag types to polyhedra sending $T$ to $P_{T}$ is an injection.", "Let $x_1,\\dots ,x_{n-1}$ be coordinates for $\\mathbb {R}^{n-1}$ .", "Theorem 1.17 (Vemulapalli, ) $P_T$ is an unbounded polyhedron of dimension $n-2$ .", "For $0 < i,j,k < n$ , the inequality $x_{k} \\le x_i + x_j$ defines a facet of $P_T$ if and only if $(i,j)$ is a corner and $T(i,j)=k$ .", "Associate a flag $\\mathcal {F}=\\lbrace F_i\\rbrace _{i \\in [n]}$ to a $\\mathbb {Q}$ -basis $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ of a degree $n$ number field as follows: let $F_i = \\mathbb {Q}\\langle v_0,\\dots ,v_i\\rangle $ .", "For a tower type $\\mathfrak {T}$ , let $\\mathcal {F}(\\mathfrak {T})$ be the flag arising from the basis constructed in prototypicalExample.", "Observe that $P_{T_{\\mathcal {F}}} = \\operatorname{Len}(\\mathfrak {T})$ .", "Theorem 1.18 (Vemulapalli, ) For $0 < i,j < n$ , the following are equivalent: $i + j$ does not overflow modulo $\\mathfrak {T}$ ; $(i,j)$ is a corner of $T_{\\mathcal {F}(\\mathfrak {T})}$ ; and $T_{\\mathcal {F}(\\mathfrak {T})}(i,j) = i+j$ .", "In particular, cornerfacetthm and explicitflagtype imply that all corners $(i,j)$ of $T_{\\mathcal {F}(\\mathfrak {T})}$ satisfy $T_{\\mathcal {F}(\\mathfrak {T})}(i,j) = i + j$ , and equivalently, that all facets defined by linear inequalities of the form $x_k \\le x_i + x_j$ of $\\operatorname{Len}(\\mathfrak {T})$ have $k = i+j$ .", "Theorem 1.19 (Vemulapalli, ) For a flag $\\mathcal {F}$ and every corner $(i,j)$ of $T_{\\mathcal {F}}$ , we have $T_{\\mathcal {F}}(i + j) \\ge i + j$ .", "Theorem 1.20 (Vemulapalli, ) Let $K$ be a field whose finite extensions have extensions of all degrees.", "Then there exists a flag $\\mathcal {F}$ of degree 18 such that $P_{T_{\\mathcal {F}}} \\setminus \\big ( \\operatorname{Len}(2,3,3) \\cup \\operatorname{Len}(3,2,3) \\cup \\operatorname{Len}(3,3,2)\\big ) \\ne \\emptyset $ and $T_{\\mathcal {F}}$ has a corner $(i,j)$ such that $T_{\\mathcal {F}}(i + j) > i + j$ .", "refinedguessthmintrothm provides an alternate proof of refinedguessintro when $n = 18$ ." 
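, "Because overflowdefn, LenstraPolytopeDefinition and the enumeration of tower types are finite, explicit conditions, they are easy to check by machine; the following minimal Python sketch (the function names and the small examples are ours and purely illustrative) restates them directly, recovering for instance that $3+3$ does not overflow modulo $(8)$ and that the tower types of degree 8 are exactly $(2,2,2)$ , $(2,4)$ , $(4,2)$ , and $(8)$ , as in the description of $\\operatorname{Spectrum}(\\mathcal {S}_8)$ above.", "\\begin{verbatim}
def mixed_radix_digits(i, tower):
    # Digits (i_1, ..., i_t) of i with respect to the tower type (n_1, ..., n_t).
    digits = []
    for n_s in tower:
        digits.append(i % n_s)
        i //= n_s
    return digits

def overflows(i, j, tower):
    # True iff the addition i + j overflows modulo the tower type (overflowdefn).
    di, dj, dk = (mixed_radix_digits(m, tower) for m in (i, j, i + j))
    return any(a + b != c for a, b, c in zip(di, dj, dk))

def in_lenstra_polytope(x, tower, tol=1e-9):
    # Membership test for Len(tower); x = (x_1, ..., x_{n-1}) as a list of floats.
    n = len(x) + 1
    if abs(sum(x) - 0.5) > tol:                    # (1) coordinates sum to 1/2
        return False
    if x[0] < -tol or any(x[k] > x[k + 1] + tol for k in range(n - 2)):
        return False                               # (2) 0 <= x_1 <= ... <= x_{n-1}
    return all(overflows(i, j, tower) or x[i + j - 1] <= x[i - 1] + x[j - 1] + tol
               for i in range(1, n) for j in range(1, n) if i + j < n)   # (3)

def tower_types(n):
    # All ordered factorizations of n into factors > 1, i.e. tower types of degree n.
    if n == 1:
        return [()]
    return [(d,) + rest for d in range(2, n + 1) if n % d == 0
            for rest in tower_types(n // d)]

print(overflows(3, 3, (8,)))                                     # False
print(in_lenstra_polytope([1.0 / 6, 1.0 / 6, 1.0 / 6], (2, 2)))  # True
print(tower_types(8))                              # [(2, 2, 2), (2, 4), (4, 2), (8,)]
\\end{verbatim}"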
], [ "The function field case", "Let $C$ be a geometrically integral smooth projective curve of genus $g$ over a field $k$ equipped with a finite morphism $\\pi \\colon C \\rightarrow \\mathbb {P}^1$ of degree $n \\ge 2$ .", "Let $\\mathcal {L}$ be a line bundle on $C$ .", "definitionlogsuccmincoh For $i \\in [n]$ , the $i$ -th logarithmic successive minimum of $\\mathcal {L}$ with respect to $\\pi $ is $a_i(\\mathcal {L}, \\pi ) \\min _{j \\in \\mathbb {Z}_{\\ge 0}}\\Big \\lbrace h^0\\big (C,\\mathcal {L}\\otimes \\pi ^*\\mathcal {O}_{\\mathbb {P}^1}(j+1)\\big ) - h^0\\big (C,\\mathcal {L}\\otimes \\pi ^*\\mathcal {O}_{\\mathbb {P}^1}(j)\\big ) \\ge i+1\\Big \\rbrace .$ When $\\pi $ is implicit, we will refer to the $i$ -th logarithmic successive minimum by $a_i(\\mathcal {L})$ .", "By abuse of notation, let $a_0 \\le a_1 \\le a_2 \\le \\dots \\le a_{n-1}$ be the unique integers such that $\\pi _*\\mathcal {L}\\simeq \\mathcal {O}_{\\mathbb {P}^1}(-a_0) \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_1) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_{n-1}).$ For any integer $j$ , we have $h^0(X,\\mathcal {L}\\otimes \\pi ^*\\mathcal {O}_{\\mathbb {P}^1}(j)) &= h^0(\\mathbb {P}^1, \\pi _*\\mathcal {L}\\otimes \\mathcal {O}_{\\mathbb {P}^1}(j)) \\\\&= \\sum _{i \\in [n]}h^0(\\mathbb {P}^1,\\mathcal {O}_{\\mathbb {P}^1}(j-a_i)) \\\\&= \\sum _{i \\in [n]}\\max \\lbrace 0,j+1-a_i\\rbrace ,$ so $a_i = a_i(\\mathcal {L},\\pi )$ .", "Therefore, one may equivalently define the logarithmic successive minima of $\\mathcal {L}$ with respect to $\\pi $ as the unique integers $a_0 \\le a_1 \\le a_2 \\le \\dots \\le a_{n-1}$ such that $\\pi _*\\mathcal {L}\\simeq \\mathcal {O}_{\\mathbb {P}^1}(-a_0) \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_1) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_{n-1}).$ The logarithmic successive minima $a_0(\\mathcal {O}_C), \\dots , a_{n-1}(\\mathcal {O}_C)$ of the structure sheaf are are analogous to $\\log \\lambda _i$ for an order $\\mathcal {O}$ in a degree $n$ number field, and the genus $g$ is analogous to $\\log \\Delta ^{1/2}$ ; see for a detailed discussion.", "This analogy is the reason behing the name logarithmic successive minima.", "An application of Riemann–Roch, written explicitly in riemannroch, shows that: $a_1(\\mathcal {L}) + \\dots + a_{n-1}(\\mathcal {L}) = g + n - 1 - \\deg (\\mathcal {L})$ for any line bundle $\\mathcal {L}$ ; and $a_0(\\mathcal {O}_C) = 0$ and $a_i(\\mathcal {O}_C) > 0$ for any $1 \\le i < n$ .", "For an order $\\mathcal {O}$ in a degree $n$ number field with fractional ideal $I$ , statement $(1)$ is analogous to the fact that $\\prod _{i=1}^n\\lambda _i(I) \\asymp _n \\Delta ^{1/2}N(I)$ , and statement $(2)$ is analogous the fact that $\\log _{\\Delta }\\lambda _0(\\mathcal {O}) = 0$ .", "bstttz implies that $\\lambda _{n-1} \\ll \\Delta ^{1/n}$ ; the function field analogue is called the Maroni bound and it states that $a_{n-1} \\le (2g-2)/n + 2$ .", "In the literature, the numbers $a_i(\\mathcal {O}_C) - 2$ are often denoted by $e_{n-i}$ and are called the scrollar invariants of $\\pi $ .", "The bundle $\\big (\\pi _*O_C/\\mathcal {O}_{\\mathbb {P}^1}\\big )^{\\vee }$ is called the Tschirnhausen bundle; the question of which $(n-1)$ -tuples $(e_1,\\dots ,e_{n-1})$ (equivalently, which Tschirnhausen bundles) can arise as the scrollar invariants of a curve is an old and important problem.", "In this paper, we will prove strong constraints on the scrollar invariants.", "There are a number of classical results regarding 
the logarithmic successive minima of the structure sheaf, which we will generalize.", "Theorem 1.21 (Ohbuchi , Theorem 3) If $\pi ^*(\mathcal {O}_{\mathbb {P}^1}(a_1(\mathcal {O}_C)))$ is birationally very ample, then $a_{i+j}(\mathcal {O}_C) \le a_i(\mathcal {O}_C) + a_j(\mathcal {O}_C)$ for any integers $1 \le i < n$ and $1 \le j \le n-3i$ .", "Theorem 1.22 (Deopurkar, Patel , Proposition 2.6) If $\pi \colon C \rightarrow \mathbb {P}^1$ does not factor nontrivially, then $\frac{g+n-1}{\binom{n}{2}} \le a_1(\mathcal {O}_C) \le \frac{g+n+1}{n-1}.$ The above theorem is analogous to the statement that $\Delta _{\mathcal {O}}^{1/(2 \binom{n}{2})} \ll _n \lambda _1(\mathcal {O}) \ll _n \Delta _{\mathcal {O}}^{1/2(n-1)}$ in the case when $\mathcal {O}\otimes \mathbb {Q}$ is a primitive field; this is a corollary of primcorollary (see primcorolstrongresult).", "Let $C$ , $\pi $ , and $\mathcal {L}$ be as above.", "The action map $\psi \colon \mathcal {O}_C \otimes \mathcal {L}\rightarrow \mathcal {L}$ induces an action map $\widehat{\psi } \colon \pi _{*}\mathcal {O}_C\otimes \pi _{*}\mathcal {L}\rightarrow \pi _{*}\mathcal {L}$ via the pushforward.", "Let $\eta $ denote the generic point of $C$ , let $L$ be the function field of $C$ , let $\nu $ denote the generic point of $\mathbb {P}^1$ , and let $K$ be the function field of $\mathbb {P}^1$ .", "The map $\pi $ induces isomorphisms $\phi _1 \colon (\mathcal {O}_C)_{\mid \eta } \simeq (\pi _{*}\mathcal {O}_C)_{\mid \nu }$ and $\phi _2 \colon \mathcal {L}_{\mid \eta } \simeq (\pi _*\mathcal {L})_{\mid \nu }$ of $K$ -vector spaces respecting the action maps, i.e.", "the following diagram commutes.", "$\begin{tikzcd}(\mathcal {O}_C)_{\mid \eta } \otimes \mathcal {L}_{\mid \eta } \arrow {r}{\psi _{\mid \eta }} \arrow [swap]{d}{\phi _1 \otimes \phi _2} & \mathcal {L}_{\mid \eta } \arrow {d}{\phi _2} \\(\pi _{*}(\mathcal {O}_C))_{\mid \nu } \otimes (\pi _{*}\mathcal {L})_{\mid \nu } \arrow {r}{\widehat{\psi }_{\mid \nu }}& (\pi _{*}\mathcal {L})_{\mid \nu }.\end{tikzcd}$ Moreover, observe that the top map $(\mathcal {O}_C)_{\mid \eta } \otimes \mathcal {L}_{\mid \eta } \rightarrow \mathcal {L}_{\mid \eta }$ is simply multiplication in the field $L$ , represented as a $K$ -vector space, because $(\mathcal {O}_C)_{\mid \eta } = \mathcal {L}_{\mid \eta } = L$ .", "Define two sets of $K$ -vector spaces $\mathcal {F}= \lbrace F_i\rbrace _{i \in [n]}$ and $\mathcal {G}= \lbrace G_i\rbrace _{i \in [n]}$ of $L/K$ as follows.", "Let $F_i = \phi _1^{-1}\Bigg (\Big (\mathcal {O}_{\mathbb {P}^1}(-a_0(\mathcal {O}_C)) \oplus \dots \oplus \mathcal {O}_{\mathbb {P}^1}(-a_i(\mathcal {O}_C))\Big )_{\mid \nu }\Bigg )$ and $G_i = \phi _2^{-1}\Bigg (\Big (\mathcal {O}_{\mathbb {P}^1}(-a_0(\mathcal {L})) \oplus \dots \oplus \mathcal {O}_{\mathbb {P}^1}(-a_i(\mathcal {L}))\Big )_{\mid \nu }\Bigg )$ .", "The following statements are proven in geometricsection.", "theoremfnfieldmainthm Fix $i,j \in [n]$ and let $k \in [n]$ be the smallest integer such that $F_iG_j \subseteq G_k$ .", "Then $a_{k}(\mathcal {L},\pi ) \le a_i(\mathcal {O}_C,\pi ) + a_j(\mathcal {L},\pi ).$ In particular, for any $ i,j,i+j \in [n]$ , if $a_{i+j}(\mathcal {L}) > a_i(\mathcal {O}_C) + a_j(\mathcal {L})$ then there exists an integer $1 < m < n$ such that $L/K$ has a degree $m$ subfield and $(i \operatorname{\%}m) + (j \operatorname{\%}m) \ne 
(i+j) \\operatorname{\\%}m$ .", "fnfieldmainthm is the function field analogue of inboundprop.", "definitionminktypegeometricdefn The Minkowski type of the pair $(C,\\pi )$ is $\\mathbf {x}_{C,\\pi } \\bigg (\\frac{a_1(\\mathcal {O}_C,\\pi )}{2(g+n-1)}, \\frac{a_2(\\mathcal {O}_C,\\pi )}{2(g+n-1)}, \\dots , \\frac{a_{n-1}(\\mathcal {O}_C,\\pi )}{2(g+n-1)}\\bigg ).$ Observe that if $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ is the Minkowski type of some pair $(C,\\pi )$ , then $\\sum _i x_i = 1/2$ and $0 \\le x_1 \\le \\dots \\le x_{n-1}$ , as in the number field case.", "Note also that $\\mathcal {F}$ is a flag.", "corollarygeometricthm With the above notation, we have $\\mathbf {x}_{(C,\\pi )} \\in P_{T_{\\mathcal {F}}}$ .", "In particular, for any $1 \\le i,j,i+j < n$ , if $a_{i+j}(\\mathcal {O}_C) > a_i(\\mathcal {O}_C) + a_j(\\mathcal {O}_C)$ then there exists an integer $1 < m < n$ such that $L/K$ has a degree $m$ subfield and $(i \\operatorname{\\%}m) + (j \\operatorname{\\%}m) \\ne (i+j) \\operatorname{\\%}m$ .", "anandthm is a corollary of geometricthm, and the proof is analogous to that of primcorolstrongresult.", "theoremrefinedguessthmintrothmgeom Suppose every finite extension of $K$ has extensions of every degree and suppose $n$ has the following form: $n = p^k$ , with $p$ prime and $k \\ge 1$ ; $n = pq$ , with $p$ and $q$ distinct primes; or $n = 12$ .", "Then $\\mathbf {x}_{(C,\\pi )} \\in \\bigcup _{\\mathfrak {T}} \\operatorname{Len}(\\mathfrak {T}).$" ], [ "Acknowledgments", "I am extremely grateful to Hendrik Lenstra for the many invaluable ideas, conversations, and corrections throughout the course of this project.", "I also thank Manjul Bhargava for suggesting the questions that led to this paper and for providing invaluable advice and encouragement throughout the course of this research.", "Thank you as well to Jacob Tsimerman and Arul Shankar for feedback and illuminating conversations.", "The author was supported by the NSF Graduate Research Fellowship." 
], [ "Proof of inboundprop and exthm", "For a field extension $L/K$ and two $K$ -vector spaces $I,J \\subseteq L$ , define the $K$ -vector space $IJ = \\lbrace v_iv_j \\mid v_i \\in I, v_j \\in J\\rbrace $ .", "Similarly, for a $K$ -vector space $I \\subseteq L$ , let $\\operatorname{Stab}(I) \\lbrace v \\in L \\mid vI = I\\rbrace $ .", "Lemma 2.1 (Bachoc, Serra, Zémor , Theorem 2) Let $L/K$ be a field extension of degree $n$ .", "Choose integers $0 \\le i,j,i+j < n$ .", "Let $I,J$ be dimension $i+1$ (resp.", "$j+1$ ) $K$ -vector subspaces of $L$ and suppose $\\dim _{K}IJ \\le i+j$ .", "Set $m = \\dim _{K}(\\operatorname{Stab}(IJ))$ and write $i$ and $j$ in mixed radix notation with respect to $(m, n/m)$ as $i &= i_1 + i_2 m \\\\j &= j_1 + j_2 m.$ Then $m > 1$ , the addition $i + j$ overflows modulo $m$ , $\\dim _{\\mathbb {Q}} \\operatorname{Stab}(IJ)I = (i_2 + 1)m$ , $\\dim _{K} \\operatorname{Stab}(IJ)J = (j_2 + 1)m$ , and $\\dim _{K} IJ = (i_2 + j_2 + 1)m$ .", "Let $\\mathcal {B}= \\lbrace v_0,v_1\\dots ,v_{n-1}\\rbrace $ be a basis of a degree $n$ number field $K$ .", "Recall that for $i \\in [n]$ and $v \\in K$ , let $\\pi _{i,\\mathcal {B}}(v)$ be the coefficient of $v_i$ in the expansion of $v$ with respect to the basis $\\mathcal {B}$ .", "* We have $\\lambda _k(I) = \\operatorname{\\vert }w_k \\operatorname{\\vert }\\le \\operatorname{\\vert }v_iw_j \\operatorname{\\vert }\\le \\sqrt{n}\\operatorname{\\vert }v_i\\operatorname{\\vert }\\operatorname{\\vert }w_j\\operatorname{\\vert }\\le \\sqrt{n}\\lambda _i(\\mathcal {O})\\lambda _j(I),$ where $\\operatorname{\\vert }v_iw_j \\operatorname{\\vert }\\le \\sqrt{n}\\operatorname{\\vert }v_i\\operatorname{\\vert }\\operatorname{\\vert }w_j\\operatorname{\\vert }$ by proofofsqrtn.", "For $i,j,i+j\\in [n]$ , if $\\lambda _{i+j}(I) > \\sqrt{n}\\lambda _i(\\mathcal {O})\\lambda _j(I)$ , then $\\dim _{\\mathbb {Q}}\\mathbb {Q}\\langle v_0,\\dots ,v_i\\rangle \\mathbb {Q}\\langle v_0,\\dots w_j\\rangle \\le i+j.$ Applying overflowlemma completes the proof.", "We can now prove exthm.", "* Clearly, $(4) \\Rightarrow (3) \\Rightarrow (1)$ and $(4) \\Rightarrow (2) \\Rightarrow (1)$ .", "We will show that $(5) \\Rightarrow (4)$ by overflowlemma.", "Suppose that $(i \\operatorname{\\%}k_s) + (j \\operatorname{\\%}k_s) = (i+j)\\operatorname{\\%}k_s$ for all $1 \\le s \\le t$ .", "Let $\\mathcal {O}$ be an order in a degree $n$ field with subfields of precisely the degrees $k_1,\\dots ,k_{\\ell }$ .", "Let $I$ be a fractional ideal of $\\mathcal {O}$ .", "Choose linearly independent elements $1=v_0,\\dots ,v_{n-1} \\in \\mathcal {O}$ such that $\\lambda _i(\\mathcal {O}) = \\operatorname{\\vert }v_i \\operatorname{\\vert }$ and choose linearly independent elements $w_0,\\dots ,v_{n-1} \\in I$ such that $\\lambda _i(I) = \\operatorname{\\vert }w_i \\operatorname{\\vert }$ .", "By overflowlemma, $\\dim _\\mathbb {Q}\\mathbb {Q}\\langle 1,w_1,\\dots ,w_i\\rangle \\dim _\\mathbb {Q}\\mathbb {Q}\\langle 1,v_1,\\dots ,v_j\\rangle \\ge i+j+1$ because $(i \\operatorname{\\%}k_s) + (j \\operatorname{\\%}k_s) = (i+j)\\operatorname{\\%}k_s$ for all $1 \\le s \\le t$ .", "Because $I$ is an $\\mathcal {O}$ -module, $\\lambda _{i+j}(I) \\le \\vert w_iv_j\\vert \\le \\sqrt{n}\\operatorname{\\vert }w_i\\operatorname{\\vert }\\operatorname{\\vert }v_j\\operatorname{\\vert }= \\lambda _i(I)\\lambda _j(\\mathcal {O}),$ where $\\vert w_iv_j\\vert \\le \\sqrt{n}\\operatorname{\\vert }w_i\\operatorname{\\vert }\\operatorname{\\vert }v_j\\operatorname{\\vert }$ by 
proofofsqrtn.", "Now it suffices to show that $(1) \\Rightarrow (5)$ .", "We prove the contrapositive; suppose there exists some $s$ such that $(i \\operatorname{\\%}k_s) + (j \\operatorname{\\%}k_s) \\ne (i+j) \\operatorname{\\%}k_s$ .", "Choose $\\alpha _1,\\alpha _2 \\in \\overline{\\mathbb {Q}}$ algebraic integers such that $\\deg (\\alpha _1) = k_s$ and $\\deg (\\mathbb {Q}(\\alpha _1,\\alpha _2)) = n$ .", "The construction in prototypicalExample produces an infinite set $\\mathcal {S}$ of orders contained in $K$ such that $\\lim _{\\mathcal {O}\\in \\mathcal {S}}\\frac{\\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O})}{\\lambda _{i+j}(\\mathcal {O})} = 0.$" ], [ "Constructing points in $\\operatorname{Spectrum}(\\mathcal {S}_n)$", "For any infinite set $\\mathcal {S}$ of isomorphism classes of orders in degree $n$ number fields, we prove in this section that $\\operatorname{Spectrum}(\\mathcal {S})$ is closed.", "We also provide a method to construct points in $\\operatorname{Spectrum}(\\mathcal {S}_n)$ .", "Proposition 3.1 For any infinite set $\\mathcal {S}$ of isomorphism classes of orders in degree $n$ number fields, the set $\\operatorname{Spectrum}(\\mathcal {S})$ is closed.", "Choose a sequence $\\lbrace \\mathbf {x}^i\\rbrace _{i\\in \\mathbb {Z}_{> 0}} \\subseteq \\operatorname{Spectrum}(\\mathcal {S})$ converging to $\\mathbf {x} \\in \\mathbb {R}^{n-1}$ .", "For $i \\in \\mathbb {Z}_{> 0}$ , there exists a subset $\\mathcal {S}_i \\subseteq \\mathcal {S}$ such that $\\mathbf {x}_{\\mathcal {S}_i} = \\mathbf {x}^i$ .", "Choose $\\mathcal {O}_1 \\in \\mathcal {S}_1$ .", "Now for $i \\in \\mathbb {Z}_{> 1}$ , choose $\\mathcal {O}_i \\in \\mathcal {S}_i$ such that $\\lim _{i \\rightarrow \\infty } \\big \\vert \\big (\\log _{\\Delta _i}\\lambda _{1}(\\mathcal {O}_i),\\dots ,\\log _{\\Delta _i}\\lambda _{n-1}(\\mathcal {O}_i)\\big ) - \\mathbf {x}^i\\big \\vert = 0$ and $\\Delta _i > \\Delta _{i-1}$ , where here $\\Delta _i$ is the absolute value of the discriminant of $\\mathcal {O}_i$ .", "Let $\\mathcal {S}^{\\prime } = \\lbrace \\mathcal {O}_i\\rbrace _{i\\in \\mathbb {Z}_{\\ge 1}}$ .", "Then $\\lim _{i \\rightarrow \\infty }\\big (\\log _{\\Delta _i}\\lambda _1(\\mathcal {O}_i),\\dots ,\\log _{\\Delta _i}\\lambda _{n-1}(\\mathcal {O}_i)\\big ) = \\mathbf {x}$ so $\\mathbf {x}_{\\mathcal {S}^{\\prime }} = \\mathbf {x}$ , which implies $\\mathbf {x} \\in \\operatorname{Spectrum}(\\mathcal {S})$ .", "Thus, $\\operatorname{Spectrum}(\\mathcal {S})$ is closed.", "Next, we will construct points in $\\operatorname{Spectrum}(\\mathcal {S}_n)$ ; more precisely, given $\\mathbf {x} \\in \\mathbb {R}^{n-1}$ , when and how can we produce an infinite set $\\mathcal {S}$ of isomorphism classes of orders such that $\\mathbf {x}_{\\mathcal {S}} = \\mathbf {x}$ ?", "constructM provides an explicit construction of $\\mathcal {S}$ when $\\mathbf {x}$ satisfies certain constraints.", "Let $\\mathcal {B}= \\lbrace 1=v_0,v_1\\dots ,v_{n-1}\\rbrace $ be a basis of a degree $n$ number field $K$ .", "For $i \\in [n]$ and $v \\in K$ , let $\\pi _{i,\\mathcal {B}}(v)$ be the coefficient of $v_i$ in the expansion of $v$ with respect to the basis $\\mathcal {B}$ .", "Proposition 3.2 Let $\\mathcal {B}= \\lbrace 1=v_0,v_1\\dots ,v_{n-1}\\rbrace $ be a basis of a degree $n$ number field $K$ .", "Let $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ be the flag given by $F_i = \\mathbb {Q}\\langle v_0,\\dots ,v_i \\rangle $ and choose $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ to be a rational point in 
the relative interior of $P_{T_{\\mathcal {F}}}$ .", "Define $\\mathcal {M}$ to the set of $M \\in \\mathbb {Z}_{\\ge 1}$ such that $M^{x_i + x_j - x_k}\\pi _{k,\\mathcal {B}}(v_iv_j) \\in \\mathbb {Z}$ for all $i,j,k \\in [n]$ .", "Then $\\lbrace \\mathcal {O}_M = \\mathbb {Z}\\langle 1, M^{x_1}v_1,\\dots , M^{x_{n-1}} v_{n-1} \\rangle \\rbrace _{M \\in \\mathcal {M}}$ is an infinite set of isomorphism classes of orders with Minkowski type $\\mathbf {x}$ .", "If $\\mathbb {Z}\\langle 1=v_0,v_1\\dots ,v_{n-1}\\rangle $ is an order, then it suffices to let $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ to be a rational point in $P_{T_{\\mathcal {F}}}$ .", "Let $x_0 = 0$ .", "For $M \\in \\mathcal {M}$ , $(M^{x_i}v_i)(M^{x_j}v_j) = \\sum _{k \\in [n]} M^{x_i + x_j - x_k}\\pi _{k,\\mathcal {B}}(v_iv_j) (M^{x_k}v_k).$ Suppose $\\mathbf {x}$ is in the relative interior of $P_{T_{\\mathcal {F}}}$ .", "Then we have $0 < x_1 < \\dots < x_{n-1}$ and $x_k < x_i + x_j$ for any $1 \\le i,j,k < n$ such that $k \\le T_{\\mathcal {F}}(i,j)$ .", "For any $i,j,k \\in [n]$ , if $\\pi _{k,\\mathcal {B}}(v_iv_j) \\ne 0$ , then $k \\le T_{\\mathcal {F}}(i,j)$ so $x_i + x_j - x_k > 0$ .", "Because $\\mathbf {x}$ is rational, $x_i + x_j - x_k$ is a positive rational number.", "Therefore, $\\mathcal {M}$ is an infinite set.", "Now suppose instead that $\\mathbb {Z}\\langle 1=v_0,v_1\\dots ,v_{n-1}\\rangle $ is an order and $\\mathbf {x} \\in P_{T_{\\mathcal {F}}}$ .", "Then $\\pi _{k,\\mathcal {B}}(v_iv_j) \\in \\mathbb {Z}$ and $\\pi _{k,\\mathcal {B}}(v_iv_j) \\ne 0$ implies $x_i + x_j - x_k \\ge 0$ , so $\\mathcal {M}$ is an infinite set.", "Because $M^{x_i + x_j - x_k}\\pi _{k,\\mathcal {B}}(v_iv_j) \\in \\mathbb {Z}$ for all $i,j,k \\in [n]$ , the lattice $\\mathcal {O}_M$ is an order for all $M \\in \\mathcal {M}$ .", "For an order $\\mathcal {O}_M$ , we have $\\Delta _{\\mathcal {O}_M} \\asymp _{\\mathcal {B}} M^{2(x_1 + \\dots + x_{n-1})} = M$ and $\\lambda _i(\\mathcal {O}_M) \\asymp _{\\mathcal {B}} M^{x_i}$ .", "Therefore, $\\lim _{M \\in \\mathcal {M}}\\log _{\\Delta _{\\mathcal {O}_M}}\\lambda _i(\\mathcal {O}_M) = \\lim _{M \\in \\mathcal {M}}\\log _{M}M^{x_i} = x_i.$ Thus, the infinite set of isomorphism classes of orders $\\lbrace \\mathcal {O}_M\\rbrace _{M \\in \\mathcal {M}}$ has Minkowski type $\\mathbf {x}$ .", "The infinite set of isomorphism classes of orders constructed in constructM will be essential in proving the theorems in this paper, so we give them a name.", "Definition 3.3 Suppose $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ is a basis of a degree $n$ number field, $\\mathcal {M}\\subseteq \\mathbb {Z}_{\\ge 1}$ is an infinite subset, and $\\mathbf {x}=(x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1}$ satisfies $x_1 \\le \\dots \\le x_{n-1}$ and $\\sum _i x_i = 1/2$ .", "Moreover, suppose that for all $M \\in \\mathcal {M}$ , the lattice $\\mathcal {O}_M = \\mathbb {Z}\\langle 1,M^{x_1}v_1,\\dots ,M^{x_{n-1}}v_{n-1}\\rangle $ is an order.", "Then, define $\\mathcal {S}(\\mathcal {B},\\mathcal {M}, \\mathbf {x})$ to be $\\lbrace \\mathcal {O}_M\\rbrace _{M \\in \\mathcal {M}}$ .", "The infinite set $\\mathcal {S}(\\mathcal {B},\\mathcal {M}, \\mathbf {x})$ has Minkowski type $\\mathbf {x}$ , as $\\Delta _{\\mathcal {O}_M} \\asymp _{\\mathcal {S}(\\mathcal {B},\\mathcal {M}, \\mathbf {x})} M$ and $\\lambda _{i}(\\mathcal {O}_M) \\asymp _{\\mathcal {S}(\\mathcal {B},\\mathcal {M}, \\mathbf {x})} M^{x_i}$ .", "Given a basis $\\mathcal {B}$ of a degree $n$ number field, for which 
$\\mathbf {x} \\in \\mathbb {R}^{n-1}$ does there exist a infinite set of the form $\\mathcal {S}(\\mathcal {B},\\mathcal {M}, \\mathbf {x})$ ?", "constructM gives sufficient conditions on $\\mathbf {x}$ for an infinite set of the form $\\mathcal {S}(\\mathcal {B},\\mathcal {M}, \\delta )$ to exist; we will now prove necessary conditions on $\\mathbf {x}$ .", "Proposition 3.4 Choose a set of the form $\\mathcal {S}(\\mathcal {B},\\mathcal {M},\\mathbf {x})$ .", "Let $\\mathcal {F}$ be the flag arising from $\\mathcal {B}$ .", "Then $\\mathbf {x} \\in P_{T_{\\mathcal {F}}} \\cap \\mathbb {Q}^{n-1}$ .", "Again let $x_0 = 0$ .", "By the definition of $\\mathcal {S}(\\mathcal {B},\\mathcal {M},\\mathbf {x})$ , we have $0 \\le x_1 \\le \\dots \\le x_{n-1}$ and $\\sum _i x_i = 1/2$ .", "Write $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ .", "As $\\mathbb {Z}\\langle 1, M^{x_1}v_1,\\dots ,M^{x_{n-1}}v_{n-1}\\rangle $ is a ring for all $M \\in \\mathcal {M}$ and $(M^{x_i}v_i)(M^{x_j}v_j) = \\sum _{k \\in [n]} M^{x_i + x_j - x_k}\\pi _{k,\\mathcal {B}}(v_iv_j) (M^{x_k}v_k).$ we must have that $M^{x_i + x_j - x_k}\\pi _{k,\\mathcal {B}}(v_iv_j) \\in \\mathbb {Z}$ for all $M \\in \\mathcal {M}$ .", "Thus $\\mathbf {x} \\in \\mathbb {Q}^{n-1}$ and $x_k \\le x_i + x_j$ if $\\pi _{k,\\mathcal {B}}(v_iv_j) \\ne 0$ ." ], [ "Approximating orders in $\\mathcal {S}(\\mathfrak {T})$", "In this section, we prove approximationprop, which will be necessary to approximate orders in $\\mathcal {S}(\\mathfrak {T})$ .", "Lemma 4.1 Let $L/K$ be a simple field extension of degree $n$ .", "Let $\\alpha _1, \\dots ,\\alpha _t \\in L$ be such that $K(\\alpha _1,\\dots ,\\alpha _t) = L$ .", "Then there exists $a_1, \\dots ,a_t \\in \\mathbb {Z}_{\\ne 0}$ such $K(a_1\\alpha _1 +\\dots + a_t\\alpha _t) = L$ .", "Moreover, $a_1,\\dots ,a_t$ can be chosen so that $\\vert a_i \\vert \\le n(n-1)$ .", "Define the map $I \\colon L \\rightarrow K$ sending $x$ to $\\det (1,x,x^2,\\dots ,x^{n-1})$ .", "The map $I$ a polynomial map of degree $n(n-1)/2$ that vanishes precisely on those $x \\in L$ such that $K(x) \\ne L$ .", "Let $P$ be the $t$ -dimensional $K$ -linear subspace in $L$ spanned by $\\alpha _1,\\dots ,\\alpha _t$ .", "Then $I_{\\mid P} \\ne 0$ because $K(\\alpha _1,\\dots ,\\alpha _t) = L$ and $L/K$ has finitely many subextensions.", "Because $I(x_1\\alpha _1 + \\dots + x_t\\alpha _t)$ has degree $n(n-1)/2$ and is not uniformly zero, it can vanish on at most $n(n-1)/2$ hyperplanes.", "Thus there exists an $a_1$ such that $I(a_1\\alpha _1 + x_2\\alpha _2 + \\dots + x_t\\alpha _t)$ is not uniformly zero with $\\operatorname{\\vert }a_1 \\operatorname{\\vert }\\le n(n-1)$ .", "Now proceed by induction on $n$ .", "Lemma 4.2 Let $\\mathcal {O}$ be an order in a degree $n$ number field and let $\\mathcal {L}\\subset \\mathcal {O}$ be a sublattice of index $D$ .", "Then $\\mathbb {Z}+ D\\mathcal {L}$ is an order.", "Take $x,y \\in D\\mathcal {L}$ .", "Then $xy \\in D^2\\mathcal {L}^2 \\subseteq D^2\\mathcal {O}\\subseteq D\\mathcal {L}$ .", "The following is a corollary of Minkowski's second theorem and a computation of the volume of the $n$ -ball.", "Lemma 4.3 Let $\\mathcal {O}$ be an order and let $\\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace \\in \\mathcal {O}$ be linearly independent elements such that $\\vert v_i \\vert = \\lambda _i$ .", "Then the index of the lattice $\\mathbb {Z}\\langle 1=v_0,\\dots ,v_{n-1} \\rangle $ in $\\mathcal {O}$ is at most $2^{3n/2}\\frac{\\pi ^{n/2}}{\\Gamma (n/2 + 1)}$ .", "Given a 
basis $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ of a degree $n$ number field, we say the tower type of $\\mathcal {B}$ is the tower type of the tower $\\mathbb {Q}(v_0) \\subseteq \\mathbb {Q}(v_1) \\subseteq \\dots \\subseteq \\mathbb {Q}(v_{n-1})$ with all trivial extensions removed.", "Observe that the tower type of the flag obtained from a basis is equal to the tower type of the basis itself.", "Proposition 4.4 Choose $\\mathcal {O}\\in \\mathcal {S}(\\mathfrak {T})$ .", "Then there exists a suborder $\\mathcal {O}^{\\prime } \\subseteq \\mathcal {O}$ such that: $\\mathcal {O}^{\\prime }$ has a $\\mathbb {Z}$ -basis with tower type $\\mathfrak {T}$ ; $\\Delta _{\\mathcal {O}} \\asymp _n \\Delta _{\\mathcal {O}^{\\prime }}$ ; and $\\lambda _i(\\mathcal {O}) \\asymp _n \\lambda _i(\\mathcal {O}^{\\prime })$ for all $i \\in [n]$ .", "Choose linearly independent elements $\\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace \\in \\mathcal {O}$ such that $\\vert v_i \\vert = \\lambda _i(\\mathcal {O})$ .", "Construct the $\\mathbb {Q}$ -basis $\\mathcal {B}= \\lbrace 1=v^{\\prime }_0,\\dots ,v^{\\prime }_{n-1}\\rbrace $ of $\\mathcal {O}\\otimes \\mathbb {Q}$ using the following procedure.", "Write $\\mathfrak {T}= (n_1,\\dots ,n_t)$ .", "Let the variable $i$ be the counter; start at $i = 1$ .", "At the end of step $i$ , we ensure that: $\\lbrace 1=v^{\\prime }_0,\\dots , v_i^{\\prime }, v_{i+1},\\dots ,v_{n-1}\\rbrace $ are linearly independent; and there exists $1\\le \\ell \\le t$ such that the tower $\\mathbb {Q}= \\mathbb {Q}(v^{\\prime }_0) \\subseteq \\mathbb {Q}(v^{\\prime }_0,v^{\\prime }_1) \\subseteq \\dots \\subseteq \\mathbb {Q}(v^{\\prime }_0,\\dots ,v_i^{\\prime })$ has tower type $(n_1,\\dots ,n_{\\ell })$ after all trivial extensions are removed.", "If there exists $1\\le \\ell \\le t$ such that $[\\mathbb {Q}(v_0^{\\prime },\\dots ,v_{i-1}^{\\prime },v_i)\\colon \\mathbb {Q}] = n_1\\dots n_{\\ell }$ , then set $v_i^{\\prime } = v_i$ .", "Else, let $i \\le j < n$ be the smallest integer such that $[\\mathbb {Q}(v_0^{\\prime },\\dots ,v_{i-1}^{\\prime },v_i,\\dots ,v_j)\\colon \\mathbb {Q}] = n_1\\dots n_{\\ell }$ for some $1\\le \\ell \\le t$ .", "Such a $j$ exists because the order $\\mathcal {O}$ has tower type $\\mathfrak {T}$ ; moreover, if $i \\ne 1$ then $[\\mathbb {Q}(v^{\\prime }_0,\\dots ,v_{i-1}^{\\prime }) \\colon \\mathbb {Q}] = n_1\\dots n_{\\ell -1}$ .", "Set $K = \\mathbb {Q}(v_0^{\\prime },\\dots ,v_{i-1}^{\\prime })$ and $L = K(v_i,\\dots ,v_j)$ .", "By generatinglemma, we can choose positive integers $a_i,\\dots ,a_j$ such that $\\operatorname{\\vert }a_i \\operatorname{\\vert },\\dots , \\operatorname{\\vert }a_j \\operatorname{\\vert }\\le n_{\\ell }(n_{\\ell }-1)$ and $K(a_iv_i + \\dots + a_jv_j) = L$ .", "Set $v_i^{\\prime } = a_iv_i + \\dots + a_jv_j$ .", "By construction, the basis $\\mathcal {B}$ has tower type $\\mathfrak {T}$ .", "Moreover, let $i$ be such that $[\\mathbb {Q}(v_0^{\\prime },\\dots ,v_{i-1}^{\\prime },v_i)\\colon \\mathbb {Q}] \\ne n_1\\dots n_{\\ell }$ for all $1\\le \\ell \\le t$ , and let $i \\le j < n$ be the smallest integer such that there exists $1\\le \\ell \\le t$ such that $[\\mathbb {Q}(v_0^{\\prime },\\dots ,v_{i-1}^{\\prime },v_i,\\dots ,v_j)\\colon \\mathbb {Q}] = n_1\\dots n_{\\ell }$ .", "Because $\\mathcal {O}$ has tower type $\\mathfrak {T}$ , we must have $\\operatorname{\\vert }v_i \\operatorname{\\vert }= \\dots = \\operatorname{\\vert }v_j \\operatorname{\\vert }$ , so $\\operatorname{\\vert }v_i 
\\operatorname{\\vert }\\asymp _n \\operatorname{\\vert }v^{\\prime }_i \\operatorname{\\vert }$ .", "Let $D$ be index of the lattice spanned by the basis $\\mathcal {B}$ inside $\\mathcal {O}$ .", "By minksecondthmanalogue, $D \\asymp _n 1$ .", "Then by orderwithbasis, the set $\\mathcal {O}^{\\prime } = \\mathbb {Z}\\langle 1, Dv^{\\prime }_1,\\dots ,Dv^{\\prime }_{n-1}\\rangle $ is an order such that $\\Delta _{\\mathcal {O}^{\\prime }} \\asymp _n \\Delta _{\\mathcal {O}}$ and $\\lambda _i(\\mathcal {O}^{\\prime }) \\asymp _n \\operatorname{\\vert }v^{\\prime }_i\\operatorname{\\vert }\\asymp _n \\operatorname{\\vert }v_i\\operatorname{\\vert }= \\lambda _i(\\mathcal {O}).$ The $\\mathbb {Z}$ -basis $\\lbrace 1, Dv^{\\prime }_1,\\dots ,Dv^{\\prime }_{n-1}\\rbrace $ of $\\mathcal {O}^{\\prime }$ has tower type $\\mathfrak {T}$ ." ], [ "Proofs of the main theorems in the number field case", "In this section, we prove many theorems in the number field case.", "Throughout this section, we will make use of the following construction.", "Example 5.1 Choose a tower type $\\mathfrak {T}= (n_1,\\dots ,n_t)$ of degree $n$ .", "Suppose we have a number field $K$ with $\\alpha _1,\\dots ,\\alpha _t\\in K$ such that $[\\mathbb {Q}(\\alpha _1,\\dots ,\\alpha _{\\ell }) \\colon \\mathbb {Q}] = n_1\\dots n_{\\ell }$ for $1 \\le \\ell \\le t$ .", "Define a basis $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ of $K$ as follows: for $i \\in [n]$ , write $i = i_1 + i_2(n_1) + i_3(n_1n_2) + \\dots + i_t(n_1\\dots n_{t-1})$ in mixed radix notation with respect to $\\mathfrak {T}$ , and then set $v_i \\alpha _1^{i_1}\\dots \\alpha _t^{i_t}$ .", "Let $\\mathcal {F}$ be the flag corresponding to the basis $\\mathcal {B}$ ; then $\\operatorname{Len}(\\mathfrak {T}) = T_{\\mathcal {F}}$ .", "* Write $\\mathfrak {T}= (n_1,\\dots ,n_t)$ .", "Because $\\mathcal {S}(\\mathfrak {T},K)$ is nonempty, there exist $\\alpha _1,\\dots ,\\alpha _t\\in K$ such that $\\mathbb {Q}(\\alpha _1,\\dots ,\\alpha _{\\ell }) \\colon \\mathbb {Q}] = n_1\\dots n_{\\ell }$ for all $1 \\le \\ell \\le t$ .", "Let $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ be the basis given in aboveex.", "Let $\\mathbf {x}$ be in the relative interior of $\\operatorname{Len}(\\mathfrak {T})$ .", "By constructM, there exists an infinite set $\\mathcal {M}$ of positive integers such that $\\lbrace \\mathcal {O}_M = \\mathbb {Z}\\langle 1,M^{x_1}v_1,\\dots ,M^{x_{n-1}}v_{n-1}\\rangle \\rbrace _{M \\in \\mathcal {M}}$ is an infinite set of orders with Minkowski type $\\mathbf {x}$ .", "For large enough $M$ , the ring $\\mathcal {O}_M$ has tower type $\\mathfrak {T}$ .", "Therefore, $\\mathbf {x} \\in \\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K))$ .", "Because $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K))$ is closed (closedprop), we have $\\operatorname{Len}(\\mathfrak {T}) \\subseteq \\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K))$ .", "* We will prove the contrapositive.", "Suppose $i+j$ overflows modulo $\\mathfrak {T}$ .", "Then there exists a relative interior point $\\mathbf {x}$ of $\\operatorname{Len}(\\mathfrak {T})$ such that $x_{i+j} > x_i + x_j$ .", "Write $\\mathfrak {T}= (n_1,\\dots ,n_t)$ .", "Because $\\mathcal {S}(\\mathfrak {T},K)$ is nonempty, there exist $\\alpha _1,\\dots ,\\alpha _t\\in K$ such that $\\mathbb {Q}(\\alpha _1,\\dots ,\\alpha _{\\ell }) \\colon \\mathbb {Q}] = n_1\\dots n_{\\ell }$ for $1 \\le \\ell \\le t$ .", "Let $\\mathcal {B}= \\lbrace 1=v_0,\\dots 
,v_{n-1}\\rbrace $ be the basis given in aboveex.", "By constructM, there exists an infinite set $\\mathcal {M}$ of positive integers such that $\\lbrace \\mathcal {O}_M = \\mathbb {Z}\\langle 1,M^{x_1}v_1,\\dots ,M^{x_{n-1}}v_{n-1}\\rangle \\rbrace _{M \\in \\mathcal {M}}$ is an infinite set of orders with Minkowski type $\\mathbf {x}$ .", "For large enough $M$ , the ring $\\mathcal {O}_M$ has tower type $\\mathfrak {T}$ , but $\\lim _{M \\in \\mathcal {M}}\\frac{\\lambda _i(\\mathcal {O}_M)\\lambda _j(\\mathcal {O}_M)}{\\lambda _{i+j}(\\mathcal {O}_M)} = M^{x_i + x_j - x_{i+j}} = 0.$ * Choose $\\mathcal {O}\\in \\bigcup _{K\\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T},K)$ .", "Choose a set of linearly independent elements $1=v_0,\\dots ,v_{n-1} \\in \\mathcal {O}$ such that $\\lambda _i(\\mathcal {O}) = \\operatorname{\\vert }v_i \\operatorname{\\vert }$ .", "Let $\\mathfrak {T}^{\\prime }$ be the tower type of the basis $\\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ .", "Let $\\mathcal {F}$ be the flag given by the basis.", "If $\\mathfrak {T}= \\mathfrak {T}^{\\prime }$ , then for every $1 \\le i,j < n$ , we have $\\lambda _{T_{\\mathcal {F}}(i,j)}(\\mathcal {O}) = \\operatorname{\\vert }v_{T_{\\mathcal {F}}(i,j)} \\operatorname{\\vert }\\le \\sqrt{n} \\operatorname{\\vert }v_i \\operatorname{\\vert }\\operatorname{\\vert }v_j \\operatorname{\\vert }= \\lambda _i(\\mathcal {O}) \\lambda _j(\\mathcal {O}).$ If $\\mathfrak {T}\\ne \\mathfrak {T}^{\\prime }$ , apply approximationprop to obtain an order $\\mathcal {O}^{\\prime }$ equipped with a $\\mathbb {Z}$ -basis $\\lbrace 1=v^{\\prime }_0,\\dots ,v^{\\prime }_{n-1}\\rbrace $ of tower type $\\mathfrak {T}$ such that $\\lambda _i(\\mathcal {O}) \\asymp _n \\operatorname{\\vert }v^{\\prime }_i \\operatorname{\\vert }$ such that $\\prod _{i = 1}^{n-1} \\operatorname{\\vert }v_i^{\\prime } \\operatorname{\\vert }\\asymp _n \\Delta ^{\\prime }$ where here $\\Delta ^{\\prime }$ is the absolute discriminant of $\\mathcal {O}^{\\prime }$ .", "Then $\\lambda _{T_{\\mathcal {F}}(i,j)}(\\mathcal {O}) \\asymp _n \\operatorname{\\vert }v^{\\prime }_{T_{\\mathcal {F}}(i,j)} \\operatorname{\\vert }\\ll _n \\operatorname{\\vert }v^{\\prime }_i \\operatorname{\\vert }\\operatorname{\\vert }v^{\\prime }_j \\operatorname{\\vert }\\asymp _n \\lambda _i(\\mathcal {O}) \\lambda _j(\\mathcal {O}).$ Therefore, $\\operatorname{Spectrum}(\\bigcup _{K\\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T},K))$ is contained in the union of polytopes $P_{T_{\\mathcal {F}}}$ for every flag $\\mathcal {F}$ with tower type $\\mathfrak {T}$ of a degree $n$ number field in $\\mathcal {C}$ .", "On the other hand, pick a flag of a number field $K \\in \\mathcal {C}$ with tower type $\\mathfrak {T}$ .", "Then there exists a basis $\\lbrace 1,v_0,\\dots ,v_{n-1}\\rbrace $ of $K$ with tower type $\\mathfrak {T}$ giving rise to $\\mathcal {F}$ .", "For any point $\\mathbf {x}$ in the relative interior of the polytope $P_{T_{\\mathcal {F}}}$ , constructM shows that $\\mathbf {x} \\in \\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K))$ .", "By closedprop, $P_T \\subseteq \\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T}))$ .", "Therefore, $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K))$ is equal to the union of polytopes $P_{\\mathcal {F}}$ for every flag $\\mathcal {F}$ of $K$ with tower type $\\mathfrak {T}$ .", "* * If there exists a constant $c_{\\mathfrak {T},\\mathcal {C}}$ such that $\\lambda _k(\\mathcal {O}) \\le c_{\\mathfrak {T}} \\lambda _i(\\mathcal 
{O})\\lambda _j(\\mathcal {O})$ for all $\\mathcal {O}\\in \\cup _{K\\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T},K)$ then clearly $\\cup _{K\\in \\mathcal {C}}\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T},K)) \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\;\\; \\vert \\;\\; x_k \\le x_i + x_j\\rbrace $ .", "On the other hand, if $\\operatorname{Spectrum}(\\cup _{K\\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T},K)) \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\;\\; \\vert \\;\\; x_k \\le x_i + x_j\\rbrace $ then by sfstructurethmintro, every flag $\\mathcal {F}$ of a number field $K \\in \\mathcal {C}$ with tower type $\\mathfrak {T}$ satisfies $T_{\\mathcal {F}}(i,j) \\ge k$ .", "Suppose $\\mathcal {O}\\in \\cup _{K\\in \\mathcal {C}}\\mathcal {S}(\\mathfrak {T},K)$ .", "Choose a set of linearly independent elements $1=v_0,\\dots ,v_{n-1} \\in \\mathcal {O}$ such that $\\lambda _i(\\mathcal {O}) = \\operatorname{\\vert }v_i \\operatorname{\\vert }$ .", "Let $\\mathfrak {T}^{\\prime }$ be the tower type of the basis $\\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ .", "Let $\\mathcal {F}$ be the flag given by the basis.", "If $\\mathfrak {T}= \\mathfrak {T}^{\\prime }$ , then for every $1 \\le i,j < n$ , we have $\\lambda _k(\\mathcal {O}) \\le \\lambda _{T_{\\mathcal {F}}(i,j)}(\\mathcal {O}) = \\operatorname{\\vert }v_{T_{\\mathcal {F}}(i,j)} \\operatorname{\\vert }\\le \\sqrt{n} \\operatorname{\\vert }v_i \\operatorname{\\vert }\\operatorname{\\vert }v_j \\operatorname{\\vert }= \\lambda _i(\\mathcal {O}) \\lambda _j(\\mathcal {O}).$ If $\\mathfrak {T}\\ne \\mathfrak {T}^{\\prime }$ , apply approximationprop to obtain an order $\\mathcal {O}^{\\prime }$ with a $\\mathbb {Z}$ -basis $\\lbrace 1=v^{\\prime }_0,\\dots ,v^{\\prime }_{n-1}\\rbrace $ with tower type $\\mathfrak {T}$ such that $\\lambda _i(\\mathcal {O}) \\asymp _n \\operatorname{\\vert }v^{\\prime }_i \\operatorname{\\vert }$ such that $\\prod _{i = 1}^{n-1} \\operatorname{\\vert }v_i^{\\prime } \\operatorname{\\vert }\\asymp _n \\Delta ^{\\prime }$ where here $\\Delta ^{\\prime }$ is the absolute discriminant of $\\mathcal {O}^{\\prime }$ .", "Then $\\lambda _k(\\mathcal {O}) \\le \\lambda _{T_{\\mathcal {F}}(i,j)}(\\mathcal {O}) \\asymp _n \\operatorname{\\vert }v^{\\prime }_{T_{\\mathcal {F}}(i,j)} \\operatorname{\\vert }\\ll _n \\operatorname{\\vert }v^{\\prime }_i \\operatorname{\\vert }\\operatorname{\\vert }v^{\\prime }_j \\operatorname{\\vert }\\asymp _n \\lambda _i(\\mathcal {O}) \\lambda _j(\\mathcal {O}).$ * Clearly, $(2)$ , $(3)$ and $(4)$ are equivalent by exthm.", "Clearly, $(2) \\Rightarrow (1)$ .", "It is sufficient to show that $(1) \\Rightarrow (2)$ ; we prove the contrapositive.", "suppose $K$ has a subfield $L$ such that $(i \\operatorname{\\%}\\deg (L)) + (j \\operatorname{\\%}\\deg (L)) \\ne (i+j) \\operatorname{\\%}\\deg (L)$ .", "Choose $\\alpha _1,\\alpha _2 \\in K$ algebraic integers such that $\\mathbb {Q}(\\alpha _1) = L$ and $\\mathbb {Q}(\\alpha _1,\\alpha _2) = K$ .", "The construction in prototypicalExample produces an infinite set $\\mathcal {S}$ of orders contained in $K$ such that $\\lim _{\\mathcal {O}\\in \\mathcal {S}}\\frac{\\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O})}{\\lambda _{i+j}(\\mathcal {O})} = 0.$ Theorem 5.2 (Vemulapalli, ) Let $K$ be a field whose finite extensions have extensions of all degrees.", "Suppose $\\mathfrak {T}= (n_1,\\dots ,n_t)$ is as follows: $n < 8$ ; $n = 8$ and $\\mathfrak {T}\\ne (8)$ ; $n$ is prime; $\\mathfrak {T}= 
(p,\\dots ,p)$ for a prime $p$ ; $\\mathfrak {T}= (2,p)$ for a prime $p$ ; or $\\mathfrak {T}= (3,p)$ for a prime $p$ .", "Then $\\bigcup _{\\mathcal {F}} P_{T_{\\mathcal {F}}} = \\operatorname{Len}(\\mathfrak {T})$ as $\\mathcal {F}$ ranges across all flags over $K$ with tower type $\\mathfrak {T}$ .", "* We will instead prove Lenstraconjfinal, which is equivalent; we will show that $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T})) = \\operatorname{Len}(T)$ .", "By sfstructurethmintro, the region $\\operatorname{Spectrum}(\\mathcal {S}(\\mathfrak {T}))$ is the union of polytopes $P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ a flag with tower type $\\mathfrak {T}$ .", "lenstrathmaddcomb shows that the union of polytopes $P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ is a flag over $\\mathbb {Q}$ with tower type $\\mathfrak {T}$ is $\\operatorname{Len}(\\mathfrak {T})$ .", "Theorem 5.3 (Vemulapalli, ) Let $K$ be a field whose finite extensions have extensions of all degrees.", "We have $\\bigcup _{\\mathfrak {T}}\\operatorname{Len}(\\mathfrak {T}) = \\bigcup _{\\mathcal {F}} P_{T_{\\mathcal {F}}}$ as $\\mathcal {F}$ ranges across flags over $K$ if and only if $n$ is of the following form: $n = p^k$ , with $p$ prime and $k \\ge 1$ ; $n = pq$ , with $p$ and $q$ distinct primes; or $n = 12$ .", "* By sfstructurethmintro, the region $\\operatorname{Spectrum}(\\mathcal {S}_n)$ is the union of polytopes $P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ ranges across all flags of degree $n$ number fields.", "minimalitycompletethmbetter shows that the union of polytopes $P_{T_{\\mathcal {F}}}$ where $\\mathcal {F}$ ranges across all flags of degree $n$ number fields is $\\cup _{\\mathfrak {T}} \\operatorname{Len}(\\mathfrak {T})$ precisely when $n$ is as above." 
], [ "Proofs of the main theorems in the function field case", "In this section, we prove fnfieldmainthm, geometricthm, and refinedguessthmintrothmgeom.", "We first recall the notation of fnfieldsubsection.", "Let $C$ be a geometrically integral smooth projective curve of genus $g$ over a field $k$ equipped with a finite morphism $\\pi \\colon C \\rightarrow \\mathbb {P}^1$ of degree $n \\ge 2$ .", "Let $\\mathcal {L}$ be a line bundle on $C$ .", "Recall that the logarithmic successive minima are the integers $a_0(\\mathcal {L},\\pi )\\le \\dots \\le a_{n-1}(\\mathcal {L},\\pi )$ .", "The action map $\\psi \\colon \\mathcal {O}_C \\otimes \\mathcal {L}\\rightarrow \\mathcal {L}$ induces an action map $\\widehat{\\psi } \\colon \\pi _{*}\\mathcal {O}_C\\otimes \\pi _{*}\\mathcal {L}\\rightarrow \\pi _{*}\\mathcal {L}$ via the pushforward.", "Reall that $\\eta $ denotes the generic point of $C$ , $L$ is the function field of $C$ , $\\nu $ is the generic point of $\\mathbb {P}^1$ , and $K$ is the function field of $\\mathbb {P}^1$ .", "The map $\\pi $ induces an isomorphisms $\\phi _1 \\colon (\\mathcal {O}_C)_{\\mid \\eta } \\simeq (\\pi _{*}\\mathcal {O}_C)_{\\mid \\nu }$ $\\phi _2 \\colon \\mathcal {L}_{\\mid \\eta } \\simeq (\\pi _*\\mathcal {L})_{\\mid \\nu }$ such that the following diagram commutes: $\\begin{tikzcd}(\\mathcal {O}_C)_{\\mid \\eta } \\otimes \\mathcal {L}_{\\mid \\eta } {r}{\\psi _{\\mid \\eta }} [swap]{d}{\\phi _1 \\otimes \\phi _2} & \\mathcal {L}_{\\mid \\eta } {d}{\\phi _2} \\\\(\\pi _{*}(\\mathcal {O}_C))_{\\mid \\nu } \\otimes (\\pi _{*}\\mathcal {L})_{\\mid \\nu } {r}{\\widehat{\\psi }_{\\mid \\nu }}& (\\pi _{*}\\mathcal {L})_{\\mid \\nu }.\\end{tikzcd}$ Recall that the top map $(\\mathcal {O}_C)_{\\mid \\eta } \\otimes \\mathcal {L}_{\\mid \\eta } \\rightarrow \\mathcal {L}_{\\mid \\eta }$ is multiplication in the field $L$ , represented as a $K$ -vector space, because $(\\mathcal {O}_C)_{\\mid \\eta } = \\mathcal {L}_{\\mid \\eta } = L$ .", "Recall the two sets of $K$ -vector spaces $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ and $\\mathcal {G}= \\lbrace G_i\\rbrace _{i \\in [n]}$ of $L/K$ given by $F_i = \\phi _1^{-1}\\Bigg (\\Big (\\mathcal {O}_{\\mathbb {P}^1}(-a_0(\\mathcal {O}_C)) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_i(\\mathcal {O}_C))\\Big )_{\\mid \\nu }\\Bigg )$ and $G_i = \\phi _2^{-1}\\Bigg (\\Big (\\mathcal {O}_{\\mathbb {P}^1}(-a_0(\\mathcal {L})) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_i(\\mathcal {L}))\\Big )_{\\mid \\nu }\\Bigg ).$ * Let $p_{k,\\mathcal {L}}$ be the natural projection $p_{k,\\mathcal {L}} \\colon \\mathcal {O}_{\\mathbb {P}^1}(-a_0(\\mathcal {L})) \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_1(\\mathcal {L})) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_{n-1}(\\mathcal {L})) \\rightarrow \\mathcal {O}_{\\mathbb {P}^1}(-a_k(\\mathcal {L})).$ Let $\\iota _{i,\\mathcal {O}_C}$ be the natural inclusion $\\iota _{i,\\mathcal {O}_C} \\colon \\mathcal {O}_{\\mathbb {P}^1}(-a_0(\\mathcal {O}_C)) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_{i}(\\mathcal {O}_C)) {} \\mathcal {O}_{\\mathbb {P}^1}(-a_0(\\mathcal {O}_C)) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_{n-1}(\\mathcal {O}_C)).$ Let $\\iota _{j,\\mathcal {L}}$ be the natural inclusion $\\iota _{j,\\mathcal {L}} \\colon \\mathcal {O}_{\\mathbb {P}^1}(-a_0(\\mathcal {L})) \\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_{j}(\\mathcal {L})){} \\mathcal {O}_{\\mathbb {P}^1}(-a_0(\\mathcal {L})) 
\\oplus \\dots \\oplus \\mathcal {O}_{\\mathbb {P}^1}(-a_{n-1}(\\mathcal {L})).$ By assumption, $(p_{k,\\mathcal {L}} \\circ \\widehat{\\psi }(\\iota _{i,\\mathcal {O}}, \\iota _{j,\\mathcal {L}}))_{\\mid \\nu } \\ne 0$ ; thus $p_{k,\\mathcal {L}} \\circ \\widehat{\\psi }(\\iota _{i,\\mathcal {O}}, \\iota _{j,\\mathcal {L}}) \\ne 0$ .", "Therefore, there exists integers $i^{\\prime },j^{\\prime } \\in [n]$ with that $i^{\\prime } \\le i$ and $j^{\\prime } \\le j$ , and there exists a nonzero map $\\mathcal {O}_{\\mathbb {P}^1}(-a_{i^{\\prime }}(\\mathcal {O}_C)) \\otimes \\mathcal {O}_{\\mathbb {P}^1}(-a_{j^{\\prime }}(\\mathcal {L})) \\rightarrow \\mathcal {O}_{\\mathbb {P}^1}(-a_{k}(\\mathcal {L})).$ Recall that for any integers $x$ , $y$ , and $z$ , we have $\\operatorname{Hom}(\\mathcal {O}_{\\mathbb {P}^1}(-x) \\otimes \\mathcal {O}_{\\mathbb {P}^1}(-y), \\mathcal {O}_{\\mathbb {P}^1}(-z)) \\ne 0$ if and only if $z \\le x + y$ .", "Therefore $a_{k}(\\mathcal {L}) \\le a_i(\\mathcal {O}) + a_j(\\mathcal {L})$ .", "The second statement is a consequence of overflowlemma.", "* Apply fnfieldmainthm with $\\mathcal {L}= \\mathcal {O}_C$ .", "* Apply geometricthm and minimalitycompletethmbetter." ], [ "Appendix", "Lemma 7.1 For any order $\\mathcal {O}$ , we have $\\lambda _0 = 1$ .", "Note that $\\vert 1 \\vert = 1$ and for any $x \\in \\mathcal {O}$ , we have $\\vert x \\vert ^2 &= \\frac{1}{n}\\bigg (\\sum _{i = 1}^{r_1} \\sigma _i(x)^2 + \\sum _{i = r_1 + 1}^{r_1 + r_2} 2\\vert \\sigma _i(x)\\vert ^2\\bigg ) \\\\&\\ge \\@root n \\of {\\prod _{i = 1}^{r_1} \\sigma _i(x)^2\\prod _{i = r_1 + 1}^{r_1 + r_2} \\vert \\sigma _i(x)\\vert ^4} \\\\&= \\@root n \\of {\\prod _{i = 1}^{r_1} \\sigma _i(x)^2\\prod _{i = r_1 + 1}^{r_1 + r_2} \\sigma _i(x)^2\\overline{\\sigma _i}(x)^2} \\\\&= \\vert N_{K/\\mathbb {Q}}(x)\\vert ^{2/n} \\\\& \\ge 1,$ where here $(2)$ is an application of the inequality of arithmetic and geometric means.", "Let $C$ be a geometrically integral smooth projective curve of genus $g$ over a field $k$ equipped with a finite map $\\pi \\colon C \\rightarrow \\mathbb {P}^1$ of degree $n \\ge 1$ .", "Recall that for a line bundle $\\mathcal {L}$ on $C$ , the integers $a_0(\\mathcal {L},\\pi ) \\le \\dots \\le a_{n-1}(\\mathcal {L},\\pi )$ are the logarithmic successive minima of $\\mathcal {L}$ with respect to $\\pi $ .", "The following computation is adapted from .", "Lemma 7.2 () We have $a_0(\\mathcal {L},\\pi ) + \\dots + a_{n-1}(\\mathcal {L},\\pi ) = g + n - 1 - \\deg (\\mathcal {L}).$ If $h^0(C,\\mathcal {L}) = 1$ then $a_0(\\mathcal {L},\\pi ) = 0$ and all the other logarithmic successive minima $a_1(\\mathcal {L},\\pi ),\\dots ,a_{n-1}(\\mathcal {L},\\pi )$ are strictly positive.", "For conciseness, we denote $a_i(\\mathcal {L},\\pi )$ as $a_i$ .", "For any integer $j$ , we have $h^0(X,\\mathcal {L}\\otimes \\pi ^*\\mathcal {O}_{\\mathbb {P}^1}(j)) &= h^0(\\mathbb {P}^1, \\pi _*\\mathcal {L}\\otimes \\mathbb {P}^1(j)) \\\\&= \\sum _{i \\in [n]}h^0(\\mathbb {P}^1,\\mathcal {O}_{\\mathbb {P}^1}(j-a_i)) \\\\&= \\sum _{i \\in [n]}\\max \\lbrace 0,j+1-a_i\\rbrace \\\\&= \\max _{i\\in [n]}\\big \\lbrace (j+1)(i+1) - (a_0 + \\dots + a_i)\\big \\rbrace .$ By Riemann–Roch, for large enough $j$ we have $h^0(X,\\mathcal {L}\\otimes \\pi ^*\\mathcal {O}_{\\mathbb {P}^1}(j)) = \\deg (\\mathcal {L}) + jn - g + 1$ .", "Similarly, for large enough $j$ , $\\max _{i\\in [n]}\\big \\lbrace (j+1)(i+1) - (a_0 + \\dots + a_i)\\big \\rbrace = (j+1)(n) - (a_0 + \\dots + a_{n-1}).$ Therefore, $a_0 
+ \\dots + a_{n-1} = g + n - 1 - \\deg (\\mathcal {L}).$ If $h^0(C,\\mathcal {L}) = 1$ then $1 = h^0(C,\\mathcal {L}) = \\sum _{i \\in [n]}\\max \\lbrace 0,1-a_i\\rbrace $ so $a_0 = 0$ and $a_i > 0$ for all $1 \\le i < n$ .", "Lemma 7.3 Let $\\mathfrak {T}= (n_1,\\dots ,n_t)$ be a tower type of degree $n$ and let $K$ be any field with elements $\\alpha _1,\\dots ,\\alpha _t$ such that $\\deg (\\mathbb {Q}(\\alpha _1,\\dots ,\\alpha _i)) = n_1\\dots n_i$ .", "Then there exists a sequence of $t$ -tuples of algebraic integers $\\lbrace (\\alpha _{1,\\ell },\\dots ,\\alpha _{t,\\ell })\\rbrace _{\\ell \\in \\mathbb {Z}_{>0}}$ such that: for all $1 \\le i \\le t$ and all $\\ell \\in \\mathbb {Z}_{>0}$ we have $\\vert \\alpha _{i,\\ell } \\vert \\vert \\alpha _{j,\\ell } \\vert \\asymp \\vert \\alpha _{i,\\ell } \\alpha _{j,\\ell }\\vert ;$ for all $1 \\le i < t$ and all $\\ell \\in \\mathbb {Z}_{>0}$ we have $\\lim _{\\ell \\rightarrow \\infty } \\frac{\\vert \\alpha _{1,\\ell } \\vert ^{n_1-1} \\dots \\vert \\alpha _{i,\\ell }\\vert ^{n_i-1}}{\\vert \\alpha _{i+1,\\ell }\\vert } = 0;$ and all $\\ell \\in \\mathbb {Z}_{>0}$ we have $\\operatorname{\\vert }\\operatorname{Disc}(\\mathbb {Z}[\\alpha _{1,\\ell },\\dots ,\\alpha _{t,\\ell }]) \\operatorname{\\vert }\\asymp \\prod _{i = 1}^t \\operatorname{\\vert }\\alpha _{i,\\ell } \\operatorname{\\vert }^{n^2(n-1)/2n_i}.$ Without loss of generality, we may suppose $\\alpha _1,\\dots ,\\alpha _t$ are integral and $\\operatorname{\\vert }\\alpha _i \\operatorname{\\vert }\\le \\operatorname{\\vert }\\alpha _{i+1} \\operatorname{\\vert }$ for $0 \\le i < n-1$ .", "Define a basis $\\mathcal {B}= \\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ of $K$ as follows: for $i \\in [n]$ , write $i = i_1 + i_2(n_1) + i_3(n_1n_2) + \\dots + i_t(n_1\\dots n_{t-1})$ in mixed radix notation with respect to $\\mathfrak {T}$ , and then set $v_i \\alpha _1^{i_1}\\dots \\alpha _t^{i_t}$ .", "Let $\\mathcal {F}$ be the flag corresponding to the basis $\\mathcal {B}$ .", "Define $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ as follows: let $x_i = \\frac{i}{2(n-1)}$ ; then $\\mathbf {x} \\in P_{T_{\\mathcal {F}}} = \\operatorname{Len}(\\mathfrak {T})$ .", "Apply constructM to obtain an infinite set $\\mathcal {M}\\subseteq \\mathbb {Z}_{\\ge 1}$ such that $\\lbrace \\mathcal {O}_M = \\mathbb {Z}\\langle 1,M^{x_1}v_1,\\dots ,M^{x_{n-1}}v_{n-1}\\rangle \\rbrace _{M \\in \\mathcal {M}}$ is a set of orders with tower type $\\mathfrak {T}$ .", "Let $M_i$ be the $i$ –th largest element of $\\mathcal {M}$ .", "Letting $\\alpha _{i,\\ell } = M_i^{x_i}\\alpha _{i}$ completes the proof.", "Lemma 7.4 Suppose we have $x,y \\in K$ for some number field $K$ of degree $n$ .", "Then $\\operatorname{\\vert }xy \\operatorname{\\vert }\\le \\sqrt{n}\\operatorname{\\vert }x \\operatorname{\\vert }\\operatorname{\\vert }y \\operatorname{\\vert }$ .", "We have $\\operatorname{\\vert }xy \\operatorname{\\vert }^2 &= \\frac{1}{n}\\sum _{i = 1}^n\\operatorname{\\vert }\\sigma _i(xy)\\operatorname{\\vert }^2 \\\\&= \\frac{1}{n}\\sum _{i = 1}^n\\operatorname{\\vert }\\sigma _i(x)\\operatorname{\\vert }^2\\operatorname{\\vert }\\sigma _i(y)\\operatorname{\\vert }^2 \\\\&\\le \\frac{1}{n}\\Big (\\sum _{i = 1}^n\\operatorname{\\vert }\\sigma _i(x)\\operatorname{\\vert }^2 \\Big )\\Big (\\sum _{i = 1}^n\\operatorname{\\vert }\\sigma _i(y)\\operatorname{\\vert }^2\\Big ) \\\\&= n\\Big (\\frac{1}{n}\\sum _{i = 1}^n\\operatorname{\\vert }\\sigma _i(x)\\operatorname{\\vert }^2 \\Big )\\Big (\\frac{1}{n}\\sum _{i = 
1}^n\\operatorname{\\vert }\\sigma _i(y)\\operatorname{\\vert }^2\\Big ) \\\\&= n\\operatorname{\\vert }x\\operatorname{\\vert }^2\\operatorname{\\vert }y\\operatorname{\\vert }^2.$ Lemma 7.5 For a tower type $\\mathfrak {T}$ , the set $\\mathcal {S}(\\mathfrak {T},K)$ is either empty or infinite.", "Suppose there exists an element $\\mathcal {O}\\in \\mathcal {S}(\\mathfrak {T},K)$ .", "Then, there exists a basis $\\mathcal {B}=\\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ of $K$ with tower type $\\mathfrak {T}$ , and let $\\mathcal {F}$ be the corresponding flag.", "Choose a relative interior point $\\mathbf {x} \\in P_{T_{\\mathcal {F}}}$ .", "constructM constructs an infinite family $\\mathcal {S}(\\mathcal {B},\\mathcal {M},\\mathbf {x})$ of orders in $K$ such that all but finitely many have tower type $\\mathfrak {T}$ .", "Lemma 7.6 For an order $\\mathcal {O}$ in a primitive degree $n$ number field, we have $\\Delta _{\\mathcal {O}}^{1/(2 \\binom{n}{2})} \\ll _n \\lambda _1(\\mathcal {O}) \\ll _n \\Delta _{\\mathcal {O}}^{1/2(n-1)}.$ We have $\\lambda _1(\\mathcal {O})^{n-1} \\le \\prod _{i = 1}^{n-1}\\lambda _i(\\mathcal {O}) \\asymp _n \\Delta _{\\mathcal {O}}^{1/2},$ so $\\lambda _1(\\mathcal {O}) \\ll _n \\Delta _{\\mathcal {O}}^{1/2(n-1)}$ .", "On the other hand, by primcorollary, we have $\\lambda _{i+j}(\\mathcal {O}) \\le \\sqrt{n}\\lambda _i(\\mathcal {O})\\lambda _j(\\mathcal {O})$ for all $0 \\le i,j,i+j < n$ .", "So, $\\lambda _i \\le n^{(i-1)/2}\\lambda _1^i$ .", "So, $\\Delta _{\\mathcal {O}}^{1/2} \\asymp _n \\prod _{i = 1}^{n-1}\\lambda _i(\\mathcal {O}) \\ll _n \\prod _{i = 1}^{n-1}\\lambda _1(\\mathcal {O})^i \\ll _n \\lambda _1(\\mathcal {O})^{\\binom{n}{2}}$ so $\\Delta _{\\mathcal {O}}^{1/(2 \\binom{n}{2})} \\ll _n \\lambda _1(\\mathcal {O})$ .", "zemorthmarticle author=Bachoc, Christine, author=Serra, Oriole, author=Zémor, Gilles title = Revisiting Kneser's theorem for field extensions, journal = Combinatorica, year = 2018, volume = 38, number=4, pages = 759–777, TwoTorsionarticle title=Bounds on 2-torsion in class groups of number fields and integral points on elliptic curves, author=Bhargava, Manjul, author=Shankar, Arul, author=Taniguchi, Takashi, author=Thorne, Frank, author=Tsimerman, Jacob, author=Zhao, Yifei journal=Journal of the American Mathematical Society, volume=33, date=2020, pages=1087–1099 chichethesis author=Chiche-lapierre, Val, title=Length of elements in a Minkowski basis for an order in a number field, organization=University of Toronto, date=2019, anandsarticle author = Deopurkar, Anand, author = Patel, Anand, title = The Picard rank conjecture for the Hurwitz spaces of degree up to five, volume = 9, journal = Algebra & Number Theory, number = 2, publisher = MSP, pages = 459 – 492, keywords = Hurwitz space, Picard group, year = 2015, doi = 10.2140/ant.2015.9.459, URL = https://doi.org/10.2140/ant.2015.9.459 hessarticle author=Hess, F., title= Computing Riemann–Roch Spaces in Algebraic Function Fields and Related Topics, journla=Journal of Symbolic Computation, year=2002, month=April, volume=33, issue=4, pages=425–445 knesergenarticle author=Hou, Xiang–Dong, author=Leung, Ha Kin, author=Xiang, Qing, title = A Generalization of an Addition Theorem of Kneser, journal = Journal of Number Theory, volume = 97, number = 1, pages = 1–9, year = 2002, issn = 0022-314X, doi = https://doi.org/10.1006/jnth.2002.2793, url = https://www.sciencedirect.com/science/article/pii/S0022314X02927939, keywords = finite field, Kneser's theorem, the Cauchy–Davenport 
theorem, the Dyson e-transform., abstract = A theorem of Kneser states that in an abelian group G, if A and B are finite subsets in G and AB=ab:a∈A,b∈B, then ∣AB∣⩾∣A∣+ ∣B∣− ∣H(AB)∣ where H(AB)=g∈G:g(AB)=AB.", "Motivated by the study of a problem in finite fields, we prove an analogous result for vector spaces over a field E in an extension field K of E. Our proof is algebraic and it gives an immediate proof of Kneser's Theorem.", "jensenarticle author=Jensen,David, author=Sawyer, Kalila Joelle, title=Scrollar Invariants of Tropical Curves, doi = 10.48550/arXiv.2001.02710, url = https://arxiv.org/abs/2001.02710, publisher = arXiv, year = 2020, copyright = arXiv.org perpetual, non-exclusive license kneserarticle author=Kneser, Martin, title = Abschätzungen der asymptotischen Dichte von Summenmengen, journal = Matematika, year = 1961, volume = 5, issue = 3, pages = 17–44, obuchiarticle author=Ohbuchi, Akira, title=On some numerical relations of $d$ -gonal linear systems, journal=Journal of Math, Tokushima University, year=1997, volume=31, pages=7–10 siegelbook author=C.", "L. Siegel, title=Lectures on the Geometry of Numbers, publisher=Springer-Verlag, year = 1989, city=Berlin, mearticle author=Vemulapalli, Sameera, title=Sumsets of sequences in abelian groups and flags in field extensions, year = 2022," ] ]
2207.10522
[ [ "Remodelling of the fibre-aggregate structure of collagen gels by\n cancer-associated fibroblasts: a time-resolved grey-tone image analysis based\n on stochastic modelling" ], [ "Abstract In solid tumors, cells constantly interact with the surrounding extracellular matrix.", "In particular cancer-associated fibroblasts modulate the architecture of the matrix by exerting forces and contracting collagen fibres, creating paths that facilitate cancer cell migration.", "The characterization of the collagen fibre network and its space and time-dependent remodelling is therefore key to investigating the interactions between cells and the matrix, and to understanding tumor growth.", "The structural complexity and multiscale nature of the collagen network rule out classical image analysis algorithms, and call for specific methods.", "We propose an approach based on the mathematical modelling of the collagen network, and on the identification of the model parameters from the correlation functions and histograms of grey-tone images.", "The specific model considered accounts for both the small-scale fibrillar structure of the network and for the presence of large-scale aggregates.", "When applied to time-resolved images of cancer-associated fibroblasts actively invading a collagen matrix, the method reveals two different densification mechanisms for the matrix in direct contact or far from the cells.", "The very observation of two distinct phenomenologies hints at diverse mechanisms, which presumably involve both biochemical and mechanical effects." ], [ "Introduction", "In living tissues, cells are commonly embedded within complex networks of extracellular matrix (ECM) constituted of diverse highly cross-linked components, including fibrous proteins, proteoglycans, glycoproteins and polysaccharides [1], [2], [3].", "Each of the individual ECM components exhibits specific biomechanical and biochemical properties that are related to its polymer structure, size and binding affinities to signaling molecules [4], [5].", "These properties in turn modulate several cellular processes including proliferation, differentiation, survival and motility by ligating specialized cell surface receptors [6], [7].", "Cell-matrix interactions also lead to ECM remodeling by the cells, with the ECM fibres being synthesized, re-oriented, deformed and degraded by the cells, particularly fibroblasts [8].", "The topology of the ECM (organization of fibres, matrix porosity and density) also influences the mechanical properties of the matrix [9], [10].", "For instance, aligned bundles of collagen increase matrix stiffness [4].", "Along with fibre organization, the degree of porosity of the ECM influences its rheological properties which in turn modulates the cell behavior [11].", "Collagens are the most abundant components of the ECM and their structure and composition differ across various tissue types [12], [13].", "The collagen family is composed of 28 types consisting in fibril-forming, fibril-associated, network-forming and other structures subfamilies [14], [15].", "Type I collagen represents the most common fibrillar collagen in vertebrates.", "Its synthesis, stiffening and remodelling are involved in both physiological processes such as wound healing, inflammation, tissue repair and pathological ones such as angiogenesis or tumor fibrosis which promotes growth and invasion [16], [17], [18], [14].", "Many solid tumors such as those of the breast, lung and pancreas are characterized by the presence of a desmoplastic 
stroma typified by an accumulation of fibrillar collagen.", "This collagen accretion increases the stiffness and creates discrete structural patterns called tumor-associated collagen signatures (TACS) in the stroma.", "These anisotropic patterns promote directed cell migration into and through the stroma by contact guidance [19], [20], [21], [22].", "The stroma comprises a complex ecosystem of endothelial cells, immune cells, and cancer-associated fibroblasts (CAFs) [23].", "The latter make up the majority of the desmoplastic stroma and have been shown to promote the growth of primary tumors [24], [25], [26].", "Understanding how CAFs influence the architecture of collagen is key to improving our understanding of cancer cell invasion through a dense collagenous stroma as observed in breast, lung and pancreatic cancers.", "At macroscopic scales, this can be studied with gel-contraction assays [27], [28], [29] and bead tracking [30], [31], [32], [33], whereby the mechanical deformation of the collagen matrix is monitored without having to explicitly consider the underlying remodelling of the collagen network.", "A more detailed understanding of the process, based on the local influence of individual cells on the matrix and on the biochemical interactions involved, can be obtained using atomic-force and confocal microscopy at the scale of the cells [34].", "However, few studies have attempted to reconcile the macroscopic and cellular approaches to characterize the influence of cells on collagen architecture.", "Developing image analysis tools to characterize the collagen-fiber network and its space- and time-dependent microstructural modifications in the context of cellular migration is challenging for a variety of reasons.", "In particular, the low signal-to-noise ratio of imaging techniques suitable for time-resolved studies at the required resolution precludes standard image analysis methods based on the segmentation of the fibrillar structures [35], [36].", "Grey-tone image analysis methods have been developed in this context, but they focus on the analysis of fibre orientation using a variety of methods such as pixel-wise gradient estimations [37], Fourier transforms [38], or mathematical morphological operations [39].", "In addition to fibre orientation, however, key aspects of ECM remodelling concern the spatial distribution of collagen fibres.", "Grey-tone image analysis methods that can address that question, and statistically capture possibly non-homogeneous fibre densification patterns, are yet to be developed.", "The paper proposes such a method based on the stochastic modelling of grey-tone structures in microscopy images, and on the identification of the model parameters from their correlation functions and grey-tone histograms.", "Acellular collagen gels with different concentrations are considered first, in order to develop and validate the methodology.", "The question of ECM remodelling is then addressed in the context of a collagen-embedded CAF spheroid model, and the local evolution of the fibre network that accompanies cellular migration is studied in a space- and time-resolved way." 
], [ "Materials and Methods", "Dulbecco's modified Eagle medium (DMEM), L-glutamine, sodium pyruvate, penicillin, streptomycin and 0.25% trypsin/EDTA solution were purchased from ThermoFisher Scientific.", "Recombinant platelet-derived growth factor BB isotype (PDGF-BB) was obtained from R&D Systems.", "Fetal bovine serum (FBS), sodium bicarbonate, carboxymethylcellulose, 10$\\times $ concentrated DMEM and high viscosity carboxymethylcellulose sodium salt were obtained from Sigma-Aldrich.", "High concentration acid soluble native type I rat tail collagen was purchased from Corning and DQ-Collagen™ type I from bovine skin fluorescein conjugate was obtained from ThermoFisher.", "SPY650-DNA, a non-toxic, cell permeable and highly specific live cell DNA probe, was purchased from Spirochrome." ], [ "Preparation of collagen gels", "Type I collagen gels were prepared for imaging of collagen architecture and spheroid invasion assays.", "Collagen gels of 2.0 mg/mL and 3.0 mg/mL were prepared by diluting the stock collagen solution (8-11 mg/mL) with 10x concentrated DMEM, NaHCO$_3$ , 1N NaOH and milliQ H$_2$ O, and were neutralized to pH 7.2.", "To visualize collagen fibers by confocal fluorescence microscopy (CFM), fluorogenic DQ-collagen I was mixed with unlabeled diluted collagen as previously described [40], obtaining a final concentration of 20 $\\mu $ g/mL.", "Collagen dilutions were performed and maintained on ice until use.", "Droplets (25 $\\mu $ L) of diluted collagens were spotted in 35mm-glass bottom $\\mu $ -dishes (Ibidi) and polymerized for 45 min at 19$^\\circ $ C, followed by a 30 min incubation at 37$^\\circ $ C. The polymerization temperature of 19$^\\circ $ C was chosen to provide a matrix architecture comparable to that found in vivo, with a network consisting of a mixture of a few thin bundles and many thick bundles [41].", "After completion of collagen polymerization, 1 mL of preheated DMEM supplemented with 5% FBS was added to the dishes.", "Image acquisition was performed after 24 h of incubation at 37$^\\circ $ C with 5% CO$_2$ ." ], [ "Cell culture", "Mouse CAFs were isolated from mammary gland tumors of the mammary-specific polyomavirus middle T antigen overexpression mouse model (MMTV-PyMT) at 12 weeks as previously described [42] and immortalized with the pLenti HPV16 E6-E7 RFP, expressing a cytoplasmic red fluorescent protein (RFP).", "CAFs were grown in high-glucose DMEM supplemented with 10% FBS, 2 mM L-glutamine, 1 mM sodium pyruvate, 100 IU/mL penicillin and 100 $\\mu $ g/mL streptomycin.", "Cultures were maintained at 37$^\\circ $ C with 5% CO$_2$ until their confluence reached about 80%."
], [ "Spheroid invasion assay", "Spheroids were prepared by seeding 1000 CAFs in 100 $\\mu $ L of spheroid formation medium composed of 0.22 $\\mu $ m-filtered DMEM medium supplemented with 10% FBS and 20% carboxymethylcellulose 4000 centipoise.", "Cells were seeded in round-bottom non-adherent 96-well plates (CELLSTAR, Greiner Bio-One) and centrifuged at 1000 rpm for 5 min.", "Plates were incubated at 37$^\\circ $ C with 5% CO$_2$ for 48 hours to promote spheroid formation.", "The content of each well was transferred into a petri dish with a 200 $\\mu $ L pipette (with a cut tip) and individual spheroids were collected under a binocular microscope with a 10 $\\mu $ L pipette.", "Each spheroid was resuspended in 23 $\\mu $ L of diluted collagen (2 mg/mL) and spotted as a 25 $\\mu $ L drop in a prechilled 8-well glass bottom chamber slide (Ibidi).", "The slides were then transferred immediately to 19$^\\circ $ C and flipped to maintain the spheroids in the middle of the collagen drop (preventing their sedimentation to the glass surface or to the collagen/air interface).", "The extent of collagen polymerization and spheroid positioning were carefully controlled by microscopic examination throughout the polymerization step.", "After 30 min at 19$^\\circ $ C, the slides were transferred to 37$^\\circ $ C to complete the polymerization.", "Preheated culture medium (300 $\\mu $ L/well) supplemented with 5% FBS, 10 ng/mL PDGF-BB and SPY650-DNA (1000-fold dilution) was added to the slides and time-lapse imaging was initiated within an hour after collagen polymerization." ], [ "Confocal microscopy", "Images of the three-dimensional (3D) collagen gels were acquired with an inverted confocal laser scanning microscope (LSM 880 Airyscan Elyra S1, Zeiss) with a Plan-Neofluar 10$\\times $ /0.30 N.A.", "or a Plan-Neofluar 20$\\times $ /0.50 N.A.", "objective (Zeiss).", "The DQ-collagen-containing gels were excited with a 488 nm laser.", "Non-fluorescent collagen gels were imaged by confocal reflectance microscopy (CRM) in Airyscan high resolution mode with a simultaneous excitation of the matrix by 488 nm and 633 nm lasers.", "To avoid edge effects, images were acquired at least 100 $\\mu $ m away from the gel border, avoiding regions close to the gel/glass and gel/medium interfaces.", "To visualize collagen fibres of acellular gels (gels containing no cells), samples were imaged both by CFM and CRM, using the 20$\\times $ objective.", "The resulting images have dimensions of 1000$\\times $ 1000$\\times $ 90 voxels with an anisotropic voxel size of 0.42$\\times $ 0.42$\\times $ 1.10 $\\mu $ m$^3$ , corresponding to a physical volume of approximately 500$\\times $ 500$\\times $ 100 $\\mu $ m$^3$ .", "Time-lapse imaging of spheroid-containing collagen gels was performed using the 10$\\times $ objective with the samples incubated at 5% CO$_2$ and 37$^\\circ $ C in the on-stage incubator (Okolab).", "The collagen matrix was imaged by CRM in Airyscan high resolution mode with a 1.4$\\times $ digital zoom (scaling per pixel: 0.59 $\\mu $ m $\\times $ 0.59 $\\mu $ m $\\times $ 3.29 $\\mu $ m) and CAFs were imaged by CFM in Fast Airyscan mode (scaling per pixel: 0.91 $\\mu $ m $\\times $ 0.91 $\\mu $ m $\\times $ 3.29 $\\mu $ m).", "CAF images were rescaled to fit the images of the collagen matrix.", "CAF-derived RFP was excited by a 561 nm laser and detected at 591 nm.", "The SPY650 DNA nuclear stain was excited by a 633 nm laser and the signal was captured at 654 nm.", "The imaging procedures for the spheroids and the
matrix required sequential imaging of the same three-dimensional zone, leading to two 3D image files.", "Images were recorded every 30 min up to 16 h. 3D stacks were obtained at a step size of 2 $\\mu $ m.", "Type I collagen fibrils have a diameter ranging from 20 nm to several hundred nm [43] while fibres are larger in diameter.", "Given that the size of each pixel is 0.42 and 0.59 $\\mu $ m (for 20$\\times $ and 10$\\times $ objectives, respectively), it is not possible to distinguish fibrils from fibres [44], therefore the term “fibres” was used to include both fibrils and fibres.", "The raw images were converted using the Airyscan algorithm from the Zeiss Zen Black software.", "The images were then subjected to a histogram stretching and converted to tiff format through the Zeiss Zen Blue software.", "This operation turned the original 16-bit images into 8-bit images, which were corrected by histogram adjustment to maximize the conservation of valuable information." ], [ "Covariance and grey-tone correlation function", "Examples of images of the dilute (2 mg/mL) and concentrated (3 mg/mL) gels obtained through confocal fluorescence (CFM) and confocal reflectance (CRM) microscopy are given in Fig REF .", "These are 2D single-$z$ images taken out of the 3D images.", "The structural analysis is based on 15 images such as those presented in the figure, for each gel and each imaging mode.", "The most salient structures are the fibre aggregates that are a few tens of micrometers across.", "The fused images in Fig REF a$_3$ and b$_3$ show that there is limited overlap between the CRM and CFM microscopy data: the fibres and aggregates are well captured in CFM but it is mostly the aggregates that are visible in the CRM data.", "In order to follow a standard image analysis procedure, and justify further less-standard developments, we first explored a method based on image segmentation.", "In that spirit, the grey-tone images of the gels are converted to binary images following a method described in the Supplementary Material (see Fig.", "S1).", "Examples of segmented images of the gels, with collagen in white and the rest in black, are provided in the insets of Figs.", "REF a and REF b.", "The segmented gel images display complex and disordered structures, which are also corrupted by noise in the case of the reflectance images (Figs.", "REF b$_1$ and REF b$_2$ ).", "In this context, the covariance - which describes the spatial correlation between all pixel intensities in the images [45], [46], [47], [48] - was used to quantitatively analyze the gel structures.", "The covariances shown in Fig.", "REF a and REF b were obtained from fifteen images taken in the same gel with dimensions of 200 $\\times $ 200 $\\mu $ m$^2$ as illustrated in the insets, via a Fourier-transform algorithm, and the error bars are the standard errors of the means.", "Figure: Experimental covariance (top) and grey-tone correlation function (bottom) of the 2 mg/mL (blue circles, a$_1$ , b$_1$ , c$_1$ , d$_1$ ) and 3 mg/mL (red diamonds, a$_2$ , b$_2$ , c$_2$ , d$_2$ ) gels, imaged in fluorescence (a, c) and reflectance (b, d) modes.", "The insets display one of the
fifteen images used for each condition, and the error bars are the standard errors of the mean.", "The covariance $C(r)$ of, say, the white component of an image, has a clear geometrical interpretation as it is defined as the probability that any couple of points at distance $r$ from one another both belong to that phase.", "For very small values of $r$ , this definition coincides with the probability for a single point to belong to the white phase, which is numerically equal to its density $\\phi _1$ normalized between 0 and 1.", "In the opposite limit, that is, for very large distances $r$ , the covariance converges to the value $\\phi _1^2$ , which corresponds to a horizontal asymptote in Fig REF .", "The shape of the covariance curve between those two limits characterizes the structures present in the images.", "In particular the progressive decrease of $C(r)$ over distances of a few tens of microns testifies to the presence of structures with those dimensions, which we qualitatively referred to earlier as fibre aggregates.", "The covariance of the fluorescence images of the gels (in Fig.", "REF a) highlights at once the unsuitability of image segmentation in the present context.", "The covariance of the 2mg/mL gel is found to be larger than that of the 3mg/mL gel for any $r$ , which means that the densities of the segmented images contradict the actual collagen concentrations of the gels.", "This results from the fact that collagen-rich areas of the images display a variety of grey-tones related to the local fibre density (see Fig.", "S1b), and this information is lost during the all-or-nothing segmentation procedure.", "This general observation calls for grey-tone image analysis methods that preserve the structural information in the images.", "In that spirit, the grey-tone correlation functions $C(r)$ of the gels are shown in Fig.", "REF c and REF d. The latter are measured directly on the unprocessed images and they characterise the statistical correlation between the grey-tones of all pixels that are at distance $r$ from one another.", "In the case where the images contain only the values 0 and 1, this definition is mathematically equivalent to the covariance.", "We postpone to a later section the discussion of the structural significance of the grey-tone correlation function, but we already notice at this stage that the grey-tone data in Figs.", "REF c and REF d scale with the actual collagen concentrations of the 2mg/mL and 3mg/mL gels, as they should."
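For concreteness, a minimal sketch of how such covariances and grey-tone correlation functions can be estimated is given below, using an FFT-based autocorrelation followed by a radial average. This is an illustration rather than the authors' implementation: the function name, the periodic-boundary assumption and the synthetic test image are ours, and the handling of masks or excluded regions is deliberately omitted.

```python
import numpy as np

def radial_correlation(img, r_max=150):
    """Two-point function C(r) = <I(x) I(x+r)>, radially averaged.

    For a binary (0/1) image this is the covariance, with C(0) = phi_1
    and C(r) -> phi_1^2 at large r; for a grey-tone image it is the
    grey-tone correlation function measured on the unprocessed data.
    Periodic boundaries are assumed for simplicity."""
    img = np.asarray(img, dtype=float)
    ny, nx = img.shape
    # Wiener-Khinchin theorem: circular autocorrelation from the power spectrum
    power = np.abs(np.fft.fft2(img)) ** 2
    corr = np.fft.fftshift(np.fft.ifft2(power).real) / (nx * ny)
    # radial average around the zero-lag pixel
    y, x = np.indices(corr.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    return np.array([corr[r == k].mean() for k in range(r_max)])

# synthetic check: an uncorrelated binary image of density 0.3 gives
# C[0] close to 0.3 and C[r] close to 0.09 at large lags
rng = np.random.default_rng(0)
C = radial_correlation(rng.random((512, 512)) < 0.3)
```

In the analysis described in the text, such an estimate would simply be averaged over the fifteen images of each gel, the spread between images providing the error bars.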
], [ "Structural models", "Covariance and grey-tone correlation functions convey indirect yet very rich structural information [49], [50], [51], which can notably be retrieved using structural models.", "We here present two models aimed at extracting structural information from the covariance data in Fig.", "REF , which we generalize later to grey-tone correlation functions.", "In order to cope with the disordered structure of the gel, the models have to be stochastic [52], [47], [53], [48].", "When using stochastic models, a structure is defined through probabilistic rules and this calls for specific concepts.", "In particular, it is convenient to introduce the indicator function of the structure $\\mathcal {I}(\\mathbf {x})$ , which takes the value 1 if the point $\\mathbf {x}$ belongs to the structure and 0 otherwise [47].", "With such a definition, the density of the model is calculated as $ \\phi _1 = \\langle \\mathcal {I}(\\mathbf {x}) \\rangle $ where the brackets $\\langle \\rangle $ stand for the average value, calculated either over $\\mathbf {x}$ or over all the possible realizations of the model.", "Similarly, the covariance is calculated as the following two-point average $ C_{11}(r) = \\langle \\mathcal {I}(\\mathbf {x}) \\mathcal {I}(\\mathbf {x} + \\mathbf {r}) \\rangle $ because the product $\\mathcal {I}(\\mathbf {x}) \\mathcal {I}(\\mathbf {x} + \\mathbf {r})$ is equal to one only if the points $\\mathbf {x}$ and $\\mathbf {x}+\\mathbf {r}$ both belong to the structure.", "In the latter equation, we have assumed statistical isotropy so that the dependence is only through the modulus $r = |\\mathbf {r}|$ ." ], [ "Homogeneous fibre model", "The simplest model we consider to analyze the structures in Fig.", "REF assumes that the gel matrix is statistically homogeneous.", "The model consists in tossing fibres (modelled as elongated rectangles) with random position and orientation, as sketched in Fig.", "REF .", "Such a model is described by three parameters, namely: the number density of fibres $\\theta $ (unit $\\mu $ m$^{-2}$ ), as well as their length $L_F$ and diameter $D_F$ (both in units of $\\mu $ m).", "Note that the statistical homogeneity of the model does not rule out the existence of aggregates, which form when a large number of fibres coincidentally fall in the same region of space.", "The same holds for pores in the gel matrix.", "Figure: Homogeneous fibre model, obtained as a Boolean model of randomly oriented rectangles.", "The two realisations are obtained with length and diameter $L_F=20$ $\\mu $ m, $D_F=1$ $\\mu $ m (a and $\\blacktriangle $ ) and $L_F=100$ $\\mu $ m, $D_F=10$ $\\mu $ m (b and $\\blacksquare $ ), and the calculated covariances are compared with that of the 2mg/mL gel imaged by CFM (same as Fig.", "a$_1$ ).", "The homogeneous fibre model is a particular case of a Boolean model [45], [48], for which the density and covariance are known analytically.", "In particular, the fibre density is $ \\phi _F = 1 - \\exp \\left[ - \\eta \\right]$ where $\\eta = \\theta D_F L_F$ is the density one would expect in the absence of fibre overlap.", "The covariance is given by the following expression $ C_{FF}(r) = 2 \\phi _F - 1 + (1-\\phi _F)^2 \\exp \\left[ \\eta K_F(r) \\right]$ where $K_F(r)$ is the geometrical covariogram of the randomly-oriented fibres, which is calculated as $ K_F(r) = \\frac{2}{\\pi } \\int _0^{\\pi /2} \\left[1 - \\frac{r}{D_F} \\sin (t) \\right] \\left[1 - \\frac{r}{L_F} \\cos (t) \\right] \\textrm {d}t$ for $r<D_F$ and the upper integration
bound is replaced by $\\textrm {asin}(D_F/r)$ for $r \\ge D_F$ .", "The covariance of the homogeneous fibre model is plotted in Fig.", "REF for two fibre sizes.", "For the purpose of illustration, the model is compared with the experimental covariance of the segmented image of the 2mg/mL gel (from CFM).", "In the figure, the number of fibres $\\theta $ is chosen to achieve a density $\\phi _F \\simeq 0.54$ comparable to that of the segmented image.", "As a consequence, the asymptotic values of $C_{FF}(r)$ are a close match to the gel covariance both for $r = 0$ and for $r \\rightarrow \\infty $ .", "The shape of the covariance at intermediate distances, however, cannot be captured by the homogeneous model.", "The model can account for either the small- or large-$r$ data but not both simultaneously.", "In the former case, the model captures the small-scale structure of the gel (Fig.", "REF a) but it is unable to account for the aggregates, which are found to be more prevalent than what can be expected from randomness alone.", "In the latter case, the large-scale structure of the gel is reasonably reproduced, but this is done by replacing fibre aggregates by unrealistically large rectangles (Fig.", "REF b)." ], [ "Fibre aggregates model", "The inability of the homogeneous fibre model to reproduce the experimental covariance of the gels proves that structures larger than individual fibres are more frequent than what can be accounted for by statistical fluctuations alone.", "The existence of such large pores and aggregates is notably apparent when comparing the realization of the homogeneous model in Fig.", "REF a with the insets of Fig.", "REF .", "To address this issue, we introduce a second model that builds on two distinctly different structures.", "At the smallest scale the structure is assumed to be that of homogeneously-distributed fibres, corresponding to the indicator function $\\mathcal {I}_F(\\mathbf {x})$ .", "The larger-scale structure, however, is accounted for by creating the indicator function of the entire structure $\\mathcal {I}(\\mathbf {x})$ through the following multiplication $ \\mathcal {I}(\\mathbf {x}) = \\mathcal {I}_F(\\mathbf {x}) \\times \\mathcal {I}_A(\\mathbf {x})$ where $\\mathcal {I}_A(\\mathbf {x})$ is the indicator function of the aggregates, equal to one if $\\mathbf {x}$ is inside an aggregate.", "Mathematically, Eq.", "(REF ) is equivalent to starting with a homogeneous fibre structure and subsequently carving pores out of it, using $\\mathcal {I}_A(\\mathbf {x})$ as a mathematical cookie-cutter.", "Independently of the specific models chosen for $\\mathcal {I}_F(\\mathbf {x})$ and $\\mathcal {I}_A(\\mathbf {x})$ , evaluating the average value of Eq.", "(REF ) yields the following density for the two-scale structure $ \\phi _1 = \\phi _F \\phi _A$ where $\\phi _A$ is the density of the aggregates, and $\\phi _F$ is the density of the fibres within the aggregates.", "Equation (REF ) results from the general definition of the density in Eq.", "(REF ), with the assumption that the fibre and aggregate models are statistically independent of one another.", "The same assumption provides the following relation for the covariance of the solid phase $C_{11}(r)= C_{FF}(r) C_{AA}(r)$ as a consequence of Eq.", "(REF ).", "For the small-scale structure, we assume the same fibre model as considered earlier, with density $\\phi _F$ and covariance $C_{FF}(r)$ given in Eqs.", "(REF ) and (REF ).", "As the aggregates are very disordered with no well-defined shape, we model them with a
clipped Gaussian-field approach [54], [55], [53] as described in the Supplementary Material.", "The two parameters of the large-scale model are the density of the aggregates $\\phi _A$ and a single characteristic length $L_A$ that controls their size.", "Figure: Covariance of the fibre-aggregate model, with the homogeneous fibre model at small scale (a and $\\diamond $ ) and the clipped Gaussian-field model for the large-scale aggregates (b and $\\square $ ), combined to yield a two-scale structure (c and $\\blacktriangle $ ).", "The calculated covariance is compared with that of the 2mg/mL gel imaged by fluorescence microscopy (same as Fig.", "a$_1$ ).", "As illustrated in Fig.", "REF for the segmented image of the 2mg/mL gel in fluorescence mode, the fibre-aggregate model captures well the experimental covariance of the gels.", "For the fitting of the data, the parameter $L_F$ in the fibre model is irrelevant as the actual length of the fibres is controlled by the size of the aggregates.", "Equation REF was therefore simplified to its limit $L_F \\rightarrow \\infty $ .", "The values of the remaining parameters are gathered in Tab.", "REF for the two gels and the two imaging modes.", "From the fitted parameters $\\phi _F$ and $\\phi _A$ of the fibre-aggregate model, the total density of the fibres $\\phi _1$ was calculated through Eq.", "(REF ) and is also reported in Tab.", "REF .", "Table: Structural parameters of the gels, obtained from fitting the covariance (binary images) or the correlation function and histogram (grey-tone images) with the fibre-aggregate model.", "Realizations of the fibre aggregate model for the two gels and the two imaging modes are given in Fig.", "REF .", "These realizations illustrate the structures captured by the covariance in the segmented images of the gels, and they clearly expose the drawbacks of a data analysis based on image segmentation.", "As mentioned earlier, the segmentation of the fluorescence data leads to inconsistencies between the density of the segmented images and the gel concentrations.", "In addition, the significant noise in the reflectance data leads to unrealistically large values for the aggregate density (see Figs.", "REF b$_1$ -b$_2$ and REF b$_1$ -b$_2$ ).", "Figure: Realizations of the fibre aggregate model, for the 2mg/mL (a$_1$ , a$_2$ ) and 3mg/mL (b$_1$ , b$_2$ ) gels, based on the average values of the parameters fitted from the covariance of the CFM (a$_1$ and b$_1$ ) and CRM (a$_2$ and b$_2$ ) segmented images."
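To make the two-scale construction concrete, the sketch below generates one realization: a Boolean model of randomly oriented rectangles for the fibres, multiplied by a clipped Gaussian field acting as the aggregate indicator. It is an illustrative reconstruction rather than the authors' implementation: the use of a Gaussian smoothing width of order $L_A$, the thresholding rule and all parameter values are our own assumptions (the exact clipped Gaussian-field definition is given in the paper's Supplementary Material).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def boolean_fibres(shape, theta, L_F, D_F, pixel=0.42):
    """Boolean model: a Poisson number of rectangles of length L_F and
    width D_F (in micrometres), with uniform random centres and angles."""
    ny, nx = shape
    yy, xx = np.indices(shape, dtype=float)
    yy, xx = yy * pixel, xx * pixel
    img = np.zeros(shape, dtype=bool)
    for _ in range(rng.poisson(theta * nx * ny * pixel**2)):
        x0, y0 = rng.uniform(0, nx * pixel), rng.uniform(0, ny * pixel)
        a = rng.uniform(0, np.pi)
        u = (xx - x0) * np.cos(a) + (yy - y0) * np.sin(a)    # along the fibre
        v = -(xx - x0) * np.sin(a) + (yy - y0) * np.cos(a)   # across the fibre
        img |= (np.abs(u) <= L_F / 2) & (np.abs(v) <= D_F / 2)
    return img

def clipped_gaussian_aggregates(shape, phi_A, L_A, pixel=0.42):
    """Clip a smooth Gaussian field so that the fraction phi_A of the area
    with the highest field values forms the aggregate phase; the smoothing
    scale is taken of order L_A (an approximation of our own)."""
    field = gaussian_filter(rng.standard_normal(shape), sigma=L_A / pixel)
    return field >= np.quantile(field, 1.0 - phi_A)

# I(x) = I_F(x) * I_A(x): fibres carved by the aggregate mask.
# theta is chosen so that eta = theta*L_F*D_F ~ 0.7, i.e. phi_F ~ 0.5.
fibres = boolean_fibres((500, 500), theta=1.2e-3, L_F=300.0, D_F=2.0)
aggregates = clipped_gaussian_aggregates((500, 500), phi_A=0.6, L_A=19.0)
structure = fibres & aggregates
```

Generating the two indicators independently is what makes the density and covariance of the combined structure factorize as in the relations above.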
], [ "Greytone model", "In order to overcome the limitations of image segmentation, we expand here the binary fibre-aggregate model so as to allow the identification of its parameters directly from grey-tone images.", "The binary models ignore the fact that large image intensities are associated with large local fibre concentrations.", "In the grey-tone model, we therefore assume that every fibre contributes additively (by a quantity $\\Delta $ ) to the local grey-tone of the image.", "Namely, the image intensity where 2, 3 or more fibres overlap is $2 \\Delta $ , $3 \\Delta $ , etc.", "The model also accounts explicitly for the noisiness of the data, through uncorrelated Gaussian noise $n(\\mathbf {x})$ with variance $\\sigma ^2$ , and for a background intensity $b$ .", "These assumptions can be formally written by expressing the local image intensity at point $\\mathbf {x}$ as $ I(\\mathbf {x}) = b + \\mathcal {I}_A(\\mathbf {x}) \\times \\Delta \\sum _i \\mathcal {I}_F^{(i)}(\\mathbf {x}) + n(\\mathbf {x})$ where $\\mathcal {I}_A(\\mathbf {x})$ is the indicator function of the aggregates as before, and the sum accounts for the overlapping of the fibres.", "In the sum, each term $\\mathcal {I}_F^{(i)}(\\mathbf {x})$ is the indicator function of a homogeneous fibre model with vanishingly small number density $\\eta ^{(i)}$ , so as to account for the overlapping of fibres.", "The fibre density $\\phi _F$ is then obtained again through Eq.", "(REF ), with the total number density $\\eta = \\sum _i \\eta ^{(i)}$ .", "The statistical independence of all contributions in Eq.", "(REF ) -namely, $\\mathcal {I}_A(\\mathbf {x})$ , $n(\\mathbf {x})$ and $\\mathcal {I}_F^{(i)}(\\mathbf {x})$ for all $i$ 's- enables one to calculate the corresponding grey-tone correlation function.", "This is done through the application of Eq.", "(REF ) to the image intensity in Eq.", "(REF ), and the result is $ C(r) = \\langle I \\rangle ^2 + \\eta \\Delta ^2 K_F(r) C_{AA}(r) + (\\eta \\Delta )^2 \\left( C_{AA}(r) - \\phi _A^2 \\right) + \\sigma ^2 \\delta (r)$ where $K_F(r)$ is the geometrical covariogram of the fibres, $C_{AA}(r)$ is the covariance of the aggregates, the last term is the correlation function of the uncorrelated Gaussian noise where $\\delta (r)$ is equal to 1 if $r=0$ and to 0 otherwise, and $\\langle I \\rangle = b + \\eta \\phi _A \\Delta $ is the average intensity of the image.", "Figure: Greytone fibre-aggregates model of the 2mg/mL type-I collagen gel, imaged by fluorescence microscopy, with (a$_1$ ) the histogram of grey levels (bars: experimental values; blue shade: model), (b) the correlation function (dots: experimental values; solid line: model), as well as realizations of the fibres (c), aggregates (d) and complete model (e).", "In (a$_2$ ) the values inferred from the segmented images in the pores (black bars) and fibre areas (white bars) are compared with the model contributions of the background (dark shade) and of increasing numbers of overlapping fibres (bright shades).", "In addition to correlation functions, the grey-tone modelling enables one to extract structural information from the grey-tone distribution itself.", "Because the fibres in the model are distributed according to a Boolean process, the probability for exactly $k$ fibres to overlap at any point of space is a Poisson variable, namely $ \\textrm {Prob} \\Big \\lbrace k \\textrm { overlapping fibres} \\Big \\rbrace = \\frac{\\eta ^k \\exp [-\\eta ]}{k!}$ which contains Eq.", "(REF ) as a particular case, when all probabilities for $k$ larger than or
equal to 1 are added.", "As each fibre contributes a quantity $\\Delta $ to the local intensity, Eq.", "(REF ) can also be interpreted as the probability for observing the intensity $k \\times \\Delta $ at any point inside an aggregate.", "Accounting also for the background intensity $b$ and for the Gaussian noise, the intensity distribution is therefore $ f(I) = (1 - \\phi _A \\phi _F) g_\\sigma [I-b] + \\phi _A \\exp [-\\eta ] \\sum _{k=1}^\\infty \\frac{\\eta ^k}{k!} g_\\sigma [I-(b+k\\Delta )]$ where $g_\\sigma [x] = \\frac{1}{\\sqrt{2 \\pi } \\sigma } \\exp [-\\frac{x^2}{2 \\sigma ^2} ]$ is the centred Gaussian probability density.", "The quantity $f(I) \\textrm {d}I$ is the probability for a pixel to have an intensity in the interval $[I, I + \\textrm {d}I]$ .", "In Eq.", "(REF ) the first term accounts for the grey-tone distribution of the background, outside the aggregates or between fibres within the aggregates.", "In the second term, the sum over $k$ runs over the increasing number of overlapping fibres within the aggregates.", "Figure: Realizations of the grey-level fibre aggregate model, for the 2mg/mL (a$_1$ , a$_2$ ) and 3mg/mL (b$_1$ , b$_2$ ) gels, based on the average values of the parameters fitted from the fluorescence (a$_1$ and b$_1$ ) and reflectance (a$_2$ and b$_2$ ) images.", "Figure REF illustrates the fitting of the microscopy images with the grey-tone model.", "The model captures nicely both the distribution of grey levels (Fig.", "REF a$_1$ ) and the correlation function (Fig.", "REF b).", "As an independent check, Fig.", "REF a$_2$ compares the modelled distribution of grey tones with the values measured in the regions of the image that were classified as pores or fibres in the segmented images (see Fig.", "S1).", "The comparison should be considered with caution, as it is the unsuitability of the segmentation that justified the present grey-tone approach.", "The agreement of the two methods, however, seems reasonable.", "The various contributions to the fibre intensity in Fig.", "REF a$_2$ also testify to the importance of fibre overlap for the image analysis.", "The values of the parameters fitted for the two gels and two imaging modes are reported in Tab.", "REF .", "Realizations obtained from the average values of the fitted parameters are given in Fig.", "REF ."
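The intensity distribution just introduced is simple to evaluate numerically, which is essentially what fitting the grey-level histograms amounts to. The sketch below is only an illustration: the truncation of the Poisson sum and the example parameter values (chosen to be of the order of those quoted later for the time-lapse data) are our assumptions, and no fitting routine is shown.

```python
import numpy as np
from math import factorial

def intensity_pdf(I, b, delta, sigma, eta, phi_A, k_max=30):
    """Grey-tone histogram model: a background Gaussian plus a
    Poisson-weighted sum of Gaussians centred at b + k*delta,
    k being the number of overlapping fibres inside an aggregate."""
    I = np.asarray(I, dtype=float)
    gauss = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    phi_F = 1.0 - np.exp(-eta)                  # fibre fraction inside the aggregates
    pdf = (1.0 - phi_A * phi_F) * gauss(I - b)  # pores and gaps between fibres
    for k in range(1, k_max + 1):               # truncated Poisson sum
        pdf += phi_A * np.exp(-eta) * eta**k / factorial(k) * gauss(I - (b + k * delta))
    return pdf

# illustrative evaluation over the 8-bit grey-level range
grey_levels = np.arange(256)
f = intensity_pdf(grey_levels, b=9.0, delta=15.0, sigma=5.0, eta=0.7, phi_A=0.6)
```

Fitting such an expression to the measured histogram, jointly with the grey-tone correlation function, is what yields the parameter values reported in the tables.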
], [ "Time- and space-resolved analysis of the gel structure in the spheroid model", "Time-resolved reflectance images of the platelet-derived growth factor-BB (PDGF-BB)-treated CAF spheroid and surrounding gel are shown in Fig.", "REF , for times ranging from 30 min to 750 min.", "In order to characterize the spatiotemporal dynamics of cell interaction with the surrounding collagen matrix during cell migration, the system was imaged at lower resolution than the acellular gels considered so far.", "At the scale of the spheroid, individual fibres are not visible and the only structures detected in the gels are the aggregates.", "The images testify to a progressive local densification of the gel structure close to the spheroid and migrating cells, but it is unclear whether other structural characteristics of the gel evolve as well.", "Moreover, as the gel heterogeneity seems to develop at the same scale as the spheroid, it is difficult to ascertain whether the effect of the cells is through direct contact with the gel, or if long-range effects are also operative.", "Figure: Images of the CAF spheroid and surrounding gel at various times: $t_1=30$ min, $t_3=210$ min, $t_9=750$ min.", "The images correspond to one height $z$ cutting through the spheroid, with the cells shown in red and the density of the gel coded from blue to yellow.", "In order to quantitatively investigate the modification of the gel structure in relation to cell migration, the gel area surrounding the spheroid was decomposed into layers corresponding to increasing distances to the closest cells (shown as different colors in Fig.", "REF left).", "At each time step, the layers were recalculated according to the updated position of the cells.", "The shapes of the equal-distance layers become increasingly complex with time, as a consequence of the irregularity of the cell invasion pattern.", "The histogram of grey tones and the correlation function of the collagen were measured within each layer at each time step (Fig.", "REF middle and right).", "Figure: Collagen structure analysis in the neighbourhood of the CAF spheroid, at different times ($t_1 = 30$ min, $t_3 = 210$ min, $t_9 = 750$ min) and distances from the cells ($d_1 < 100 \\ \\mu \\textrm {m}$ , $100 \\ \\mu \\textrm {m} < d_2 < 200 \\ \\mu \\textrm {m}$ and $200 \\ \\mu \\textrm {m} < d_3 < 300 \\ \\mu \\textrm {m}$ ).", "In the original images (left) the gel is shown in grey and the cells in bright red, and the same colour code is used for the distances in the grey-tone distributions (middle) and correlation functions (right).", "The middle panel displays the distributions of grey-tone intensity $I$ , with the bars being the experimental distributions and the 90 % percentile shown as a vertical line.", "The shaded areas are the grey-tone model, with the background contribution (darker) and increasing number of fibre overlaps.", "In the right panel the grey-tone correlation functions $C(r)$ are shown, with the dots being the experimental values and the solid lines being the grey-tone model.", "The densification of the fibres in the vicinity of the spheroid as well as of the migrating cells is manifest in Fig.", "REF through the progressive shifting of the grey-tone distributions towards brighter values (middle panel), and notably the 90 % percentile (vertical lines).", "A similar evolution is observed in the correlation functions that progressively shift towards larger values with time.", "The evolution is quite rapid for
the collagen in direct contact with the cells (d$_1$ ) and much slower for larger distances (d$_3$ ).", "In order to analyse the structural modifications corresponding to the changes in grey-tone distributions and correlation functions, all the data measured at 9 successive time steps and 5 different distances from the cells were fitted simultaneously with the grey-tone model (see Fig.", "REF for 3 times and 3 distances).", "Among the parameters of the grey-tone model, the densities and sizes bear a structural meaning (see Tab.", "REF ), but the other parameters merely characterize the imaging mode.", "This is notably the case for the fibre contrast $\\Delta $ , the background intensity $b$ , and the noise intensity $\\sigma $ (see Eqs.", "REF and REF ).", "For the fitting of the data, different values of the structural parameters were allowed for each time and distance, but a unique value of the imaging parameters was imposed for all images.", "This is justified as the time series images of the gel and spheroid were measured through a time-lapse protocol with unchanged imaging parameters.", "The values of the imaging parameters from the fit are $b \\simeq 9$ , $\\Delta \\simeq 15$ and $\\sigma \\simeq 5$ .", "The structural parameters are $D_F \\simeq 2$ $\\mu $ m and $L_A \\simeq 19$ $\\mu $ m for all distances and times, and the volume fractions $\\phi _A$ and $\\phi _F$ are as shown in Fig.", "REF a and REF b.", "Figure: Fitted collagen structure for various times and distances to the CAF cells, with the aggregate density $\\phi _A$ (a) and the fibre density within the aggregates $\\phi _F$ (b).", "The grey surface is a guide to the eye.", "Specific realizations are shown in the right panel to illustrate the significance of the parameters.", "The colour code for distances is the same as in Fig.", "; the non-colored symbols point to intermediate distances not shown in Fig.", ".", "Fig.", "REF testifies to qualitatively different structural changes in the gel in direct contact with the CAF cells and distant from them.", "Close to the cells (blue series), a large number of aggregates have already formed before the first measurement time ($t < 30$ min).", "The aggregate density remains relatively constant afterward at $\\phi _A \\simeq 80$ %, which can also be described in terms of a matrix porosity that remains constant around 20 %.", "In the meantime, the fibre density within the aggregates increases steadily throughout the experiment.", "This contrasts with the situation far from the cells (yellow series), where the aggregate density $\\phi _A$ increases far more progressively and the fibre density within the aggregates $\\phi _F$ remains altogether constant.", "These qualitatively different structural changes close to and far from the cells are illustrated in the right panel of Fig.", "REF for various distances and times."
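A possible implementation of the layer decomposition used above - labelling every gel pixel by its distance to the closest cell before accumulating the histogram and correlation function of each layer - is sketched below with a Euclidean distance transform. The band edges, the 2D simplification and the function name are our own assumptions; the actual analysis operates on the 3D time-lapse data, with the cell mask updated at every time step.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_layers(cell_mask, pixel_um=0.59, edges_um=(100, 200, 300, 400, 500)):
    """Assign every non-cell pixel to a distance band (0 = closest to the
    cells, 1 = next band, ...); cell pixels themselves are flagged with -1.
    cell_mask is a boolean image, True on pixels occupied by cells."""
    # Euclidean distance, in micrometres, from each gel pixel to the nearest cell pixel
    dist = distance_transform_edt(~cell_mask) * pixel_um
    layers = np.digitize(dist, edges_um)
    layers[cell_mask] = -1
    return layers

# the grey-tone histogram and correlation function are then accumulated over
# the pixels of each layer at every time step, before the joint fit of the
# grey-tone model with shared imaging parameters (b, Delta, sigma)
```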
], [ "Discussion", "The structure of collagen gels is a highly hierarchical one [56].", "Starting from molecular dimensions, the smallest structures are the tropocollagen triple helices (diameter 1.5 nm), assembled into protofibrils and fibrils (6 to 100 nm), woven into fibres (1 $\\mu $ m), which are eventually connected to form the gel matrix.", "The matrix, however, is not homogeneous at a scale larger than 10 $\\mu $ m, as visible in all images of Fig.", "REF .", "A central feature of collagen at large scale is the presence of fibre aggregates, which can also be thought of in terms of pores in the matrix.", "The latter are the most salient structures in the gels at the scale of the cells (Fig.", "REF ), and it is therefore natural to enquire how they are modified by neighboring cells.", "We addressed this question by first developing a novel image analysis methodology to characterize the large-scale structure of collagen, then validating it on two acellular gels, and finally applying it to characterise the collagen structure surrounding a spheroid of CAF cells.", "The characterisation of the disordered pore-and-aggregate structure of the gels is challenging, and calls for a statistical description.", "The approach we explored here is based on a mathematical modelling, which aims at statistically capturing the main structural characteristics of the gels through a small number of meaningful parameters.", "It is important to stress that the goal of the models is not only to reproduce a local gel structure, as one would typically observe in a single image, but also to capture the structure variability.", "This feature is essential to assign a single value to the structural parameters based on many different images of one and the same gel.", "In that spirit we first considered a homogeneous fibre model (Fig.", "REF ).", "This simple model was found to be incompatible with the experimental covariances of the collagen, which statistically confirmed the presence of fibre aggregates and pores.", "We then proposed a two-scale model, where the small-scale structure is described as a homogeneous model of random fibres, and the large-scale aggregate structure is obtained by using a clipped Gaussian-field to carve pores out of it (Fig.", "REF ).", "In this two-scale model, two parameters $\\phi _A$ and $L_A$ describe the density and size of the aggregates, and two parameters $\\phi _F$ and $D_F$ describe the density and the diameter of the fibres inside the aggregates.", "Analytical expressions were derived to fit the experimental characteristics of the gels measured from the images, and infer the values of the mentioned structural parameters.", "A classical image analysis method consists in first segmenting the images to identify objects of interest and subsequently measuring them.", "This approach proved unsuitable in the present context, as segmentation would result in the loss of valuable structural information.", "The collagen-rich areas of the gels are indeed characterized by broad grey-tone distributions (see Fig.", "S1b), with large intensities being associated with the close proximity or overlapping of many fibres.", "Image segmentation would amount to treating all these regions on the same footing, and would therefore bias any measurement.", "We developed a method to identify the model parameters directly from grey-tone microscopy images, through the fitting of the covariance function and histogram of intensities.", "The covariance -or correlation function, in the case of grey-tone images- is a very
informative structural descriptor [57], [51] that can easily be measured on any image through fast Fourier transforms.", "Covariance analysis offers a variety of advantages.", "First, it provides a robust and fully objective statistical description of the spatial distribution of the objects that make up an image.", "This is particularly useful for structures as complex and disordered as fibrillar collagen gels.", "Second, it reduces the impact and possible bias of image preprocessing, which is absent altogether in the case of grey-tone correlation functions.", "Finally, it also provides an efficient way to cope with noisy images, as noise is typically defined as the non-correlated contribution to an image.", "Globally, the procedure based on grey-tone measurements (correlation functions and intensity distributions) and on their modelling proved to be quite consistent with respect to the two types of imaging modes considered in the paper.", "The absolute values of the parameters obtained through reflectance and fluorescence data differ slightly, but identical trends are detected when comparing the 2mg/mL and 3mg/mL acellular gels (Tab.", "REF ).", "The overall fibre density $\\phi _1$ is larger in the 3 mg/mL gel, as it should.", "Our analysis shows that this is due largely to an increased density of aggregates $\\phi _A$ while the fibre density within the aggregates $\\phi _F$ is almost identical in the two gels.", "The characteristic size of the aggregates $L_A$ is larger in the 3mg/mL gel because the aggregates are more numerous and form a larger connected structure.", "To further highlight the importance of using grey-tone images, we also performed the same analysis on segmented images.", "In that case, segmentation leads to the undesirable situation where the density of the segmented images is opposite to the known collagen concentration of the gels in the case of fluorescence data (Tab.", "REF and Fig.", "REF a).", "From a methodological point of view, the application of the grey-tone model to analyze the structure of the 2mg/mL and 3 mg/mL gels in both reflectance and fluorescence modes, validated the overall image analysis approach and enabled us to apply it to the dynamic model of CAF spheroid invasion (Figs.", "REF and REF ).", "Matrix remodelling has been widely documented in earlier works [58], [59], [60], [61], [33].", "Contractility of collagen-embedded cells generates strains on the collagen fibres, leading to their local densification [62], [63] and reorganization [59], [60], [64].", "This process is especially evident for cells of mesenchymal origin such as CAFs, which are characterized by high expression levels of the contractile myosin protein and collagen-binding integrins [63].", "In our experimental model, CAF spheroids were treated with PDGF-BB, a growth factor known to stimulate collagen gel contraction [65], [66].", "By pulling on collagen fibers, cells stiffen their microenvironment and induce irreversible collagen deformations, thereby altering locally the mechanical properties of the ECM.", "This remodeled microenvironment in turn provides guidance cues for the nearby cells through durotactic and/or topotactic migrations [67], [68], [69].", "Besides these guidance cues, high concentrations of fibrillar collagen also promote the local invasion of cells by inducing the formation of specialized cellular extensions termed invadopodia implicated in the local proteolytic remodeling of the ECM [70].", "Our use of a two-scale stochastic model to describe the collagen 
structure and the identification of its parameters from grey-tone images enable us to describe the ECM remodelling in more detail than through the collagen density alone.", "The pristine state of the ECM in the CAF spheroid model is the one observed at early times $t$ and large distances $d$ from the cells.", "The relevant values of the aggregate and fibre densities are $\\phi _A \\simeq 0.6$ and $\\phi _F \\simeq 0.5$ based on Figs REF a and REF b.", "These values are reasonably consistent with those of the 2mg/mL acellular gels in Tab.", "REF , and the slight differences can be attributed to the lower resolution used in the time-resolved analysis of the spheroid.", "Starting from that initial state, our space- and time-dependent analysis of the collagen structure shows that the ECM densification happens via two distinct mechanisms, namely: the increased density of fibre aggregates $\\phi _A$ and the increased fibre density $\\phi _F$ within the aggregates.", "The two mechanisms are both at work in the gel in close contact with the cells.", "The aggregate density increases very fast, from about $\\phi _A \\simeq 60 $ % to 80% in less than 30 min (Fig.", "REF a), similar to the cancer cell spheroids investigated in [64], which exert a strong contraction of the surrounding collagen immediately after embedding in the matrix.", "The fibre densification inside the aggregates also takes place in a much more progressive way, as it occurs over hours (Fig.", "REF b).", "The global densification of the gel resulting from the two processes is quite significant, as the overall fibre density in contact with the cells - estimated as $\\phi _1 = \\phi _A \\times \\phi _F$ - increases from approximately $\\phi _1 \\simeq $ 30% to 50%.", "It is important to stress that this only concerns the fraction of fibres that are visible in our experiments, which is determined by the imaging technique (CRM vs CFM) and by the spatial resolution of the imaging system ($\\sim $ 0.59 $\\mu $ m/pixel in our case).", "As smaller fibrils and protofibrils are undetected at the considered scale, the apparent densification does not contradict the fact that the total collagen concentration has to remain locally constant.", "We also cannot exclude that a fraction of the aggregate densification observed by CRM in our spheroid model results from a CAF-mediated partial reorientation of the more vertical collagen fibres relative to the imaging plane [71].", "In any event, the analysis testifies to significant yet different remodelling of the gel structure at two scales simultaneously.", "Interestingly, the remodelling of the collagen is not limited to the regions in direct contact with the cells.", "Significant structural changes are observed also in regions as far as 300 $\\mu $ m away from the closest CAF cell, but the phenomenology is distinctly different there.", "In those regions, the density of fibre aggregates increases slowly, over hours, while the fibre density within the aggregates remains constant.", "Similar long-range (up to 1300 $\\mu $ m) mechanical signals generated by fibroblasts have been shown to propagate through fibrillar collagen networks.", "Such mechanical cues can be sensed far beyond the signal source by cells sharing the same substrate, which is responsible for the ability of contractile fibroblasts to induce the migration of macrophages towards the force source [32].", "From a methodological point of view, it is important to stress that the discrimination between the two types of densification processes is
robust.", "Considering Fig.", "REF a$_2$ , an increase of the aggregate density $\\phi _A$ with constant fibre density $\\phi _F$ would be manifest through a reduction of the background contribution (dark blue) in favor of the fibre contribution (bright blue), but the shapes of the two contributions would remain unchanged.", "By contrast, any increase in the fibre density $\\phi _F$ is accompanied by an increasing number of pixels where the fibres overlap, which would necessarily extend the grey-tone distribution towards higher intensities.", "With that in mind, the observed shifting of the 90% percentile of the image intensity in Fig.", "REF can only be interpreted as an increase of $\\phi _F$ , independently of any evolution of the aggregate density $\\phi _A$ .", "As for the grey-tone correlation function, its two asymptotic values for $r=0$ and $r \\rightarrow \\infty $ depend only on the total fibre density $\\phi _1 = \\phi _A \\times \\phi _F$ .", "The combination of the grey-tone distribution and correlation function is therefore key to discriminating between the two densification processes." ], [ "Conclusion", "In this study, we developed an innovative image analysis approach to investigate the remodelling of fibrillar collagen in a 3D spheroid model of cellular invasion.", "Unlike existing work, most of which focuses on the densification of the collagen network and on small-scale fibre reorientation, we focused here on the structural modification of the collagen matrix at the scale of a few microns, comparable to that of the cells.", "This was achieved by first developing a novel image analysis method based on the stochastic modelling of the acellular gel structure, and applying it afterwards to study the space- and time-dependent reshaping of the collagen matrix by migrating CAFs.", "The analysis of acellular collagen gels (without embedded cells) investigated by confocal microscopy in both reflectance and fluorescence modes shows that the structure of the gels is not homogeneous at the scale of about 10 microns.", "The structure consists of regions with high fibre density separated by depleted regions, which can be thought of as fibre aggregates and pores.", "In order to mathematically describe this structure, we developed a two-scale stochastic model with a clipped Gaussian-field model for the aggregates and pores, and a homogeneous Boolean model to describe the fibre network within the aggregates.", "We also developed a method to identify the model parameters from the grey-tone distributions and correlation functions of the gel images.", "The specificity of the method is that it applies to the unprocessed grey-tone images, and it can therefore be used with noisy time-lapse reflectance images of non-fluorescent collagen.", "When applied to the collagen-embedded CAF spheroid images, the developed method shows that the invasion of PDGF-BB-treated CAFs is accompanied by an overall densification of the collagen gel, while the sizes of the pores and aggregates, as well as that of the fibres, remain largely unchanged.", "Interestingly, the densification occurs differently for the gel in direct contact with the cells or far away from them.", "The gel in close contact with the invading cells densifies through the rapid increase of the number of aggregates, over less than 30 min, followed by the slow increase of the fibre density within the aggregates, over hours.", "By contrast, the densification occurring in the gel located farther away from the cells occurs via the slow increase
of the aggregate density, while the density of fibres within the aggregates remains constant.", "At the present stage, one can only speculate on the biomechanical mechanisms responsible for the two-scale densification.", "The very observation of two distinct phenomenologies hints at diverse mechanisms, which presumably involve both biochemical and mechanical effects." ], [ "Conflict of Interest Statement", "The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.", "AN and EM designed the biological experiments; CJG and SB designed the mathematical analysis; IB prepared the biological samples; TL performed the confocal microscopy imaging; CJG developed the mathematical models, analyzed the data and prepared all figures; CJG, EM, SB and AN wrote the manuscript.", "CJG and EM are grateful to the Funds for Scientific Research (F.R.S.-FNRS, Belgium) for Research Associate positions.", "This work was supported by FNRS-Télévie grants 7.4589.16 and 7.6527.18.", "Technical support from the GIGA Imaging platform of the Université de Liège is gratefully acknowledged.", "A Supplementary Material file is available with details about the method used to segment the gel images into fibre and background pixels, and about the Gaussian Random Field approach used to model the fibre aggregates." ] ]
2207.10458
[ [ "Search for dark matter annihilation signals in the H.E.S.S. Inner Galaxy\n Survey" ], [ "Abstract The central region of the Milky Way is one of the foremost locations to look for dark matter (DM) signatures.", "We report the first results on a search for DM particle annihilation signals using new observations from an unprecedented gamma-ray survey of the Galactic Center (GC) region, ${\\it i.e.}$ , the Inner Galaxy Survey, at very high energies ($\\gtrsim$ 100 GeV) performed with the H.E.S.S.", "array of five ground-based Cherenkov telescopes.", "No significant gamma-ray excess is found in the search region of the 2014-2020 dataset and a profile likelihood ratio analysis is carried out to set exclusion limits on the annihilation cross section $\\langle \\sigma v\\rangle$.", "Assuming Einasto and Navarro-Frenk-White (NFW) DM density profiles at the GC, these constraints are the strongest obtained so far in the TeV DM mass range.", "For the Einasto profile, the constraints reach $\\langle \\sigma v\\rangle$ values of $\\rm 3.7\\times10^{-26} cm^3s^{-1}$ for 1.5 TeV DM mass in the $W^+W^-$ annihilation channel, and $\\rm 1.2 \\times 10^{-26} cm^3s^{-1}$ for 0.7 TeV DM mass in the $\\tau^+\\tau^-$ annihilation channel.", "With the H.E.S.S.", "Inner Galaxy Survey, ground-based $\\gamma$-ray observations thus probe $\\langle \\sigma v\\rangle$ values expected from thermal-relic annihilating TeV DM particles." ], [ "Pointing positions of the telescopes for the\nInner Galaxy Survey", "The H.E.S.S.", "collaboration is carrying out an extensive observation campaign to survey the central region of the Galactic halo within several degrees from the Galactic Centre.", "Such a survey is hereafter referred to as the Inner Galaxy Survey (IGS).", "The IGS is a multiple-year observation program with the five-telescope array of H.E.S.S.", "The IGS started in 2016, covering the Galactic Centre region with significant exposure at Galactic longitudes $|l|<$ 5$^\\circ $ and latitudes $b$ from -3$^\\circ $ to 6$^\\circ $ .", "The data set used in the present study also includes earlier dedicated observations towards the supermassive black hole Sagittarius A$^*$ taken in 2014 and 2015.", "The pointing positions at that time were chosen for the needs of the Galactic plane survey  and dedicated source observations such as the pulsar PSR J1723-2837.", "Given the well-suited location of the H.E.S.S.", "instrument to observe the GC region, the aim of such a survey is to reach the best sensitivity to diffuse VHE emissions with the lowest energy threshold.", "To this purpose, observations are taken with the five telescopes under nominal darkness and very good atmospheric conditions with a zenith angle lower than 40$^\\circ $ .", "The IGS dataset used in this work is obtained from 1317 observational runs.", "Figure: Exposure maps (in m$^2$ s) in Galactic coordinates from H.E.S.S.", "observations.", "The black triangle indicates the position of the supermassive black hole Sgr A*.", "Top-left panel: Full exposure map obtained with the 2014-2020 dataset used in this work.", "The telescope pointing positions of the IGS are displayed as black crosses.", "Top-right panel: Full exposure map with, overlaid, the regions of interest (ROI, solid magenta lines), defined as 25 annuli centered on the nominal GC position with inner radii from 0.5$^\\circ $ to 2.9$^\\circ $ , and width of 0.1$^\\circ $ .", "The gray-shaded regions show the set of masks used for the exclusion regions in the field of view to avoid astrophysical background
contamination from VHE sources in the ROI.", "Bottom-left panel: Exposure map in Galactic coordinates from H.E.S.S.", "phase-1 observations.", "Bottom-right panel: Zoomed view of the exposure map obtained from the 2014-2020 dataset.", "The top-left panel of Fig.", "REF shows the exposure map obtained from the 2014-2020 dataset together with the IGS telescope pointing positions.", "The latter are chosen in order to map the GC region for positive Galactic latitudes.", "The pointing positions in Galactic coordinates are listed in Tab.", "REF .", "The bottom panels of Fig.", "REF show the exposure map from the H.E.S.S.", "phase-1 observations used in Refs.", ", and a zoomed view of the exposure map obtained from the 2014-2020 dataset, respectively.", "After data quality selection, the data set used in the present study amounts to a total of 546 hours.", "The data are analyzed in stereo mode, i.e., requiring at least two telescopes of the array to trigger the same shower event, with a semi-analytical shower model  where the best event reconstruction between an array configuration with only the four 12-m diameter telescopes and one with the five telescopes is chosen.", "Table: Pointing positions in Galactic coordinates for the 2016-2020 IGS observations.", "The first row gives the names of the pointing positions, which were chosen sequentially during the years.", "The second and third rows give the Galactic longitudes and latitudes of the pointing positions." ], [ "Dark matter halo profiles for the Milky Way and J-factor computation in the regions of interest", "The DM distribution in the Milky Way is assumed to follow a cuspy distribution for which the Einasto and Navarro-Frenk-White (NFW) profiles are typical parametrizations.", "They are expressed as: $\\rho _{\\rm E}(r) = \\rho _{\\rm s} \\exp \\left[-\\frac{2}{\\alpha _{\\rm s}}\\left(\\Big (\\frac{r}{r_{\\rm s}}\\Big )^{\\alpha _{\\rm s} }-1\\right)\\right]\\\\\\quad {\\rm and} \\quad \\rho _{\\rm NFW}(r) = \\rho _{\\rm s}\\left(\\frac{r}{r_{\\rm s}}\\Big (1+\\frac{r}{r_{\\rm s}}\\Big )^2\\right)^{-1} \\ ,$ respectively, assuming a DM density at the Solar position of $\\rho _{\\odot } = 0.39\\ \\rm GeV cm^{-3}$  .", "$\\rho _{\\rm s}$ and $r_{\\rm s}$ are the scale density and scale radius, respectively.", "Table REF provides the parameters of the Einasto and NFW profiles used here, as well as an alternative parameter set for the Einasto profile.", "Table: Parameters of the cuspy profiles used for the DM distribution.", "The Einasto and NFW profiles considered here follow Ref. .", "An alternative normalization of the Einasto profile  is also used and referred to as \"Einasto 2\".", "The DM density profiles for the parametrizations given in Tab.", "REF are plotted in Fig.", "REF .", "Figure: Dark matter density profiles $\\rho _{\\rm DM}$ versus distance $r$ from the Galactic Center.", "The Einasto and NFW profile parametrizations considered here follow Ref.
.", "An alternative parametrization of the Einasto profile  is also used and referred as to \"Einasto 2\".", "The red-shaded area corresponds to the signal region where the DM annihilation signal is searched.Table: J-factor values in units of GeV 2 ^2cm -5 ^{-5} in each of the 25 ROI considered in this work.", "The first four columns give the ROI number, the inner radius, the outer radius, and the size in solid angle for each RoI.", "The fifth column provides the total J-factor values in the ROI, i.e., computed without applying the masks on the excluded regions, for the Einasto profile considered in this work together with the values obtained for an NFW profile  and an alternative normalization of the Einasto profile  in sixth and seventh columns, respectively.The total J-factor values are computed for each region of interest (ROI).", "Table REF provides the inner and outer radii for each ROI, their solid angle and the corresponding total J-factor values for Einasto and NFW profiles.", "The J-factor map for the Einasto profile is shown in left panel of Fig.", "REF ." ], [ "Definition of the regions of interest, measurement of the residual background, photons statistics and fluxes", "The observations are performed with all the telescopes pointed in the same direction in the sky and the data taking requires that at least two telescopes are triggered by the same air shower event.", "Given the field of view of the H.E.S.S.", "instrument and the pointing positions, a significant event statistics is obtained up to about 6$^\\circ $ above the Galactic plane.", "We search for a DM annihilation signal in a disk centered at the Galactic Centre with radius of 3$^\\circ $ .", "In order to benefit from the different spatial morphology of the searched signal with respect to the background, the disk is further divided into 25 ROIs defined as rings of inner radii from 0.5$^\\circ $ to 2.9$^\\circ $ .", "The width of each ring is 0.1$^\\circ $ .", "Each ROI is defined as the ON region.", "On the top-right panel of Fig.", "REF the 25 ROI are overlaid.", "The GC region is a very rich and complex environment at VHE energies.", "It harbors numerous cosmic-ray sources producing VHE gamma-rays including the supermassive black hole Sagittarius A*, pulsar wind nebulae and supernova remnants.", "A set of conservative masks is used in order to avoid VHE gamma-ray contamination both in the signal and in the background regions.", "For each pointing position and each run of the dataset, the background is measured simultaneously in the same field of view as used for signal search in an OFF region taken symmetrically to the ON region with respect to the pointing position, as described in Refs.", ", .", "Therefore, the expected background in the ON region is determined from a OFF region measurement performed under the same observational and instrumental conditions.", "The regions of the sky with VHE gamma-ray sources are excluded for both the ON and OFF measurements, providing them with the same solid angle size.", "This background measurement technique is carried out on a run-by-run basis and provides an accurate determination of the residual background.", "For a given ROI and a given pointing position of a run, the OFF events are measured.", "For a different run with the same ROI and same pointing position, the OFF events fall in the same spatial region but they are totally independent from the previous observation run.", "For each ROI, the photon statistics in the ON and OFF regions as well as the corresponding excess 
significance are provided in Tab.", "REF .", "Table: Photon statistics in the ON and OFF regions, respectively, together with the corresponding excess significance in each of the 25 ROIs considered in this work.", "The first row gives the ROI number.", "The second and third rows give the photon statistics in the ON and OFF regions, respectively.", "The fourth row gives the excess significance.Fig.", "REF shows the significance maps of the residuals for three energy bands.", "The energy bands are chosen such that comparable photon statistics is found in each one.", "In the high energy band, there is an overall significant gamma-ray excess, with a formal total significance of 5.7$\sigma $ , which corresponds to a p-value of $1.1\times 10^{-8}$ .", "This suggests the presence of an additional, unaccounted-for background signal.", "However, the spatial and spectral shapes of this signal are not compatible with those expected from the DM signal.", "Nevertheless, this additional contribution is considered as part of the measured excess events in our analysis, which makes the constraints on $\langle \sigma v \rangle $ conservative since the observed excess is positive.", "Figure: Significance map of the residuals in Galactic coordinates in three energy bands.The grey-shaded region corresponds to the set of masks used in this analysis to avoid astrophysical background contamination from the VHE sources in the ROIs.", "The black triangle shows the position of the supermassive black hole Sagittarius A*.The right panel of Fig.", "REF shows an example of the background measurement for ROIs 7 and 13 and the pointing positions (black crosses) 2-5 ($l$ = -1.8$^\circ $ , $b$ = 2.0$^\circ $ ) and 3-7 ($l$ = 0.8$^\circ $ , $b$ = 3.2$^\circ $ ), respectively.", "The background measurement for ROI 25 and pointing position 2-5 is also shown.", "The set of masks used in this analysis is shown as a grey shaded area.", "Masks include the Galactic plane between $\pm $ 0.3$^{\circ }$ as well as the diffuse emission region around the GC , sources from Ref.", ", and all VHE gamma-ray sources in the field of view.", "The exclusion regions are removed similarly in the ON and OFF regions such that they keep the same solid angle and acceptance.", "The color scale indicates the value of the J-factor computed for the Einasto profile in the pixel size of 0.02$^\circ \times $ 0.02$^\circ $ .", "The ratios between the J-factor values in the ON and OFF regions for ROI 13 with respect to pointing positions 3-7 and 2-5 are 5 and 4, respectively, which maintains a significant expected DM excess signal in the ON region with respect to the OFF region.", "Figure: Left panel: J-factor map for the Einasto profile in Galactic coordinates.", "The J-factor values are integrated in pixels of 0.02$^\circ \times $ 0.02$^\circ $ size.", "The grey-shaded region corresponds to the set of masks used in this analysis to avoid astrophysical background contamination from the VHE sources in the ROIs.", "The black triangle shows the position of the supermassive black hole Sagittarius A*.Right panel: Background determination method in Galactic coordinates.", "Two IGS pointing positions are marked with black crosses.", "J-factor values are displayed for ROIs 7 and 13, respectively, together with those obtained in the corresponding OFF regions.", "In addition, the J-factor values for ROI 25 and its corresponding OFF region with respect to the pointing position 2-5 are shown.", "The masked regions are excluded similarly in the ON and OFF
regions such that these regions keep the same solid angle size and acceptance.", "The black triangle shows the position of the supermassive black hole Sagittarius A*.Figure REF shows the energy-differential annihilation spectrum in the $W^+W^-$ channel convolved with the H.E.S.S.", "acceptance and energy resolution expected for the self-annihilation of DM with mass $m_{\\rm DM}$ = 0.98 TeV and $\\langle \\sigma v \\rangle = 3.8 \\times 10^{-26}$ cm$^3$ s$^{-1}$ for individual ROIs as well as for the combination of all ROIs.", "Overlaid are the corresponding ON and OFF energy-differential spectra convolved with the H.E.S.S.", "energy-dependent acceptance ($A_{\\rm eff}$ ) and energy resolution.", "Figure: Energy-differential spectra expected for the self-annihilation of DM with mass m DM m_{\\rm DM} = 0.98 TeV and 〈σv〉=3.8×10 -26 \\langle \\sigma v \\rangle = 3.8 \\times 10^{-26} cm 3 ^3s -1 ^{-1} in the W + W - W^+W^- annihilation channelmultiplied by E 2 E^2 andconvolved with the H.E.S.S.", "response (orange line) for individual ROIs as well as for the combination of all ROIs.A eff (E)A_{\\rm eff}(E) stands for the energy-dependent acceptance of the instrument.Also plotted are the corresponding ON (black line) and OFF (red line) energy-differential spectra.In Fig.", "REF are plotted the energy-differential flux for ON and OFF regions for individual ROIs as well as for the combination of all ROIs.", "The steep spectrum of the residual background is mainly due to the dominant contribution of misidentified cosmic-rays.", "Fig.", "REF shows the background-subtracted energy-differential flux, convolved with the H.E.S.S.", "response, for different combinations of the ROIs as explained in the caption.", "Figure: The energy-differential flux for ON (black line) and OFF (red line) regions for individual ROIs are shown in the first three panels for ROI 16, 17 and 18, respectively.", "The right panel shows ON and OFF energy-differential fluxes for the combination of all ROIs.Figure: Background-subtracted energy-differential fluxmultiplied by E 2 E^2 and convolved with the H.E.S.S.", "response, versus energy for different combinations of ROIs.A eff (E)A_{\\rm eff}(E) stands for the energy-dependent acceptance of the instrument.1σ\\sigma error bars are shown.From the top to the bottom, the following ROI combinations are shown: from ROI 1 up to 6, from 7 up to 11, from 12 up to 15, from 16 up to 18, 19 and 20, 21 and 22, 23 and 24, and 25." 
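To make the J-factor values used above concrete, the following is a minimal numerical sketch of the line-of-sight integral $J(\Delta \Omega )=\int _{\Delta \Omega }d\Omega \int ds\,\rho ^2\big (r(s,\theta )\big )$ for the Einasto profile over a GC-centred annulus. It is not the analysis pipeline: the profile parameters, the Sun-GC distance and the integration grids are placeholder assumptions, chosen only so that $\rho (r_{\odot })\approx 0.39$ GeV cm$^{-3}$, and the function names are ours.
\begin{verbatim}
import numpy as np

# Sketch of the J-factor integral for an Einasto profile over a GC-centred
# annulus.  All numerical values are illustrative placeholders, not the
# parametrization used in the analysis.
RHO_S, R_S, ALPHA_S = 0.081, 20.0, 0.17   # GeV cm^-3, kpc, dimensionless (assumed)
R_SUN = 8.5                               # Sun-GC distance in kpc (assumed)
KPC_TO_CM = 3.086e21

def rho_einasto(r_kpc):
    """Einasto density (GeV cm^-3) at galactocentric radius r (kpc)."""
    return RHO_S * np.exp(-2.0 / ALPHA_S * ((r_kpc / R_S) ** ALPHA_S - 1.0))

def j_factor_annulus(theta_in_deg, theta_out_deg, n_theta=200, n_s=4000):
    """J = int dOmega int ds rho^2(r(s, theta)), in GeV^2 cm^-5."""
    thetas = np.radians(np.linspace(theta_in_deg, theta_out_deg, n_theta))
    s = np.linspace(1e-3, 60.0, n_s)                      # line of sight in kpc
    ds_cm = (s[1] - s[0]) * KPC_TO_CM
    total = 0.0
    for i in range(n_theta - 1):
        th = 0.5 * (thetas[i] + thetas[i + 1])            # midpoint of the ring
        d_omega = 2.0 * np.pi * np.sin(th) * (thetas[i + 1] - thetas[i])
        r = np.sqrt(R_SUN ** 2 + s ** 2 - 2.0 * R_SUN * s * np.cos(th))
        total += d_omega * np.sum(rho_einasto(r) ** 2) * ds_cm
    return total

# ROI 1 of the analysis is the annulus between 0.5 and 0.6 degrees.
print("J(0.5-0.6 deg) ~ %.2e GeV^2 cm^-5" % j_factor_annulus(0.5, 0.6))
\end{verbatim}
Summing such integrals over the 25 annuli, with the masked pixels removed, reproduces the structure of the J-factor table above; the masking itself is omitted here for brevity.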
], [ "Statistical method for upper limit determination", "The statistical analysis method makes use of a log-likelihood ratio test statistic (TS) to test the DM signal hypothesis against the data assuming a positive searched signal, i.e., $\\langle \\sigma v \\rangle >$ 0.", "Following Ref.", ", the TS is defined as: $TS ={\\left\\lbrace \\begin{array}{ll}- 2\\, \\rm ln \\frac{\\mathcal {L}(N^{\\rm S}(\\langle \\sigma v \\rangle ), \\widehat{\\widehat{N^{\\rm B}}}(\\langle \\sigma v \\rangle ))}{\\mathcal {L}(0,\\widehat{\\widehat{N^{\\rm B}(0)}})} \\, & N^{\\rm S}(\\widehat{\\langle \\sigma v \\rangle }) < 0\\\\- 2\\, \\rm ln \\frac{\\mathcal {L}(N^{\\rm S}(\\langle \\sigma v \\rangle ), \\widehat{\\widehat{N^{\\rm B}}}(\\langle \\sigma v \\rangle ))}{\\mathcal {L}(N^{\\rm S}(\\widehat{\\langle \\sigma v \\rangle }),\\widehat{N^{\\rm B}})} \\, & 0\\le N^{\\rm S}(\\widehat{\\langle \\sigma v \\rangle }) \\le N^{\\rm S}(\\langle \\sigma v \\rangle )\\\\0 & N^{\\rm S}(\\widehat{\\langle \\sigma v \\rangle }) > N^{\\rm S}(\\langle \\sigma v \\rangle ) \\, .\\end{array}\\right.", "}$ $N^{\\rm S}$ is obtained summing $N^{\\rm S}_{\\rm k}$ over all the runs $k$ , where $N^{\\rm S}_{\\rm k}$ corresponds to the number of gamma rays expected from DM annihilation for the observational run $k$ .", "From Majorana DM particles of mass $m_{\\rm DM}$ self-annihilating with a thermally-averaged annihilation cross section $\\langle \\sigma v \\rangle $ in the channels $f$ with differential spectra $dN^f_{\\gamma }/dE_{\\gamma }$ of branching ratios $BR_f$ , in a region of solid angle $\\Delta \\Omega $ with a J-factor $J(\\Delta \\Omega $ ), $N^{\\rm S}_{\\rm k}$ is given by: $N^{\\text{S}}_{\\text{k}}(\\langle \\sigma v \\rangle ) = \\frac{\\langle \\sigma v \\rangle J(\\Delta \\Omega )}{8\\pi m_{\\rm DM}^2} T_{\\rm {obs},k} \\int _{E_{\\rm th}}^{m_{\\rm DM}} \\int ^{\\infty }_{0} \\sum _f BR_f \\frac{dN^f_{\\gamma }}{dE_{\\gamma }}(E_{\\gamma }) \\: R(E_{\\gamma }, E^{\\prime }_{\\gamma }) \\: A_{\\rm eff, k}(E_{\\gamma })\\: dE_{\\gamma } \\: dE^{\\prime }_{\\gamma }\\, ,$ where the finite energy resolution $R(E_{\\gamma }, E^{\\prime }_{\\gamma })$ relates the energy detected $E^{\\prime }_{\\gamma }$ to the true energy $E_{\\gamma }$ of the events, $A_{\\rm eff, k}(E_{\\gamma })$ is the energy-dependent acceptance for the run $k$ , and $T_{\\rm obs ,k}$ is the observation time of the run $k$ .", "The energy-dependent acceptance is computed according to the semi-analytical shower model template technique using standard selection cuts .", "For each run, the spatial response of the instrument is encoded in the acceptance term which depends on the angular distance between the reconstructed event position and the pointing position of the run $k$ .", "Spatial responses of the H.E.S.S.", "instrument can be found, for instance, in Ref. 
.", "The energy resolution is well described by a Gaussian function of $\\sigma /E$ of 10% above 200 GeV .", "$\\widehat{\\widehat{N^{\\rm B}_{\\rm ij}}}$ is obtained through a conditional maximization by solving $\\partial \\mathcal {L}/\\partial N^{\\rm B}_{\\rm ij} = 0$ .", "It represents the conditional maximum likelihood estimator of $\\mathcal {L}$ , i.e., the value of $N^{\\rm B}_{\\rm ij}$ that maximizes $\\mathcal {L}$ for the specified $N^{\\rm S}_{\\rm ij}(\\langle \\sigma v \\rangle )$ .", "$N^{\\rm S}_{\\rm ij}(\\widehat{\\langle \\sigma v \\rangle })$ and $\\widehat{N^{\\rm B}_{\\rm ij}}$ are computed using an unconditional maximization, i.e., they represent the maximum likelihood estimators of $\\mathcal {L}$ .", "As no significant VHE gamma-ray excess is found in any of the ROI, the TS enables to derive upper limits on the thermally-averaged velocity-weighted annihilation cross section $\\langle \\sigma v \\rangle $ for a set of DM masses $m_{\\rm DM}$ and annihilation spectra.", "95% C. L. one-sided upper limits are computed via the TS by demanding a TS value of 2.71 assuming that the TS follows a $\\chi ^2$ distribution, as expected in the high statistics limit, with one degree of freedom.", "The analysis makes use of the expected spectral and spatial characteristics of the searched DM signal with respect to residual background.", "Therefore, the total likelihood function is given by the product of Poisson likelihood functions over the spatial and energy bins $\\mathcal {L} = \\prod _{\\rm ij}\\mathcal {L}_{\\rm ij}$ .", "The two-body DM annihilation is taking place almost at rest, the DM-induced gamma-ray spectrum in the final state is expected to exhibit a sharp energy cut-off at the DM mass with possible bump-like energy features close to the DM mass.", "These spectral characteristics provides efficient discrimination against the much smoother power-law like spectrum of the residual background.", "In addition, the spatial morphology of the expected DM signal follows the spatial J-factor profile, which provides additional sensitivity given the spatially-independent morphology of the residual background." ], [ "Expected limit computation", "For each mass and each annihilation channel, the 95% C.L.", "expected limits are derived from a set of 300 Poisson realizations of the measured background event distributions.", "For each ROI and each run, an independent Poisson realization of the measured background event energy distribution is computed for the ON and the OFF regions, respectively.", "This provides an overall realization of the expected ON and OFF energy count distributions which is obtained by summing the realizations over all the runs of the dataset.", "For each realization of the overall energy count distribution in the ON and OFF regions, the corresponding value of $\\langle \\sigma v \\rangle $ is computed according to the test statistics given in Eq.", "(REF ).", "This procedure is repeated 300 times.", "The mean expected limits, the 68% and 95% statistical containment bands are given by the mean, the 1 and 2$\\sigma $ standard deviations, obtained by the distribution of the computed limits.", "The mean expected limits and the containment bands are plotted in Fig.", "REF ." 
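To illustrate the limit-setting machinery of the two previous sections, the sketch below implements a single-bin version of the ON/OFF profile likelihood with equal exposures, profiles out the background nuisance parameter numerically, and scans $\langle \sigma v \rangle $ until the test statistic reaches 2.71. The measured counts and the signal yield per unit $\langle \sigma v \rangle $ are placeholder numbers, and the third branch of the TS definition is omitted; the real analysis multiplies such Poisson terms over all runs, ROIs and energy bins.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Single-bin sketch of the ON/OFF profile-likelihood upper limit.
# The counts and the signal yield per unit <sigma v> are placeholders.
N_ON, N_OFF = 120, 115        # measured counts (assumed)
S_PER_SIGV = 5.0e27           # signal counts per unit <sigma v> in cm^3 s^-1 (assumed)

def neg2logl(s, b):
    """-2 ln L for Poisson ON (mean s+b) and OFF (mean b), equal exposures."""
    return -2.0 * (poisson.logpmf(N_ON, s + b) + poisson.logpmf(N_OFF, b))

def profiled(s):
    """Conditional maximization: profile out the background b for fixed s."""
    res = minimize_scalar(lambda b: neg2logl(s, b),
                          bounds=(1e-9, 10.0 * (N_ON + N_OFF)), method="bounded")
    return res.fun

# Unconditional minimum, imposing the physical constraint s >= 0.
s_grid = np.linspace(0.0, N_ON + 5.0 * np.sqrt(N_ON), 300)
profiled_min = min(profiled(s) for s in s_grid)

def ts(sigv):
    return max(profiled(S_PER_SIGV * sigv) - profiled_min, 0.0)

# 95% C.L. one-sided upper limit: smallest <sigma v> with TS >= 2.71.
sigv_grid = np.logspace(-27, -24, 400)
limit = next(sv for sv in sigv_grid if ts(sv) >= 2.71)
print("95%% C.L. upper limit: <sigma v> < %.2e cm^3 s^-1" % limit)
\end{verbatim}
Replacing the measured counts by Poisson realizations of the OFF distribution and repeating the scan many times gives the mean expected limit and the containment bands described above.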
], [ "Study of the systematic uncertainties", "The inner few degrees of the Galactic Centre region is a complex environment with numerous sources emitting in the high and very-high-energy gamma-ray regimes.", "A conservative set of masks is used to exclude all the regions of the sky with VHE emissions and therefore avoid leakage from nearby sources both into the ON and OFF regions.", "The H.E.S.S.", "PSF is 0.06° at 68% containment radius above 200 GeV  improving slightly with energy (see, from instance, Fig.", "25 of Ref. ).", "For pointlike sources, a circular mask of 0.25$^\\circ $ radius is used, the remaining signal leaking outside the source mask is much less than 1%.", "In the case of the extended source HESS J1745-303, a circular mask of 0.9$^\\circ $ radius is used (see the set of used masks shown as a grey-shaded region in the top-right panel of Fig.", "REF ).", "The level of the Night Sky Background (NSB) is subject to significant changes due to the presence of bright stars in the field of view, varying from 100 MHz up to 350 MHz photoelectron rate per pixel in the field of view.", "A dedicated treatment of the NSB is performed in the shower template analysis method as described in Ref.", ", where the contribution of the NSB is modelled in every pixel of the camera.", "This analysis method does not require any further image cleaning to extract the pixels illuminated by the showers.", "The method used for the background determination implies that the event counts in the OFF region may be measured in regions of the sky with a different level of NSB compared to the ON region.", "However, as mentioned above, this can be properly handled with the analysis method used here .", "For each run with a given telescope pointing position and each ROI, background events are measured in an OFF region defined as the region symmetric to the ON region with respect to the pointing position.", "The residual background rate is correlated with the zenith angle of the observation.", "For a pointing position at a given zenith angle, a gradient in the residual background rate is expected across the telescope field of view.", "A difference in the zenith angles of the events is obtained between the ON and OFF regions.", "For the considered ROIs and pointing positions of the IGS, the difference of the means of the distributions of the ON and OFF event zenith angles is up to 1$^\\circ $ depending on the zenith angle of the observational run.", "The gradient of the gamma-ray-like rate in the FoV is taken into account on a run-by-run basis: for each run, the gamma-ray-like rate is renormalized according to the difference of the zenith angle means of the ON and OFF distributions, $\\widehat{\\theta _z^{ON}}$ and $\\widehat{\\theta _z^{OFF}}$ , respectively, such as $N_{\\rm OFF, renorm} = 1.01 \\times N_{\\rm OFF} \\times (\\widehat{\\theta _z^{ON}} - \\widehat{\\theta _z^{OFF}})/1^\\circ $ .", "The mean zenith angle of the observational runs of the dataset is 18$^\\circ $ .", "For a run taken at this zenith angle, the difference of the zenith angle means of the ON and OFF distributions is up to 0.5$^\\circ $ .", "To account for the typical width of 1$^\\circ $ of the zenith angle distribution, a systematic uncertainty of 1% for the normalisation of the measured energy count distributions is used.", "The systematic uncertainty derived on the normalisation of the energy count distributions deteriorates the mean expected limits from 8% to 18% depending on the DM particle mass.", "A systematic uncertainty may 
arise from the assumption of azimuthal symmetry in the field of view.", "For a given pointing position, the number of counts is computed as a function of the angle.", "No significant effect is observed beyond the expected 1%-per-degree gradient in the FoV.", "In the present dataset, the systematic uncertainty on the energy scale of the energy count distributions is 10%.", "This systematic uncertainty affects similarly the energy scale of the measured and expected energy count distributions.", "Therefore, the 10% shift of the energy scale leads to an overall shift of the limits curves along the DM mass axis by 10%.", "This systematic uncertainty is not included in the limits." ], [ "Upper limits in several annihilation channels", "WIMPs can self-annihilate into pairs of Standard Model particles allowed by kinematics, providing gamma rays in the final states from hadronization, decay and radiation of the particles produced in the annihilation process.", "We perform the analysis in the $b\\bar{b}$ , $t\\bar{t}$ , $W^+W^-$ , $ZZ$ , $hh$ , $e^+e^-$ , $\\mu ^+\\mu ^-$ , and $\\tau ^+\\tau ^-$ annihilation channels, assuming a 100% branching ratio in each case.", "The constraints are shown in Fig.", "REF for the $b\\bar{b}$ , $t\\bar{t}$ , $ZZ$ , $hh$ , $e^+e^-$ and $\\mu ^+\\mu ^-$ channels, respectively.", "Figure: Constraints on the velocity-weighted annihilation cross section 〈σv〉\\langle \\sigma v \\rangle for the bb ¯b\\bar{b}, tt ¯t\\bar{t}, ZZZZ, hhhh, e + e - e^+e^- and μ + μ - \\mu ^+\\mu ^- channels, respectively,derived from H.E.S.S.", "five-telescope observations taken from 2014 to 2020.", "The constraints are given as 95% C. L. upper limits including the systematic uncertainty, as a function of the DM mass m DM _{\\rm DM}.The observed limit is shown as black solid line.The mean expected limit (black dashed line) together with the 68% (green band) and 95% (yellow band) C. L. containment bands are shown.The mean expected upper limit without systematic uncertainty is also plotted(red dashed line).", "The horizontal grey long-dashed line is set to the value of the natural scale expected for the thermally-produced WIMPs." ] ]
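All of the channel-by-channel limits above rest on the signal prediction $N^{\rm S}_{\rm k}$ given earlier. For completeness, here is a hedged sketch of that double integral for a single run and ROI, with a toy continuum spectrum, a placeholder effective area and a Gaussian 10% energy resolution; none of these inputs are the actual H.E.S.S. response or the annihilation spectra used in the analysis.
\begin{verbatim}
import numpy as np

# Sketch of the expected signal counts N^S for one run and one ROI,
# with a toy dN/dE and a placeholder instrument response.
M_DM = 1500.0              # DM mass in GeV
SIGV = 3.7e-26             # <sigma v> in cm^3 s^-1
J_FACTOR = 1.0e21          # GeV^2 cm^-5 for this ROI (placeholder)
T_OBS = 10.0 * 3600.0      # live time of the run in s (placeholder)
E_TH = 200.0               # energy threshold in GeV (placeholder)

def trapz(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def dnde_toy(e):
    """Toy continuum spectrum per annihilation, cut off at the DM mass."""
    x = e / M_DM
    return np.where(x < 1.0, 10.0 / M_DM * np.exp(-8.0 * x) / np.maximum(x, 1e-6) ** 1.5, 0.0)

def a_eff(e):
    """Placeholder energy-dependent acceptance in cm^2."""
    return 1.0e9 * (1.0 - np.exp(-e / 300.0))

def resolution(e_rec, e_true, sigma_rel=0.10):
    """Gaussian energy resolution with sigma/E = 10%."""
    s = sigma_rel * e_true
    return np.exp(-0.5 * ((e_rec - e_true) / s) ** 2) / (np.sqrt(2.0 * np.pi) * s)

e_true = np.linspace(E_TH, M_DM, 600)         # true energies from E_th to m_DM
e_rec = np.linspace(E_TH, 2.0 * M_DM, 600)    # reconstructed energies
folded = np.array([trapz(dnde_toy(e_true) * a_eff(e_true) * resolution(er, e_true), e_true)
                   for er in e_rec])
n_s = SIGV * J_FACTOR / (8.0 * np.pi * M_DM ** 2) * T_OBS * trapz(folded, e_rec)
print("Expected signal counts in this run and ROI: %.1f" % n_s)
\end{verbatim}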
2207.10471
[ [ "The shape of $x^2\bmod n$" ], [ "Abstract We examine the graphs generated by the map $x\mapsto x^2\bmod n$ for various $n$, present some results on the structure of these graphs, and compute some very cool examples." ], [ "Overview", "For any $n$ , we consider the map $f_n(x) := x^2\bmod n.$ From this map, we can generate a (directed) graph $\mathcal {G}({n})$ whose vertices are the set $\lbrace 0,1,\dots , n-1\rbrace $ and edges from $x$ to $f_n(x)$ for each $x$ .", "We show a few examples of such graphs later in the paper — one particularly intriguing graph, which appears as a subgraph of $\mathcal {G}({10455})$ , is shown in Figure REF .", "Figure REF , which shows the entire graph $\mathcal {G}({817})$ , is also excellent.", "As we can see from examples, sometimes the graph $\mathcal {G}({n})$ is a forest (a collection of trees), sometimes it has loops, etc.", "In this paper we will seek to characterize all such graphs; not surprisingly, the shape of these graphs depends on the number-theoretic properties of the integer $n$ .", "We give a quick summary of the results below: (Theorem REF ) We show that if $n=a\cdot b$ where $\gcd (a,b)=1$ , then the graph $\mathcal {G}({n})$ is the Kronecker product of the graphs $\mathcal {G}({a})$ and $\mathcal {G}({b})$ (we define the Kronecker product in Definition REF below).", "Using induction, we can therefore determine the graph $\mathcal {G}({n})$ in terms of its prime decomposition, and all that remains is to describe $\mathcal {G}({p^k})$ for all primes $p$ and $k\ge 1$ .", "We break this up into several cases.", "(Theorem REF ) Let $p$ be an odd prime.", "Let us denote by $\mathcal {U}({p^k})$ the set of numbers that are relatively prime to $p^k$ ; these are the multiplicative units modulo $p^k$ .", "We use heavily the fact that the units under multiplication modulo $p^k$ form a cyclic group, and this allows us to characterize $\mathcal {U}({p^k})$ (we note here that the technique for $\mathcal {U}({p^k})$ follows closely the results of [1], where $\mathcal {G}({p})$ was studied for $p$ prime); (Theorem REF ) Let $p$ be an odd prime.", "This theorem will fully characterize $\mathcal {N}({p^k})$ , the component of $\mathcal {G}({p^k})$ that corresponds to the nilpotent elements.", "(Section REF ) Finally, we work on the graph $\mathcal {G}({2^k})$ , describing both $\mathcal {U}({2^k})$ and $\mathcal {N}({2^k})$ separately.", "From this, we can characterize any $\mathcal {G}({n})$ , and we then use this characterization to compute several quantities of interest and work out many sweet examples.", "A few notes: this manuscript does not contain any new theoretical results and, in fact, only uses results from elementary number theory (anyone familiar with the author's body of research would know that any number theory appearing here is perforce extremely elementary).", "If the reader is so inclined, they can think of this paper as an extended pedagogical example.", "In fact, the author recently taught UIUC's MATH 347 (our undergrad math major \"intro to proofs\" course) and found that the students responded pretty well to visualizing the graphs of various functions that appear in a math course at this level (functions modulo some integer being a key family of examples).", "The author also made some videos with visualizations for the square function considered here [2], and got intrigued by the patterns that appear.", "This led to the current manuscript."
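Before stating the results, it may help to see how small examples can be generated. The short sketch below (plain Python, with function names of our own choosing) builds the functional graph of $x\mapsto x^2\bmod n$ and extracts its periodic orbits and the sizes of the trees hanging off them; it is only an illustration of the definitions, not the code used to produce the figures.
\begin{verbatim}
from collections import defaultdict

def squaring_graph(n):
    """The functional graph G(n): an edge x -> x^2 mod n for every x."""
    return {x: (x * x) % n for x in range(n)}

def periodic_orbits(n):
    """Return the cycles (periodic orbits) of x -> x^2 mod n."""
    f, cycles, seen = squaring_graph(n), [], set()
    for start in range(n):
        path, index, x = [], {}, start
        while x not in index and x not in seen:
            index[x] = len(path)
            path.append(x)
            x = f[x]
        if x in index:                     # we closed a brand-new cycle
            cycles.append(path[index[x]:])
        seen.update(path)
    return cycles

def basin_sizes(n):
    """How many vertices eventually flow into each periodic vertex."""
    f = squaring_graph(n)
    periodic = {v for cyc in periodic_orbits(n) for v in cyc}
    sizes = defaultdict(int)
    for x in range(n):
        y = x
        while y not in periodic:
            y = f[y]
        sizes[y] += 1
    return dict(sizes)

print(periodic_orbits(817))   # the periodic orbits of G(817), with 817 = 19 * 43
\end{verbatim}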
], [ "Summary of results", "As noted above, there are four main results, and the sections below address each of these in order.", "Once we have stated the theoretical results in this section, they are sufficient to describe all $\mathcal {G}({n})$ — with the caveat that in many cases there is still some work to do to compute everything.", "We work out many concrete examples in Section ." ], [ "Kronecker products and the Classical Remainder Theorem", "In this section we show how the graphs $\mathcal {G}({a})$ and $\mathcal {G}({b})$ “combine” to form the graph $\mathcal {G}({ab})$ , when $a,b$ are relatively prime.", "Definition 2.1.1 Let $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$ be two (directed) graphs.", "Let us define $G_1\square G_2$ , the Kronecker product of $G_1$ and $G_2$ , as follows: the vertex set of $G_1\square G_2$ is the Cartesian product $V_1\times V_2$ , and we say that $(v_1,v_2)\rightarrow (w_1,w_2)\in G_1\square G_2\mbox{ iff } v_1\rightarrow w_1\in G_1\mbox{ and } v_2\rightarrow w_2\in G_2.$ In particular, the edge exists for the pair iff there is an edge for the first component of each and one for the second component of each.", "Remark 2.1.2 One motivation for calling this the Kronecker product is that the adjacency matrix of $G_1\square G_2$ is the Kronecker matrix product of the adjacency matrices of $G_1$ and $G_2$ .", "This is also called the tensor product or direct product of graphs by many authors.", "We also note that it is straightforward to show that the Kronecker graph product is associative, and so we can define $n$ -ary Kronecker products unambiguously as well.", "Theorem 2.1.3 If $\gcd (a,b)=1$ , then $\mathcal {G}({ab}) \cong \mathcal {G}({a}) \square \mathcal {G}({b}).$ Furthermore, if we write $n = p_1^{\alpha _1} p_2^{\alpha _2} \cdots p_j^{\alpha _j}$ as the prime factorization of $n$ , then $\mathcal {G}({n}) \cong \square _{i=1}^j \mathcal {G}({p_i^{\alpha _i}}),$ namely: the graph for $n$ is just the Kronecker product of the individual graphs for each of the $p_i^{\alpha _i}$ .", "Perhaps surprisingly, the proof of Theorem REF is more or less equivalent to the Classical Remainder Theorem (see Lemma REF below).", "The basic idea goes as follows: if we consider the map $x\mapsto x^2\bmod {ab}$ , then we can naturally map this to a pair where we first consider the operation modulo $a$ , and then consider the operation modulo $b$ .", "(In fact, we can do this for any $a,b$ , not just those that are relatively prime.)", "However, if $a,b$ are relatively prime, then the CRT tells us that we can invert this process, and this is what gives us the full isomorphism.", "Lemma 2.1.4 [Classical Remainder Theorem (CRT)] [3] Given a set of pairwise coprime numbers $n_1, n_2, \dots , n_j$ and any integers $a_1,\dots , a_j$ , the system $x \equiv a_1\bmod n_1,\quad x \equiv a_2\bmod n_2,\quad \cdots , \quad x \equiv a_j\bmod n_j$ has a solution.", "Moreover, any two such solutions are congruent modulo $n_1\times n_2\times \cdots \times n_j$ .", "Remark 2.1.5 An equivalent statement of the CRT is that the map $\mathbb {Z}/{(n_1\times n_2\times \cdots \times n_j)}\mathbb {Z} &\rightarrow \mathbb {Z}/{n_1}\mathbb {Z}\times \mathbb {Z}/{n_2}\mathbb {Z}\times \cdots \times \mathbb {Z}/{n_j}\mathbb {Z},\\x\bmod {(n_1\times n_2\times \cdots \times n_j)} &\mapsto (x\bmod n_1, x\bmod n_2, \dots , x\bmod n_j),$ is a ring isomorphism, which is why it respects the squaring operation.", "[Proof of Theorem REF ] The
first claim basically boils down to the interpretation of Remark REF .", "Let us consider the map from the vertex set of $\\mathcal {G}({ab})$ to the vertex set of $\\mathcal {G}({a})\\square \\mathcal {G}({b})$ given by $x\\bmod (ab) \\mapsto (x\\bmod a, x\\bmod b)$ .", "By the CRT, this is a bijection of the vertex sets.", "Now, assume that there is an edge $x\\rightarrow y$ in $\\mathcal {G}({ab})$ — this will be true iff $x^2=y\\bmod {(ab)}$ .", "But then $x^2=y\\bmod a$ and $x^2=y\\bmod b$ , and therefore there is an edge $(x\\bmod a, x\\bmod b) \\rightarrow (y\\bmod a, y\\bmod b)$ in $\\mathcal {G}({a})\\square \\mathcal {G}({b})$ as well.", "Conversely, if there is an edge $(x\\bmod a, x\\bmod b) \\rightarrow (y\\bmod a, y\\bmod b)$ in $\\mathcal {G}({a})\\square \\mathcal {G}({b})$ , then by CRT we have $x^2=y\\bmod {(ab)}$ , and there is an edge $x\\rightarrow y$ in $\\mathcal {G}({ab})$ .", "Finally, the second claim follows directly from the first using induction." ], [ "The graph of units $\\mathcal {U}({p^k})$ when {{formula:5fa4052d-8604-4fb9-9035-935b727a3a1d}} is an odd prime", "In this section, we determine how to compute the graph containing the “units” modulo $p^k$ when $p$ is an odd prime.", "We start with a few definitions and state the main result, and then break down how we prove it.", "Definition 2.2.1 The set of multiplicative units modulo $n$ (or, more simply, the units modulo $n$ ) are those numbers in the set $\\lbrace 0,1,\\dots , n-1\\rbrace $ that are relatively prime to $n$ .", "This set is typically denoted $(\\mathbb {Z}/{n}\\mathbb {Z})^\\times $ and the function $\\varphi (n) = \\left|{(\\mathbb {Z}/{n}\\mathbb {Z})^\\times }\\right|$ is called Euler's phi function (or totient function).", "In this paper we will denote by $\\mathcal {U}({p^k})$ the graph induced by $x\\mapsto x^2\\bmod {p^k}$ on the set of units.", "Definition 2.2.2 Let $\\gcd (d,k)=1$ .", "We define the multiplicative order of $k$ , modulo $d$, denoted $\\mathsf {ord}_d(k)$ , as the smallest power $p$ such that $k^p\\equiv 1\\bmod d$ , or $d|(k^p-1)$ .", "Definition 2.2.3 Here we define several special graphs that appear all over the place in $\\mathcal {U}({p^k})$ : The (directed) cycle graph of length $k$ , denoted $C_k$ , is a directed graph with vertex set $\\lbrace 1,\\dots , k\\rbrace $ and arrows $v_i\\rightarrow v_{i+1\\bmod k}$ .", "Note that we also allow the cycle of length 2, given by the graph $1\\rightarrow 2, 2\\rightarrow 1$ , and the directed cycle of length 1, which is just a single vertex labelled 1 with a loop to itself.", "A tree is a connected graph with no cycles.", "A rooted tree is a tree with a distinguished vertex, called the root.", "A grounded tree is a rooted tree with two properties: all edges flow towards the root (i.e.", "for any vertex in the tree, there is a path from that vertex to the root), and the root has a loop to itself.", "The flower cycle of length $\\alpha $ and flower type $T$, denoted $C_\\alpha (T)$ , is the directed graph defined in the following manner.", "Take $k$ copies of the grounded tree $T$ , denoted $T_1,\\dots , T_k$ .", "Remove the loop at the root from each of these, and then replace it with an edge from the root of tree $i$ to the root of tree $i+1\\bmod \\alpha $ .", "The regular grounded tree of width $w>1$ and $\\ell >0$ layers (denoted $T_{w}^{\\ell }$ ) is a tree with $w^\\ell $ vertices, described as follows: start with a root vertex and a loop to itself, this forms layer 0.", "In layer 1, there are $w-1$ vertices, each 
with an edge to the root.", "For any layer $k< \\ell $ , and for any vertex in layer $k$ , there are $w$ vertices in layer $k+1$ that map to each vertex in layer $k$ .", "Layer $\\ell $ is given by leaves that map to vertices in level $\\ell -1$ .", "Remark 2.2.4 A few remarks: Another way to describe the flower cycle is that we start with a cycle, and then we glue $k$ copies of the tree at each vertex in the cycle.", "Note that the in-degree of every vertex in $T_w^\\ell $ is either $w$ or 0.", "The graph $C_\\alpha (T_w^\\theta )$ appears often in the sequel (esp.", "with $w=2$ ), and we give a colloquial description here: start with a cycle of length $C_\\alpha $ , and then to each vertex in the cycle, “glue” a tree of type $T_w^\\theta $ .", "In this picture, the nodes in the cycle itself are the periodic points, and the trees coming off each node are the preperiodic points that map into that particular orbit.", "See Figure REF for a few examples.", "Figure: Examples of C 6 (T 2 2 )C_6(T_2^2) and C 3 (T 2 3 )C_3(T_2^3) as defined in Definition .", "Recall that the subscript gives the number of terms in the cycle (or periodic orbit) of the graph, and the tree tells us what is “hanging off” of each of those periodic vertices.", "In one case we have the grounded tree of width two and depth two, and in the other width two and depth three.", "Trees of width two show up ubiquitously below — in particular they are the only trees we see in 𝒰(p k )\\mathcal {U}({p^k}) when pp is an odd prime.Theorem 2.2.5 [Structure of $\\mathcal {U}({p^k})$ ] Let $p$ be an odd prime, and write $m=\\varphi (p^k)$ .", "Write $m = 2^\\theta \\mu $ where $\\mu $ is odd.", "(Note that $\\varphi (p^k)$ is always even, so $\\theta \\ge 1$ .)", "For each divisor $d$ of $\\mu $ , take $\\varphi (d)/\\mathsf {ord}_{d}(2)$ disjoint copies of $C_{\\mathsf {ord}_{d}(2)}(T^\\theta _2)$ , and $\\mathcal {U}({p^k})$ is isomorphic to the disjoint union of these cycles over the divisors $d$ .", "Remark 2.2.6 Note that this implies that every vertex in $\\mathcal {U}({p^k})$ has an in-degree of two or zero (note that the trees attached to the cycles are always of the form $T_2^\\theta $ ).", "To describe the result a bit more colloquially: we first compute $m = \\varphi (p^k)$ , and pull out all possible powers of 2, until we obtain the odd number $\\mu $ .", "From there, we enumerate all of the divisors $d$ of $\\mu $ , and these $d$ determine all of the periodic orbits of $\\mathcal {U}({p^k})$ .", "Then the number of powers of 2 that we pulled out gives us the full structure of the pre-periodic points attached to these periodic orbits.", "Onto the proof: we start with the following result that was proven by Gauss in [4]: Proposition 2.2.7 If $n=p^k$ , where $p$ is an odd prime, then $(\\mathbb {Z}/{p^k}\\mathbb {Z})^\\times $ is a cyclic group of order $\\varphi (p^k) = p^k-p^{k-1}$ .", "The fact that this group is cyclic allows us to more or less completely characterize the graph structure of $\\mathcal {U}({p^k})$ .", "We follow the approach of [1] who completely characterized $\\mathcal {G}({p})$ for $p$ prime in the same way (in this section the novelty is that we are extending these ideas to $p^k$ , but since $(\\mathbb {Z}/{p^k}\\mathbb {Z})^\\times $ is cyclic the same idea goes through).", "We first give a definition and a lemma: Definition 2.2.8 We define $\\mathcal {A}_{{m}}$ as the graph whose vertices are the set $\\lbrace 0,1,\\dots , m-1\\rbrace $ and edges given by $x\\mapsto 2x\\bmod m$ .", "Lemma 2.2.9 Let 
$G$ be any cyclic group of order $m$ (where here we denote the group operation as multiplication).", "Call one of its generators $g$ .", "We can define a function on $G$ by $g^\\alpha \\mapsto g^{2\\alpha },\\quad \\alpha = 0,1,\\dots , m-1.$ This function is conjugate to the map $(\\mathbb {Z}/m\\mathbb {Z}, x\\mapsto 2x\\bmod m)$ , and so has the same dynamical structure (periodic orbits, fixed points, etc.)", "and its graph is isomorphic to $\\mathcal {A}_{{m}}$ .", "Let us define the maps $\\alpha \\colon \\mathbb {Z}/m\\mathbb {Z}\\rightarrow \\mathbb {Z}/m\\mathbb {Z}$ with $x\\mapsto 2x\\bmod m$ , $\\gamma \\colon G\\rightarrow G$ with $\\gamma (g^a) = g^{2a}$ , and $\\kappa \\colon \\mathbb {Z}/m\\mathbb {Z}\\rightarrow G$ with $\\kappa (a) = g^a$ .", "Since $g$ generates $G$ , the map $\\kappa $ is invertible.", "We see that $\\gamma \\circ \\kappa = \\kappa \\circ \\alpha $ , so $\\alpha $ and $\\gamma $ are conjugate.", "In particular, note that $\\alpha (x)=x$ iff $\\gamma (\\kappa (x)) = \\kappa (x)$ and moreover that $\\alpha ^k (x) = x$ iff $\\gamma ^k(\\kappa (x)) = \\kappa (x)$ , so that there is a one-to-one correspondence between the fixed/periodic points of $\\alpha $ and $\\gamma $ .", "In particular this implies that the two functions have isomorphic graphs.", "[Proof of Theorem REF ] By Lemma REF , the graph $\\mathcal {U}({p^k})$ is isomorphic to $\\mathcal {A}_{{\\varphi (p^k)}}$ and the result follows.", "Basically we just need to understand $\\mathcal {A}_{{m}}$ .", "Under addition, this is a cyclic group of order $m$ , with generator 1.", "We stress here that we are now moving to additive notation for simplicity, so here the identity is 0 and the generator is 1; when we convert this back to the multiplicative group $\\mathcal {U}({p^k})$ we will have an identity of 1 and a generator of $g$ .", "Some standard results from group theory [3] tell us that: For any $d$ dividing $m$ , the set of all elements with (additive) order $d$ is given by $S_d = \\lbrace j (m/d): j\\mbox{ such that }\\gcd (j,d)=1\\rbrace ;$ The set $S_d$ has $\\varphi (d)$ elements; Since $\\sum _{d|m}\\varphi (d) = m$ , these sets exhaust $\\mathbb {Z}/m\\mathbb {Z}$ .", "Let us first consider the case with $m$ odd: Proposition 2.2.10 Let $m$ be odd, and consider the map $f(x)=2x$ on $\\mathbb {Z}/m\\mathbb {Z}$ .", "Then this map has a unique fixed point at $x=0$ , and all other elements are periodic under this map.", "The map $f$ leaves each of the $S_d$ invariant, and $S_d$ decomposes into a disjoint union of periodic orbits, each with period $\\mathsf {ord}_d(2)$ .", "(Note that since $\\left|{S_d}\\right| = \\varphi (d)$ , there will be exactly $\\varphi (d)/\\mathsf {ord}_d(2)$ such disjoint orbits and note that $\\mathsf {ord}_{d}(2)$ divides $\\varphi (d)$ by Fermat's Little Theorem.)", "We can see by inspection that no point other than $x=0$ is fixed by the map.", "Now note that since $\\gcd (2,m)=1$ , the map $x\\mapsto 2x$ is invertible, and is thus effectively a permutation.", "Therefore all points are periodic under repeated application of this map.", "To prove the second claim in the proposition, note that $S_d$ is the set of points in $\\mathbb {Z}/m\\mathbb {Z}$ with additive order $d$ , i.e.", "$x\\in S_d$ iff $dx\\equiv 0\\bmod m$ and if $0<d^{\\prime }<d$ , $d^{\\prime }x\\lnot \\equiv 0\\pmod {m}$ .", "This is invariant under the map $x\\mapsto 2x$ since $m$ is odd: if $d^{\\prime }(2x)\\equiv 0\\bmod m$ then $d^{\\prime }x\\equiv 0 \\bmod m$ , etc.", "Let $x\\in S_d$ with 
$f^k(x) = 2^kx\equiv x$ , and if $x\in S_d$ then $x=j(m/d)$ for $\gcd (j,d)=1$ .", "This means that we have $(2^k-1) j\frac{m}{d} \equiv 0\bmod m,$ which is equivalent to saying that $(2^k-1)j/d\in \mathbb {Z}$ .", "Since $\gcd (j,d)=1$ , this is true iff $d|(2^k-1)$ .", "The period of $x$ is the smallest $k$ for which $2^kx=x$ , which is the smallest $k$ for which $d|(2^k-1)$ , and this is $\mathsf {ord}_d(2)$ by definition.", "We are now able to understand the cyclic group for more general $m$ .", "Proposition 2.2.11 Let $m = 2^\theta \mu $ where $\mu $ is odd, and again consider the map $f(x)=2x\bmod m$ .", "Then the graph $\mathcal {A}_{{m}}$ consists of flower cycles, specifically: replace every cycle of length $\alpha $ in $\mathcal {A}_{{\mu }}$ with a copy of $C_\alpha (T_2^\theta )$ .", "We first consider the basin of attraction of 0, which consists of the multiples of $\mu $ .", "(Note that $f^j(x)=2^j x\equiv 0\bmod m$ for some $j$ iff $x$ is a multiple of $\mu $ .)", "In fact, we can see that if $x= 2^q \nu \mu $ with $q<\theta $ and $\nu $ odd, then $x$ reaches 0 in exactly $\theta -q$ iterations, each passing through higher orders of 2.", "For example, all of the odd multiples of $\mu $ will be in the top layer (there will be exactly $2^{\theta -1}$ of these); the odd multiples of $2\mu $ will be in the next layer (exactly $2^{\theta -2}$ of these), and so on.", "Note that every vertex in this tree has in-degree 2 or 0: nothing maps to the odd numbers in the top layer, but each subsequent number has two preimages: given a $y$ such that $2y\equiv x\bmod m$ , we also have $2(y+m/2) \equiv x\bmod m.$ So in general, we have a tree of type $T_2^\theta $ with 0 at the root, the point $m/2 = 2^{\theta -1}\mu $ the single point in layer 1, the points $\lbrace 1,3\rbrace \times 2^{\theta -2}\mu $ in layer two (each of which map to $2^{\theta -1}\mu $ ), the points $\lbrace 1,3,5,7\rbrace \times 2^{\theta -3}\mu $ in layer three, etc.", "We will call this the “tree rooted at 0” below.", "We now claim that for any periodic orbit that appears in $\mathcal {A}_{{\mu }}$ , there is a corresponding flower orbit in $\mathcal {A}_{{m}}$ .", "Consider any period $k$ orbit in $\mathbb {Z}/{\mu }\mathbb {Z}$ with elements $x_1,\dots , x_k$ .", "By definition, $x_{i+1}\equiv 2x_i\bmod \mu $ and $x_1 = 2x_k\bmod \mu $ .", "Now let $y_i = 2^\theta x_i$ and note that this gives a periodic orbit modulo $m$ : if $x_{i+1}-2x_i$ is a multiple of $\mu $ , then $y_{i+1}-2y_i = 2^\theta (x_{i+1}-2x_i)$ is a multiple of $m=2^\theta \mu $ .", "Now pick any of the $y_i$ .", "We know that there is at least one $z\in \mathbb {Z}/m\mathbb {Z}$ that maps to $y_i$ (since it is part of a periodic orbit).", "Therefore there is another, $z_1 = z\pm m/2$ , that maps to $y_i$ , and this corresponds to the $m/2$ vertex in the tree rooted at 0.", "If $z_1$ is odd, then the tree terminates there, but if it is even, there are two numbers that map to $z_1$ , namely $z_1/2$ and $z_1/2\pm m/2$ .", "As we move up the branches of this tree, each layer removes one power of two, and each vertex has exactly two vertices mapping to it, until we reach the $\theta $ 'th layer.", "Therefore there will be a $T_2^\theta $ attached to $y_i$ — and the same argument applies to any of the elements in any of the periodic orbits, and we are done.", "Corollary 2.2.12 If $m=2^k$ for some $k\ge 1$ , then the graph $\mathcal {A}_{{m}}$ is a tree with $k$ layers.", "The
$k$ th layer consists of the $2^{k-1}$ odd numbers, the $(k-1)$ st layer is those $2^{k-2}$ numbers that are two times an odd, etc.", "In particular, the graph $\mathcal {A}_{{2^k}}$ is a $T_2^k$ .", "One direct corollary of this is that if $n$ is a Fermat prime, then the graph $\mathcal {U}({n})$ is a tree." ], [ "The graph of nilpotent elements $\mathcal {N}({p^k})$ when $p$ is an odd prime", "Now we can attack the case of those elements that are not units when $p$ is an odd prime.", "Definition 2.3.1 We say that $x$ is nilpotent modulo $n$ if there exists some power $a$ such that $x^a \equiv 0\bmod n$ .", "It is easy to see that $x$ is nilpotent modulo $p^k$ iff $p|x$ .", "Also note that if $x^a\equiv 0\bmod n$ then $x^b\equiv 0\bmod n$ for any $b>a$ , and in particular it is true for $b$ being some power of two, so we have: Lemma 2.3.2 $x$ is nilpotent modulo $p^k$ $\iff $ $p|x$ $\iff $ $f_{p^k}^\alpha (x) = 0$ for some $\alpha >0$ .", "In particular, this implies that $\mathcal {N}({p^k})$ is a (connected) tree.", "Now that we have established that $\mathcal {N}({p^k})$ is a tree, we want to determine its structure.", "It can actually have an interesting and complex structure, especially when $k$ is large.", "Our main result is the following.", "Definition 2.3.3 Let $p$ be an odd prime.", "Fix $\widehat{x}\in \mathcal {U}({p}) = \lbrace 1,2,\dots ,p-1\rbrace $ , and for each $\ell $ we define the tree ${\mathrm {Tree}}_{p}^{({\ell })}({\widehat{x}})$ recursively: If $\ell $ is odd: then ${\mathrm {Tree}}_{p}^{({\ell })}({\widehat{x}})$ is just a single node with no edges; If $\ell $ is even and $\widehat{x}$ has no preimages modulo $p$ : then ${\mathrm {Tree}}_{p}^{({\ell })}({\widehat{x}})$ is just a single node with no edges; If $\ell $ is even and $\widehat{x}$ has two preimages modulo $p$ : denote these preimages by $z_1$ and $z_2$ ; then ${\mathrm {Tree}}_{p}^{({\ell })}({\widehat{x}})$ is a rooted tree with $2 p^{\ell /2}$ trees mapping into this root: $\mbox{ $p^{\ell /2}$ copies of ${\mathrm {Tree}}_{p}^{({\ell /2})}({z_1})$ and $p^{\ell /2}$ copies of ${\mathrm {Tree}}_{p}^{({\ell /2})}({z_2})$.", "}$ (Note by Remark REF that this exhausts all possible cases.)", "Theorem 2.3.4 The graph $\mathcal {N}({p^k})$ is a rooted tree with 0 as the root and $p^{k-1}$ vertices.", "The in-neighborhood of 0 is constructed as follows: for any $k/2\le \ell < k$ and for any $1 \le y \le p^{k-\ell }$ with $\gcd (y,p)=1$ , attach a tree of type ${\mathrm {Tree}}_{p}^{({\ell })}({\widehat{y}})$ , where $\widehat{y}=y\bmod p$ , to 0.", "For any $x$ (where $x\bmod p = 0$ and $x\bmod {p^k}\ne 0$ ), write $x$ as the (unique) decomposition $x = \widetilde{x}p^\ell $ with $\gcd (\widetilde{x},p)=1$ , and the in-neighborhood of $x$ is the tree ${\mathrm {Tree}}_{p}^{({\ell })}({\widehat{x}})$ , where $\widehat{x}=\widetilde{x}\bmod p$ .", "Remark 2.3.5 The descriptions given in Theorem REF and Definition REF are recursive and not explicit — although it is relatively easy to use these descriptions to compute $\mathcal {N}({p^k})$ when $k$ is not too large.", "We give an equivalent, but more explicit, description below.", "Theorem REF basically follows from the following lemma: Lemma 2.3.6 Consider $n=p^k$ , let $x=\widetilde{x}p^\ell $ (where $\ell < k$ ) and consider the equation $q^2 = x\bmod {p^k}.$ Write $\widehat{x}$ as the element of $\lbrace 1,2,\dots , p-1\rbrace $
that is congruent to $\\widetilde{x}$ modulo $p$ .", "If $\\ell $ is odd, or if $\\ell $ is even and $\\widehat{x}$ is a leaf in $\\mathcal {U}({p})$ , then (REF ) has no solutions.", "If there are two numbers $\\widetilde{z}_1,\\widetilde{z}_2$ such that $\\widetilde{z}_i^2 = \\widehat{x}\\bmod p$ , then (REF ) has $2p^{\\ell /2}$ solutions.", "Half of these ($p^{\\ell /2}$ of them) are solutions of the form $z p^{\\ell /2}$ where $z\\equiv z_1\\bmod p$ and the other half are solutions of the form $z p^{\\ell /2}$ where $z\\equiv z_2\\bmod p$ .", "Let us first consider the case where $\\ell = 2$ as a warm-up; the general case is similar but has more details.", "$\\ell =2$ .", "Let us write $x = \\widetilde{x}p^2$ , $y=\\widetilde{y}p$ , and assume that $y^2 = x\\bmod {p^k}$ .", "This tells us that $\\widetilde{y}^2 p^2 &\\equiv \\widetilde{x}p^2 \\bmod {p^k},\\\\\\widetilde{y}^2 &\\equiv \\widetilde{x}\\bmod {p^{k-2}}.$ Note that we have $1\\le \\widetilde{x}< p^{k-2}$ and $1\\le \\widetilde{y}< p^{k-1}$ — this is the range of prefactors we can put in front of each of those powers.", "Let us write $\\widehat{x}= \\widetilde{x}\\pmod {p}$ and $\\widehat{y}= \\widetilde{y}\\pmod {p}$ .", "If we consider the last equation modulo $p$ , this implies $\\widehat{y}^2\\equiv \\widehat{x}\\bmod p$ .", "This implies that if $\\widehat{x}$ has no preimages in $\\mathcal {G}({p})$ then there is no solution.", "Let us assume that $\\widehat{x}$ has two preimages in $\\mathcal {G}({p})$ , and pick one of them, say $z_1$ .", "If we can show that there are exactly $p$ solutions to the system of congruences $\\widetilde{y}^2 \\equiv \\widetilde{x}\\bmod {p^{k-2}},\\quad \\widetilde{y}\\equiv z_1\\bmod p,$ then we have established the result of the theorem for $\\ell =2$ .", "Let us define two sets: $S_x := \\lbrace \\widetilde{x}: 1 \\le \\widetilde{x}< p^{k-2} \\wedge \\widetilde{x}\\equiv \\widehat{x}\\bmod p\\rbrace ,\\quad S_y := \\lbrace \\widetilde{y}: 1 \\le \\widetilde{y}< p^{k-1} \\wedge \\widetilde{y}\\equiv \\widehat{y}\\bmod p\\rbrace .$ We can see that $\\left|{S_x}\\right| = p^{k-3}$ and $\\left|{S_y}\\right| = p^{k-2}$ , and of course $pS_y$ maps into $p^2S_x$ under the squaring map.", "Thus each point in $S_x$ is hit an average of $p$ times, but we want to show that it is exactly $p$ times.", "Let us parameterize $S_y$ by $\\widetilde{y}= z_1 + \\beta p$ , with $0\\le \\beta < p^{k-2}$ .", "We claim that as we run through these $\\beta $ , the values of $S_x$ each get hit exactly $p$ times.", "First note that if we replace $\\beta $ with $\\beta +p^{k-3}$ , then we get $(z_1 + (\\beta +p^{k-3})p)^2 \\equiv (z_1+\\beta p)^2\\bmod p^{k-2},$ so we only need consider $0\\le \\beta < p^{k-3}$ .", "We claim that these $\\beta $ give distinct values modulo $p^{k-2}$ , and since there are $p^{k-3}$ of them, they must cover the set $S_x$ — and then shifting the arguments by $p^{k-3}$ gives the same values, and this gives us $p$ copies.", "All we need to show now is that every element of $S_x$ gets hit once as $\\beta $ runs from 0 to $p^{k-3}-1$ .", "We compute: $(1+\\beta p)^2 &\\equiv (1+\\beta ^{\\prime } p)^2&\\pmod {p^{k-2}}\\\\1+2\\beta p + \\beta ^2p^2 &\\equiv 1+2\\beta ^{\\prime } p + (\\beta ^{\\prime })^2 p^2&\\pmod {p^{k-2}}\\\\2\\beta + \\beta ^2p &\\equiv 2\\beta ^{\\prime } + (\\beta ^{\\prime })^2 p&\\pmod {p^{k-3}},\\\\2(\\beta -\\beta ^{\\prime }) + p(\\beta ^2-(\\beta ^{\\prime })^2)&\\equiv 0&\\pmod {p^{k-3}},$ so we nowneed to show that the map $2\\xi + p\\xi ^2$ has only one 
root on $\\mathbb {Z}/{p^{k-3}}\\mathbb {Z}$ .", "If we assume that $2\\xi +p\\xi ^2\\equiv 0\\mod {p^{k-3}}$ , then by taking modulo $p$ , this gives $2\\xi \\equiv 0\\bmod p$ , and thus $\\xi \\equiv 0\\bmod p$ .", "From this we have $p\\xi ^2\\equiv 0\\bmod {p^3}$ , thus the same for $2\\xi $ , thus for $\\xi $ , etc.", "From this we get that only $\\xi =0$ can solve this equation, and we are done.", "General $\\ell $ .", "The argument is similar: writing $x = \\widetilde{x}p^\\ell $ , $y=\\widetilde{y}p^{\\ell /2}$ , and that $y^2 = x\\bmod {p^k}$ .", "This tells us that $\\widetilde{y}^2 p^\\ell &\\equiv \\widetilde{x}p^{\\ell } \\bmod {p^k},\\\\\\widetilde{y}^2 &\\equiv \\widetilde{x}\\bmod {p^{k-\\ell }}.$ Note that we now have $1\\le \\widetilde{x}< p^{k-\\ell }$ and $1\\le \\widetilde{y}< p^{k-\\ell /2}$ .", "Again assume that $\\widehat{x}$ has preimages in $\\mathcal {G}({p})$ and call one $z_1$ .", "We now show that there are $p^{\\ell /2}$ solutions to the system $\\widetilde{y}^2 \\equiv \\widetilde{x}\\bmod {p^{k-\\ell }},\\quad \\widetilde{y}\\equiv z_1\\bmod p.$ Here we parameterize $\\widetilde{y}= z_1 + \\beta p$ with $0\\le \\beta < p^{k-\\ell /2-1}$ .", "We obtain the same repeating argument when adding a multiple of $p^{k-\\ell -1}$ , and so we only need to consider $0\\le \\beta < p^{k-\\ell -1}$ .", "But this will repeat $p^{(k-\\ell /2-1)-(k-\\ell -1)} = p^{\\ell /2}$ times, so again we only need to show that every target gets hit in the cycle.", "But then the rest of the argument follows.", "[Proof of Theorem REF ] The lemma does most of the work for us here.", "Note that implicit in the lemma that for any $x$ , if we write $x = \\widetilde{x}p^\\ell $ , then the basin of attraction of $x$ depends only on $\\widehat{x}= \\widetilde{x}\\bmod p$ .", "We can go through the cases: if $\\ell $ is odd, then $x$ has no preimages modulo $p^k$ by the lemma, and so it is a leaf in $\\mathcal {N}({p^k})$ — and similarly for $\\ell $ even but $\\widehat{x}$ having no preimages modulo $p$ .", "Finally, for the neighborhood of 0: note that if $\\ell \\ge k/2$ , then $x^2\\equiv 0 \\bmod p^k$ .", "Thus any such point maps to 0 under squaring, and the preimage of each of these points is a tree of type ${\\mathrm {Tree}}_{p}^{({\\ell })}({\\widehat{x}})$ .", "Again it is worth pointing out that Lemma REF tells us that basically all that matters is the mod $p$ value of the prefactor.", "We can now define a non-recursive way to compute the trees defined in Definition REF .", "We now describe an algorithm that will compute this tree exactly.", "We will state but not prove the algorithm.", "Definition 2.3.7 For any $p$ prime, and $x\\in \\lbrace 0,\\dots , n-1\\rbrace $ , we define the unfurled preimage of $x$ of depth $\\zeta $, $Z_{p}({x},{\\zeta })$ , as a directed tree.", "The vertices of $Z_{p}({x},{\\zeta })$ are all sequences of the form $(a_1,\\dots , a_\\eta )$ with $\\eta \\le \\zeta $ such that $a_1 = x$ and $a_{i-1} = a_i^2 \\bmod p$ .", "The edges of $Z_{p}({x},{\\zeta })$ are $(a_1, a_2,\\dots , a_{\\eta -1}, a_\\eta )\\rightarrow (a_1,a_2,\\dots , a_{\\eta -1})$ Remark 2.3.8 It is always true that $Z_{p}({x},{\\zeta })$ forms a tree, since each edge moves from a sequence to a strictly smaller sequence and thus cannot have loops.", "(The construction above is sometimes referred to as the “universal cover of the graph $\\mathcal {U}({p})$ rooted at $x$ ”.", "Proposition 2.3.9 $Z_{p}({x},{\\zeta })$ is always a tree with degree 2.", "If $x$ is a transient point $\\bmod \\ 
p,$ then $Z_{p}({x},{\zeta })$ is a regular 2-tree.", "If $p$ is prime, and $p-1=2^\theta \mu $ with $\mu $ odd, and $\zeta \le \mu $ , then $Z_{p}({x},{\zeta })$ is a regular 2-tree.", "Otherwise, it is not a regular 2-tree.", "Example 2.3.10 Assume that $p\equiv 3 \bmod 4$ , so that $p-1 = 2\mu $ .", "Let us first compute $Z_{p}({1},{\zeta })$ .", "Note that 1 has two preimages: 1 and $-1$ , so $Z_{p}({1},{1})$ is a regular 2-tree.", "Now, if we want to go to depth $\zeta =2$ , we see that $-1$ has no preimage, but 1 again has two, so the $-1$ node terminates, but the 1 node bifurcates.", "Again, going to depth $\zeta =3$ , we again terminate at $-1$ and bifurcate at 1.", "Depending on the depth to which we want to unfurl this tree, we can continue as long as we'd like.", "Also note that we are labeling each node by the last term in the sequence defining the node, and not the whole sequence itself (it is useful to think of just this last value, but of course the sequence is important in the definition to give unique descriptors for each node).", "See Figure REF .", "Now, consider any periodic $x$ .", "We claim that $Z_{p}({x},{\zeta })$ will be isomorphic to $Z_{p}({1},{\zeta })$ .", "To see this, note that since $x$ is periodic, it has two preimages: one is the periodic $y_1$ with $y_1\rightarrow x$ , and the other is the preperiodic $z_1$ with $z_1\rightarrow x$ , so we get a regular 2-tree at depth 1.", "To go to depth $\zeta =2$ , we have that $y_1$ is periodic so it again bifurcates into the periodic $y_2$ and preperiodic $z_2$ , whereas $z_1$ is a leaf in $\mathcal {G}({p})$ , so it does not.", "Figure: Unfurling the case where $\theta =1$.Definition 2.3.11 (Layered trees and expansions) We say that $T$ is a layered tree with $L$ layers if $V(T)$ can be written as the disjoint union of $V_1, V_2,\dots , V_L$ and every edge in the graph goes from layer $k$ to layer $k-1$ .", "(This is basically the directed version of $L$ -partite.)", "Note that a vertex in a layer-$L$ tree is a leaf iff it is in layer $L$ .", "Given an $L$ -layer tree $T$ , and the integer vector ${\bf a} = (a_1,a_2,a_3,\dots ,a_{L-1})$ , we define $T^{({\bf a})},$ $T$ expanded by ${\bf a}$, to be the following $L$ -layer tree: Let $v\in V(T)$ be a vertex in the $\ell $ -th layer.", "Then the $a_1\times a_2\times \cdots \times a_{\ell -1}$ vectors of the form $(v,(\alpha _1,\alpha _2,\dots , \alpha _\ell )), 1\le \alpha _i\le a_i$ are all in the $\ell $ -th layer of $V(T^{({\bf a})})$ .", "We define the edges of $T^{({\bf a})}$ by $(v,(\alpha _1,\alpha _2,\dots , \alpha _\ell )) \rightarrow (w,(\alpha _1,\dots , \alpha _{\ell -1})) \quad \iff \quad v\rightarrow w\mbox{ in } T.$ Lemma 2.3.12 Let $n=p^k$ , and let $x=\widetilde{x}p^\ell $ for $\ell < k$ , and $\widehat{x}= \widetilde{x}\bmod p$ .", "Write $\ell = 2^\zeta \lambda $ , where $\lambda $ is odd.", "Define $Z_{p}({\widehat{x}},{\zeta })$ as in Definition REF and expand this graph by $(p^{\ell /2}, p^{\ell /4}, p^{\ell /8},\dots , p^{2\lambda }, p^\lambda )$ .", "Then this graph is isomorphic to the basin of attraction of $x$ in $\mathcal {N}({p^k})$ ." ]
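The recursive description of $\mathcal {N}({p^k})$ above is easy to sanity-check by brute force. The sketch below (with the assumed test values $p=3$, $k=6$; any small odd prime power works) counts the square roots of every nilpotent element modulo $p^k$ and checks them against the prediction of Lemma REF , namely 0 or $2p^{\ell /2}$ preimages.
\begin{verbatim}
from collections import Counter

p, k = 3, 6                  # assumed small test case
n = p ** k

# In-degree of every vertex of G(p^k): how many q satisfy q^2 = x (mod p^k).
sq_count = Counter((q * q) % n for q in range(n))

def predicted(x_tilde, ell):
    """Preimage count from Lemma 2.3.6 for x = x_tilde * p^ell, gcd(x_tilde, p) = 1."""
    if ell % 2 == 1:
        return 0
    x_hat = x_tilde % p
    is_square = pow(x_hat, (p - 1) // 2, p) == 1   # Euler's criterion mod p
    return 2 * p ** (ell // 2) if is_square else 0

for ell in range(1, k):
    for x_tilde in range(1, p ** (k - ell)):
        if x_tilde % p != 0:
            assert sq_count[x_tilde * p ** ell] == predicted(x_tilde, ell)
print("Lemma 2.3.6 verified for all nilpotent x modulo %d^%d" % (p, k))
\end{verbatim}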
], [ "The graph $\\mathcal {G}({2^k})$", "We have dealt with all powers of odd primes above, but still need to deal with $n=2^k$ .", "This will be a bit different than what has gone before, both for the units and the nilpotents, but it will be similar enough that we can reuse some earlier ideas here.", "First, a preliminary result that we can prove pretty simply.", "Proposition 2.4.1 Let $n=2^k$ .", "Then $\\mathcal {G}({2^k})$ has exactly two components — one corresponding to $\\mathcal {U}({2^k})$ and the other to $\\mathcal {N}({2^k})$ , and they are both trees.", "(Equivalently, the graph $\\mathcal {G}({2^k})$ has exactly two fixed points: 0 and 1, and no periodic points.)", "The proof of the claim for $\\mathcal {N}({2^k})$ is similar to that for odd primes.", "The nilpotent elements modulo $2^k$ are exactly the even numbers, and when these are raised to a high enough power, they will be 0 modulo $2^k$ .", "Now we consider the units, which are the odd numbers.", "Let $x\\equiv 1\\bmod 2^\\ell $ with $1\\le \\ell < k$ .", "Then $x^2 = 1 + \\alpha 2^\\ell $ , and $x^2 = (1 + 2\\alpha 2^\\ell + \\alpha ^2 2^{2\\ell }) = 1+\\alpha 2^{\\ell +1} + \\alpha ^2 2^{2\\ell } \\equiv 1\\pmod {2^\\ell +1}.$ Since the power has increased, from this we see that any odd number will be 1 modulo $2^k$ in at most $k-1$ steps, and we are done.", "From this we just need to characterize each of the two trees.", "We consider $\\mathcal {U}({2^k})$ first.", "One complication here is that the units modulo $2^k$ do not form a cyclic group under multiplication, but we get something that is close enough to make it work: Lemma 2.4.2 [4] The powers of 3 modulo $2^k$ form a cyclic group of size $2^{k-2}$ under multiplication.", "If we write $S_k$ as the set $\\lbrace 3^q\\bmod 2^k: 0\\le q < 2^{k-2}\\rbrace ,$ then the units modulo $2^k$ (basically the odd numbers) can be written as $S_k \\cup -S_k$ .", "Using Lemma REF , we can characterize $\\mathcal {U}({2^k})$ : Proposition 2.4.3 The graph of $\\mathcal {U}({2^k})$ is a tree constructed in the following manner: Start with the tree $T_2^k$ , and for every node that is not a leaf, add two leaves directly mapping into it.", "Since $S_k$ forms a cyclic subgroup of the units of size $2^{k-2}$ , the graph of these numbers can be realized as $\\mathcal {A}_{{2^{k-2}}}$ , which by Corollary REF is $T_2^k$ .", "Note that all of the other units are negatives of powers of 3, and under squaring the negative sign cancels.", "So, for example, if $x\\in \\mathcal {U}({2^k})$ is such that $z_1^2\\equiv z_2^2\\equiv x\\bmod {2^k}$ , then we also have $(-z_1)^2\\equiv (-z_2)^2\\equiv x\\bmod {2^k}$ , so that $x$ now has in-degree four.", "Moreover, if there is any leaf in the $\\mathcal {A}_{{2^{k-2}}}$ , i.e.", "a number in $S_k$ that is not a square modulo $2^k$ , then there can be no number in $-S_k$ that squares to it, since the negative of that number would be in $S_k$ .", "Therefore if a number is a leaf in the original tree, then it remains a leaf when we add in the $-S_k$ terms as well.", "See Figure REF for an example with $n=2^7$ .", "We can also describe the graph $\\mathcal {U}({2^k})$ intuitively as follows: Start with the node at 1 and add a loop; add three nodes: $2^{k}-1$ and $2^{k/2}-1$ , which will be leaves, and the node $2^{k/2}+1$ , which will itself be a tree; to $2^{k/2}+1$ , we add four inputs: the leaves $2^{k/4}-1$ and $3\\cdot 2^{k/4}-1$ , and two trees: $2^{k/4}+1$ and $3\\cdot 2^{k/4}+1$ , etc.", "Figure: The graph 𝒰(128)\\mathcal {U}({128}).", 
"Note that it is a graph of depth 5 (because the cyclic subgraph generated by powers of 3 is of order 32, but instead of being a T 2 5 T_2^5, we add two leaves to each non-left node.", "So for example without the additional leaves, 65=2 6 +165 = 2^6+1 would have two preimages: 33=2 5 +133=2^5+1 and 97=3·2 5 +197 = 3\\cdot 2^5+1, which themselves are trees.", "But we also have the two leaves 31=2 5 -131 = 2^5-1 and 95=3·2 5 -195 = 3\\cdot 2^5-1 as well.", "(And similarly for all of the other nodes.", ")Now onto the nilpotent elements.", "This case is a bit more complex than the case where $p$ is odd.", "The main problem here is that there is no statement analogous to Lemma REF — the main idea there is that the basin of attraction of any point of the form $x=\\widetilde{x}p^\\ell $ depends only on the value of $\\widehat{x}= \\widetilde{x}\\bmod p$ , whenever $p$ is odd.", "This is clearly not true in the case where $p=2$ , since this would mean that the basin of attraction for all numbers of the form $\\mbox{(odd)}*2^\\ell $ would be the same, but this cannot be true since some odd numbers have square roots modulo $2^k$ and some do not (q.v.", "the discussion of units above).", "In particular, when computing the basin of attraction of a point $x = \\widetilde{x}p^\\ell $ , we didn't need to pay attention to the “ambient space” $p^k$ , but unfortunately this breaks when $p=2$ .", "So in theory one can compute this tree for any given $n$ , but we don't expect such a nice result as seen in Theorem REF or Lemma REF .", "We do have a partial result that we state without proof.", "Proposition 2.4.4 Consider $n=2^k$ , and let $x=\\widetilde{x}2^\\ell $ , where $\\widetilde{x}\\ne 1$ .", "Let us define $\\theta _1,\\theta _2$ by the equations $\\widetilde{x}- 1 = 2^{\\theta _1} \\cdot q_1,\\quad \\ell = 2^{\\theta _2}\\cdot q_2,$ where $q_1,q_2$ are odd.", "Define $\\theta = \\min (\\theta _1,\\theta _2)$ .", "Define $T_{2}^\\theta $ as in Definition REF , add on the extra nodes as we did in Proposition REF , and expand it as in Definition REF , where we expand the first layer by $2^{\\ell /2}$ , the second layer by $2^{\\ell /4}$ , etc.", "Then the basin of attraction of $x$ is isomorphic to this expanded graph." 
], [ "Computing the Kronecker products in practice", "As we have seen above, the type of graph structures we see for primes, or powers of primes, come in certain forms — and the graphs for composite $n$ come from Kronecker products of these.", "We have two main results in this section: the first is that one can compute the Kronecker product “component by component”, and the second one is a list of the typical graph products will we see in practice.", "Definition 3.1.1 If $G$ are $H$ are two graphs with disjoint vertex sets, then we define the disjoint union of $G$ and $H$, denote $G\\oplus H$ , as the graph with $V(G\\oplus H) = V(G) \\cup V(H)$ and $E(G\\oplus H) = E(G)\\cup E(H)$ .", "Now, let us assume that there is a path from $(x_1,y_1)$ to $(x_2,y_2)$ in $G\\square H$ .", "If we ignore the $y$ 's, this gives a path from $x_1$ to $x_2$ in $G$ , and if we ignore the $x$ 's, this gives a path from $y_1$ to $y_2$ in $H$ .", "Therefore if $(x_1,y_1)$ and $(x_2,y_2)$ are in the same component of $G\\square H$ , then $x_1$ and $x_2$ must be in the same component in $G$ , and $y_1$ and $y_2$ must be in the same component in $H$ .", "This gives the following: Proposition 3.1.2 Let $G$ and $H$ be graphs, and let $G = \\bigoplus _{i\\in I} G_i,\\quad \\quad H = \\bigoplus _{j\\in J} H_j$ be the decomposition of each graph into its connected components.", "Then we have $G\\square H = \\bigoplus _{i\\in I, j\\in J} (G_i\\square H_j),$ where the individual terms in the union are themselves disjoint.", "More specifically, if there is a path from $(x_1,y_1)$ to $(x_2,y_2)$ in $G\\square H$ , then there must be a path from $x_1$ to $x_2$ in $G$ and one from $y_1$ to $y_2$ in $H$ .", "Basically this means that we can compute the Kronecker product “component by component” — more specifically, when computing the Kronecker product we can just look at each component individually and not worry about the impact from other components.", "This is good, because we can compartmentalize.", "However, one caveat, as we will see below: it is possible for the product of two connected graphs to not be connected, so that some of the $G_i\\square H_j$ terms in the expansion above might not themselves be a single components, but can be multiple components.", "However, we do see that the number of connected components of $G\\square H$ is bounded below by the product of the numbers of connected components of $G$ and $H$ , respectively.", "Ok, so what do these component-by-component products actually look like?", "As we learned from Theorems REF and REF , every component of $\\mathcal {G}({p^k})$ is either a flower cycle, or a looped tree (recall Definition REF ).", "This, with Proposition REF , tells us that the natural question is what happens when we take the Kronecker product of two graphs which are each of one of these two types.", "We categorize all of the cases we might expect to see later in the following Proposition.", "Proposition 3.1.3 We have the following results: (Two looped trees) If $T_1$ and $T_2$ are looped trees, then so is $T_1\\square T_2$ — specifically, this implies that $T_1\\square T_2$ is connected, has a single root vertex, that root vertex loops to itself, and all vertices flow toward that root.", "(Degrees) Assume that $T_1$ and $T_2$ are looped trees with the property that every vertex in $T_i$ either has in-degree $w_i$ or 0.", "Then every vertex in $T_1\\square T_2$ has in-degree $w_1\\times w_2$ or 0.", "(Specific grounded trees) For any widths $w_1,w_2$ and depth $\\theta $ , we have 
$T_{w_1}^\\theta \\square T_{w_2}^\\theta \\cong T_{w_1\\cdot w_2}^\\theta .$ (If we take two regular grounded trees of the same depth, then we get a regular grounded tree of that depth, just wider.)", "(Another description of flower cycles) $C_\\alpha \\square T \\cong C_\\alpha (T).$ (One looped tree and one flower cycle) $C_\\alpha (T_1)\\square T_2 \\cong C_\\alpha (T_1\\square T_2).$ (Two flower cycles) Let $G = C_\\alpha (T_1)$ and $H = C_\\beta (T_2)$ .", "Let $\\gamma = \\gcd (\\alpha ,\\beta )$ and $\\lambda = \\operatorname{lcm}(\\alpha , \\beta )$ .", "Then $G\\square H$ is $\\gamma $ disjoint copies of the graph $G_\\lambda (T_1\\square T_2)$ .", "Note the last case where two connected graphs have a disconnected product!", "Consider the vector $(0,0)$ in $V(T_1)\\times V(T_2)$ , where each 0 corresponds to the root in that tree.", "Then note that $(0,0)\\rightarrow (0,0)$ by definition, and we have this loop.", "More generally, if we take any $(x,y)\\in V(T_1)\\times V(T_2)$ , then there is a finite path from $x$ to 0 in $T_1$ and a finite path from $y$ to 0 in $T_2$ , so there is a finite path (whose length is the maximum of the two aforementioned paths) from $(x,y)$ to $(0,0)$ .", "In fact we prove something a bit more general: if $x\\in V(G)$ has an in-degree of $k$ and $y\\in V(H)$ has an in-degree of $l$ , then $(x,y)\\in V(G\\square H)$ has an in-degree of $k\\times l$ .", "To see this, note that if $z_i \\rightarrow x$ for $i=1,\\dots , k$ and $w_j\\rightarrow y$ for $j=1,\\dots , l$ , then $(z_i, w_j)\\rightarrow (x,y)$ for all $i,j$ .", "The claim follows directly.", "Let us consider the case where we have two trees $G_1 = T_{w_1}^\\theta $ and $G_2 = T_{w_2}^\\theta $ .", "See Lemma REF below for a parameterization of each of these trees.", "Then, the root of $G_1\\square G_2$ is $(0,0)$ where 0 is the root of each $G_1$ .", "Note that it has a loop to itself.", "Let us define the first layer of $G_1\\square G_2$ to be any pair $(k_1,k_2)$ with $0\\le k_1\\le w_1-1$ $0\\le k_2\\le w_2-1$ but not both zero.", "Note that all of these map to $(0,0)$ in one step, and there are exactly $w_1w_2-1$ of these.", "Now we define the $\\ell $ th layer of $G_1\\square G_2$ : given pair $(v_1,v_2)$ where $v_1$ is in level $\\ell _1$ of $G_1$ and $v_2$ is in level $\\ell _2$ of $G_2$ , define $\\ell = \\max (\\ell _1, \\ell _2)$ .", "Note that any such vertex will map down to layer $\\ell -1$ .", "Moreover, if $\\ell <\\theta $ , then each of these vertices has exactly $w_1w_2$ preimages, because we can append $w_1$ numbers to the end of $v_1$ and $w_2$ numbers to the end of $v_2$ .", "If $\\ell =\\theta $ , then these elements are all leaves, and we are done.", "This is actually a bit trickier than it looks, since these two descriptions are isomorphic but not equal.", "Recall the definition of $C_\\alpha (T)$ : we define the vertices of this graph to be $(v,\\beta )$ where $v\\in V(T)$ and $1\\le \\beta \\le \\alpha $ .", "The edges in $ C_\\alpha (T)$ are defined as follows: $(v,\\beta )\\rightarrow (w,\\gamma ) \\iff (\\beta =\\gamma \\wedge v\\rightarrow w) \\vee (v=w=r \\wedge \\gamma = \\beta +1).$ However, in $C_\\alpha \\square T$ (actually $T\\square C_\\alpha $ but who's counting) we have $(v^{\\prime },\\beta ^{\\prime }) \\rightarrow (w^{\\prime },\\gamma ^{\\prime }) \\iff v^{\\prime } \\rightarrow w^{\\prime } \\wedge \\gamma ^{\\prime } = \\beta ^{\\prime } +1.$ Now let us define a map $\\psi \\colon C_\\alpha (T) \\rightarrow C_\\alpha \\square T$ as $\\psi (v, 
\\beta ) = (v, \\beta -\\mathcal {L}(v))$ , where $\\mathcal {L} (v)$ is the level of the vertex $v$ (in this case, the number of edges it takes to get from $v$ to $r$ ).", "Now, let us assume that $\\psi (v,\\beta )\\rightarrow \\psi (w,\\gamma )$ in $C_\\alpha \\square T$ .", "This is true iff $(v,\\beta -\\mathcal {L}(v))\\rightarrow (w,\\gamma -\\mathcal {L}(w))$ in $C_\\alpha \\square T$ , which is true iff $v\\rightarrow w$ in $T$ and $\\gamma -\\mathcal {L}(w) = \\beta -\\mathcal {L}(v) + 1$ .", "Now we claim that this is equivalent to $(v,\\beta )\\rightarrow (w,\\gamma )$ in $C_\\alpha (T)$ .", "Breaking up by cases: if $v\\ne w$ , then $\\mathcal {L}(w) = \\mathcal {L}(v) - 1$ , which would imply $\\gamma = \\beta $ , and if $v=r$ , then $w=r$ , and $\\mathcal {L}(v) = \\mathcal {L}(w) = 0$ , which would imply $\\gamma =\\beta +1$ .", "This follows from the previous result twice and associativity, since $C_\\alpha (T_1) \\square T_2 = (C_\\alpha \\square T_1)\\square T_2 = C_\\alpha \\square (T_1\\square T_2) = C_\\alpha (T_1\\square T_2).$ Let us first note that if we consider two bare cycles, we get a similar formula: Let $G_1 = C_\\alpha $ and $G_2 = C_\\beta $ .", "Note that we can think of these cycles as the map $x\\mapsto x+1$ on $\\mathbb {Z}/{\\alpha }\\mathbb {Z}$ and $\\mathbb {Z}/{\\beta }\\mathbb {Z}$ respectively, and here $G_1\\square G_2$ will be the graph of the map $(x_1,x_2) \\mapsto (x_1+1,x_2+1)$ on $\\mathbb {Z}/{\\alpha }\\mathbb {Z}\\times \\mathbb {Z}/{\\beta }\\mathbb {Z}$ .", "It is easy enough to see that the element $(1,1)$ has order $\\lambda $ , and so every point is on a periodic cycle of length $\\lambda $ .", "Since there are $\\alpha \\cdot \\beta $ total vertices, and $\\alpha \\cdot \\beta /\\lambda = \\gamma $ , there are exactly $\\gamma $ distinct periodic cycles.", "To get the full case: we have $C_\\alpha (T_1)\\square C_\\beta (T_2) = (C_\\alpha \\square T_1)\\square (C_\\beta \\square T_2) = (C_\\alpha \\square C_\\beta )\\square (T_1\\square T_2),$ and we have computed above that $C_\\alpha \\square C_\\beta $ is $\\gamma $ copies of $C_\\lambda $ .", "Lemma 3.1.4 We can parameterize $T_w^\\theta $ as follows: let $V_0 = \\lbrace 0\\rbrace $ , $V_1$ be the set $\\lbrace 1,\\dots , w-1\\rbrace $ and let $V_\\ell = \\lbrace (v_1,v_2,\\dots , v_\\ell ): 1 \\le v_1 \\le w-1, \\forall k > 1, 1 \\le v_k \\le w\\rbrace .$ (These correspond to the “layers” in the graph, so any vertex in $V_\\ell $ is “in layer $\\ell $ ”.)", "Let $V = \\cup _{\\ell =0}^\\theta V_\\ell $ .", "Now define edges as follows: for all $k\\in V_1$ , add an edge $k\\rightarrow 0$ , and for all $v\\in V_\\ell $ with $2\\le \\ell \\le \\theta $ , add the edge $(v_1,v_2,\\dots , v_{\\ell -1}, v_\\ell ) \\rightarrow (v_1,v_2,\\dots , v_{\\ell -1}),$ i.e. throw out the last component.", "Then this graph is isomorphic to $T_w^\\theta $ ." 
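The "two flower cycles" case is the one item in Proposition 3.1.3 where connectivity breaks, and it is easy to confirm numerically. The sketch below is ours (helper names are made up): a functional graph is stored as a successor map, the Kronecker product is the graph of the map (x, y) -> (f(x), g(y)), and for two bare cycles the product splits into gcd(alpha, beta) components, each a cycle of length lcm(alpha, beta).

```python
# Our own sketch of the "two flower cycles" computation via successor maps.
from math import gcd

def cycle(a):
    """Successor map of the bare cycle C_a, i.e., x -> x + 1 on Z/aZ."""
    return {x: (x + 1) % a for x in range(a)}

def product(f, g):
    """Kronecker product of two functional graphs: (x, y) -> (f(x), g(y))."""
    return {(x, y): (f[x], g[y]) for x in f for y in g}

def component_sizes(succ):
    """Weak components of a functional graph: orbits that merge share a component."""
    comp = {}
    for start in succ:
        path, seen, v = [], set(), start
        while v not in comp and v not in seen:
            path.append(v)
            seen.add(v)
            v = succ[v]
        label = comp[v] if v in comp else v   # join an old component or open a new one
        for u in path:
            comp[u] = label
    sizes = {}
    for label in comp.values():
        sizes[label] = sizes.get(label, 0) + 1
    return sorted(sizes.values())

alpha, beta = 6, 4
lam = alpha * beta // gcd(alpha, beta)
print(component_sizes(product(cycle(alpha), cycle(beta))))   # [12, 12]
print(gcd(alpha, beta), "copies of a cycle of length", lam)  # 2 copies of a cycle of length 12
```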
], [ "The same unit graph can show up in many places", "A natural question is how similar the graphs can be for different $n$ .", "However, note that if we have two primes $p,q$ with $p\\ne q$ , then $\\varphi (p)\\ne \\varphi (q)$ , so they do not have isomorphic sets of units.", "But it is possible that we can have two primes $p\\ne q$ with $\\varphi (p^k) = \\varphi (q)$ , and thus $\\mathcal {U}({p^k}) \\cong \\mathcal {U}({q})$ .", "For example, we have that $\\varphi (27) = \\varphi (19) = 18,$ so $\\mathcal {U}({27})\\cong \\mathcal {U}({19})\\cong \\mathcal {A}_{{18}}$ .", "We now work both cases out.", "Since $18 = 2\\cdot 9$ , we have $\\theta =1$ (so the flowers on the cycle are just one node sticking out) and the divisors are $1,3,9$ .", "Since $\\varphi (3) = \\mathsf {ord}_{3}(2) = 2$ , there is one orbit of period 2, and since $\\varphi (9) = \\mathsf {ord}_{9}(2) = 6$ , there is one orbit of period 6.", "This determines the graph of units completely.", "For $n=19$ , of course, we have that $\\mathcal {N}({19})$ is just 0 with a loop to itself, but $\\mathcal {N}({27})$ is more complicated.", "The set of points that map to zero modulo $27 = 3^3$ are the two points $9=3^2$ and $18=2\\cdot 3^2$ .", "If we recall the graph $\\mathcal {G}({3})$ , we see that 1 has a preimage of 2, and 2 has no preimage.", "Therefore 18 is a leaf in $\\mathcal {N}({27})$ , whereas 9 is a tree of the form ${\\mathrm {Tree}}_{3}^{({2})}({1})$ , which will be a regular tree of width 6 and depth 1 (there are three trees of type ${\\mathrm {Tree}}_{3}^{({1})}({1})$ and three of type ${\\mathrm {Tree}}_{3}^{({1})}({2})$ , which are themselves single nodes by definition, so are leaves in $\\mathcal {N}({27})$ ).", "See Figure REF .", "Figure: The graphs 𝒢(27)\\mathcal {G}({27}) and 𝒢(19)\\mathcal {G}({19}).", "Note that they are isomorphic on the units (the graphs are the same, but of course the values at the vertices are different), with only a different nilpotent tree." 
], [ "Example where we really go to town on the Kronecker products", "As we proved in Proposition REF , all of the components that show up in $\\mathcal {G}({n})$ are themselves Kronecker products of graphs that show up in the factors of other graphs.", "So, let's say, for example, we want to find an $n$ that contains a copy of the graph $T_2^1 \\square T_2^2 \\square T_2^3 \\square T_2^4.$ One way to obtain this is to find primes that contain such graphs, and then multiply these together.", "We know that a $\\mathcal {G}({p})$ for $p$ prime will contain a $T_2^\\theta $ iff $p-1$ is divisible by exactly $\\theta $ powers of 2.", "For $\\theta =1$ , we can pick $p_1=3$ , and for $\\theta =2$ we can pick $p_2=5$ .", "For $\\theta =3$ we cannot pick $2^3+1$ , since it's not prime, and we don't want to pick any even multiple of 8, as we'll get too many powers of 2.", "So we can pick $p_3=41$ , and finally we can pick $p_4= 17$ .", "Of course, $p_1$ , $p_2$ , and $p_4$ are relatively small since they are Fermat primes, but we need to stretch a bit for $p_3$ .", "Now we know that the basin of attraction of 1 in $\\mathcal {G}({p_i})$ is exactly $T_2^i$ , for $i=1,2,3,4$ .", "We have $p_1p_2p_3p_4 = 10455$ , and therefore the basin of attraction of 1 in $\\mathcal {G}({10455})$ is exactly $T_2^1 \\square T_2^2 \\square T_2^3 \\square T_2^4$ , shown in Figure REF .", "(In fact, to generate this picture we actually just directly computed the subgraph of $\\mathcal {G}({10455})$ containing 1 directly.)", "It's a pretty wild graph: it has a radius of 4 due to the $T_2^4$ term, but the smaller terms give it some interesting heterogeneity (e.g.", "each intermediate node has leaves hanging off of it, etc.).", "Figure: The graphs T 2 1 □T 2 2 □T 2 3 □T 2 4 T_2^1 \\square T_2^2 \\square T_2^3 \\square T_2^4, which appears as the basin of attraction of 1 in the graph 𝒢(10455)\\mathcal {G}({10455}).", "At this point, it is perhaps clear why we gave “flower cycles” their names: theIn fact, we can completely understand the graph of $\\mathcal {G}({10455})$ using the tools above.", "Note that since $p_1,p_2,p_4$ are all Fermat primes, their graphs consist of a tree containing $p_i-1$ nodes, plus a 0 with a loop, so is the union $T_2^i \\cup T_1^1$ .", "For $p_3=41$ , we note that $p-1=40 = 8*5$ , and $\\mathsf {ord}_{5}(2) =\\varphi (5)= 4$ , so $\\mathcal {G}({41})$ has a single loop of period 4 in the form of a $C_4(T_2^3)$ , plus a $T_2^3$ going to 1, plus the single loop at 0.", "There are 24 possible products in the expansion $(T_2^1 \\cup T_1^1)\\square (T_2^2 \\cup T_1^1)\\square (C_4(T_2^3) \\cup T_2^3 \\cup T_1^1) \\square (T_2^4 \\cup T_1^1)$ and notice that there is only one loop that can be chosen in any of these products, so each product remains connected and there are precisely 24 components.", "Moreover, exactly eight of these will have a loop of period 4, and sixteen will not, for example for loops we can get a $C_4(T_2^3)$ by choosing $T_1^1\\square T_1^1 \\square C_4(T_2^3) \\square T_1^1$ , but we can also get $C_4(T_2^1\\square T_2^3)$ by choosing $T_2^1\\square T_1^1 \\square C_4(T_2^3) \\square T_1^1$ , etc.", "In general, we can extrapolate directly from this example that for any flower cycle of the form $C_\\alpha (T)$ , where $T$ is a tree that can be formed by taking Kronecker products of any collection of $T_2^\\theta $ 's, then we can find a (square-free) $n$ that contains $C_\\alpha (T)$ ." 
], [ "Triggering divisors", "Definition 3.4.1 Given a period $k$ , we say that $d$ is a triggering divisor for $k$ if $\\mathsf {ord}_{d}(2) = k$ ; $\\mathsf {ord}_{e}(2)\\ne k$ for any $e$ that is a proper divisor of $d$ .", "Remark 3.4.2 It follows directly from the definition that $\\mathsf {ord}_{d}(2) = k$ iff $d$ is a multiple of a triggering divisor of $k$ .", "The name “triggering divisor” comes from the fact that the presence of such a divisor of $m$ will trigger a periodic orbit of period $k$ .", "(It might be that other divisors of $m$ also give periodic orbits of period $k$ , but this minimal one is the one that triggers the condition...) Corollary 3.4.3 The graph $\\mathcal {G}({p^k})$ has a flower orbit of type $C_k(T_2^\\theta )$ iff $m = \\varphi (p^k)$ is a multiple of $2^\\theta $ times a triggering divisor of $k$ .", "This follows directly from Theorem REF and the definition of triggering divisor.", "We present the triggering divisors for all periods up to 50 in Table REF , and we have also put the first $n$ for which a period appears.", "Note that the triggering divisors are not “unique” in some cases for certain periods — we can obtain multiple numbers, neither of which divides the other.", "Also note, interestingly, that these numbers might not even be relatively prime.", "In the table, when there are multiple triggering divisors we have bolded the one that gives rise to the first prime to have that period; for example, for period 18, the smallest prime that is $1\\pmod {19}$ is actually 191, whereas the smallest prime that is $1\\pmod {27}$ is 109.", "Also, one might ask whether a composite number gives rise to a particular periodic orbit before any prime does.", "In theory it is possible that one could observe a (composite) period first in a composite number.", "For example, to obtain a period-10 orbit, we can find a prime that gets it from a triggering divisor (in this case, 47) or we could find a number with a period-2 and a period-5 and multiply them.", "So, for example, $n=7*311 = 2177$ also has a period-10 orbit, but $2177 > 47$ .", "In practice we never found an example where a period first shows up in a composite number but there are a lot of integers that we didn't check.", "The author is agnostic on the claim as to whether a particular period length can ever show up first at a composite $n$ .", "There is a lot of heterogeneity in how large the triggering divisors are.", "One thing that follows directly from the definitions is that Proposition 3.4.4 If $p$ is a Mersenne prime ($p$ is prime and $2^p-1$ is also prime) then period-$p$ has the single triggering divisor $2^p-1$ .", "One can see this markedly at $2, 3, 5, 7, 13, 17, 19, 31$ in Table REF , whereas for primes that are not Mersenne primes (e.g.", "$p=11, 37, 39$ there is a “pretty small” triggering divisor.)", "The method to obtain the values in Table REF was as follows.", "For any period $r$ , we list all divisors of the number $2^r-1$ .", "For any $d$ that divides $2^r-1$ , we compute the multiplicative order of 2 modulo $d$ and retain those where $\\mathsf {ord}_{d}(2) = r$ .", "(Of course, since $d$ divides $2^r-1$ , then $\\mathsf {ord}_{d}(2)$ must divide $r$ , but it could be strictly smaller.)", "From this retained list of divisors, we remove those divisors that are multiples of others in the list.", "For example, let us consider period 6.", "We have $2^6-1 = 63$ , the divisors of which are $1,3,7,9,21,63$ .", "We have $\\mathsf {ord}_{1}(2)=1,\\mathsf {ord}_{3}(2)=2$ and $\\mathsf 
{ord}_{7}(2)=3$ and remove those.", "Of the remaining three, we can compute that they satisfy $\\mathsf {ord}_{d}(2) = 6$ .", "But then notice that 63 is a multiple of 9 (and also 21 for that matter) so we throw out 63, leaving 9 and 21.", "And also note that these divisors are independent in the sense that they give different sets of primes.", "For example, if we consider $p=19$ , then $19-1 = 18 = 2*9$ , which is not divisible by 21, and conversely $p=43$ has $p-1 = 2*21$ which is not divisible by 9.", "Note that $\\theta =1$ in each of these cases, so we see that $\\mathcal {G}({19})$ has a single period-6 orbit of type $C_6(T_2^1)$ (since $\\varphi (9) = 6$ ) but $\\mathcal {G}({43})$ has two disjoint such orbits (since $\\varphi (21) = 12$ ).", "See Figure REF .", "Table: List of triggering divisors for period up to 50Figure: The graphs 𝒢(19)\\mathcal {G}({19}) and 𝒢(43)\\mathcal {G}({43}).", "Note that they both have period-6 orbits, and all periodic orbits have a “spoke” sticking out from the T 2 1 T_2^1." ], [ "When cycles beget many more cycles", "Consider the two numbers considered in the previous section: the primes 19 and 43.", "These numbers were chosen as primes that give period-6 orbits, but “for different reasons”, because they derive from independent triggering divisors.", "Since they both have periodic orbits of the same period, if we take the Kronecker product, we will obtain many orbits of that period, from Proposition REF , since $\\alpha =\\beta = \\lambda = \\gamma $ — so that the product of two period-6 cycles is actually 6 distinct period-6 cycles.", "Again, we first consider $\\mathcal {G}({19})$ .", "Since $19-1 = 18 = 2*9$ , we need to consider the divisors of 9.", "We already saw that we have an orbit of period 6, and from the divisor 3 we get and orbit of period 2.", "Since $\\theta =1$ we have $T_2^1$ attached to the graphs (just giving a “spoke”).", "Therefore $\\mathcal {G}({19})$ has four components, namely $\\mathcal {G}({19})\\cong C_6(T_2^1) \\oplus C_2(T_2^1) \\oplus T_2^1 \\oplus T_1^1.$ For $\\mathcal {G}({43})$ , we have $43-1 = 42 = 2*21$ , and so we need to consider the divisors $1,3,7,21$ .", "As we computed above, the $d=21$ gives two period-6 orbits.", "Since $\\varphi (7) = 6$ and $\\mathsf {ord}_{7}(2) = 3$ , the $d=7$ term gives two period-3 orbits, and $d=3$ gives one period two orbit.", "Therefore $\\mathcal {G}({43})$ has 7 components: $\\mathcal {G}({43}) \\cong C_6(T_2^1) \\oplus C_6(T_2^1) \\oplus C_3(T_2^1)\\oplus C_3(T_2^1) \\oplus C_2(T_2^1) \\oplus T_2^1 \\oplus T_1^1.$ Now let us consider $n=817 = 19*43$ .", "We know that the graph $\\mathcal {G}({817})$ will contain at least 28 components, since one of the graphs has 4 and the other 7.", "But in fact, it will have many more components: each time we take the product of two period-6 orbits, we actually get six period-6 orbits, etc.", "For example, let us consider the $C_6(T_2^1)$ component of $\\mathcal {G}({19})$ , and multiply it against the seven components of $\\mathcal {G}({43})$ .", "We get $C_6(T_2^1) \\square C_6(T_2^1) &= C_6(T_4^1) \\quad \\mbox{times 6}; \\\\C_6(T_2^1) \\square C_3(T_2^1) &= C_6(T_4^1) \\quad \\mbox{times 3};\\\\C_6(T_2^1) \\square C_2(T_2^1) &= C_6(T_4^1) \\quad \\mbox{times 2};\\\\C_6(T_2^1) \\square T_2^1 &= C_6(T_4^1) \\quad \\mbox{times 1};\\\\C_6(T_2^1) \\square T_1^1 &= C_6(T_2^1).$ And also note that the first two components listed above each appear twice in $\\mathcal {G}({43})$ , so when we multiply $C_6(T_2^1)$ by the components of $\\mathcal 
{G}({43})$ we obtain $2*6+2*3+2+1 = 21$ copies of $C_6(T_4^1)$ and one copy of $C_6(T_2^1)$ .", "When we multiply the $C_2(T_2^1)$ component of $\\mathcal {G}({19})$ against the other seven, we obtain $C_2(T_2^1) \\square C_6(T_2^1) &= C_6(T_4^1) \\quad \\mbox{times 2}; \\\\C_2(T_2^1) \\square C_3(T_2^1) &= C_6(T_4^1) \\quad \\mbox{times 1};\\\\C_2(T_2^1) \\square C_2(T_2^1) &= C_2(T_4^1) \\quad \\mbox{times 2};\\\\C_2(T_2^1) \\square T_2^1 &= C_2(T_4^1) \\quad \\mbox{times 1};\\\\C_2(T_2^1) \\square T_1^1 &= C_2(T_2^1).$ This gives a total of $2*2+2*1 = 6$ copies of $C_6(T_4^1)$ , one copy of $C_2(T_4^1)$ , and one copy of $C_2(T_2^1)$ .", "The other two components are simpler, since they are trees.", "Multiplying $T_2^1$ against the components of $\\mathcal {G}({43})$ gives 2 copies of $C_6(T_4^1)$ , 2 copies of $C_3(T_4^1)$ , 1 copy of $C_2(T_4^1)$ , 1 copy of $T_4^1$ , and finally one copy of $T_2^1$ .", "Multiplying $T_1^1$ against these components just copies them.", "Putting this all together, we have Table: NO_CAPTIONWe can compare this against the graph given in Figure REF , which was obtained by direct computation.", "Note that $C_\\alpha (T_4^1)$ will be a cycle of period $\\alpha $ with three leaves sticking out of each node, and $C_\\alpha (T_2^1)$ will be a cycle of period $\\alpha $ with a single stem out of each node.", "Figure: The (unlabeled) graph 𝒢(817)\\mathcal {G}({817}), chosen because 817=19*43817=19*43 (q.v.", "Figure )" ], [ "When the nilpotents end up chillin' in the corner", "Consider $n=5^4=625$ .", "Here we want to study the graph $\\mathcal {G}({625})$ , and in particular, focus on $\\mathcal {N}({625})$ .", "Let us use the formula in Theorem REF .", "We first compute the graph ${\\mathrm {Tree}}_{5}^{({2})}({\\widetilde{y}})$ .", "Recalling the graph $\\mathcal {U}({5})$ , we see that 1 has two preimages: 1 and 4, and 4 has two preimages: 2 and 3.", "In this case the actual values of the preimages won't be important, since ${\\mathrm {Tree}}_{5}^{({1})}({\\widetilde{y}})$ is a leaf regardless of $\\widetilde{y}$ , due to the single power of 5.", "According to the theorem, ${\\mathrm {Tree}}_{5}^{({2})}({1})$ has five incoming copies of ${\\mathrm {Tree}}_{5}^{({1})}({1})$ and five incoming copies of ${\\mathrm {Tree}}_{5}^{({1})}({4})$ , but these are all leaves, so ${\\mathrm {Tree}}_{5}^{({2})}({1})$ is just a regular tree of width 10, or $T_{10}^1$ .", "We get the same result for ${\\mathrm {Tree}}_{5}^{({2})}({4})$ .", "And, of course, ${\\mathrm {Tree}}_{5}^{({2})}({2}) = {\\mathrm {Tree}}_{5}^{({2})}({3})$ is just a single node.", "We also have that ${\\mathrm {Tree}}_{5}^{({3})}({\\widetilde{y}})$ is a node simply because 3 is odd.", "Now, the in-neighborhood of 0 will be any numbers with 2 or 3 powers of 5 in them.", "There are four such preimages with power three (${\\mathrm {Tree}}_{5}^{({3})}({\\widetilde{y}})$ for $\\widetilde{y}=1,2,3,4$ , which are all nodes) and $\\varphi (25)=20$ such preimages of power two.", "As we say above, half of these are trees $T_{10}^1$ and the other half are nodes.", "Putting all of this together, the in-neighborhood of 0 is 10 copies of $T_{10}^1$ and 14 leaves.", "See Figure REF , where we have plotted $\\mathcal {N}({625})$ .", "Figure: The graph 𝒩(625)\\mathcal {N}({625}).Note the structure is as we say: the node 0 has fourteen leaves leading into it: the four that come from $5^3$ terms: 125, 250, 375, 500, and the ten that come from $5^2$ terms, where the prefactor is 2 or 3 modulo 5: $25*\\lbrace 
2,3,7,8,12,13,17,18,22,23\\rbrace $ .", "We also see that there are ten trees that go into 0, coming from $5^2$ terms, where the prefactor is 1 or 4 modulo 5: $25*\\lbrace 1,4,6,9,11,14,16,19,21,24\\rbrace $ .", "Of course, if we want to understand the full graph $\\mathcal {G}({625})$ we also need to consider the units.", "Note that $\\varphi (625) = 625-125 = 500 = 2^2*125$ .", "The divisors are $1,5,25,125$ , where we have $\\varphi (125) = \\mathsf {ord}_{125}(2) = 100$ , so one period-100 orbit, $\\varphi (25) = \\mathsf {ord}_{25}(2) = 20$ , so one period-20 orbit, $\\varphi (5) = \\mathsf {ord}_{5}(2)=4$ , so one period-4 orbit, and of course one fixed point.", "Since $\\theta =2$ these are all decorated with copies of $T_2^2$ , so in fact we have $\\mathcal {U}({625}) = C_{100}(T_2^2) \\oplus C_{20}(T_2^2) \\oplus C_{4}(T_2^2) \\oplus T_2^2,$ so $\\mathcal {G}({625})$ has five components, see Figure REF .", "(We can see that the tree $\\mathcal {N}({625})$ is just chilling down there in the corner.)", "Figure: The graph 𝒢(625)\\mathcal {G}({625})." ], [ "Well, *these* nilpotents ain't messing around", "Here we are going to describe the graph $\\mathcal {G}({n})$ when $n=177147=3^{11}$ .", "First we consider $\\mathcal {U}({3^{11}})$ .", "We have $\\varphi (3^{11}) = 2\\cdot 3^{10}$ , so we have $\\theta = 1$ and $\\mu = 3^{10}$ .", "The divisors $d$ of $3^{10}$ are, of course, $3^k$ for $k=0,1,\\dots , 10$ .", "In each of these cases, it turns out that $\\varphi (3^k) = \\mathsf {ord}_{3^k}(2) = 2\\cdot 3^{k-1}$ , so we have exactly 11 periodic orbits: a fixed point, one of period 2, one of period 6, etc.", "all the way up to one of period $2\\cdot 3^9 = 39366$ .", "Since $\\theta =1$ , all of these periodic orbits have a “spoke” coming out of each periodic point.", "Now for $\\mathcal {N}({3^{11}})$ .", "Let us first consider the neighborhood of 0: any $x$ of the form $x = \\widetilde{x}p^\\ell $ with $\\ell \\ge 6$ will map directly into 0.", "If $\\ell $ is odd, then these are leaves.", "If $\\ell $ is even but $\\widetilde{x}\\equiv 2\\bmod 3$ , then this is also a leaf.", "In the other cases, we will have some more complex trees.", "For each even $\\ell $ , if $\\widetilde{x}$ is $1\\pmod {3}$ we have a tree of type ${\\mathrm {Tree}}_{3}^{({\\ell })}({1})$ , giving a total of $1/2\\varphi (3^{11-k}) = 3^{10-k}$ of them, and if $\\widetilde{x}$ is $2\\pmod {3}$ it is a leaf, so we have $1/2\\varphi (3^{11-k}) =3^{10-k}$ leaves.", "If $\\ell $ is odd we have $phi(3^{11-k}) = 2\\cdot 3^{10-k}$ leaves.", "Therefore the total number of leaves is $1 + 6 + 9 + 54 + 81 = 151$ .", "From here we see that we should have attached to 0: Table: The structures directly adjacent to 0 in the graph 𝒩(3 11 )\\mathcal {N}({3^{11}})Moreover, we can see that since $10=2\\cdot 5$ , ${\\mathrm {Tree}}_{3}^{({10})}({1})$ is just a tree of depth one, since $y^2 = 3^{10}\\bmod {3^{11}}$ iff $y = \\widetilde{x}3^5$ with $\\gcd (\\widetilde{x},3)=1$ and $1\\le \\widetilde{x}\\le 3^6$ , and there are $\\varphi (3^6) = 486$ such $\\widetilde{x}$ 's, so ${\\mathrm {Tree}}_{3}^{({10})}({1}) \\cong T_{486}^{(1)}$ is just a regular tree of width 486 and depth 1.", "Similarly, ${\\mathrm {Tree}}_{3}^{({6})}({1})$ is a tree of depth 1 and width $\\varphi (3^3) = 2\\cdot 3^2 = 18$ .", "The more complicated case is ${\\mathrm {Tree}}_{3}^{({8})}({1})$ .", "Note that $8=2^3$ , so we can consider the unfurled graph of depth three $Z_{3}({1},{3})$ , see Figure REF .", "According to Lemma REF , we take this graph and 
expand it by the powers $3^4, 3^2, 3^1$ , so that the in-neighborhood of $x=3^8$ has $3^4$ leaves coming in, and $3^4$ trees of type ${\\mathrm {Tree}}_{3}^{({4})}({1})$ , which themselves have $3^2$ leaves coming in, and $3^2$ trees coming in, and those are regular trees of the form $T_6^1$ .", "(And of course, for any $\\widetilde{x}3^8$ with $\\widetilde{x}\\equiv 1\\bmod 3$ we get an isomorphic tree, so there are 9 of them.)", "See Figure REF for a visualization of ${\\mathrm {Tree}}_{3}^{({8})}({1})$ , which is also complicated and fun.", "(We would have liked to put a visualization of the full $\\mathcal {N}({3^{11}})$ here but at $59,049$ vertices and $177,147$ edges, it made our computer sad.)", "Figure: The subgraph of 𝒢(3 11 )\\mathcal {G}({3^{11}}) that goes to x=3 8 x=3^8, giving an example of Tree 3 (8) (1){\\mathrm {Tree}}_{3}^{({8})}({1}).", "Its details are given in the text, but note that there are nine copies of this bad boy attached to zero, not to mention all of the other things going in (see Table )." ], [ "Conclusions", "We have presented some basic theory and give a few nice examples.", "The examples were a lot of fun, and there's no doubt that one can find a whole host of other interesting examples.", "I leave it to the readers of this article to explore and find more." ] ]
2207.10512
[ [ "Incorporating Prior Knowledge into Reinforcement Learning for Soft\n Tissue Manipulation with Autonomous Grasping Point Selection" ], [ "Abstract Previous soft tissue manipulation studies assumed that the grasping point was known and the target deformation can be achieved.", "During the operation, the constraints are supposed to be constant, and there is no obstacles around the soft tissue.", "To go beyond these assumptions, a deep reinforcement learning framework with prior knowledge is proposed for soft tissue manipulation under unknown constraints, such as the force applied by fascia.", "The prior knowledge is represented through an intuitive manipulation strategy.", "As an action of the agent, a regulator factor is used to coordinate the intuitive approach and the deliberate network.", "A reward function is designed to balance the exploration and exploitation for large deformation.", "Successful simulation results verify that the proposed framework can manipulate the soft tissue while avoiding obstacles and adding new position constraints.", "Compared with the soft actor-critic (SAC) algorithm, the proposed framework can accelerate the training procedure and improve the generalization." ], [ "Introduction", "Robot-assisted minimally invasive surgery has promise for improving the flexibility and control accuracy of instruments.", "Over the past two decades, many surgeons have performed laparoscopic surgery assisted by robotic systems, such as radical laparoscopic prostatectomy assisted by the da Vinci surgical system.", "However, most surgeons use the teleoperation system to control the instruments.", "Now roboticists try to increase the levels of autonomy for surgical robots [1], for example, automatic suturing[2], [3] and cutting[4].", "One of the key issues of autonomous robotic surgery is automatic soft tissue manipulation, which should often be performed before suturing and cutting.", "Many previous studies successfully manipulated the cloth and string-like objects to the desired deformation.", "Only a few studies were carried out on soft tissue manipulation[5], [6], [7], [8].", "Compared with clothe and string-like deformable objects, the soft tissue is constrained by other connected organs and instruments in the in vivo environment.", "These connections may be changed because of the operation, such as pulling and cutting.", "Moreover, surrounding tissues or instruments should be avoided during the soft tissue manipulation period.", "Therefore, autonomous soft tissue manipulation with variable constraints is still a challenge.", "Preliminary studies explored the dynamics of deformable objects manipulated by instruments.", "The mass–spring model is often used to simulate the deformable object[9], which is inaccurate for large deformation of soft tissues.", "The finite-element model is another good approach to simulate soft tissue, but this model is sensitive to the constraints of soft tissue and external disturbances[10].", "Few research studies explored the active deformation control when the connection constraints of soft tissue are variable and unknown, which is a common situation in laparoscopic surgery.", "Moreover, soft tissues have infinite degrees of freedom (DOFs), which brings challenges to estimating their shape and physical parameters in the in vivo environment.", "Furthermore, the carefully designed controller based on the dynamics may become unstable because of uncertainties and inaccurate estimation parameters.", "Hence, achieving the performances of deformation 
control strategies based on the accurate model in actual robotic surgeries is difficult.", "By contrast, some researchers appear to investigate model-less approaches for soft tissue manipulation.", "For example, Navarro-Alarcon et al.", "[5], [11] tried to estimate the deformation Jacobian matrix of soft tissue in real time and designed an adaptive controller with visual feedback.", "The deformation Jacobian matrix is a linear approximation of the deformation model in a short time, which works well if the instruments move slowly.", "Hu et al.", "[12] proposed approximating the map between the instrument’s movement and the deformation using a deep neural network (DNN).", "An online learning approach is used to update the neural network.", "The learned DNN controller may fail because of limited online data when an external force is suddenly applied to the soft tissue.", "Recent success in reinforcement learning, such as solving Rubik's Cube with a robot hand [13], provides promise for soft tissue manipulation.", "Sahba et al.", "[14] employed Q learning to manipulate a soft tissue, where the agent has only 25 possible actions.", "Shin et al.", "[6] compared the reinforcement learning and imitation learning approaches for soft tissue manipulation.", "Both approaches can achieve the manipulation task, but imitation learning can reduce the amount of exploration.", "To accelerate the training procedure, the policy network is initialized through imitation learning in some deep reinforcement learning frameworks[15], [16].", "However, demonstrating all of the soft tissue manipulation scenarios is impossible because of the high-dimensional deformation space and variable contact constraints.", "The demonstration trajectories may be generated by different experts.", "The distribution between the demonstrations and test scenarios may be non-identical.", "To simplify the scenario, many existing algorithms assume that the manipulation point is appropriately selected before the deformation control and the soft tissue can be manipulated to the target deformation.", "However, the grasping point should be reselected to avoid over deformation during the training process, which is also a typical case in practical surgery.", "Some researchers explored the manipulation point adjustment issue in recent year.", "Sundaresan et al.", "[17] presented a grasping point selection method by defining a disentangling hierarchy over cable crossings for robotic untangling of cable.", "Huang et al.", "[18] proposed an active adjustment algorithm for soft tissue manipulation with non-fixed contact.", "The contact area can be adjusted by sliding the end effector, but the grasping point is fixed unless opening the grasper.", "Therefore, grasping point selection for soft tissue manipulation is still an unexplored topic.", "We propose a deep reinforcement learning framework with prior knowledge for soft tissue manipulation under unknown constraints.", "The agent is similar to the brain that has two modes of thinking: the intuitive and the deliberate mode.", "The intuitive mode is defined by a simple manipulation approach that the agent always pulls the soft tissue toward the target deformation.", "The soft actor-critic (SAC) algorithm[19], [20] is applied to tackle complex manipulation issues, such as avoiding obstacles.", "The agent coordinates the two modes by regulating a weight, which is set as action and updated by the manipulation policy based on the last state.", "The grasping point selection is treated as a contextual 
bandit problem[21].", "The agent learns the grasping point selection policy using deep Q learning[22].", "The final state-action value represents the success rate of each grasping point.", "The agent selects the optimal grasping point based on the success rate, given the desired deformation.", "We explore three types of deformation tasks in this paper.", "When the target deformation is described by a curve or a region, the reward is determined by the feature point farthest to the target.", "To overcome the long horizons, a piece-wise reward function is designed for large deformation.", "The experiment results verify that the proposed framework can select an appropriate grasping point and control the deformation successfully.", "The deliberate mode is activated when obstacles appear around the tissue or the external forces are applied to the tissue.", "Furthermore, the proposed framework can accelerate training procedure and improve the generalization.", "The main contributions of this study are summarized as follows: We propose a reinforcement learning framework for deformation control of soft tissue under unknown constraints.", "Previous studies suppose the constraints of soft tissue are constant and no obstacles.", "The proposed framework does not initialize the policy with demonstrations but incorporates an intuitive manipulation approach into the reinforcement learning framework.", "The agent activates the deliberate policy by an action.", "We present an autonomous grasping point selection algorithm using deep Q learning for soft tissue manipulation.", "Existing research assumes an appropriate grasping point has been selected before soft tissue manipulation.", "The proposed algorithm can determine the optimal grasping point given the target deformation based on the state-action value.", "This proposed pipeline promises an automatic training process for soft tissue manipulation.", "We explore three types of deformation control tasks and their reward functions.", "In[6], [14], they only investigate position-based deformation, and the reward functions are inapplicable to the curve-based and region-based deformation.", "A piece-wise reward function is further presented to guide the exploration and exploitation in large deformation tasks.", "Figure: Conceptual representation of soft tissue manipulation.", "(a)(a) Constraints of soft tissue manipulation; (b)(b) Three types of deformation of soft tissue." 
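To make the mode-coordination idea concrete, a minimal sketch is given below. This is our reconstruction rather than the authors' released code: it follows the blending rule a_t = alpha * W_a * pi_dm(s) + (1 - alpha) * pi_im(s) and the intuitive policy pi_im(s) = K_p(v^d - v^f) defined in the next section, and the gain K_P as well as the actor output layout are assumptions made for illustration.

```python
# Minimal sketch (our reconstruction, not the authors' code) of the two-mode
# coordination: the SAC actor outputs a grasper displacement and a regulator
# alpha in [0, 1]; the executed displacement blends the deliberate and intuitive modes.
import numpy as np

K_P = 0.1     # assumed gain of the intuitive "pull toward the target" controller

def pi_im(v_target, v_feedback):
    """Intuitive mode: move the grasper straight toward the target of the feature point."""
    return K_P * (v_target - v_feedback)

def blend_action(actor_output, v_target, v_feedback):
    """actor_output: 4-vector [dx, dy, dz, alpha] from the deliberate (SAC) policy."""
    delta_dm = actor_output[:3]                        # W_a = [I_3  0] picks the displacement
    alpha = float(np.clip(actor_output[3], 0.0, 1.0))
    return alpha * delta_dm + (1.0 - alpha) * pi_im(v_target, v_feedback)

# Example: a deliberate sidestep (e.g., around an obstacle) trusted with alpha = 0.8.
a_t = blend_action(np.array([0.0, 0.3, 0.0, 0.8]),
                   v_target=np.array([1.0, 0.0, 0.0]),
                   v_feedback=np.zeros(3))
print(a_t)    # [0.02, 0.24, 0.0]: 80% deliberate motion plus 20% intuitive pull
```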
], [ "Overview and Problem Statement", "Active soft tissue manipulation are mainly developed for assisting surgeons in this paper.", "The agent is similar to a physician assistant in robotic surgery.", "The deformation of soft tissue $X$ is described as the set of feature points $\\textbf {x}_i(t)$ on the surface, which provides the possibility to identify constraints.", "Although Fast Point Feature Histogram (FPFH)[23] can be used to encode the deformation, FPFH is not intuitive for the surgeon.", "The deformation of soft tissue is subject to fascias and instruments in the in vivo environment.", "Position constraints are defined as the part of the tissue that cannot move during manipulation, i.e., $\\textbf {x}_p^c(t)=\\textbf {x}_p^c(0)$ , where $\\textbf {x}_p^c\\in X$ , $p=1,\\dots ,P$ , and $P$ is the total number of constant position constraints.", "Two types of unknown constraints of soft tissue are considered [see Figure 1(a)]: new position constraints $\\textbf {x}_q^a(t)=\\textbf {x}_q^a(\\tau )$ and unknown force constraints $\\textbf {f}_l(t)$ , where $\\textbf {x}_q^a\\in X$ , $\\tau $ is a piece-wise function with respect to the time $t$ , $q=1,\\dots ,Q$ , $l=1,\\dots ,L$ , and $Q$ and $L$ are the total number of each constraint, respectively.", "For example, an additional grasping point, the forces applied by a visceral fascia, just name a few.", "The deformation control tasks mainly include pulling part of the tissue to the target region or the tip of a needle and shaping the contour line to the desired curve.", "Surgeons rarely give the global deformation of soft tissue in surgery.", "Hence, we present three definitions of local target deformations for soft tissue manipulation with visual feedback [ see Figure 1(b)].", "Position-based deformation: The target deformation is given by the position $\\textbf {v}^d$ of a feature point $\\textbf {v}^f \\in X$ .", "Position-based deformation control is a basic operation task in soft tissue manipulation.", "Curve-based deformation: The target deformation is described by a curve.", "In this paper, the curve is determined by discrete points $\\textbf {y}_j$ .", "Region-based deformation: A part of the tissue is manipulated to the target region for cutting or exposure tissue.", "The region is given by the center $\\textbf {o}_h$ of a circle and its diameter $\\textit {d}_c$ in this paper.", "Most robot systems for laparoscopic surgery only provide stereo vision, but the surgery can be performed by the surgeon successfully.", "It is also difficult to detect the force from the environment in robotic surgery.", "Hence, we explore the soft tissue manipulation through the position control of the grasper $\\textbf {v}^g(t)$ under unknown constraints.", "Moreover, the agent has to avoid obstacles around the tissue.", "Based on the above definitions, the soft tissue manipulation under unknown constraints can be formulated as follow: $\\begin{aligned}\\arg \\min \\limits _{\\textbf {v}^g(t)}&\\quad \\mu (\\textbf {x}_1,\\ldots ,\\textbf {x}_K,\\textbf {y}_1,\\ldots ,\\textbf {y}_M) \\\\s.t.&\\quad \\textbf {x}_p^{c}(t)=\\textbf {x}_p^{c}(0), p=1,\\dots ,P\\\\&\\quad \\textbf {x}_q^{a}(t)=\\textbf {x}_q^{a}(\\tau ), q=1,\\dots ,Q\\\\&\\quad \\textbf {x}_i(t)=\\textbf {x}_i^{\\prime }(t)+\\omega (\\textbf {f}_l(t)),l=1,\\dots ,L\\\\&\\quad \\textbf {v}^g(t)-\\textbf {o}_b^h \\notin B_h^O,h=1,\\dots ,H\\\\&\\quad \\textbf {v}^g(t) \\in B^s\\end{aligned}$ where $\\mu (\\cdot )$ denotes the measure function of the error between the current state 
and the target deformation, $\\textbf {y}_j$ represents the discrete points for defining the target deformation, $x_i^{\\prime }$ is the position of the $i$ th feature point without disturbances, $B^s$ is the position bound, respectively, $\\textbf {o}_b$ is the center of the $h$ th obstacle $O_h$ , $B_h^o$ denotes obstacle space, and $\\omega (\\textbf {f}_l(t))$ represents the displacement caused by the $l$ th unknown force constraint.", "Control and planning algorithms based on models is unreliable to solve the problem because of the unknown constraints and disturbances from the in vivo environment.", "Hence, we explore model-free reinforcement learning for solving the soft tissue manipulation problem.", "[t] Soft Tissue Manipulation with IM_SAC.", "Initialize parameters $ {\\theta }_1, {\\theta }_2 \\text{ and } \\varphi $ Initialize replay memory $\\mathnormal {D}$ to capacity $\\mathnormal {H}$ Select initial grasping point $\\mathnormal {G}$ For $\\text{episode} = 1,...,\\mathnormal {M}$ do      For $\\text{t} = 1,...,\\mathnormal {N}$ do         Input $s_t$ to the SAC actor and output ${\\alpha (t),\\pi _{dm}(s_t)}$         $\\mathnormal {a_t} \\leftarrow \\alpha (t) * \\textbf {W}_a \\pi _{dm}(s_t) + (1-\\alpha (t)) *\\pi _{im}(s_t)$         Execute action ${\\mathnormal {a}}_t \\text{ and observe reward } {\\mathnormal {r}}_t,\\text{ done } {\\mathnormal {d}}_t,$         and next state ${\\mathnormal {s}}_{t+1}$         Store transition $\\lbrace {\\mathnormal {s}}_t,{\\mathnormal {a}}_t,{\\mathnormal {r}}_t,{\\mathnormal {s}}_{t+1},{\\mathnormal {d}}_{t},{\\mathnormal {\\alpha }}(t)\\rbrace \\text{ in } \\mathnormal {D} $         Sample random minibatch from $\\mathnormal {D}$ to calculate         $\\mathnormal {Q}(\\mathnormal {s},\\mathnormal {a},\\alpha )$ to update ${\\theta }_1, {\\theta }_2 \\text{ and } \\varphi $      End For End For" ], [ "Reinforcement Learning with Manipulation Knowledge", "To deploy the reinforcement learning in soft tissue manipulation, we have to address two questions: what is the optimal initial grasping point given the target deformation, and how to manipulate the soft tissue under unknown constraints.", "The first question is depend on the second question in the reinforcement learning framework.", "If the agent cannot well manipulate the soft tissue, the evaluation of the initial grasping point is inaccurate.", "To tackle this problem, we propose a reinforcement learning framework with manipulation knowledge for soft tissue manipulation.", "The state $S(t)$ is defined as the set of feedback points $\\textbf {x}_i(t)$ , the target points $\\textbf {y}_j$ , the grasping point $\\textbf {v}^g(t)$ , obstacles $\\textbf {o}_b$ and the constant position constraint $\\textbf {x}_p^c$ .", "Inputting these features to the neural network assists the agent realize the unknown constraints during manipulation.", "The action $A_t\\in \\mathbb {R}^3$ is set as the movement of the grasper.", "Here, we suppose an appropriate initial grasping point is selected.", "Similar to the brain, we designed two modes of thinking for the agent: the intuitive and the deliberate mode.", "We try to find a simple control approach $\\pi _{im}(S_t)$ from actual surgery, for example, pull a feature point toward the target point in the position-based deformation control task, i.e., $\\pi _{im}(S_t)=K_p(\\textbf {v}^d-\\textbf {v}^f)$ .", "A model-free fuzzy controller is another good option.", "However, the simple control approach can only achieve a few deformation tasks.", "The 
deliberate mode should be activated in complex situations.", "SAC is employed to train the manipulation policy $\\pi _{dm}(S_t)$ in the deliberate mode.", "One of the issues is when to activate the deliberate mode, i.e., how to evaluate the reliability of each mode.", "We set $\\alpha (t)$ as an action of the actor.", "The output of $\\pi _{dm}(S_t)$ includes the movement of the grasper and the regulator factor $\\alpha (t)$ .", "To coordinate the two control modes, the movement of the grasper is expressed as $\\mathnormal {\\textbf {a}_t} = \\alpha (t) * \\textbf {W}_a \\pi _{dm}(S_t) + (1-\\alpha (t)) *\\pi _{im}(S_t)$ where $\\textbf {W}_a = [\\textbf {I}_3 \\ \\textbf {0}]$ , and $\\alpha (t)\\in [0, 1]$ .", "The reliability $\\alpha (t)$ is updated in real time according to the state.", "When $\\alpha $ is close to 0, the agent inclines to use $\\pi _{im}$ .", "When $\\alpha $ is close to 1, the agent relies on the deliberate mode $\\pi _{dm}$ .", "The proposed framework can also be used to coordinate the surgeon and the machine behavior in robotic surgery.", "Training details can be found in Algorithm 1." ], [ "Reward Design", "We explore the reward design problem for target deformations described by a single point or by multiple points.", "In a position-based deformation control task, the agent gets a penalty of $-\\rho \\parallel \\mathbf {d}(t) \\parallel $ at every time step, where $\\mathbf {d}(t)=\\textbf {v}^d-\\textbf {v}^f$ , so that the agent is encouraged to find the optimal trajectory.", "If the grasper goes out of the boundary, the agent gets a penalty $R_b$ .", "When the agent reaches the goal, it receives a reward of $R_a$ .", "Based on these settings, the reward function $R_t$ is expressed as follows: ${R}_t = a * {R}_a + (1-a) * ((1-b) * {R}^\\prime + b * {R}_b)$ where $a\\in \\lbrace 0,1\\rbrace $ , $b\\in \\lbrace 0,1\\rbrace $ , and $R^{\\prime }=-\\rho \\parallel \\mathbf {d}(t) \\parallel $ .", "If the target deformation is described by multiple points, such as curve-based deformation, we set the penalty signal $R^{\\prime }$ equal to $-\\rho \\max _j\\parallel \\mathbf {d}_j(t) \\parallel $ , where $\\mathbf {d}_j(t)$ is the error of the $j$ th target point, i.e., the penalty is determined by the feature point farthest from its target, so that no point is left behind.", "For large deformation tasks, the agent frequently fails to reach the goal.", "The replay buffer then stores much invalid data.", "The intuitive manipulation approach mentioned in the previous section can also supervise the exploration, but the agent may fail in complex scenarios.", "To overcome the long horizon, we try to guide the agent layer by layer.", "Suppose the deformation space is divided into $Z$ parts by concentric spheres.", "The center of these spheres is the target deformation.", "If the agent enters the $i$ th layer from the $(i-1)$ th layer, the penalty is decreased; if the agent enters the $(i-1)$ th layer from the $i$ th layer, the penalty is increased; otherwise, the penalty is $R^{\\prime }=-\\rho \\parallel \\mathbf {d}(t) \\parallel $ .", "Thus, the reward $R^{\\prime }$ is formulated as ${R}^\\prime = a^{\\prime } * {R}_i^r + (1-a^{\\prime }) * (1-b^{\\prime }) * (-\\rho \\parallel \\mathbf {d}(t) \\parallel ) + b^{\\prime } * {R}_i^p$ where $a^{\\prime }\\in \\lbrace 0,1\\rbrace $ , $b^{\\prime }\\in \\lbrace 0,1\\rbrace $ , ${R}_i^r > -\\rho \\parallel \\mathbf {d}(t) \\parallel $ , and ${R}_i^p < -\\rho \\parallel \\mathbf {d}(t) \\parallel $ .", "The agent will explore a layer and then try to enter a higher layer.", "[t] Initial Grasping Point Selection and Evaluation.", "Initialize replay memory $\\mathnormal {D}$ to capacity $\\mathnormal {H}$ Initialize action-value function $\\mathnormal {Q}$ with random weights 
$\\theta _1$ Initialize evaluation network $\\psi $ with random weights $\\theta _2$ For $\\text{t} = 1,...,\\mathnormal {T}$ do      With probability $\\varepsilon $ select a random action $\\mathnormal {a}_t$      otherwise select $\\mathnormal {a}_t=\\mathnormal {argmax}_a\\mathnormal {Q}(\\mathnormal {s}_t,\\mathnormal {a},\\theta _1)$      $s_{\\psi }(t) \\leftarrow [{\\mathbf {v}}^g,{\\mathbf {v}}^f,{\\mathbf {v}}^d]$      Calculate $\\psi (s_{\\psi }(t))$      Execute algorithm 1      If $z(t)>\\delta $ then         Set $d_t=0 \\text{ and observe } r_t$      Else         observe ${\\mathnormal {r}}_t \\text{ and } {\\mathnormal {d}}_t$      End If      Store transition $\\lbrace {\\mathnormal {s}}_t,{\\mathnormal {a}}_t,{\\mathnormal {r}}_t,{\\mathnormal {d}}_{t}, s_{\\psi }(t)\\rbrace \\text{ in } \\mathnormal {D} $      Sample random minibatch from $\\mathnormal {D}$      Set $\\mathnormal {y}_{1}=\\mathnormal {r}_t, \\mathnormal {y}_{2}=d_t$      Calculate $(y_{1}-\\mathnormal {Q}(\\mathnormal {s},\\mathnormal {a},\\theta _1))^2$ to update $\\theta _1$      Calculate $(y_2-\\psi ({\\mathbf {v}}^g,{\\mathbf {v}}^f,{\\mathbf {v}}^d))^2$ to update $\\theta _2$ End For" ], [ "Grasping Point Selection", "The initial grasping point selection is a contextual bandit problem because the grasping point cannot be changed during soft tissue manipulation.", "Each candidate grasping point is analogous to an arm of a slot machine.", "If the agent can manipulate the soft tissue to the target deformation, the reward is calculated based on the trajectory of the feedback points.", "On the other hand, the agent must determine when to abandon the manipulation task to avoid over-deformation.", "We define four indices to evaluate the trajectory: $\\begin{aligned}r_{d} = 1-\\sum {\\parallel \\mathbf {d}(t) \\parallel }/\\sum {\\parallel \\mathbf {d}(0) \\parallel }\\end{aligned}$ $\\begin{aligned}r_{g} = 1-\\beta _g(\\sum {\\parallel \\mathbf {g}(t) \\parallel }-\\sum {\\parallel \\mathbf {g}(0) \\parallel })\\end{aligned}$ $\\begin{aligned}r_{g0} = \\beta _{g0}\\parallel \\mathbf {g}(0) \\parallel \\end{aligned}$ $\\begin{aligned}r_t = 1-(n-N^{\\prime })/N\\end{aligned}$ where $\\mathbf {g}(t)=\\textbf {v}^f-\\textbf {v}^g$ , $n$ is the total number of steps, $N$ is the maximum number of steps, $N^{\\prime }$ is a step threshold, $n=N^{\\prime }$ if $n$ is less than $N^{\\prime }$ , and $\\beta _{g}$ and $\\beta _{g0}$ are positive constants.", "The index $r_d$ means that the deformation is expected to be close to the target deformation during the manipulation process.", "If most deformation errors are larger than the initial error, the quality of the initial grasping point is poor.", "The index $r_g$ evaluates the strain between the grasping point and the feedback points.", "If the index $r_g$ is small, the agent may injure the soft tissue although the feedback points reach the target.", "The index $r_{g0}$ ensures that the initial grasping point is not far away from the feedback points.", "The index $r_t$ evaluates the manipulation episode length.", "After finishing an episode, the agent receives a reward signal ${R}_{tr}$ : $\\begin{aligned}{R}_{tr} = \\textbf {1}_{E} \\rho _{tr} R_{e}^{in}+(1-\\textbf {1}_{E})\\rho _{tr}R_{e}^{out}\\end{aligned}$ where $\\rho _{tr}=\\lambda _t r_t+\\lambda _d r_d + \\lambda _g r_g - \\lambda _{g0}r_{g0}^2$ ; $\\lambda _{(\\cdot )}\\in [0,1]$ .", "If the grasping point is at the edge, $\\textbf {1}_{E} = 1$ , and the agent gets a reward of $\\rho _{tr}R_{e}^{in}$ .", "If the grasping point is out 
of the edge, the reward is $\\rho _{tr}R_{e}^{out}$ .", "If the agent fails to control the deformation but the grasping point is at the edge, the reward is set to $R_{se}$ .", "Otherwise, the reward is $R_{fe}$ .", "The total reward can be formulated as $\\begin{aligned}{R}_t^g = (1-\\textbf {1}_D)(\\textbf {1}_E R_{se} + (1 - \\textbf {1}_E) R_{fe}) +\\textbf {1}_D R_{tr}.\\end{aligned}$ During the training, a security constraint index $z_s$ is defined to abandon the manipulation task if the soft tissue is over-deformed: $z_s ={\\left\\lbrace \\begin{array}{ll}1,&{\\text{if}}\\ \\sum _{i=1}^{N} \\textbf {1}_{S^{\\prime }}^i > {\\delta }\\\\{0,}&{\\text{otherwise.}}\\end{array}\\right.}$ where $S^{\\prime }=\\lbrace s_t\\ | \\ \\beta _{sc}(\\vert {\\parallel \\mathbf {g}(t) \\parallel }-{\\parallel \\mathbf {g}(0) \\parallel }\\vert ) > 1 \\ || \\ {\\parallel \\mathbf {d}_{t-1} \\parallel }<{\\parallel \\mathbf {d}(t) \\parallel }\\rbrace $ , and $\\delta $ is a threshold.", "If $z_s=1$ , that is, if the number of unsafe states exceeds the threshold $\\delta $ , the agent will give up the deformation control task.", "Deep Q learning is used to train the grasping policy.", "The normalized final state-action value $Q_{sr}$ indicates the success rate of each grasping point.", "The agent selects the grasping point based on the success rate $\\Omega $ given the target deformation.", "Training details can be found in Algorithm 2.", "Figure: Conceptual representation of the autonomous soft tissue manipulation network.", "Table: Reward $R^{\\prime }$ designs in position-based deformation." ], [ "Experimental Setup", "Experiments were performed in the SOFA [24] simulation platform.", "The agent tries to control the deformation of the liver model, which is composed of FEM triangular patches.", "The liver has 181 triangular patch vertices, where 139 vertices are on the surface.", "The liver model is fixed by four constant position constraints, and there are 135 candidate grasping points on the surface.", "We define the shape of the obstacle as a cuboid outside the liver, and the target point is behind the obstacle.", "The sampling time is 0.02s.", "Our experiments use one Titan RTX GPU.", "In the soft tissue manipulation control tasks, we reduce 181 vertices to 30 using Principal Component Analysis.", "The state of the actor contains 30 feature points $\\textbf {x}_i$ , the target points $\\textbf {y}_j$ , the initial and real-time positions of the grasping point $\\textbf {v}^g(t)$ , the cuboid obstacle $\\textbf {o}_1$ , three constant position constraints $\\textbf {x}_p^c$ and one unknown position constraint.", "In position-based deformation control tasks, $\\textbf {y}_j=\\textbf {v}^d$ , $\\rho =5, R_a=100, R_b=-1000$ .", "The details of the reward $R^{\\prime }$ are shown in Table 1.", "Four target points are used to determine the target deformation in the curve-based deformation control task.", "In region-based deformation control tasks, the region was described by a sphere with a diameter of 1.", "Eleven feedback points should be manipulated into the sphere.", "The output action contains the movement of the grasper $\\Delta \\textbf {v}^g$ and the regulatory factor $\\alpha $ .", "To avoid over-deformation, we set the safety space $B^s$ to be a cube with side length 2 centered at the initial grasping point.", "The displacement of the grasper $\\Delta \\textbf {v}^g$ is limited to the range $[-0.3, 0.3]$ .", "The input of the critic includes the state of the soft tissue $S_t$ , the displacement $\\Delta \\textbf {v}^g$ , and the regulatory factor
$\\alpha $ .", "Figure: Three examples of successful soft tissue manipulation tasks.", "(1) position-based deformation, (2) curve-based deformation, and (3) region-based deformation.", "(a) is the initial state of the task, and (c) is the completed state.", "We verified the proposed grasping point selection algorithm in the position-based deformation control task.", "The state is the initial deformation of the liver model.", "The actions are the 135 candidate grasping points on the surface.", "The rewards $R_{se}$ and $R_{fe}$ are set to $-5000$ and $-5500$ , respectively.", "The rewards $R_e^{in}$ and $R_e^{out}$ are set to 15 and 10, respectively.", "We set $\\beta _g=0.5$ , $\\beta _{g0}=1/3$ , $N^{\\prime }=750$ , $N=800$ , $\\lambda _t=0.2$ , $\\lambda _d=0.4$ , $\\lambda _g=0.4$ , $\\lambda _{g0}=0.3$ and $\\delta =150$ .", "The output of the evaluation policy is the success rate of the grasping point.", "The grasping point is reselected when the rate $\\Omega $ is lower than 70% during the manipulation period.", "The proposed reinforcement learning framework for deformation control under unknown constraints is denoted IM-SAC.", "All three proposed networks are MLPs with 2 hidden layers of 256 units, as shown in Figure 2.", "We use the ReLU activation function and the Adam optimizer, and set the learning rate to $10^{-3}$ .", "We set the replay buffer size to $2\\times 10^6$ , the discount factor $\\gamma $ to 0.99, and the batch size to 64.", "Every $10^4$ cumulative operations, 60% of the time is used for training and 40% for testing.", "Network parameters are saved every $5\\times 10^4$ steps.", "Figure: Trajectories of feedback points.", "Figure: Deformation error $\\mathbf {d}(T)$ during the training.", "Figure: Performance comparison of different reward functions for position-based deformation.", "$\\mathbf {d}(T)$ using different reward functions during SAC-based training.", "Figure: Coordination between the intuitive and deliberate modes.", "(a) The trajectories of the policy trained with SAC and IM-SAC.", "(b) The reliability of each mode.", "Figure: A successful example of autonomously generating initial grasp points based on task 1.", "The first column shows three different scenes of the initial grasping state: (1-a) represents the static state of the liver model; (2-a) adds unknown dynamic interference based on (1-a); (3-a) adds unknown dynamic interference and constraint areas that cannot be grasped.", "(a)-(c) represent the process of the agent achieving autonomous grasping point selection and control tasks."
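To make the episode-level grasping reward concrete, the following Python sketch computes the four indexes $r_d$ , $r_g$ , $r_{g0}$ , $r_t$ and the total reward $R_t^g$ for a single episode, using the constants reported above. It is an illustration only: the trajectory containers, the success and edge flags, and the interpretation of the time sums in $r_d$ and $r_g$ are assumptions rather than the authors' implementation.

import numpy as np

# Constants reported in the experimental section; all names and shapes below are illustrative.
BETA_G, BETA_G0 = 0.5, 1.0 / 3.0
N_PRIME, N_MAX = 750, 800
LAM_T, LAM_D, LAM_G, LAM_G0 = 0.2, 0.4, 0.4, 0.3
R_E_IN, R_E_OUT, R_SE, R_FE = 15.0, 10.0, -5000.0, -5500.0

def grasp_reward(d_traj, g_traj, n_steps, at_edge, success):
    """d_traj[t]: stacked deformation error d(t); g_traj[t]: strain vector g(t) = v^f - v^g."""
    d_norm = np.array([np.linalg.norm(d) for d in d_traj])  # ||d(t)|| at each step
    g_norm = np.array([np.linalg.norm(g) for g in g_traj])  # ||g(t)|| at each step
    r_d = 1.0 - d_norm.sum() / (len(d_norm) * d_norm[0])    # 1 - sum||d(t)|| / sum||d(0)||
    r_g = 1.0 - BETA_G * (g_norm.sum() - len(g_norm) * g_norm[0])
    r_g0 = BETA_G0 * g_norm[0]
    n = max(n_steps, N_PRIME)                               # n := N' whenever n < N'
    r_t = 1.0 - (n - N_PRIME) / N_MAX
    rho = LAM_T * r_t + LAM_D * r_d + LAM_G * r_g - LAM_G0 * r_g0 ** 2
    if not success:                                         # deformation control failed (1_D = 0)
        return R_SE if at_edge else R_FE
    return rho * (R_E_IN if at_edge else R_E_OUT)           # R_tr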
], [ "Deformation Control Results", "We generated 100 random target deformations and set three levels of acceptable error: 0.2, 0.4, and 0.6.", "We define generalization as the success rate of completing the task among the 100 random target points.", "As shown in Figure 3, the proposed reinforcement learning framework can achieve all of the soft tissue manipulation tasks.", "The trajectories of the feedback points are shown in Figure 4, where the agent was trained with SAC and the proposed IM-SAC, respectively.", "We can see that both methods can accomplish the tasks; however, the proposed IM-SAC produces better manipulation trajectories.", "As shown in Table 2, after $5\\times 10^4$ episodes, we find that the agent trained with IM-SAC can achieve the control tasks.", "The agent learned to rely on the intuitive manipulation approach.", "However, the success rate of SAC is still low after $1.65\\times 10^6$ episodes.", "The experimental results show that SAC should be deployed for complex tasks.", "An intuitive manipulation approach is more suitable for simple control tasks.", "Table: The training speed and generalization (success rate of 100 manipulations) of our method and SAC are compared on task 1.", "To verify the deliberate mode, obstacles and external forces were applied to the soft tissue during the manipulation period.", "We added new position constraint points and pulled part of the liver model to mimic unknown external forces.", "As shown in Figure 5, the success rate of IM-SAC is higher than that of SAC after $4\\times 10^5$ episodes when an obstacle is placed around the tissue.", "The 66% generalization of IM-SAC in Table 2 was obtained by testing the model that had just completed the task for the first time; the generalization can reach 100% as the number of training episodes increases.", "As shown in Figure 7(a), when external forces are applied to the tissue, the agent trained with SAC cannot learn the manipulation behaviour even after three million episodes.", "The agent trained with IM-SAC can achieve the deformation task after $1.7\\times 10^6$ episodes.", "The deliberate mode was activated when the agent encountered the obstacles, as shown in Figure 7(b).", "After avoiding the obstacle, the agent tends to rely on the intuitive mode.", "Moreover, the success rate will decrease without the proposed piecewise reward function, as shown in Figure 6 and Table 2.", "The proposed piecewise reward function can guide the agent to explore efficiently and overcome the long-horizon problem.", "Figure: Evaluation of the grasping points.", "(a) The final reward of each grasping point.", "(b) Success rate of grasping points.", "Figure: (a) Several grasping points on the liver model.", "(b) Success rate of these grasping points."
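The generalization numbers and the 70% reselection rule described above can be summarised in a short sketch; the rollout outputs and the name of the evaluation-network prediction are assumptions, not part of the original implementation.

import numpy as np

ERROR_LEVELS = (0.2, 0.4, 0.6)   # acceptable ranges of the final deformation error
RESELECT_THRESHOLD = 0.70        # reselect when the predicted success rate drops below 70%

def generalization(final_errors, levels=ERROR_LEVELS):
    """final_errors: final error ||d(T)|| for each of the 100 random target deformations."""
    e = np.asarray(final_errors, dtype=float)
    return {lvl: float((e <= lvl).mean()) for lvl in levels}

def select_grasping_point(omega, current=None):
    """omega: predicted success rate of each of the 135 candidate grasping points.
    Keep the current point unless its predicted rate falls below the threshold."""
    omega = np.asarray(omega, dtype=float)
    if current is not None and omega[current] >= RESELECT_THRESHOLD:
        return current
    return int(np.argmax(omega))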
], [ "Grasping Point Selection using Deep Q Learning", "The success rate of the proposed reinforcement learning framework is high enough that it can be used to evaluate the quality of the initial grasping points.", "As shown in Figure 8-S1, the agent selected an appropriate grasping point given the target deformation.", "Moreover, when an external force is applied to the tissue, the agent can still select an appropriate grasping point [see Figure 8-S2].", "In addition, we defined a region that cannot be used as the initial grasping point, analogous to a cancerous region.", "As shown in Figure 8-S3, the agent could find an appropriate grasping point outside the forbidden region.", "Figure 9(a) shows the reward value of all grasping points on the liver model.", "Figure 9(b) shows the predicted success rate of these grasping points.", "Figure 10(a) shows some grasping points on the liver, including those that can and cannot achieve the task.", "Figure 10(b) shows the predicted success rate of these grasping points.", "The experimental results verify that the agent can evaluate the initial grasping points for soft tissue manipulation and select the optimal grasping point given the target deformation." ], [ "CONCLUSION AND FUTURE WORK", "Active deformation control of soft tissue manipulation is fundamental for autonomous robotic surgery.", "We propose a reinforcement learning framework incorporating intuitive manipulation knowledge to tackle soft tissue manipulation with variable constraints and autonomous grasping point selection before deformation control.", "Three types of soft tissue manipulation tasks are performed on the SOFA simulation platform.", "Experimental results show that the agent can actively select the optimal grasping point given the desired deformation and successfully control the soft tissue deformation when obstacles appear and external forces are applied.", "The intuitive manipulation knowledge and the designed reward function guide the agent to explore the configuration space efficiently.", "The deliberate manipulation policy is activated in complex scenarios.", "The proposed pipeline is applicable to simulation-to-real deployment, which will be verified in future work." ] ]
2207.10438
[ [ "Model geometries of finitely generated groups" ], [ "Abstract We study model geometries of finitely generated groups.", "If a finitely generated group does not contain a non-trivial finite rank free abelian commensurated subgroup, we show any model geometry is dominated by either a symmetric space of non-compact type, an infinite locally finite vertex-transitive graph, or a product of such spaces.", "We also prove that a finitely generated group possesses a model geometry not dominated by a locally finite graph if and only if it contains either a commensurated finite rank free abelian subgroup, or a uniformly commensurated subgroup that is a uniform lattice in a semisimple Lie group.", "This characterises finitely generated groups that embed as uniform lattices in locally compact groups that are not compact-by-(totally disconnected).", "We show the only such groups of cohomological dimension two are surface groups and generalised Baumslag–Solitar groups, and we obtain an analogous characterisation for groups of cohomological dimension three." ], [ "Introduction", "Geometric group theory is the study of groups via their isometric actions on metric spaces.", "If $\\Gamma $ is a finitely generated group and $X$ is a proper quasi-geodesic metric space on which $\\Gamma $ acts geometrically, we say that $X$ is a model geometry of $\\Gamma $ .", "The Milnor–Schwarz lemma, sometimes called the fundamental lemma of geometric group theory, says that all model geometries of $\\Gamma $ are quasi-isometric to one another.", "Every finitely generated group has a model geometry that is a locally finite vertex-transitive graph, namely its Cayley graph with respect to any finite generating set.", "In this article we investigate when locally finite vertex-transitive graphs are essentially the only model geometries of a fixed finitely generated group.", "To phrase this question precisely, we introduce the notion of domination of metric spaces.", "A key idea in geometry, going back to at least Klein's Erlangen program, is that the isometry group $\\operatorname{Isom}(X)$ of a metric space $X$ better captures the salient geometric features of $X$ than the actual metric.", "This viewpoint was crucial in Thurston's definition of his eight model geometries, in which infinitely many non-isometric Riemannian manifolds may correspond to the same model geometry; see the discussion in [45].", "For example, rescaling the metric on $\\mathbb {H}^3$ by any non-zero constant does not change its isometry group and more generally, does not alter the synthetic geometry of hyperbolic 3-space.", "This idea is particularly natural when studying isometric group actions on a metric space $X$ , which are simply representations of a group into $\\operatorname{Isom}(X)$ .", "If $X$ is a proper metric space, then its isometry group $\\operatorname{Isom}(X)$ , endowed with the compact-open topology, is a locally compact topological group.", "If $X$ is a model geometry of $\\Gamma $ , then modulo a finite normal subgroup, $\\Gamma $ is a uniform lattice in $\\operatorname{Isom}(X)$ .", "A homomorphism $\\phi :G\\rightarrow H$ between locally compact groups is said to be copci if it is continuous and proper with cocompact image.", "Named by Cornulier [11], a copci homomorphism is an isomorphism up to compact error, and is a generalisation of the class of homomorphisms between discrete groups with finite kernel and finite index image.", "Definition Given proper quasi-geodesic metric spaces $X$ and $Y$ , we say that $X$ is dominated by $Y$ if
there is a copci homomorphism $\\operatorname{Isom}(X)\\rightarrow \\operatorname{Isom}(Y)$ .", "If $X$ is dominated by $Y$ and $X$ is a model geometry of $\\Gamma $ , then $Y$ is also a model geometry of $\\Gamma $ .", "In particular, $X$ and $Y$ are quasi-isometric.", "More specifically, $\\operatorname{Isom}(X)$ acts isometrically on $Y$ and there is a quasi-isometry $f:X\\rightarrow Y$ that is coarsely $\\operatorname{Isom}(X)$ -equivariant.", "Before stating our results, we give an illustrative concrete example of a metric space dominating another metric space.", "Example A Let $Z$ be the torus with two antennae pictured in Figure REF .", "Formally, $Z$ is the length space obtained as the wedge of a flat torus with two unit intervals, attached along their endpoints and equipped with the induced path metric.", "The universal cover $\\widetilde{Z}$ is a model geometry of $\\pi _1(Z)\\cong \\mathbb {Z}^2$ consisting of a copy of the Euclidean plane $\\mathbb {E}^2$ with a pair of unit length antennae attached to every point of a lattice $A\\subseteq \\mathbb {E}^2$ .", "Figure: The universal cover of a flat torus with two antennae is dominated by the Euclidean plane.Each isometry of $\\widetilde{Z}$ stabilises the subspace $\\mathbb {E}^2$ , so there is an induced map $\\phi :\\operatorname{Isom}(\\widetilde{Z})\\rightarrow \\operatorname{Isom}(\\mathbb {E}^2)$ that is copci; thus $\\widetilde{Z}$ is dominated by $\\mathbb {E}^2$ .", "Non-trivial isometries of $\\widetilde{Z}$ fixing $\\mathbb {E}^2$ pointwise lie in the compact normal subgroup $\\ker (\\phi )$ , and should be thought of as unwanted noise in $\\operatorname{Isom}(\\widetilde{Z})$ .", "Passing from $\\widetilde{Z}$ to $\\mathbb {E}^2$ has the effect of forgetting this noise, whilst simultaneously gaining many isometries of $\\mathbb {E}^2$ not stabilising the lattice $A$ .", "The study of model geometries and lattice embeddings of discrete groups has a long history going back to work of Furstenberg and Mostow.", "Mosher–Sageev–Whyte studied model geometries of virtually free groups [37], [36].", "Furman and Bader–Furman–Sauer classified lattice envelopes for a large class of groups, including lattices in Lie groups and $S$ -arithmetic lattices [17], [1].", "Dymarz studied model geometries of certain solvable groups [13].", "We say a graph is singular if not all vertices have valence two.", "For ease of notation, we use the term locally finite graph as shorthand for a connected singular locally finite graph, equipped with the induced path metric in which each edge has length one.", "The aim of this article is to give a complete solution to the following problem.", "Problem B Characterise the class of finitely generated groups all of whose model geometries are dominated by locally finite vertex-transitive graphs.", "A proper quasi-geodesic metric space with cocompact isometry group is dominated by a locally finite graph if and only if it is dominated by a vertex-transitive locally finite graph.", "A model geometry $X$ is dominated by a locally finite graph if and only if the identity component of $\\operatorname{Isom}(X)$ is compact.", "Problem REF is thus equivalent to characterising the class of finitely generated groups that are, up to a finite normal subgroup, uniform lattices in some locally compact group that is not compact-by-(totally disconnected).", "We first give some simple yet important examples of model geometries.", "Example C Any finitely generated group has a model geometry that is a locally finite 
vertex-transitive graph, namely its Cayley graph with respect to a finite generating set.", "Let $S$ be a closed hyperbolic surface.", "The hyperbolic plane is a model geometry of $\\pi _1(S)$ , and is an example of a symmetric space of non-compact type.", "Symmetric spaces are one of the most fundamental objects in geometry, and were classified by Cartan.", "We refer the reader to Helgason's book for more details [21].", "Example D Let $X$ be a symmetric space of non-compact type.", "Borel showed $\\operatorname{Isom}(X)$ contains a uniform lattice [2], hence $X$ is a model geometry of some finitely generated group.", "As $\\operatorname{Isom}(X)$ is virtually connected, $X$ is not dominated by a locally finite graph.", "Combining Examples REF and REF , there exist model geometries that are products of symmetric spaces and locally finite vertex-transitive graphs: Example E Let $\\Gamma _1$ be a uniform lattice in the isometry group of a symmetric space $X$ of non-compact type, and let $\\Gamma _2$ be an arbitrary finitely generated group.", "Suppose $Y$ is a Cayley graph of $\\Gamma _2$ with respect to a finite generating set.", "Then $X\\times Y$ is a model geometry of $\\Gamma _1\\times \\Gamma _2$ .", "Since the identity component of $\\operatorname{Isom}(X\\times Y)$ is non-compact, $X\\times Y$ is not dominated by a locally finite graph.", "As we will shortly see in Theorem REF , for a large class of finitely generated groups all model geometries are dominated by one of the model geometries described in Examples REF –REF .", "Before stating this result, we give an example of a more exotic model geometry not dominated by the model geometries described in Examples REF –REF .", "Two subgroups $\\Lambda $ and $\\Lambda ^{\\prime }$ of $\\Gamma $ are commensurable if the intersection $\\Lambda \\cap \\Lambda ^{\\prime }$ has finite index in both $\\Lambda $ and $\\Lambda ^{\\prime }$ .", "We say that $\\Lambda $ is commensurated or almost normal in $\\Gamma $ , denoted $\\Lambda \\text{\\;Q\\; }\\Gamma $ , if every conjugate of $\\Lambda $ is commensurable to $\\Lambda $ .", "Example F Let $\\Gamma $ be the Baumslag–Solitar group $BS(1,2)$ with presentation $\\langle a,t\\mid tat^{-1}=a^2\\rangle $ .", "The infinite cyclic subgroup $\\langle a \\rangle $ is commensurated.", "The group $\\Gamma $ has a piecewise Riemannian model geometry $X$ that is a warped product of the regular 3-valent tree $T$ with $\\mathbb {R}$ ; see for example Farb–Mosher [14].", "Since $X$ admits a 1-parameter subgroup of translations along the $\\mathbb {R}$ -direction, the identity component of $\\operatorname{Isom}(X)$ is non-compact and therefore $X$ is not dominated by a locally finite graph.", "Our first result says that if we exclude groups containing commensurated finite rank free abelian subgroups such as the Baumslag–Solitar group in Example REF , then all model geometries are dominated by the model geometries described in Examples REF –REF .", "Theorem A Let $\\Gamma $ be an infinite finitely generated group that does not contain a non-trivial finite rank free abelian commensurated subgroup.", "Any model geometry of $\\Gamma $ is dominated by a space of the form $X$ , $Y$ or $X\\times Y$ , where: $X$ is a symmetric space of non-compact type; $Y$ is an infinite locally finite vertex-transitive graph.", "We also have an analogue of Theorem REF for lattices.", "The following theorem is strictly stronger than Theorem REF as it applies to all lattices, including non-uniform ones.", "Theorem B Let $\\Gamma $ 
be an infinite finitely generated group that does not contain a non-trivial finite rank free abelian commensurated subgroup.", "Let $G$ be a locally compact group containing $\\Gamma $ as a lattice.", "Then there is a continuous proper map $\\phi :G\\rightarrow S\\times D$ , with compact kernel and finite index open image, where $S$ is a centre-free semisimple Lie group with finitely many components and no compact factors, and $D$ is a compactly generated totally disconnected locally compact group.", "A subset $\\Omega \\subseteq \\operatorname{QI}(X)$ of the quasi-isometry group of a space $X$ is said to be uniform if there exist constants $K$ and $A$ such that every element of $\\Omega $ can be represented by a $(K,A)$ -quasi-isometry.", "Definition A finitely generated commensurated subgroup $\\Lambda \\text{\\;Q\\; }\\Gamma $ is uniformly commensurated if the image of the natural map $\\Gamma \\rightarrow \\operatorname{Comm}(\\Lambda )\\rightarrow \\operatorname{QI}(\\Lambda )$ induced by conjugation is a uniform subgroup of $\\operatorname{QI}(\\Lambda )$ .", "Elaborating on this definition, if $\\Lambda \\text{\\;Q\\; }\\Gamma $ is a finitely generated commensurated subgroup, then for each $g\\in \\Gamma $ there is always some quasi-isometry $f_g:\\Lambda \\rightarrow \\Lambda $ that agrees with conjugation by $g$ on some finite index subgroup of $\\Lambda $ .", "The preceding definition says $\\Lambda $ is uniformly commensurated precisely when all the $f_g$ can be chosen with quasi-isometry constants independent of $g$ .", "We now give an example of a normal hence commensurated subgroup that is not uniformly commensurated.", "Example G Let $S$ be a closed hyperbolic surface and let $H\\le \\operatorname{MCG}(S)$ be infinite.", "Let $\\Gamma _H$ be the surface group extension fitting into the short exact sequence $1\\rightarrow \\pi _1(S)\\rightarrow \\Gamma _H\\rightarrow H\\rightarrow 1,$ where the action of $H$ on $\\pi _1(S)$ is given by $H\\le \\operatorname{MCG}(S)\\cong \\operatorname{Out}(\\pi _1(S))$ .", "Elements of $H$ induce automorphisms of $\\pi _1(S)$ with arbitrarily large quasi-isometry constants.", "This follows from Thurston's quasi-isometry metric on Teichmüller space [45] or from Corollary REF .", "Therefore, the normal subgroup $\\pi _1(S)\\vartriangleleft \\Gamma _H$ is not uniformly commensurated.", "Two groups $\\Gamma _1$ and $\\Gamma _2$ are virtually isomorphic if for $i=1,2$ there exist finite index subgroups $\\Gamma ^{\\prime }_i\\le \\Gamma _i$ and finite normal subgroups $F_i\\vartriangleleft \\Gamma ^{\\prime }_i$ such that $\\Gamma ^{\\prime }_1/F_1$ and $\\Gamma ^{\\prime }_2/F_2$ are isomorphic.", "The following is the main result of this article and provides a complete solution to Problem REF .", "Theorem C Let $\\Gamma $ be a finitely generated group.", "The following are equivalent: $\\Gamma $ has a model geometry that is not dominated by a locally finite graph.", "$\\Gamma $ contains a finite normal subgroup $F$ such that $\\Gamma /F$ is a uniform lattice in a locally compact group whose identity component is non-compact.", "$\\Gamma $ contains an infinite commensurated subgroup $\\Lambda $ such that one of the following holds: $\\Lambda $ is a finite rank free abelian group; $\\Lambda $ is uniformly commensurated and is virtually isomorphic to a uniform lattice in a connected centre-free semisimple Lie group without compact factors.", "The equivalence of the first two conditions of Theorem REF is not difficult; see Section for details.", "The
content of the theorem is therefore that the first two conditions are equivalent to the third.", "We also remark that the uniformly commensurated hypothesis in (REF ) holds automatically if $\\Lambda $ is virtually isomorphic to a lattice satisfying Mostow rigidity; see Proposition REF .", "To understand the geometric and topological structure of 3-manifolds, Thurston defined his eight model geometries, which are $\\mathbb {H}^3$ , $\\mathbb {E}^3$ , $\\operatorname{Sol}$ , $\\mathbb {H}^2\\times \\mathbb {R}$ , $\\widetilde{\\operatorname{SL}(2,\\mathbb {R})}$ , $\\operatorname{Nil}$ , $S^3$ and $S^2\\times \\mathbb {E}^1$ .", "A closed geometric 3-manifold is a 3-manifold obtained by taking the quotient of one of the eight model geometries by a group of isometries acting freely and cocompactly.", "The Geometrisation Theorem, conjectured by Thurston in 1982 and proved by Perelman in 2002, states that every closed 3-manifold is either geometric or can be decomposed into geometric pieces [44], [40].", "If $M$ is a closed geometric 3-manifold with infinite fundamental group, then its fundamental group has a model geometry not dominated by a locally finite graph.", "In Theorems REF –REF we apply Theorem REF to give structural descriptions of low dimensional groups that have model geometries not dominated by locally finite graphs.", "These results are an analogue of Thurston's classification of his eight model geometries from the setting of 3-manifold groups to finitely generated groups of low cohomological dimension.", "The following result is a straightforward consequence of theorems of Stallings and Mosher–Sageev–Whyte [42], [37], which we state for completeness and comparison with Theorems REF and REF .", "Theorem D Let $\\Gamma $ be a finitely generated group of cohomological dimension one.", "Then $\\Gamma $ has a model geometry that is not dominated by a locally finite vertex-transitive graph if and only if $\\Gamma $ is infinite cyclic.", "A surface group is the fundamental group of a closed surface of non-positive Euler characteristic.", "A generalised Baumslag–Solitar group of rank $n$ is the fundamental group of a finite graph of groups in which every vertex and edge group is virtually $\\mathbb {Z}^n$ , and the associated Bass–Serre tree is infinite-ended.", "It was already observed in Examples REF and REF that surface groups and generalised Baumslag–Solitar groups of rank one admit model geometries that are not dominated by a locally finite graph.", "The next theorem shows these are the only such examples amongst groups of cohomological dimension two.", "Theorem E Let $\\Gamma $ be a finitely generated group of cohomological dimension two.", "Then $\\Gamma $ has a model geometry that is not dominated by a locally finite graph if and only if either: $\\Gamma $ is a surface group, hence acts geometrically on $\\mathbb {E}^2$ or $\\mathbb {H}^2$ .", "$\\Gamma $ is a generalised Baumslag–Solitar group of rank one.", "Baumslag–Solitar groups are one of the jewels in the crown of combinatorial and geometric group theory, and have received much attention over the years [6], [8], [28], [14], [15], [49], [37], [30].", "Theorem REF further highlights their centrality and importance.", "We obtain an analogous classification for groups of cohomological dimension three.", "Theorem F Let $\\Gamma $ be a finitely generated group of cohomological dimension three.", "Then $\\Gamma $ has a model geometry that is not dominated by a locally finite graph if and only if either: $\\Gamma $ acts
geometrically on $\\mathbb {H}^3$ , $\\mathbb {E}^3$ or $\\operatorname{Sol}$ .", "$\\Gamma $ acts geometrically on $\\mathbb {H}^2\\times T$ for some locally finite infinite-ended tree $T$ .", "$\\Gamma $ is a generalised Baumslag–Solitar group of rank two.", "$\\Gamma $ contains an infinite cyclic commensurated subgroup.", "We comment on the four alternatives in Theorem REF .", "The spaces $\\mathbb {H}^3$ , $\\mathbb {E}^3$ and $\\operatorname{Sol}$ are three of Thurston's eight model geometries.", "If $\\Gamma $ acts geometrically on any of the three remaining aspherical Thurston model geometries, namely $\\mathbb {H}^2\\times \\mathbb {R}$ , $\\widetilde{\\operatorname{SL}(2,\\mathbb {R})}$ or $\\operatorname{Nil}$ , then $\\Gamma $ contains an infinite cyclic normal subgroup, so is encompassed by alternative (REF ) of Theorem REF .", "Fundamental groups of closed 3-manifolds modelled on the spherical-type model geometries $S^3$ and $S^2\\times \\mathbb {E}^1$ do not have cohomological dimension three.", "Groups acting geometrically on $\\mathbb {H}^2\\times T$ include arithmetic lattices in $\\operatorname{Isom}(\\mathbb {H}^2)\\times \\operatorname{PSL}(2,\\mathbb {Q}_p)$ , as well as more exotic non-residually finite examples constructed by Hughes–Valiunas [24].", "Generalised Baumslag–Solitar groups of rank two were classified up to quasi-isometry by Mosher–Sageev–Whyte, Farb–Mosher and Whyte [16], [49], [37], [50].", "Leary–Minasyan recently provided the first examples of $\\operatorname{CAT}(0)$ groups that are not biautomatic; their groups are generalised Baumslag–Solitar groups of rank two, and act geometrically on $\\mathbb {E}^2\\times T$ .", "Groups in (REF ) comprise a large class of groups including groups of the form $\\mathbb {Z}\\times \\Lambda $ where $\\Lambda $ is any finitely presented group of cohomological dimension two."
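As a concrete illustration of how these statements fit together, consider again the Baumslag–Solitar group $BS(1,2)=\\langle a,t\\mid tat^{-1}=a^2\\rangle $ and its infinite cyclic subgroup $\\langle a\\rangle $ . The defining relation gives $t\\langle a\\rangle t^{-1}=\\langle a^2\\rangle $ , while $t^{-1}\\langle a\\rangle t$ contains $\\langle a\\rangle $ as an index-two subgroup, so conjugation by each generator carries $\\langle a\\rangle $ to a subgroup commensurable with it; since conjugation preserves commensurability, $g\\langle a\\rangle g^{-1}$ is commensurable with $\\langle a\\rangle $ for every $g\\in BS(1,2)$ , and $\\langle a\\rangle $ is commensurated even though it is not normal. Thus $BS(1,2)$ contains an infinite commensurated finite rank free abelian subgroup and satisfies the third condition of Theorem C, consistent with the model geometry described earlier that is not dominated by a locally finite graph. Moreover, $BS(1,2)$ is a torsion-free one-relator group that is not free, so it has cohomological dimension two, and it is a generalised Baumslag–Solitar group of rank one, in accordance with Theorem E.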
], [ "Two types of rigidity", "In order to explain the phenomena underlying the results of this article, we discuss two different types of rigidity that lattices in connected Lie groups may possess.", "We motivate the discussion with three examples of lattices.", "$\\mathbb {Z}^n$ for $n\\ge 1$ , which is a uniform lattice in $\\mathbb {R}^n$ .", "$\\pi _1(S)$ for $S$ a closed hyperbolic surface, which is a uniform lattice in $\\operatorname{PSL}(2,\\mathbb {R})$ .", "$\\pi _1(M)$ for $M$ a closed hyperbolic 3-manifold, which is a uniform lattice in $\\operatorname{PSL}(2,\\mathbb {C})$ .", "The hyperbolic 3-manifold group $\\pi _1(M)$ is undoubtedly the most rigid, satisfying both types of rigidity.", "Firstly, every automorphism of $\\pi _1(M)$ induces an automorphism of $\\operatorname{PSL}(2,\\mathbb {C})$ .", "We call this lattice rigidity.", "Secondly, the locally compact group $\\operatorname{PSL}(2,\\mathbb {C})$ containing $\\pi _1(M)$ as a lattice is itself rigid, since $\\operatorname{Out}(\\operatorname{PSL}(2,\\mathbb {C}))$ is finite.", "We call this envelope rigidity.", "The free abelian group $\\mathbb {Z}^n$ satisfies lattice rigidity, since every automorphism of $\\mathbb {Z}^n$ induces an automorphism of $\\mathbb {R}^n$ .", "However, $\\operatorname{Out}(\\mathbb {R}^n)=\\operatorname{Aut}(\\mathbb {R}^n)=\\operatorname{GL}_n(\\mathbb {R})$ is infinite, so $\\mathbb {Z}^n$ does not satisfy envelope rigidity.", "In contrast, $\\pi _1(S)$ does not satisfy lattice rigidity, since infinite order mapping classes in $\\operatorname{MCG}(S)\\cong \\operatorname{Out}(\\pi _1(S))$ do not induce automorphisms of $\\operatorname{PSL}(2,\\mathbb {R})$ .", "However, $\\operatorname{PSL}(2,\\mathbb {R})$ has finite outer automorphism group, so $\\pi _1(S)$ does satisfy envelope rigidity.", "Envelope rigidity is the phenomenon responsible for the product structure of model geometries in Theorem REF .", "The fact that $\\mathbb {Z}^n$ is not envelope rigid ensures commensurated abelian subgroups may give rise to model geometries with a warped product structure such as the Baumslag–Solitar group $BS(1,2)$ in Example REF .", "Lattice rigidity of free abelian groups ensures that every commensurated $\\mathbb {Z}^n$ subgroup gives rise to some model geometry not dominated by a locally finite graph as in Theorem REF .", "In contrast, the fact that $\\pi _1(S)$ is envelope rigid but not lattice rigid means that whilst groups acting geometrically on $\\mathbb {H}^2\\times T$ appear in the statement of Theorem REF , surface-by-free groups of the form considered in Example REF do not.", "In [1], Bader–Furman–Sauer introduce the conditions (Irr), (CAF) and (NbC) and study lattice envelopes of countable groups that satisfy these conditions.", "When these conditions are satisfied, they obtain results that are stronger and more complete than the results of this article.", "The results of this article complement [1] by providing information on finitely generated groups in which at least one of the conditions (Irr), (CAF) and (NbC) does not hold.", "Both the statement and proof of Proposition REF of this article are similar to and inspired by Proposition 5.3 of [1].", "However, we replace the (CAF) condition, which says the group does not contain an infinite amenable commensurated subgroup, with the condition that the group does not contain an infinite finite rank abelian commensurated subgroup.", "In [33], the author introduced discretisable spaces.", "These are spaces for which every cobounded 
quasi-action can be quasi-conjugated to an isometric action on a locally finite graph.", "If the space in question is a finitely generated group $\\Gamma $ , then discretisability is a coarse analogue of the property that every model geometry of $\\Gamma $ is dominated by a locally finite graph.", "Indeed, it is shown in [33] that if a finitely generated group $\\Gamma $ is a lattice in a locally compact group that is not compact-by-(totally disconnected), then $\\Gamma $ is not discretisable.", "Combined with Theorem REF , this proves: Corollary G Let $\\Gamma $ be a finitely generated group that contains an infinite commensurated subgroup that is either: a finite rank free abelian group, or a uniformly commensurated group virtually isomorphic to a uniform lattice in a centre-free semisimple Lie group with finitely many components.", "Then $\\Gamma $ is not discretisable.", "All examples of non-discretisable groups that the author is aware of arise from Corollary REF .", "This corollary was one of the motivations of the author for writing this article, giving sufficient algebraic conditions for a group to not be discretisable.", "It will be used in upcoming work of the author to further investigate which groups are and are not discretisable.", "In Section we gather preliminary results concerning locally compact groups, uniform lattices, model geometries and copci homomorphisms.", "In Section we discuss engulfing groups of commensurated subgroups.", "To illustrate the idea, if $\\mathbb {Z}$ is a commensurated subgroup of $\\Gamma $ , there is a canonical map $\\Gamma \\rightarrow \\operatorname{Aut}(\\mathbb {R})$ compatible with the action of $\\Gamma $ by conjugation on subgroups of $\\mathbb {Z}$ of sufficiently large finite index.", "In such a situation, we say $\\mathbb {R}$ is an engulfing group.", "Section is devoted to the proof of Proposition REF , from which we deduce Theorems REF , REF and one direction of Theorem REF .", "The proof of Proposition REF is a combination of several results, the most important of which is a theorem of Gleason–Yamabe solving Hilbert's fifth problem.", "Coupled with van Dantzig's theorem, this essentially reduces Proposition REF to a problem about lattices in Lie groups.", "Applying tools from the structure theory of Lie groups and a theorem of Mostow regarding lattice-hereditary subgroups of Lie groups allows us to deduce Proposition REF .", "In Section we further develop the notion of an engulfing group.", "The main technical result is Theorem REF .", "Informally, this theorem can be thought of as a way of approximating a commensurated subgroup with a normal subgroup by embedding the ambient group into a locally compact group.", "More precisely, Theorem REF takes as input a finitely generated group $\\Gamma $ containing a commensurated subgroup $\\Lambda $ with engulfing group $E$ , and embeds $\\Gamma $ as a uniform lattice in a locally compact group $G$ containing $E$ as a closed normal subgroup.", "The quotient $G/E$ is the Schlichting completion of $\\Gamma $ with respect to $\\Lambda $ .", "Theorem REF builds a group extension using a continuous generalised cocycle that is determined by $\\Gamma $ and $\\Lambda $ .", "Theorem REF follows readily from Theorem REF .", "We anticipate Theorem REF to be a useful stand-alone result for investigating algebraic and geometric properties of commensurated subgroups.", "We conclude with proofs of Theorems REF and REF in Section .", "The main ingredient is work of the author concerning commensurated subgroups
of cohomological codimension one [32], generalising results of Kropholler [28], [29].", "We also highlight Proposition REF , a result of independent interest that uses work of Grunewald–Platonov [20] to extend a geometric action of a finite index torsion-free subgroup on a model geometry in the sense of Thurston to an action of the ambient group.", "This obviates the need to pass to finite index subgroups in the conclusions of Theorems REF and REF .", "The author would like to thank Sam Shepherd for helpful feedback and comments on a draft of this article." ], [ "Locally compact groups and metric spaces", "Topological groups are always assumed to be Hausdorff.", "A topological group is locally compact if every point admits a compact neighbourhood, or equivalently, every point admits a fundamental system of compact neighbourhoods.", "A topological group is compactly generated if it contains a compact generating set.", "We refer the reader to [23] for general background on topological groups, and to [7] for background on the coarse geometry of locally compact groups.", "All Lie groups are assumed to be real Lie groups.", "As a continuous homomorphism between Lie groups is automatically smooth, a topological group that is topologically isomorphic to a Lie group can be endowed with a unique smooth structure making it a Lie group.", "We thus say a topological group $G$ , not a priori equipped with any smooth manifold structure, is a Lie group if it is topologically isomorphic to a Lie group.", "The following result describes the structure of Lie groups with finitely many components: Theorem 2.1 (see e.g. [3]) Let $G$ be a Lie group with finitely many components.", "Then $G$ contains a maximal compact subgroup $K\\le G$ , unique up to conjugation.", "Moreover, $G$ is diffeomorphic to $K\\times \\mathbb {R}^n$ and so the homogeneous space $G/K$ is diffeomorphic to $\\mathbb {R}^n$ .", "If $G$ is a topological group, let $G^\\circ $ denote the connected component of the identity.", "A topological space is totally disconnected if every connected component is a singleton.", "The following elementary proposition says that to some extent, the structure of locally compact groups reduces to the structure of connected and of totally disconnected locally compact groups.", "Proposition 2.2 If $G$ is a locally compact group, then $G^\\circ $ is a closed normal subgroup of $G$ ; The quotient group $G/G^\\circ $ is a totally disconnected locally compact group.", "In particular, $G$ is compact-by-(totally disconnected) if and only if $G^\\circ $ is compact.", "Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces.", "For constants $K\\ge 1$ and $A\\ge 0$ , a map $f:X\\rightarrow Y$ is a $(K,A)$ -quasi-isometry if: for all $x,x^{\\prime }\\in X$ $\\frac{1}{K}d_X(x,x^{\\prime })-A\\le d_Y(f(x),f(x^{\\prime }))\\le Kd_X(x,x^{\\prime })+A;$ for all $y\\in Y$ there exists an $x\\in X$ such that $d_Y(f(x),y)\\le A$ .", "We say $f$ is a quasi-isometry if it is a $(K,A)$ -quasi-isometry for some $K\\ge 1$ and $A\\ge 0$ .", "Two quasi-isometries $f,g:X\\rightarrow Y$ are $A$ -close if $d(f(x),g(x))\\le A$ for all $x\\in X$ , and are close if they are $A$ -close for some $A$ .", "A metric space is proper if closed balls are compact, and quasi-geodesic if it is quasi-isometric to a geodesic metric space.", "A metric space $X$ is said to be cocompact if the natural action of $\\operatorname{Isom}(X)$ on $X$ is cocompact.", "If $X$ is a metric space, then its isometry group $\\operatorname{Isom}(X)$ inherits the structure of a
topological group endowed with the compact-open topology.", "Moreover, if $X$ is cocompact, proper and quasi-geodesic, then $\\operatorname{Isom}(X)$ is second countable, locally compact and compactly generated [7].", "We now define what it means for a locally compact group to act geometrically on a proper metric space.", "The following definition is stronger than that considered in [7], where $X$ need not be proper and $\\rho $ need not be continuous.", "Definition 2.3 Let $G$ be a locally compact topological group and $X$ be a proper metric space.", "An isometric action $\\rho :G\\rightarrow \\operatorname{Isom}(X)$ is said to be: continuous if $\\rho $ is continuous; proper if for every compact set $K\\subseteq X$ , the set $\\lbrace g\\in G\\mid \\rho (g)K\\cap K\\ne \\emptyset \\rbrace $ has compact closure; cocompact if there is a compact set $K\\subseteq X$ such that $X=GK$ .", "An action of $G$ on $X$ is said to be geometric if it is isometric, continuous, proper and cocompact.", "Remark 2.4 Continuity of $\\rho :G\\rightarrow \\operatorname{Isom}(X)$ is equivalent to continuity of the map $\\alpha :G\\times X\\rightarrow X$ given by $\\alpha (g,x)=\\rho (g)(x)$ .", "A continuous map between topological spaces is said to be proper if the preimage of a compact set is compact.", "Recall a homomorphism $G\\rightarrow H$ is said to be copci if it is continuous and proper with cocompact image.", "A copci homomorphism $\\phi :G\\rightarrow H$ between locally compact groups factors as $G\\rightarrow G/\\ker (\\phi )\\xrightarrow{\\widehat{\\phi }} \\phi (G)\\rightarrow H,$ where $\\phi (G)$ is a closed subgroup of $H$ equipped with the subspace topology and $\\widehat{\\phi }$ is a topological isomorphism [4].", "The following lemma shows an action $\\rho :G\\rightarrow \\operatorname{Isom}(X)$ is geometric if and only if $\\rho $ is copci.", "Lemma 2.5 Let $X$ be a proper quasi-geodesic cocompact metric space.", "The action of $\\operatorname{Isom}(X)$ on $X$ is geometric.", "Let $G$ be a locally compact group.", "An action $\\rho :G\\rightarrow \\operatorname{Isom}(X)$ is geometric if and only if it is copci.", "The first part of Lemma REF was shown in [7].", "The second part was noted by Cornulier [11] and readily follows from the definitions.", "We recall from the introduction that if $X$ and $Y$ are metric spaces, then $X$ is dominated by $Y$ if there is a copci homomorphism $\\phi :\\operatorname{Isom}(X)\\rightarrow \\operatorname{Isom}(Y)$ .", "Since the composition of two copci homomorphisms is copci, Lemma REF can be used to deduce the following properties of domination.", "Lemma 2.6 Let $X$ be a cocompact proper quasi-geodesic metric space.", "If $X$ is dominated by $Y$ and $G$ acts geometrically on $X$ , then $G$ acts geometrically on $Y$ .", "Suppose a locally compact group $G$ acts geometrically on a proper quasi-geodesic metric space $Y$ .", "If there is a copci homomorphism $\\phi :\\operatorname{Isom}(X)\\rightarrow G$ , then $X$ is dominated by $Y$ .", "A locally compact compactly generated group can be considered as a metric space by equipping it with the word metric with respect to a compact generating set.", "This metric is well-defined up to equivariant quasi-isometry.", "More generally, such a group can be equipped with any geodesically adapted metric in the sense of [7].", "Lemma 2.7 ([7]) Let $G$ and $H$ be locally compact compactly generated groups and let $X$ be a cocompact proper quasi-geodesic metric space.", "A copci homomorphism $\\phi :G\\rightarrow H$ is a
quasi-isometry.", "For any $x_0\\in X$ , the orbit map $\\operatorname{Isom}(X)\\rightarrow X$ given by $\\phi \\mapsto \\phi (x_0)$ is a quasi-isometry.", "A (left-invariant) Haar measure $\\mu $ on a locally compact group $G$ is a Radon measure on the Borel $\\sigma $ -algebra $\\mathcal {B}$ of $G$ such that $\\mu (A)=\\mu (gA)$ for all $g\\in G$ and $A\\in \\mathcal {B}$ .", "On each locally compact group the Haar measure exists and is unique up to rescaling.", "Definition 2.8 Let $G$ be a locally compact group and let $\\mu $ be a Haar measure on $G$ .", "A subgroup $\\Gamma \\le G$ is a lattice if it is discrete and there exists a set $K\\subseteq G$ such that $\\Gamma K=G$ and $\\mu (K)<\\infty $ .", "If $K$ can be chosen to be compact, we say $\\Gamma $ is a uniform or cocompact lattice.", "If $\\Lambda $ is discrete, a homomorphism $\\rho :\\Lambda \\rightarrow G$ is a virtual (uniform) lattice embedding if $\\ker (\\rho )$ is finite and $\\operatorname{Im}(\\rho )$ is a (uniform) lattice.", "Remark 2.9 The terminology virtual lattice embedding was chosen to be consistent with the notion of a virtual isomorphism.", "Proposition 2.10 Let $\\Gamma $ be a discrete group, $G$ be a locally compact group and $\\rho :\\Gamma \\rightarrow G$ be a homomorphism.", "The following are equivalent: $\\rho $ is a virtual uniform lattice embedding; $\\rho $ is copci.", "Note that continuity of $\\rho $ is automatic since $\\Gamma $ is discrete.", "(REF )$\\Rightarrow $ (REF ): As $\\rho (\\Gamma )$ is a uniform lattice, $\\rho $ has cocompact image.", "Moreover, since $\\rho $ has finite kernel and $\\rho (\\Gamma )$ is discrete, $\\rho ^{-1}(K)$ is finite for every compact $K\\subseteq G$ , hence $\\rho $ is proper.", "(REF )$\\Rightarrow $ (REF ): Since $\\rho $ is proper and $\\Gamma $ is discrete, $\\ker (\\rho )$ is finite and $\\rho (\\Gamma )\\cap K$ is finite for every compact $K\\subseteq G$ .", "Thus $\\rho (\\Gamma )$ is discrete.", "Since $\\rho (\\Gamma )$ is cocompact, $\\rho $ is a virtual uniform lattice embedding.", "We now restrict our attention to groups that act geometrically on two important classes of metric spaces: symmetric spaces and locally finite graphs.", "A symmetric space is a connected Riemannian manifold $X$ such that for each $p\\in X$ , there is an isometry $\\phi _p:X\\rightarrow X$ such that $\\phi _p(p)=p$ and $(d\\phi _p)_p=-\\operatorname{Id}$ .", "Such spaces are sometimes called global symmetric spaces.", "Symmetric spaces were classified by Cartan.", "A symmetric space is said to be of non-compact type if it has non-positive sectional curvature and does not split isometrically as a Cartesian product of the form $X=\\mathbb {E}^n\\times X^{\\prime }$ for some $n>0$ .", "Every symmetric space of non-compact type admits a canonical de Rham decomposition $X=X_1\\times \\dots \\times X_n$ , where $X_1,\\dots ,X_n$ are symmetric spaces of non-compact type that are irreducible, i.e.", "do not decompose as a non-trivial direct product.", "A Lie group $G$ is semisimple with no compact factors if its Lie algebra is semisimple and $G$ contains no connected non-trivial compact normal subgroup.", "Note $G$ is semisimple with no compact factors if and only if its identity component $G^\\circ $ is.", "There is a correspondence between symmetric spaces of non-compact type and semisimple Lie groups.", "Proposition 2.11 ([21]) Let $X$ be a symmetric space of non-compact type.", "Then $\\operatorname{Isom}(X)$ has a Lie group structure acting smoothly, compatible with the 
compact-open topology.", "Moreover, $\\operatorname{Isom}(X)$ is a centre-free semisimple Lie group with no compact factors and finitely many components, and $\\operatorname{Isom}(X)^\\circ $ acts transitively on $X$ .", "Conversely, let $G$ be a semisimple Lie group with finite centre and finitely many components.", "Let $K\\le G$ be a maximal compact subgroup.", "Then $G/K$ , equipped with any $G$ -invariant Riemannian metric, is a symmetric space of non-compact type.", "In particular, $G$ acts geometrically on a symmetric space of non-compact type.", "For the purposes of this article, a graph is a combinatorial 1-dimensional cell complex, where vertices are 0-cells and edges are 1-cells.", "A connected graph $X$ will be considered as a metric space by endowing it with the induced path metric in which edges have length one.", "The group $\\operatorname{Aut}(X)$ of graphical automorphisms of $X$ is thus a subgroup of $\\operatorname{Isom}(X)$ , and is equipped with the subspace topology.", "Note that $\\operatorname{Aut}(X)$ is a totally disconnected group, and is locally compact if $X$ is locally finite.", "The following fundamental theorem is due to van Dantzig: Theorem 2.12 ([46]) If $G$ is locally compact and totally disconnected, then every compact identity neighbourhood contains a compact open subgroup.", "As noted in the introduction, locally finite graphs are assumed to be both connected and singular.", "If a graph $X$ is singular, then $\\operatorname{Aut}(X)=\\operatorname{Isom}(X)$ .", "Proposition REF , and hence Theorem REF , is false without this singularity hypothesis, since $\\mathbb {R}$ is dominated by a non-singular graph, namely the Cayley graph of $\\mathbb {Z}$ with generator 1.", "This is essentially the only counterexample, since every connected non-singular graph is homeomorphic to either $\\mathbb {R}$ or $S^1$ .", "Proposition 2.13 If $G$ is locally compact and compactly generated, the following are equivalent: $G$ is compact-by-(totally disconnected); $G$ contains a compact open subgroup; $G$ acts geometrically on a locally finite vertex-transitive graph; $G$ acts geometrically on a locally finite graph.", "(REF )$\\Rightarrow $ (REF ): Suppose $G$ has a compact normal subgroup $K\\vartriangleleft G$ such that $G/K$ is totally disconnected.", "Then Theorem REF ensures there exists a compact open subgroup $U/K\\le G/K$ , whence $U$ is a compact open subgroup of $G$ .", "(REF )$\\Rightarrow $ (REF ): This follows from the Cayley–Abels graph construction; see for instance [27].", "(REF )$\\Rightarrow $ (REF ) is obvious.", "(REF )$\\Rightarrow $ (REF ): As $G$ acts geometrically on a singular locally finite graph $X$ , there is a copci map $\\rho :G\\rightarrow \\operatorname{Isom}(X)$ .", "As $X$ is singular, $\\operatorname{Isom}(X)=\\operatorname{Aut}(X)$ is totally disconnected and so $G$ is compact-by-(totally disconnected).", "Combining Lemma REF and Proposition REF , we deduce: Corollary 2.14 If $X$ is a cocompact proper quasi-geodesic metric space, then $X$ is dominated by a locally finite graph if and only if $\\operatorname{Isom}(X)$ is compact-by-(totally disconnected).", "The following proposition translates the problem of finding a model geometry of $\\Gamma $ not dominated by a locally finite graph, to that of embedding $\\Gamma $ as a uniform lattice in a suitable locally compact group: Proposition 2.15 Let $\\Gamma $ be a finitely generated group.", "The following are equivalent: $\\Gamma $ has a model geometry not dominated by a locally
finite graph; There is a locally compact group $G$ and a virtual uniform lattice embedding $\\Gamma \\rightarrow G$ such that $G^\\circ $ is non-compact.", "To prove this, we use the following lemma.", "Lemma 2.16 ([36]) If $G$ is a locally compact compactly generated group, it can be endowed with a left-invariant metric $(G,d)$ that is proper and quasi-geodesic.", "In particular, there exists a proper quasi-geodesic metric space $X$ such that $G$ acts geometrically on $X$ .", "(REF )$\\Rightarrow $ (REF ): Suppose $X$ is a model geometry of $\\Gamma $ not dominated by a locally finite graph.", "By Corollary REF , $\\operatorname{Isom}(X)$ is not compact-by-(totally disconnected) or, equivalently, $\\operatorname{Isom}(X)^\\circ $ is non-compact.", "Since $X$ is a model geometry, there is a geometric action $\\rho :\\Gamma \\rightarrow \\operatorname{Isom}(X)$ .", "Lemma REF and Proposition REF ensure $\\rho $ is a virtual uniform lattice embedding.", "(REF )$\\Rightarrow $ (REF ): Suppose $\\rho :\\Gamma \\rightarrow G$ is a virtual uniform lattice embedding in a locally compact group $G$ with $G^\\circ $ non-compact.", "By Proposition REF , $\\rho $ is copci.", "Lemmas REF and REF ensure there is a proper quasi-geodesic metric space $X$ and a copci map $\\phi :G\\rightarrow \\operatorname{Isom}(X)$ .", "In particular, Lemma REF ensures $X$ is a model geometry of $\\Gamma $ .", "Since $G^\\circ $ is non-compact, neither is the identity component of $\\operatorname{Isom}(X)$ .", "Therefore, Corollary REF implies $X$ is not dominated by a locally finite graph.", "We briefly mention a useful consequence of van Dantzig's theorem.", "Proposition 2.17 Let $G$ be a Lie group.", "Any totally disconnected subgroup $H\\le G$ is discrete.", "In particular, if $H$ is both totally disconnected and compact, then it is finite.", "Lie groups satisfy the no small subgroups property: there is a sufficiently small identity neighbourhood $U\\subseteq G$ containing no non-trivial subgroups.", "In particular, the only subgroup contained in $H\\cap U$ is trivial.", "By Theorem REF , $H\\cap U$ contains a subgroup that is open in $H$ , hence $\\lbrace 1\\rbrace $ is open in $H$ and so $H$ is discrete."
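As a simple worked example of how these statements combine, take $\\Gamma =\\mathbb {Z}$ acting on $\\mathbb {R}$ by integer translations and let $G=\\operatorname{Isom}(\\mathbb {R})\\cong \\mathbb {R}\\rtimes \\mathbb {Z}/2$ . The inclusion $\\rho :\\mathbb {Z}\\rightarrow G$ is proper, since only finitely many integer translations move the origin a bounded distance, and it has cocompact image, since every isometry $x\\mapsto \\pm x+s$ is the composition of an integer translation and an isometry of the same form with $s\\in [0,1]$ ; hence $\\rho $ is copci and, by Proposition 2.10, a virtual uniform lattice embedding. Since $G^\\circ \\cong \\mathbb {R}$ is non-compact, Proposition 2.2 shows $G$ is not compact-by-(totally disconnected), so by Proposition 2.13 it admits no geometric action on a locally finite graph, and Corollary 2.14 shows that the model geometry $\\mathbb {R}$ of $\\mathbb {Z}$ is not dominated by a locally finite graph, exactly as Proposition 2.15 and Theorem D predict.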
], [ "Engulfing groups and uniformly commensurated subgroups", "Let $\\Lambda $ be a group.", "A commensuration of $\\Lambda $ is an isomorphism $\\Lambda _1\\rightarrow \\Lambda _2$ between finite index subgroups of $\\Lambda $ .", "The abstract commensurator $\\operatorname{Comm}(\\Lambda )$ of $\\Lambda $ is the group of all equivalence classes of commensurations of $\\Lambda $ , where two commensurations are equivalent if they agree on a common finite index subgroup of $\\Lambda $ .", "The group operation on $\\operatorname{Comm}(\\Lambda )$ is composition after restricting the domain to a finite index subgroup so that composition is defined.", "Example 3.1 The abstract commensurator of $\\mathbb {Z}^n$ is $\\operatorname{GL}_n(\\mathbb {Q})$ .", "More specifically, identifying $\\mathbb {Z}^n$ with the integer lattice in $\\mathbb {R}^n$ , the natural action of $\\operatorname{GL}_n(\\mathbb {Q})$ on $\\mathbb {R}^n$ acts as commensurations of $\\mathbb {Z}^n$ , and every commensuration of $\\mathbb {Z}^n$ arises in this way.", "If $\\Gamma $ is a group and $\\Lambda \\text{\\;Q\\; }\\Gamma $ is a commensurated subgroup, then there is a homomorphism $\\Phi _{\\Gamma ,\\Lambda }:\\Gamma \\rightarrow \\operatorname{Comm}(\\Lambda )$ induced by conjugation.", "Indeed, for every $g\\in \\Gamma $ the isomorphism $\\phi _g:\\Lambda \\cap g^{-1}\\Lambda g\\rightarrow g\\Lambda g^{-1}\\cap \\Lambda $ given by $h\\mapsto ghg^{-1}$ is a commensuration of $\\Lambda $ .", "The map $\\Phi _{\\Gamma ,\\Lambda }:\\Gamma \\rightarrow \\operatorname{Comm}(\\Lambda )$ given by $g\\mapsto [\\phi _g]$ is called the modular homomorphism.", "If $X$ is a metric space, the quasi-isometry group $\\operatorname{QI}(X)$ is the set of all equivalence classes of self quasi-isometries of $X$ , where two quasi-isometries $f,g:X\\rightarrow X$ are equivalent if $\\sup _{x\\in X}d(f(x),g(x))<\\infty $ .", "The group operation on $\\operatorname{QI}(X)$ is induced by composition.", "If $X$ and $Y$ are quasi-isometric, then a quasi-isometry $X\\rightarrow Y$ naturally induces an isomorphism $\\operatorname{QI}(X)\\rightarrow \\operatorname{QI}(Y)$ .", "A subset $\\Omega \\subseteq \\operatorname{QI}(X)$ is uniform if there exist constants $K$ and $A$ such that each equivalence class in $\\Omega $ contains a $(K,A)$ -quasi-isometry.", "Let $\\Lambda $ be a finitely generated group equipped with the word metric.", "For every commensuration $\\phi :\\Lambda _1\\rightarrow \\Lambda _2$ of $\\Lambda $ , the map $\\phi ^*=i_{\\Lambda _2}\\circ \\phi \\circ q_{\\Lambda _1}:\\Lambda \\rightarrow \\Lambda $ is a quasi-isometry, where $i_{\\Lambda _2}:\\Lambda _2\\rightarrow \\Lambda $ and $q_{\\Lambda _1}:\\Lambda \\rightarrow \\Lambda _1$ are the inclusion map and a closest point projection respectively.", "This induces a homomorphism $\\Psi _\\Lambda :\\operatorname{Comm}(\\Lambda )\\rightarrow \\operatorname{QI}(\\Lambda )$ given by $[\\phi ]\\mapsto [\\phi ^*]$ , which is independent of the choice of commensuration and closest point projection.", "Recall from the introduction that a finitely generated commensurated subgroup $\\Lambda \\text{\\;Q\\; }\\Gamma $ is uniformly commensurated if the image of the map $\\Gamma \\xrightarrow{\\Phi _{\\Gamma ,\\Lambda }} \\operatorname{Comm}(\\Lambda )\\xrightarrow{\\Psi _\\Lambda } \\operatorname{QI}(\\Lambda )$ is uniform.", "Remark 3.2 If $\\Lambda $ is a finitely generated group, the natural map $\\Lambda \\rightarrow \\operatorname{QI}(\\Lambda )$ induced by the action of $\\Lambda $ on itself by left-multiplication,
coincides with the composition $\\Lambda \\rightarrow \\operatorname{Comm}(\\Lambda )\\rightarrow \\operatorname{QI}(\\Lambda )$ .", "A quasi-action of a group $G$ on a space $X$ is a map $\\phi :G\\rightarrow X^X$ for which there exist constants $K\\ge 1$ and $A\\ge 0$ such that the following hold: for all $g\\in G$ , $\\phi (g)$ is a $(K,A)$ -quasi-isometry; for all $g,h\\in G$ and $x\\in X$ , $d(\\phi (gh)(x),\\phi (g)(\\phi (h)(x)))\\le A$ ; for all $x\\in X$ , $d(x,\\phi (1)(x))\\le A$ .", "We typically suppress the map $\\phi $ and write a quasi-action $\\phi :G\\rightarrow X^X$ as $G\\operatorname{{\\overset{\\scriptscriptstyle \\text{q.a.", "}}{\\curvearrowright }}}X$ .", "Two quasi-actions $G\\operatorname{{\\overset{\\scriptscriptstyle \\text{q.a.", "}}{\\curvearrowright }}}X$ and $G\\operatorname{{\\overset{\\scriptscriptstyle \\text{q.a.", "}}{\\curvearrowright }}}Y$ are quasi-conjugate if there is a quasi-isometry $f:X\\rightarrow Y$ that is coarsely uniformly $G$ -equivariant.", "A metric space $X$ is tame if for every $K$ and $A$ , there exists a constant $B$ such that if two $(K,A)$ -quasi-isometries from $X$ to $X$ lie at finite distance from one another, then they are $B$ -close.", "We will use the following theorem of Kapovich–Kleiner–Leeb, which is a combination of Propositions 4.8 and 5.9 in [25]: Proposition 3.3 ([25]) Symmetric spaces of non-compact type are tame.", "The following lemma will be used as a source of quasi-actions: Lemma 3.4 If $X$ is tame and $\\rho :G\\rightarrow \\operatorname{QI}(X)$ is a homomorphism with uniform image, then there is a quasi-action $\\phi :G\\rightarrow X^X$ such that $[\\phi (g)]=\\rho (g).$ Since $\\rho $ has a uniform image, there exist $K$ and $A$ such that for each $g\\in G$ , there is some $(K,A)$ -quasi-isometry $\\phi (g):X\\rightarrow X$ such that $[\\phi (g)]=\\rho (g)$ .", "As $\\rho $ is a homomorphism, for all $g,h\\in G$ we have $[\\phi (g)\\phi (h)]=[\\phi (gh)]$ .", "Since $X$ is tame, there is a constant $B$ , depending only on $(K,A)$ , such that $\\phi (g)\\phi (h)$ and $\\phi (gh)$ are $B$ -close for all $g,h\\in G$ , and $\\phi (1)$ is $B$ -close to the identity.", "This shows $\\phi $ is a quasi-action.", "A symmetric space $X$ with de Rham decomposition $X_1\\times \\dots \\times X_n$ is said to be normalised if whenever $X_i$ and $X_j$ are homothetic, they are isometric.", "We recall the following theorem of Kleiner–Leeb: Theorem 3.5 ([26]) Let $X$ be a normalised symmetric space of non-compact type and $G\\operatorname{{\\overset{\\scriptscriptstyle \\text{q.a.", "}}{\\curvearrowright }}}X$ be a quasi-action.", "Then $G$ is quasi-conjugate to an isometric action on $X$ .", "Given a group $\\Gamma $ containing an infinite cyclic commensurated subgroup $\\Lambda = \\langle a \\rangle \\cong \\mathbb {Z}$ , $\\Gamma $ does not act on $\\Lambda $ by conjugation unless $\\Lambda $ is normal.", "To remedy this, we embed $\\Lambda $ as a lattice in $\\mathbb {R}$ and define an action of $\\Gamma $ on $\\mathbb {R}$ as follows.", "If $g\\in \\Gamma $ and $ga^mg^{-1}=a^n$ for $n,m\\ne 0$ , then $g$ acts on $\\mathbb {R}$ via multiplication by $\\frac{n}{m}$ .", "Note that as $\\Lambda $ is commensurated, such $m$ and $n$ exist.", "This gives a well-defined action $\\Delta :\\Gamma \\rightarrow \\operatorname{Aut}(\\mathbb {R})$ such that $\\Delta (g)$ agrees with conjugation by $g$ on a finite index subgroup of $\\Lambda $ .", "This phenomenon motivates the following definition: Definition 3.6 A Hecke pair $(\\Gamma ,\\Lambda )$ consists of a group
$\\Gamma $ and a commensurated subgroup $\\Lambda \\text{\\;Q\\; }\\Gamma $ .", "We say that a locally compact group $E$ is an engulfing group of the Hecke pair $(\\Gamma ,\\Lambda )$ if there is a virtual uniform lattice embedding $\\rho :\\Lambda \\rightarrow E$ and a homomorphism $\\Delta :\\Gamma \\rightarrow \\operatorname{Aut}(E)$ such that the following hold: $\\Delta $ extends the map $\\Lambda \\xrightarrow{} E\\rightarrow \\operatorname{Aut}(E)$ , where $E\\rightarrow \\operatorname{Aut}(E)$ is the action of $E$ on itself by conjugation; for all $g\\in \\Gamma $ , there is a finite index subgroup $\\Lambda _g\\le \\Lambda $ such that $\\Delta (g)(\\rho (h))=\\rho (ghg^{-1})$ for all $h\\in \\Lambda _g$ .", "More generally, we say $(E,\\rho ,\\Delta )$ as above is an engulfing triple of the Hecke pair $(\\Gamma ,\\Lambda )$ .", "Engulfing groups play a central role in Theorem REF , which is used to prove one direction of Theorem REF .", "The following proposition generalises the discussion preceding Definition REF .", "Proposition 3.7 Let $(\\Gamma ,\\Lambda )$ be a Hecke pair with $\\Lambda $ isomorphic to $\\mathbb {Z}^n$ .", "Then $(\\Gamma ,\\Lambda )$ has engulfing group $\\mathbb {R}^n$ .", "We fix an embedding $\\rho :\\Lambda \\rightarrow \\mathbb {R}^n$ with image the integer lattice and identify $\\Lambda $ with its image.", "As noted in Example REF , all commensurations of $\\Lambda $ arise from the standard action of an element of $\\operatorname{GL}_n(\\mathbb {Q})$ restricted to a finite index subgroup of $\\Lambda $ .", "The modular homomorphism $\\Gamma \\rightarrow \\operatorname{Comm}(\\Lambda )$ thus induces a homomorphism $\\Delta :\\Gamma \\rightarrow \\operatorname{GL}_n(\\mathbb {Q})\\le \\operatorname{GL}_n(\\mathbb {R})=\\operatorname{Aut}(\\mathbb {R}^n)$ .", "By construction, for each $g\\in \\Gamma $ there is a finite index subgroup $\\Lambda _g\\le \\Lambda $ such that $\\Delta (g)(\\rho (h))=\\rho (ghg^{-1})$ for all $h\\in \\Lambda _g$ .", "Thus $\\mathbb {R}^n$ is an engulfing group of $(\\Gamma ,\\Lambda )$ .", "Remark 3.8 Using a result of Malcev [31], an analogue of Proposition REF holds when $\\Lambda $ is a finitely generated torsion-free nilpotent group.", "We have the following sufficient criterion for $E$ to be an engulfing group.", "Lemma 3.9 Let $(\\Gamma ,\\Lambda )$ be a Hecke pair with $\\Lambda $ finitely generated.", "Suppose there is a locally compact group $E$ and a homomorphism $\\phi :\\Gamma \\rightarrow E$ such that $\\phi |_\\Lambda $ is a virtual uniform lattice embedding.", "Then $E$ is an engulfing group of $(\\Gamma ,\\Lambda )$ and $\\Lambda $ is uniformly commensurated.", "We define $\\Delta :\\Gamma \\rightarrow \\operatorname{Aut}(E)$ by $\\Delta (g)(l)=\\phi (g)l\\phi (g)^{-1}$ and set $\\rho =\\phi |_\\Lambda $ .", "Thus $\\Delta (g)(\\phi (h))=\\phi (ghg^{-1})$ for all $g,h\\in \\Gamma $ and so $(E,\\rho ,\\Delta )$ is an engulfing triple of $(\\Gamma ,\\Lambda )$ .", "We equip $\\Lambda $ and $E$ with the word metrics $d_\\Lambda $ and $d_E$ with respect to finite and compact generating sets respectively.", "By Lemma REF and Proposition REF , $\\rho :\\Lambda \\rightarrow E$ is a quasi-isometry.", "Let $f:E\\rightarrow \\Lambda $ be a coarse inverse to $\\rho $ .", "For $e\\in E$ , let $L_e:E\\rightarrow E$ be left multiplication by $e$ , which is an isometry of $E$ .", "Then there exist constants $K$ and $A$ such that for every $g\\in \\Gamma $ , the map $f_{g}=f\\circ L_{\\phi (g)}\\circ \\rho :\\Lambda \\rightarrow \\Lambda $ is a $(K,A)$
-quasi-isometry.", "We may also assume, by increasing $K$ and $A$ if needed, that $f$ and $\\rho $ are $(K,A)$ -quasi-isometries and $ \\rho \\circ f$ is $A$ -close to the identity.", "Then for all $g\\in G$ and $h\\in g^{-1}\\Lambda g\\cap \\Lambda $ , we have $d_\\Lambda (ghg^{-1},f_g(h)) &\\le Kd_E(\\rho (ghg^{-1}),\\rho (f(\\phi (g)\\rho (h))))+KA\\\\&\\le Kd_E(\\phi (ghg^{-1}),\\phi (gh))+2KA\\\\&= Kd_E(\\phi (g^{-1}),1)+2KA,$ noting that this bound depends only on $g$ and is independent of $h$ .", "This ensures the map $\\Gamma \\rightarrow \\operatorname{Comm}(\\Lambda )\\rightarrow \\operatorname{QI}(\\Lambda )$ is given by $g\\mapsto [f_g]$ .", "Since every $f_g$ is a $(K,A)$ -quasi-isometry, $\\Lambda $ is uniformly commensurated.", "The following proposition gives a situation in which Lemma REF can be applied.", "Proposition 3.10 Let $\\Gamma $ be a group and let $\\Lambda \\text{\\;Q\\; }\\Gamma $ be a uniformly commensurated subgroup that is a virtual uniform lattice in a connected centre-free semisimple Lie group.", "Then there is a centre-free semisimple Lie group $S$ with no compact factors and finitely many components, and a homomorphism $\\phi :\\Gamma \\rightarrow S$ such that $\\phi |_{\\Lambda }$ is a virtual uniform lattice embedding.", "In particular, $S$ is an engulfing of the pair $(\\Gamma ,\\Lambda )$ .", "First note that $\\Lambda $ acts geometrically on a symmetric space $X$ of non-compact type; see Proposition REF .", "Without loss of generality, we may assume $X$ is normalised.", "Since $X$ is tame and $\\Lambda $ is uniformly commensurated, Lemma REF implies the map $\\Gamma \\rightarrow \\operatorname{Comm}(\\Lambda )\\rightarrow \\operatorname{QI}(\\Lambda )\\cong \\operatorname{QI}(X)$ induces a quasi-action $\\Gamma \\operatorname{{\\overset{\\scriptscriptstyle \\text{q.a.", "}}{\\curvearrowright }}}X$ .", "By Theorem REF , this quasi-action is quasi-conjugate to an isometric action $\\phi :\\Gamma \\rightarrow \\operatorname{Isom}(X)$ .", "By Remark REF , this restricts to an isometric action $\\phi |_\\Lambda $ that is quasi-conjugate to the natural action of $\\Lambda $ on itself, hence is geometric.", "By Lemma REF and Proposition REF , $\\phi |_\\Lambda $ is a virtual uniform lattice embedding.", "Lemma REF now ensures $\\operatorname{Isom}(X)$ is an engulfing of the pair $(\\Gamma ,\\Lambda )$ .", "Moreover, Proposition REF ensures $\\operatorname{Isom}(X)$ is a semisimple Lie group of the required type.", "We can now use Proposition REF to describe the structure of uniformly commensurated normal subgroups that are lattices in semisimple Lie groups.", "Corollary 3.11 Let $\\Lambda \\vartriangleleft \\Gamma $ be a uniformly commensurated normal subgroup that is isomorphic to a uniform lattice in a connected centre-free semisimple Lie group with no compact factors and finitely many components.", "Then there is a normal subgroup $\\Delta \\vartriangleleft \\Gamma $ such that some finite index subgroup of $\\Gamma $ splits as a direct product $\\Lambda \\times \\Delta $ .", "The hypothesis on $\\Lambda $ ensures it contains no finite normal subgroups.", "By Proposition REF , there is a centre-free semisimple Lie group $S$ with no compact factors and finitely many components, and a map $\\phi :\\Gamma \\rightarrow S$ such that $\\phi |_\\Lambda $ is a virtual uniform lattice embedding.", "Since $\\Lambda $ has no finite normal subgroups, $\\phi |_\\Lambda $ is injective.", "As $\\phi |_\\Lambda $ is injective, $\\Lambda \\cap \\ker (\\phi )=\\lbrace 
1\\rbrace $ .", "Let $g\\in \\ker (\\phi )$ and $h\\in \\Lambda $ .", "Since $\\phi (g)=1$ , $\\phi ([g,h])=1$ .", "As $\\Lambda $ is normal, $[g,h]\\in \\Lambda $ and so injectivity of $\\phi |_\\Lambda $ ensures $[g,h]=1$ .", "Since $\\ker (\\phi )$ and $\\Lambda $ commute and have trivial intersection, $\\Gamma _1\\phi ^{-1}(\\phi (\\Lambda ))=\\ker (\\phi )\\Lambda $ is isomorphic to the direct product $\\ker (\\phi )\\times \\Lambda $ .", "As $\\Lambda $ is normal in $\\Gamma $ , $\\phi (\\Gamma )\\subseteq N_{S}(\\phi (\\Lambda ))$ .", "However, since $\\phi (\\Lambda )$ is a uniform lattice in $S$ , a consequence of Borel density ensures $\\phi (\\Lambda )$ is of finite index in its normaliser $N_{S}(\\phi (\\Lambda ))$ [41], implying $\\Gamma _1$ is a finite index subgroup of $\\Gamma $ .", "We digress briefly to discuss Mostow rigidity, giving examples of commensurated subgroups that are automatically uniformly commensurated.", "Let $G$ be a centre-free semisimple Lie group with no compact factors and finitely many components.", "Then $G$ splits as a direct product $G_1\\times \\dots \\times G_n$ , where each $G_i$ is simple and non-compact.", "A uniform lattice $\\Gamma \\le G$ is said to be Mostow rigid if there is no factor $G_i$ isomorphic to $\\operatorname{PSL}(2,\\mathbb {R})$ such that $\\Gamma G_i\\le G$ is closed.", "Equivalently, $\\Gamma $ does not virtually split as a direct product with some direct factor a uniform lattice in $\\operatorname{PSL}(2,\\mathbb {R})$ .", "Theorem 3.12 ([35]) If $G$ is a connected centre-free semisimple Lie group with no compact factors and $\\Gamma ,\\Gamma ^{\\prime }\\le G$ are two Mostow rigid lattices, then any isomorphism $\\Gamma \\rightarrow \\Gamma ^{\\prime }$ extends uniquely to a smooth automorphism $G\\rightarrow G$ .", "We can use this rigidity to show the following.", "Proposition 3.13 Let $(\\Gamma ,\\Lambda )$ be a Hecke pair and let $S$ be a connected centre-free semisimple Lie group $S$ with no compact factors.", "Suppose $\\rho :\\Lambda \\rightarrow S$ is a virtual uniform lattice embedding with Mostow rigid image.", "Then $\\Lambda $ is a uniformly commensurated subgroup of $\\Gamma $ .", "We first recall the structure of automorphism groups of semisimple Lie groups.", "Let $G$ be a connected centre-free semisimple Lie group with Lie algebra $\\mathfrak {g}$ .", "Since $G$ is a connected, there is an injective homomorphism $d:\\operatorname{Aut}(G)\\rightarrow \\operatorname{Aut}(\\mathfrak {g})$ .", "Because $G=\\widetilde{G}/Z(\\widetilde{G})$ and $Z(\\widetilde{G})$ is characteristic and discrete, the map $d:\\operatorname{Aut}(G)\\rightarrow \\operatorname{Aut}(\\mathfrak {g})$ is surjective hence an isomorphism.", "Thus $\\operatorname{Aut}(G)$ can be given the structure of a Lie group; for details see [39].", "As $G$ is centre-free, the adjoint map $\\operatorname{Ad}:G\\rightarrow \\operatorname{Aut}(G)$ is injective.", "Moreover, $\\operatorname{Ad}(G)=\\operatorname{Inn}(G)$ maps isomorphically to $\\operatorname{Inn}(\\mathfrak {g})$ .", "Since $\\mathfrak {g}$ is semisimple, $\\operatorname{Aut}(\\mathfrak {g})$ has finitely many components and $\\operatorname{Inn}(\\mathfrak {g})$ is the identity component of $\\operatorname{Aut}(\\mathfrak {g})$ ; see for instance [38].", "Thus $G\\cong \\operatorname{Aut}(G)^\\circ $ .", "Note $Z(\\operatorname{Aut}(G))=1$ since every element in $Z(\\operatorname{Aut}(G))$ acts trivially on $G\\cong \\operatorname{Aut}(G)^\\circ $ by conjugation, hence is the 
identity.", "To summarise: Lemma 3.14 Let $G$ be a connected centre-free semisimple Lie group.", "Then $\\operatorname{Aut}(G)$ is also a centre-free semisimple Lie group with finitely many components.", "Moreover, $\\operatorname{Ad}:G\\rightarrow \\operatorname{Aut}(G)$ is a Lie group isomorphism onto $\\operatorname{Inn}(G)=\\operatorname{Aut}(G)^\\circ $ .", "Let $\\rho :\\Lambda \\rightarrow S$ be a virtual uniform lattice embedding with Mostow rigid image.", "For every finite index $\\Lambda ^{\\prime }\\le \\Lambda $ , $\\rho (\\Lambda ^{\\prime })$ is also Mostow rigid.", "By Theorem REF , for each $g\\in \\Gamma $ , there is an automorphism $\\Delta (g):S\\rightarrow S$ such that $\\Delta (g)(\\rho (h))=\\rho (ghg^{-1})$ for all $h\\in \\Lambda \\cap g^{-1}\\Lambda g$ .", "For all $g,k\\in \\Gamma $ , $\\Delta (gk)$ and $\\Delta (g)\\Delta (k)$ agree on a finite index subgroup of $\\Lambda $ , hence by Theorem REF we have $\\Delta (gh)=\\Delta (g)\\Delta (k)$ .", "Thus $\\Delta :\\Gamma \\rightarrow \\operatorname{Aut}(S)$ is a homomorphism.", "Lemma REF ensures the image of the map $\\Delta |_\\Lambda :\\Lambda \\xrightarrow{} S{\\operatorname{Ad}} \\operatorname{Aut}(S)$ is a uniform lattice.", "Applying Proposition REF to $\\Delta $ , we deduce $\\Lambda $ is uniformly commensurated in $\\Gamma $ ." ], [ "The structure of lattices in locally compact groups", "In this section we prove Theorems REF , REF and one direction of Theorem REF .", "These will follow from Proposition REF .", "The following major result forms part of the solution to Hilbert's fifth problem: Theorem 4.1 ([19], [51]) Let $G$ be a connected-by-compact locally compact group.", "Then there is a compact subgroup $K\\vartriangleleft G$ such that $G/K$ is a Lie group with finitely many components.", "A useful consequence of Theorem REF is the following, which allows us to assume, after quotienting out by a compact normal subgroup of $G$ , that $G^\\circ $ has no non-trivial compact normal subgroup.", "Lemma 4.2 Let $G$ be a locally compact group.", "There is a compact normal subgroup $K\\vartriangleleft G$ such that $(G/K)^\\circ $ is a connected Lie group with no non-trivial compact normal subgroups.", "Moreover, $G^\\circ $ is compact if and only if $(G/K)^\\circ $ is compact.", "Since $G^\\circ $ is connected, Theorem REF ensures it contains a compact subgroup $K_0\\vartriangleleft G^\\circ $ such that $G^\\circ /K_0$ is a connected Lie group.", "As $G^\\circ /K_0$ is a connected Lie group, Theorem REF ensures it has a maximal compact subgroup $L/K_0$ such that every compact subgroup of $G^\\circ /K_0$ is contained in a conjugate of $L/K_0$ .", "Let $K/K_0$ be the normal core of $L/K_0$ .", "Then $K$ is a maximal compact normal subgroup of $G^\\circ $ , i.e.", "it contains every other compact normal subgroup of $G^\\circ $ .", "In particular, $K$ is topologically characteristic in $G^\\circ $ and therefore normal in $G$ .", "Let $\\pi :G\\rightarrow G/K$ be the quotient map and set $H\\pi ^{-1}((G/K)^\\circ )$ .", "As $\\pi (G^\\circ )$ is connected, $G^\\circ \\le H$ .", "The quotient $H/G^\\circ $ is totally disconnected by Proposition REF , and is also the quotient of the connected group $H/K=(G/K)^\\circ $ , hence is connected.", "Thus $H/G^\\circ $ is trivial and so $G^\\circ =H$ .", "Since $K$ is a maximal compact normal subgroup of $G^\\circ $ , $G^\\circ /K=(G/K)^\\circ $ contains no non-trivial compact normal subgroups, hence by Theorem REF , $(G/K)^\\circ $ is a connected Lie group.", "As $K$ is 
compact and $G^\\circ /K=(G/K)^\\circ $ , we see $G^\\circ $ is compact if and only if $(G/K)^\\circ $ is compact.", "A connected Lie group $G$ has a Levi decomposition $G=RS$ , where: $R$ is the unique maximal closed connected normal solvable subgroup of $G$ known as the radical of $G$ ; $S$ is a closed connected semisimple subgroup of $G$ , unique up to conjugation, known as the Levi subgroup of $G$ .", "The nilradical of a connected Lie group $G$ consists of the unique maximal closed connected normal nilpotent subgroup of $G$ .", "Every Lie group with non-trivial radical has non-trivial nilradical.", "Engel's theorem implies a maximal compact subgroup $K$ of a connected nilpotent Lie group $N$ is necessary central in $N$ ([48]), hence $K$ is a compact characteristic subgroup of $N$ .", "Thus if $G$ has no non-trivial compact normal subgroup, then a maximal torus of its nilradical $N$ is trivial, hence $N$ is simply connected.", "We thus deduce: Proposition 4.3 Let $G$ be a connected Lie group with no non-trivial compact normal subgroup.", "Then exactly one of the following occurs: the nilradical $N$ of $G$ is non-trivial and simply connected; $G$ is a centre-free semisimple Lie group.", "The following proposition partially mitigates the fact that $G^\\circ $ will not typically be open.", "Proposition 4.4 Let $G$ be a locally compact group such that $G^\\circ $ contains no non-trivial compact normal subgroup.", "There is an open subgroup $L\\le G$ and a compact totally disconnected subgroup $K\\vartriangleleft L$ such that: the map $G^\\circ \\times K\\rightarrow L$ induced by the inclusions $G^\\circ \\rightarrow L$ and $K\\rightarrow L$ is a topological isomorphism; for any topologically characteristic subgroup $M\\le G^\\circ $ , $M\\times K\\cong MK\\le G$ is a commensurated subgroup of $G$ .", "Since $G^\\circ $ has no non-trivial compact normal subgroup, it is necessarily a Lie group by Theorem REF .", "Let $\\pi :G\\rightarrow G/G^\\circ $ be the quotient map.", "By Proposition REF and Theorem REF , there is a compact open subgroup $U\\le G/G^\\circ $ .", "Let $L\\pi ^{-1}(U)$ .", "Thus $L$ is open and connected-by-compact, hence by Theorem REF , $L$ contains a compact normal subgroup $K$ such that $L/K$ is a virtually connected Lie group.", "Replacing $L$ with a finite index open subgroup if needed, we may assume $L/K$ is connected.", "We claim $L$ is topologically isomorphic to $G^\\circ \\times K$ .", "Since $K$ is compact and $G^\\circ $ is closed, $G^\\circ K$ is closed.", "Since $L/G^\\circ K$ is the quotient of the connected group $L/K$ and the totally disconnected group $L/G^\\circ $ , we deduce $L=G^\\circ K$ .", "Since $G^\\circ \\cap K$ is a compact normal subgroup of $G^\\circ $ , $G^\\circ \\cap K=\\lbrace 1\\rbrace $ .", "Since $L=G^\\circ K$ , $G^\\circ \\cap K=\\lbrace 1\\rbrace $ and both $G^\\circ $ and $K$ are closed and normal in $L$ , they commute and so there is a continuous abstract isomorphism $\\phi :G^\\circ \\times K\\rightarrow L$ induced by the inclusion in each factor.", "Let $\\pi :L\\rightarrow L/G^\\circ $ be the quotient map.", "The restriction $ \\pi |_K=\\psi :K\\rightarrow L/G^\\circ $ is a continuous abstract isomorphism from a compact space to a Hausdorff space, so is a topological isomorphism.", "This ensures $\\phi $ has a topological inverse, hence $L$ is topologically isomorphic to $G^\\circ \\times K$ (see [4]).", "We claim $K$ is a commensurated subgroup of $G$ .", "Fix $g\\in G$ .", "Let $\\psi :L\\rightarrow L/K\\cong G^\\circ $ be 
the quotient map.", "As $L$ is open and $g^{-1}Kg$ is compact, $g^{-1}Kg\\cap L$ is a finite index subgroup of $g^{-1}Kg$ .", "Moreover, $\\psi (g^{-1}Kg\\cap L)$ is a compact totally disconnected subgroup of the Lie group $G^\\circ $ , hence is finite by Proposition REF .", "Therefore $g^{-1}Kg\\cap K$ is a finite index subgroup of $g^{-1}Kg$ .", "Since this is true for all $g\\in G$ , it follows $K$ is commensurated.", "Finally, let $M$ be a topologically characteristic subgroup of $G^\\circ $ and let $g\\in G$ .", "As $K$ is commensurated and $gMg^{-1}=M$ , we see $g(MK)g^{-1}=MgKg^{-1}$ is commensurable to $MK$ , hence $M K\\cong M\\times K$ is a commensurated subgroup of $G$ as required.", "We use Lemma REF and Proposition REF to deduce the following useful proposition.", "Proposition 4.5 Let $G$ be a locally compact group such that $G^\\circ $ is a centre-free semisimple Lie group with no compact factors.", "Then there is a continuous open monomorphism $\\phi :G\\rightarrow \\operatorname{Aut}(G^\\circ ) \\times G/G^\\circ $ with finite index image.", "In particular, $\\phi $ is copci.", "There is a continuous homomorphism $\\phi _1:G\\rightarrow \\operatorname{Aut}(G^\\circ )$ induced by conjugation.", "Lemma REF implies that $\\phi _1|_{G^\\circ }=\\operatorname{Ad}$ is a Lie group isomorphism $G^\\circ \\rightarrow \\operatorname{Inn}(G^\\circ )=\\operatorname{Aut}(G^\\circ )^\\circ $ .", "Therefore, if $U\\subseteq G$ is open, then so is $\\phi _1(U)=\\bigcup _{g\\in G}\\phi _1(U\\cap g G^\\circ )=\\bigcup _{g\\in G}\\phi _1(g)\\phi _1(g^{-1}U\\cap G^\\circ ).$ Thus $\\phi _1$ is an open map.", "Let $\\phi _2:G\\rightarrow G/G^\\circ $ be the quotient map, which is continuous and open, and let $\\phi (\\phi _1,\\phi _2):G\\rightarrow \\operatorname{Aut}(G^\\circ ) \\times G/G^\\circ $ .", "Since $\\phi _1$ and $\\phi _2$ are continuous, so is $\\phi $ .", "Lemma REF implies $\\phi _1(G^\\circ )$ is a finite index subgroup of $\\operatorname{Aut}(G^\\circ )$ , hence so is $\\phi _1(G)$ .", "Since $\\phi _2$ is surjective, $\\phi (G)$ is a finite index subgroup of $\\operatorname{Aut}(G^\\circ ) \\times G/G^\\circ $ .", "Since $\\phi _1|_{G^\\circ }$ is injective and $\\ker (\\phi _2)=G^\\circ $ , $\\phi $ is injective.", "All that remains is to show $\\phi $ is an open map.", "Let $U\\subseteq G$ be open and let $g\\in U$ .", "Let $K$ and $L\\cong G^\\circ \\times K$ be as in Proposition REF .", "Since $g^{-1}U\\cap L$ is open and contains the identity, there exists an identity neighbourhood contained in $g^{-1}U\\cap L$ of the form $U_1U_2\\cong U_1\\times U_2$ , where $U_1\\subseteq G^\\circ $ and $U_2\\subseteq K$ .", "Since $U_2$ commutes with $G^\\circ $ , we deduce that $\\phi _1(gU_1U_2)=\\phi _1(gU_1)$ and $\\phi _2(gU_1U_2)=\\phi _2(gU_2)$ .", "We claim $\\phi _1(gU_1)\\times \\phi _1(gU_2)\\subseteq \\phi (U)$ .", "To see this, let $(x,y)\\in \\phi _1(gU_1)\\times \\phi _1(gU_2)$ .", "Thus $x=\\phi _1(gu_1)$ and $y=\\phi _2(gu_2)$ for some $u_1\\in U_1$ and $u_2\\in U_2$ , and so $\\phi (gu_1u_2)=(x,y)\\in \\phi (U)$ .", "Therefore $\\phi _1(gU_1)\\times \\phi _1(gU_2)\\subseteq \\operatorname{Aut}(G^\\circ ) \\times G/G^\\circ $ is open neighbourhood of $\\phi (g)$ contained in $\\phi (U)$ .", "Thus $\\phi (U)$ is open, hence $\\phi $ is an open map.", "The following lemma ensures that quotienting out by a compact normal subgroup preserves lattices.", "Lemma 4.6 ([47]) Let $G$ be a locally compact group and $\\Gamma \\le G$ be a (uniform) lattice.", "If 
$K\\vartriangleleft G$ is a compact normal subgroup of $G$ and $\\pi :G\\rightarrow G/K$ is the quotient map, then $\\Gamma /(\\Gamma \\cap K)\\cong \\pi (\\Gamma )$ is a (uniform) lattice in $G/K$ .", "The following two lemmas demonstrate that the intersection of a lattice in $G$ with certain subgroups $H\\le G$ is a lattice in $H$ .", "Such subgroups are called lattice-hereditary.", "Lemma 4.7 (see [9]) Let $G$ be a locally compact group and $\\Gamma \\le G$ be a (uniform) lattice.", "If $H\\le G$ is open, then $H\\cap \\Gamma $ is a (uniform) lattice in $H$ .", "Lemma 4.8 Let $G$ be a connected Lie group with no non-trivial compact normal subgroup and let $\\Gamma \\le G$ be a lattice.", "Let $N$ be the nilradical of $G$ and let $C_k(N)$ be the $k$ th term in the lower central series of $N$ .", "Then $C_k(N)\\cap \\Gamma $ is a uniform lattice in $C_k(N)$ .", "A lemma of Mostow states that if $N\\le G$ is the nilradical of $G$ , then the intersection $N\\cap \\Gamma $ is a uniform lattice in $N$ [34].", "If $N=\\lbrace 1\\rbrace $ there is nothing to prove.", "If not, then we are in the case of Proposition REF in which the nilradical is non-trivial, and so $N$ is simply connected.", "Since $\\Gamma \\cap N$ is a lattice in $N$ , $C_k(N)\\cap \\Gamma $ is a uniform lattice in $C_k(N)$ by a result of Malcev [31]; see [48] for details.", "Remark 4.9 The problem of determining lattice-hereditary subgroups of Lie groups has “a long and rather dramatic history” according to Gorbatsevich's MathSciNet review of [18].", "We refer the reader to this article of Geng for a discussion of the topic and a summary of what is known, noting that several disputed claims appear in the literature [18].", "We will also make use of the following lemma to show commensurated subgroups are preserved under images and preimages.", "Lemma 4.10 ([10]) Suppose $G,H$ are groups and $\\phi :G\\rightarrow H$ is a homomorphism.", "If $K\\text{\\;Q\\; }H$ is commensurated, then $\\phi ^{-1}(K)\\text{\\;Q\\; }G$ is commensurated.", "If $L\\text{\\;Q\\; }G$ is commensurated and $\\phi $ is surjective, then $\\phi (L)$ is commensurated.", "The following lemma follows easily from the fact that the centre of a finitely generated group $\\Gamma $ has finite index when the commutator subgroup $\\Gamma ^{\\prime }$ is finite.", "Lemma 4.11 Let $\\Gamma $ be a finitely generated group containing a finite normal subgroup $F\\vartriangleleft \\Gamma $ such that $\\Gamma /F\\cong \\mathbb {Z}^n$ .", "Then $\\Gamma $ contains a finite index subgroup isomorphic to $\\mathbb {Z}^n$ .", "We now come to the main technical result of this section.", "Proposition 4.12 Let $\\Gamma $ be a finitely generated group, $G$ be a locally compact group, and suppose $\\rho :\\Gamma \\rightarrow G$ is a virtual lattice embedding.", "If $G^\\circ $ is non-compact, then $\\Gamma $ contains an infinite commensurated subgroup $\\Lambda \\le \\Gamma $ such that either: $\\Lambda $ is a finite rank free abelian group; or there is a copci homomorphism $\\tau :G\\rightarrow S\\times D$ with finite index open image such that: $S$ is a centre-free semisimple Lie group with no compact factors and finitely many components; $D$ is a compactly generated totally disconnected locally compact group; and the composition $\\Lambda \\xrightarrow{\\iota } \\Gamma \\xrightarrow{\\rho }G\\xrightarrow{\\tau }S\\times D\\xrightarrow{q}S$ is a virtual lattice embedding, where $\\iota $ is the inclusion and $q$ is the projection.", "Moreover, if $\\rho $ is a uniform virtual lattice embedding, then $\\Lambda $ is uniformly commensurated and
$q\\circ \\tau \\circ \\rho \\circ \\iota $ is a virtual uniform lattice embedding.", "Lemma REF implies there is some compact normal subgroup $K_1\\vartriangleleft G$ such that $(G/K_1)^\\circ $ has no non-trivial compact normal subgroups.", "By Lemma REF , postcomposing $\\rho $ with the quotient map $G\\rightarrow G/K_1$ and replacing $G$ with $G/K_1$ , we can thus assume $G^\\circ $ has no non-trivial compact normal subgroup.", "In particular, Theorem REF implies that $G^\\circ $ is a connected Lie group.", "We choose an open subgroup $L\\le G$ and a compact subgroup $K\\vartriangleleft L$ as in Proposition REF and let $\\pi _L:L\\cong G^\\circ \\times K\\rightarrow G^\\circ $ be the quotient map.", "Let $\\Gamma _L=\\rho ^{-1}(L)$ .", "Since $L$ is commensurated in $G$ , Lemma REF ensures $\\Gamma _L\\text{\\;Q\\; }\\Gamma $ .", "As $\\rho (\\Gamma _L)=\\rho (\\Gamma )\\cap L$ and $L$ is open, Lemma REF implies $\\rho (\\Gamma _L)$ is a lattice in $L$ .", "Let $\\omega =\\pi _L\\circ \\rho |_{\\Gamma _L}:\\Gamma _L\\rightarrow G^\\circ $ .", "Lemma REF implies $\\operatorname{Im}(\\omega )=\\pi _L(\\rho (\\Gamma _L))$ is a lattice in $G^\\circ $ .", "Since $\\ker (\\rho )$ is finite, $\\rho (\\Gamma _L)$ is discrete and $\\ker (\\pi _L)$ is compact, $\\ker (\\omega )$ is finite and so $\\omega $ is a virtual lattice embedding.", "By Proposition REF , $G^\\circ $ either has non-trivial simply connected nilradical, or is a centre-free semisimple Lie group.", "Assume first $G^\\circ $ is a centre-free semisimple Lie group.", "Proposition REF ensures the map $\\tau :G\\rightarrow \\operatorname{Aut}(G^\\circ )\\times G/G^\\circ $ is continuous and open with finite index image.", "By Lemma REF , $\\operatorname{Aut}(G^\\circ )$ is a centre-free semisimple Lie group with finitely many components, whilst Proposition REF ensures $G/G^\\circ $ is totally disconnected.", "Since $\\omega $ is a virtual lattice embedding, Lemma REF ensures the composition $\\Gamma _L\\xrightarrow{}G^\\circ \\xrightarrow{}\\operatorname{Aut}(G^\\circ )$ is also a virtual lattice embedding.", "We claim $\\operatorname{Ad}\\circ \\omega $ coincides with $ \\Gamma \\xrightarrow{\\rho }G\\xrightarrow{\\tau }\\operatorname{Aut}(G^\\circ )\\times G/G^\\circ \\xrightarrow{q} \\operatorname{Aut}(G^\\circ )$ on $\\Gamma _L$ , where $q$ is the projection.", "Indeed, for each $g\\in \\Gamma _L$ , we have $\\rho (g)\\in L\\cong G^\\circ \\times K$ so that $\\rho (g)=s_gk_g$ for unique $s_g\\in G^\\circ $ and $k_g\\in K$ .", "Thus $\\pi _L(s_gk_g)=s_g=\\omega (g)$ .", "The definition of $\\tau $ in Proposition REF says that the map $G\\xrightarrow{} \\operatorname{Aut}(G^\\circ )$ is induced by the action of $G$ on $G^\\circ $ by conjugation.", "Therefore, for all $h\\in G^\\circ $ , we have $q(\\tau (\\rho (g)))(h)=s_gk_ghk_g^{-1}s_g^{-1}=s_ghs_g^{-1}=\\operatorname{Ad}(\\omega (g))(h),$ since $k_g\\in K$ commutes with $G^\\circ $ .", "Thus $\\operatorname{Ad}\\circ \\omega $ and $q\\circ \\tau \\circ \\rho $ agree on $\\Gamma _L$ , and so $q\\circ \\tau \\circ \\rho |_{\\Gamma _L}$ is a virtual lattice embedding.", "Suppose in addition that $\\rho $ is a uniform virtual lattice embedding.", "Then Lemmas REF and REF imply $\\omega $ is a virtual uniform lattice embedding, hence so is $\\operatorname{Ad}\\circ \\omega $ .", "The preceding paragraph ensures $q\\circ \\tau \\circ \\rho |_{\\Gamma _L}$ is a virtual uniform lattice embedding, so Lemma REF may be applied to the map $q\\circ \\tau \\circ \\rho $ to deduce $\\Gamma _L$ is uniformly
commensurated in $\\Gamma $ .", "We now assume $G^\\circ $ has non-trivial simply-connected nilradical.", "Choose the maximal $k$ such that the term $C_k(N)$ of the lower central series is non-zero.", "Then $C_k(N)$ is a non-trivial simply-connected abelian Lie group, hence is isomorphic to $\\mathbb {R}^n$ for some $n>0$ .", "Since $N$ is a characteristic subgroup of $G^\\circ $ and $C_k(N)$ is a characteristic subgroup of $N$ , $C_k(N)$ is a characteristic subgroup of $G^\\circ $ .", "By Proposition REF , $C_k(N)\\times K$ is a commensurated subgroup of $G$ .", "Thus Lemma REF says $\\Lambda \\rho ^{-1}(C_k(N)\\times K)$ is a commensurated subgroup of $\\Gamma $ .", "As $\\omega (\\Gamma _L)$ is a lattice in $G^\\circ $ , Lemma REF implies $\\omega (\\Gamma _L)\\cap C_k(N)=\\omega (\\Lambda )$ is a uniform lattice in $C_k(N)\\cong \\mathbb {R}^n$ , hence is isomorphic to $\\mathbb {Z}^n$ .", "Since $\\ker (\\omega )$ is finite, Lemma REF ensures $\\Lambda $ has a finite index subgroup isomorphic to $\\mathbb {Z}^n$ ; such a subgroup is also commensurated.", "Let $\\Gamma $ be a finitely generated group that does not contain a non-trivial finite rank free abelian commensurated subgroup.", "Let $Z$ be a model geometry of $\\Gamma $ and let $\\rho :\\Gamma \\rightarrow \\operatorname{Isom}(Z)$ be the geometric action of $\\Gamma $ on $Z$ .", "Let $G=\\operatorname{Isom}(Z)$ .", "If $G^\\circ $ is compact, then $G$ is compact-by-(totally disconnected), hence Lemma REF and Proposition REF ensure $Z$ is dominated by a locally finite vertex-transitive graph.", "If $G^\\circ $ is non-compact, Proposition REF ensures there is a copci homomorphism $\\lambda :G\\rightarrow S\\times Q$ with $S$ and $Q$ as in the statement of Proposition REF .", "By Lemma REF and Propositions REF and REF , there exists a symmetric space $X$ , a locally finite vertex-transitive graph $Y$ , and copci homomorphisms $\\phi :S\\rightarrow \\operatorname{Isom}(X)$ and $\\psi :Q\\rightarrow \\operatorname{Isom}(Y)$ .", "The composition $\\operatorname{Isom}(Z)\\xrightarrow{}S\\times Q\\xrightarrow{}\\operatorname{Isom}(X)\\times \\operatorname{Isom}(Y)\\rightarrow \\operatorname{Isom}(X\\times Y)$ is also copci, where the last map is the product action.", "Thus $Z$ is dominated by $X\\times Y$ .", "Let $\\Gamma $ be a finitely generated group that does not contain a non-trivial finite rank free abelian commensurated subgroup and suppose $\\rho :\\Gamma \\rightarrow G$ is a lattice embedding.", "If $G^\\circ $ is compact, then the quotient map $\\phi :G\\rightarrow G/G^\\circ $ is the required proper continuous map to a totally disconnected locally compact group.", "If $G^\\circ $ is non-compact, then we can apply Proposition REF to get a copci map $\\phi :G\\rightarrow S\\times D$ with finite index open image, where $S$ and $D$ satisfy the required properties." 
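, "To illustrate the first alternative of Proposition REF with a standard example (included here only for orientation and not needed in the proofs above), let $\\Gamma =H_3(\\mathbb {Z})$ be the integer Heisenberg group, viewed as a uniform lattice in the real Heisenberg group $G=H_3(\\mathbb {R})$ .", "Here $G^\\circ =G$ is non-compact, the nilradical of $G$ is $G$ itself, and the last non-trivial term of its lower central series is the centre $Z(G)\\cong \\mathbb {R}$ ; the commensurated subgroup produced by the proof is then, up to finite index, $\\Gamma \\cap Z(G)\\cong \\mathbb {Z}$ , the centre of $H_3(\\mathbb {Z})$ ."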
], [ "Building lattice embeddings", "In Section , we proved one direction of Theorem REF , showing that if a finitely generated group $\\Gamma $ has a model geometry not dominated by a locally finite graph, this implies the existence of a certain commensurated subgroup.", "In this section we complete the proof of Theorem REF , showing how to embed a group $\\Gamma $ containing a certain commensurated subgroup $\\Lambda $ into a locally compact group.", "The technical result of this section, Theorem REF shows that given a Hecke pair with $(\\Gamma ,\\Lambda )$ with engulfing group $E$ , $\\Gamma $ is a uniform lattice in a locally compact group containing $E$ as a closed normal subgroup.", "Given a Hecke pair $(\\Gamma ,\\Lambda )$ , we consider the set $\\Gamma /\\Lambda $ of left $\\Lambda $ -cosets.", "There is an action $\\phi :\\Gamma \\rightarrow \\operatorname{Sym}(\\Gamma /\\Lambda )$ given by left multiplication.", "The group $\\operatorname{Sym}(\\Gamma /\\Lambda )$ can be equipped with the topology of pointwise convergence.", "The Schlichting completion of $(\\Gamma ,\\Lambda )$ consists of the pair $(G,\\lambda )$ , where $G=\\overline{\\operatorname{Im}(\\phi )}$ and $\\lambda :\\Gamma \\rightarrow G$ is the corestriction of $\\phi $ to $G$ .", "For ease of notation, we sometimes refer to $G$ as the Schlichting completion.", "The following properties characterise Schlichting completions: Proposition 5.1 ([43]) If $(G,\\lambda )$ is the Schlichting completion of the Hecke pair $(\\Gamma ,\\Lambda )$ , then: $G$ is a totally disconnected locally compact group; $\\lambda (\\Gamma )$ is dense in $G$ ; There is a compact open subgroup $K\\le G$ such that $\\rho ^{-1}(K)=\\Lambda $ .", "We complete the proof of Theorem REF by proving the following theorem.", "Theorem 5.2 Let $\\Gamma $ be a finitely generated group and let $(\\Gamma ,\\Lambda )$ be a Hecke pair with engulfing triple $(E,\\rho ,\\Delta )$ .", "Then there exists a locally compact group $G$ and a virtual uniform lattice embedding $\\gamma :\\Gamma \\rightarrow G$ such that $G$ fits into the short exact sequence $1\\rightarrow E\\xrightarrow{} G\\xrightarrow{} Q\\rightarrow 1$ and the following hold: $(Q,\\lambda )$ is the Schlichting completion of $(\\Gamma ,\\Lambda _0)$ for some finite index subgroup $\\Lambda _0\\le \\Lambda $ $p$ is a topological embedding and $q$ is a topological quotient map; $\\lambda =q\\circ \\gamma $ ; For all $g\\in \\Gamma $ and $v\\in E$ , $p(\\Delta (g)(v))=\\gamma (g)p(v)\\gamma (g)^{-1}$ .", "Before proving Theorem REF , we first explain how to deduce Theorem REF from it.", "Suppose $\\Gamma $ has a model geometry that is not dominated by a locally finite graph.", "By Proposition REF , $\\Gamma $ is virtually a uniform lattice in a locally compact group $G$ with $G^\\circ $ non-compact.", "Then Proposition REF says $\\Gamma $ contains a commensurated subgroup $\\Lambda $ as in the statement of Theorem REF .", "Conversely, suppose $\\Gamma $ contains a commensurated subgroup $\\Lambda $ as in the statement of Theorem REF , i.e.", "$\\Lambda $ is either finite rank free abelian or is uniformly commensurated and virtually isomorphic to a uniform lattice in a semisimple Lie group.", "Propositions REF and REF imply that in both cases, $(\\Gamma ,\\Lambda )$ has a non-compact virtually connected Lie engulfing group $E$ .", "By Theorem REF , $\\Gamma $ is a virtual uniform lattice in a locally compact group $G$ containing a closed normal subgroup topologically isomorphic to $E$ .", "Since 
$E^\\circ $ is non-compact, neither is $G^\\circ $ .", "Therefore, Proposition REF implies there exists a model geometry of $\\Gamma $ that is not dominated by a locally finite vertex-transitive graph.", "We now begin our proof of Theorem REF .", "Let $(\\Gamma ,\\Lambda )$ and $(E,\\rho ,\\Delta )$ be as in Theorem REF .", "We automatically know from the Definition REF that for all $g\\in \\Gamma $ , the equation $\\Delta (g)(\\rho (h))=\\rho (ghg^{-1})$ holds for all $h$ in some finite index subgroup of $\\Lambda $ .", "However, to facilitate continuity arguments needed in the proof of Theorem REF , we would like to show that this subgroup can be taken to be the intersection of finitely many conjugates of $\\Lambda $ .", "The following lemma says this can be arranged after replacing $\\Lambda $ with a suitable finite index subgroup.", "Lemma 5.3 Let $(\\Gamma ,\\Lambda )$ be a Hecke pair with $\\Gamma $ finitely generated and with engulfing triple $(E,\\rho ,\\Delta )$ .", "Then there exists a finite index subgroup $\\Lambda _0\\le \\Lambda $ such that for every $g\\in \\Gamma $ , there is a finite set $F_g\\subseteq \\Gamma $ such that $e,g^{-1}\\in F_g$ and $\\Delta (g)(\\rho (h))=\\rho (ghg^{-1})$ for all $h\\in \\cap _{f\\in F_g} f\\Lambda _0f^{-1}$ .", "Let $S$ be a finite symmetric generating set of $\\Gamma $ containing the identity.", "Definition REF ensures that for each $s\\in S$ , there is some finite index subgroup $\\Lambda _s\\le \\Lambda \\cap s^{-1}\\Lambda s$ such that $\\Delta (s)(\\rho (h))=\\rho (shs^{-1})$ for all $h\\in \\Lambda _s$ .", "Set $\\Lambda _0\\cap _{s\\in S}\\Lambda _s$ .", "We claim that $\\Lambda _0$ satisfies the required property.", "Indeed, if $g\\in \\Gamma $ , then $g=s_1s_2\\dots s_n$ for some $s_1,\\dots ,s_n\\in S$ .", "Set $g_i=s_{i}\\dots s_n$ for all $1\\le i\\le n$ and let $g_{n+1}=1$ .", "Thus $g_1=g$ and $s_ig_{i+1}=g_i$ for all $1\\le i\\le n$ .", "Let $F_g\\lbrace 1,g_1^{-1}, \\dots , g_n^{-1}\\rbrace $ .", "If $h\\in \\cap _{f\\in F_g} f\\Lambda _0f^{-1}$ , then for each $1\\le i\\le n$ , $g_{i+1}hg_{i+1}^{-1}\\in \\Lambda _0\\le \\Lambda _{s_i}$ so that $\\Delta (s_i)(\\rho (g_{i+1}hg_{i+1}^{-1}))=\\rho (s_ig_{i+1}hg_{i+1}^{-1}s_i^{-1})=\\rho (g_{i}hg_{i}^{-1}).$ It follows that $\\Delta (g)(\\rho (h))=\\rho (ghg^{-1})$ as required.", "We prove Theorem REF by defining a group extension $G$ using generalised cocycles, also called factor sets, as described in [5].", "Suppose we are given groups $E$ and $Q$ and a homomorphism $\\phi :Q\\rightarrow \\operatorname{Out}(E)$ .", "We want to study group extensions $1\\rightarrow E\\rightarrow G\\rightarrow Q\\rightarrow 1$ such that the action of $G$ on $E$ by conjugation descends to the given map $\\phi $ .", "To do so, we require the following data: A map $\\omega :Q\\rightarrow \\operatorname{Aut}(E)$ such that $\\omega (q)$ represents the outer automorphism $\\phi (q)$ and $\\omega (1)=1$ .", "A map $f:Q\\times Q\\rightarrow E$ satisfying the following conditions.", "The generalised cocycle condition $f(g,h)f(gh,k)=\\omega (g)(f(h,k))f(g,hk)$ for all $g,h,k\\in Q$ .", "The compatibility condition $ \\omega (g)\\omega (h)=\\operatorname{Inn}(f(g,h))\\omega (gh)$ for all $g,h\\in Q$ , where $\\operatorname{Inn}(a)\\in \\operatorname{Aut}(E)$ is the inner automorphism $b\\mapsto aba^{-1}$ .", "The normalisation condition $f(1,g)=f(g,1)=1$ for all $g\\in Q$ .", "We then define a group $G_{\\omega ,f}$ that is equal as a set to $E\\times Q$ with multiplication defined by $(a,g)\\cdot 
(b,h)=(a\\omega (g)(b)f(g,h),gh).$ The group $G_{\\omega ,f}$ is a well-defined extension of $E$ by $Q$ and the conjugation action $G_{\\omega ,f}\\rightarrow \\operatorname{Aut}(E)$ descends to the given map $\\phi :Q\\rightarrow \\operatorname{Out}(E)$ .", "Moreover, if $E$ and $Q$ are locally compact topological groups and $\\omega $ and $f$ are continuous, then $G_{\\omega ,f}$ is a locally compact topological group when endowed with the product topology.", "Remark 5.4 As noted by Hochschild [22], for a continuous short exact sequence $1\\rightarrow E\\rightarrow G\\rightarrow Q\\rightarrow 1$ of locally compact topological groups, we cannot necessarily choose a continuous generalised cocycle $f$ as above.", "Although the proof of Theorem REF given below works for general $E$ , we encourage the reader to keep in mind the special case where $\\Lambda \\cong \\mathbb {Z}^n$ and $E\\cong \\mathbb {R}^n$ , which encapsulates the difficulties and intricacies of the general case.", "In the situation where $E$ has trivial centre, the proof can likely be substantially simplified.", "Since the proof is rather long, we split it into several parts.", "Part 1: Defining the generalised cocycle.", "Let $(\\Gamma ,\\Lambda )$ be a Hecke pair with an engulfing triple $(E,\\rho ,\\Delta )$ .", "Pick a finite index subgroup of $\\Lambda $ such that the conclusion of Lemma REF holds.", "Without loss of generality, we replace $\\Lambda $ by such a subgroup.", "Thus for every $g\\in \\Gamma $ , there is a finite set $F_g\\subseteq \\Gamma $ such that $\\Delta (g)(\\rho (h))=\\rho (ghg^{-1})$ for all $h\\in \\Lambda _g=\\cap _{f\\in F_g} f\\Lambda f^{-1}$ .", "Let $\\sigma :\\Gamma /\\Lambda \\rightarrow \\Gamma $ be a section of the quotient map $g\\mapsto g\\Lambda $ .", "We assume $\\sigma (\\Lambda )=1$ .", "For each $g\\in \\Gamma $ , let $\\sigma _g=\\sigma (g\\Lambda )$ and $h_g=\\sigma _g^{-1}g\\in \\Lambda $ so that $g=\\sigma _gh_g$ .", "We set $v_g=\\rho (h_g)\\in E$ and define $\\omega :\\Gamma \\rightarrow \\operatorname{Aut}(E)$ by $\\omega (g)=\\Delta (\\sigma _g)$ .", "Although $\\omega $ is not typically a homomorphism, since $\\Delta $ is a homomorphism and $\\omega (g)=\\Delta (\\sigma _g)=\\Delta (gh_g^{-1})=\\Delta (g)\\operatorname{Inn}(v_g^{-1}),$ $\\omega $ descends to a homomorphism $\\Gamma \\rightarrow \\operatorname{Out}(E)$ .", "Since $\\sigma (\\Lambda )=1$ , we see $v_1=1$ and $\\omega (1)=1$ .", "Observe that for all $\\beta \\in \\operatorname{Aut}(E)$ and $k\\in E$ , we have the identity $\\beta \\operatorname{Inn}(k)\\beta ^{-1}=\\operatorname{Inn}(\\beta (k))$ .", "Thus for all $g,k\\in \\Gamma $ , we have $\\omega (g)\\omega (k)\\omega (gk)^{-1}&=\\Delta (g)\\operatorname{Inn}(v_g^{-1})\\Delta (k)\\operatorname{Inn}(v_k^{-1})\\operatorname{Inn}(v_{gk})\\Delta (k)^{-1}\\Delta (g)^{-1}\\\\& =\\Delta (g)\\operatorname{Inn}(v_g^{-1}\\Delta (k)(v_k^{-1}v_{gk}))\\Delta (g)^{-1}\\\\& =\\operatorname{Inn}(\\Delta (g)(v_g^{-1})\\Delta (gk)(v_k^{-1}v_{gk})).$ We thus set $f(g,k)=\\Delta (g)(v_g^{-1})\\Delta (gk)(v_k^{-1}v_{gk})$ .", "By construction, the compatibility condition $\\omega (g)\\omega (k)=\\operatorname{Inn}(f(g,k))\\omega (gk)$ is satisfied for all $g,k\\in \\Gamma $ .", "We now see that for all $g,h,k\\in \\Gamma $ , we have $\\omega (g)&(f(h,k))f(g,hk)=\\Delta (g)\\operatorname{Inn}(v_g^{-1})(\\Delta (h)(v_h^{-1})\\Delta (hk)(v_k^{-1}v_{hk}))f(g,hk)\\\\&=\\Delta (g)(v_g^{-1}\\Delta (h)(v_h^{-1})\\Delta (hk)(v_k^{-1}v_{hk})v_g)\\Delta (g)(v_g^{-1})\\Delta
(ghk)(v_{hk}^{-1}v_{ghk})\\\\&=\\Delta (g)(v_g^{-1})\\Delta (gh)(v_h^{-1})\\Delta (ghk)(v_{k}^{-1}v_{ghk})\\\\&=\\Delta (g)(v_g^{-1})\\Delta (gh)(v_h^{-1}v_{gh})\\Delta (gh)(v_{gh}^{-1})\\Delta (ghk)(v_{k}^{-1}v_{ghk})\\\\&=f(g,h)f(gh,k)$ and so the generalised cocycle condition holds.", "Moreover, since $v_1=1$ , we see that $f(g,1)=f(1,g)=1$ for all $g\\in \\Gamma $ ; thus $f$ is normalised.", "Part 2: Showing $\\omega $ and $f$ are continuous We say a sequence $(g_i)$ in $\\Gamma $ is a Cauchy sequence if for every $g\\in \\Gamma $ , there exists a number $N$ such that $g_j^{-1}g_i\\in g\\Lambda g^{-1}$ for all $i,j\\ge N$ .", "If $(g_i)$ is a Cauchy sequence in $\\Gamma $ , then $g_j\\Lambda =g_i\\Lambda $ for sufficiently large $i,j$ , and so $\\omega (g_i)=\\Delta (\\sigma (g_i\\Lambda ))=\\Delta (\\sigma (g_j\\Lambda ))=\\omega (g_j)$ for sufficiently large $i,j$ .", "Suppose $(g_i)$ and $(k_i)$ are Cauchy sequences.", "We pick $N_0$ large enough such that $k_i^{-1}k_j\\in \\Lambda $ for all $i,j\\ge N_0$ .", "Set $kk_{N_0}$ .", "We pick $N\\ge N_0$ large enough such that $g_i^{-1}g_j\\in \\Lambda \\cap k\\Lambda k^{-1}\\cap \\Lambda _{k^{-1}}$ for all $i,j\\ge N$ .", "Set $g=g_N$ .", "By the choice of $N$ , $g$ and $k$ , for each $i,j\\ge N$ we have $\\text{$g_i=gh_{g,i}$, $k_i=kh_{k,i}$ and $g_ik_j=gk h_{gk,i,j}$}$ for some $h_{g,i},h_{k,i},h_{gk,i,j} \\in \\Lambda $ .", "Since $g_ik_j=gh_{g,i}kh_{k,j}=gkh_{gk,i,j}$ , we deduce $h_{k,j}h_{gk,i,j}^{-1}=k^{-1}h_{g,i}^{-1}k$ for all $i,j\\ge N$ .", "For all $i\\ge N$ , we see that $\\sigma _g=\\sigma (g\\Lambda )=\\sigma (g_i\\Lambda )=\\sigma _{g_i}$ .", "Similarly, $\\sigma _k=\\sigma _{k_i}$ and $\\sigma _{gk}=\\sigma _{g_ik_j}$ for all $i,j\\ge N$ .", "Therefore, $\\text{$h_{g_i}=h_gh_{g,i}$, $h_{k_i}=h_kh_{k,i}$ and $h_{g_ik_j}=h_{gk}h_{gk,i,j}$}$ for all $i,j\\ge N$ .", "Now for $i\\ge N$ , we have $h_{g,i}^{-1}\\in \\Lambda _{k^{-1}}$ and so $\\Delta (k^{-1})(\\rho (h_{g,i}^{-1}))=\\rho (k^{-1}h_{g,i}^{-1}k)$ .", "Putting everything together, we see that for all $i,j\\ge N$ we have $f(g_i,k_j)&=\\Delta (g_i)(v_{g_i}^{-1})\\Delta (g_ik_j)(v_{k_j}^{-1}v_{g_ik_j})\\\\& =\\Delta (g_i)(\\rho (h_{g_i}^{-1}))\\Delta (g_ik_j)(\\rho (h_{k_j}^{-1}h_{g_ik_j}))\\\\&=\\Delta (gh_{g,i})(\\rho (h_{g,i}^{-1}h_g^{-1}))\\Delta (gkh_{gk,i,j})(\\rho (h_{k,j}^{-1}h_k^{-1}h_{gk}h_{gk,i,j}))\\\\&=\\Delta (g)(\\rho (h_g^{-1}h_{g,i}^{-1}))\\Delta (gk)(\\rho (h_{gk,i,j}h_{k,j}^{-1}h_k^{-1}h_{gk}))\\\\&=\\Delta (g)(v_g^{-1}\\rho (h_{g,i}^{-1}))\\Delta (gk)(\\rho (h_{gk,i,j}h_{k,j}^{-1})v_k^{-1}v_{gk})\\\\&=\\Delta (g)(v_g^{-1})\\Delta (gkk^{-1})(\\rho (h_{g,i}^{-1}))\\Delta (gk)(\\rho (h_{gk,i,j}h_{k,j}^{-1})v_k^{-1}v_{gk})\\\\&=\\Delta (g)(\\rho (h_g^{-1}))\\Delta (gk)(\\rho (k^{-1}h_{g,i}^{-1}kh_{gk,i,j}h_{k,j}^{-1}))\\Delta (gk)(v_k^{-1}v_{gk}))\\\\&=\\Delta (g)(\\rho (h_g^{-1}))\\Delta (gk)(\\rho (h_{k,j}h_{gk,i,j}^{-1}h_{gk,i,j}h_{k,j}^{-1}))\\Delta (gk)(v_k^{-1}v_{gk}))\\\\&=\\Delta (g)(v_g^{-1})\\Delta (gk)(v_k^{-1}v_{gk})\\\\&=f(g,k).$ Part 3: Defining the locally compact group.", "Let $(Q,\\lambda )$ be the Schlichting completion of $(\\Gamma ,\\Lambda )$ .", "It follows from the definitions that $(g_i)$ is a Cauchy sequence in $\\Gamma $ if and only if $(\\lambda (g_i))$ converges in $ Q$ .", "Since $\\lambda (\\Gamma )$ is dense in $Q$ and $\\omega $ and $f$ are eventually constant on any pair of Cauchy sequences, it follows that $\\omega $ and $f$ uniquely induce continuous maps $\\hat{\\omega }:Q\\rightarrow \\operatorname{Aut}(E)$ and 
$\\hat{f}: Q\\times Q\\rightarrow E$ such that $\\hat{\\omega }(\\lambda (g))=\\omega (g)$ and $\\hat{f}(\\lambda (g),\\lambda (k))=f(g,k)$ for all $g,k\\in \\Gamma $ .", "Since $\\omega $ and $f$ satisfy the compatibility condition, generalised cocycle condition, and are normalised, $\\hat{\\omega }$ and $\\hat{f}$ also satisfy these properties.", "Thus $\\hat{\\omega }$ and $\\hat{f}$ define a locally compact topological group $G=G_{\\hat{\\omega },\\hat{f}}$ as above, i.e.", "$G$ is equal to $E\\times Q$ as a set, equipped with the product topology and with multiplication defined by (REF ).", "We consider the homomorphisms $p:E\\rightarrow G$ given by $v\\mapsto (v,1)$ and $q:G\\rightarrow Q$ given by $(v,g)\\rightarrow g$ .", "Since $p$ is injective, $q$ is surjective, and $q\\circ p$ is zero, we have a short exact sequence $1\\rightarrow E\\xrightarrow{} G\\xrightarrow{} Q\\rightarrow 1$ .", "Moreover, since $G$ is homeomorphic to $E\\times Q$ , $p$ is a topological embedding and $q$ is a topological quotient map.", "Part 4: Defining the homomorphism to $G$ .", "We define $\\gamma :\\Gamma \\rightarrow G$ by $\\gamma (g)=(\\Delta (g)(v_g),\\lambda (g))$ .", "We note that $q\\circ \\gamma =\\lambda $ as required.", "We claim $\\gamma $ is a virtual uniform lattice embedding, first showing $\\gamma $ is a homomorphism.", "For all $g,k\\in \\Gamma $ , we have $\\gamma (g)\\gamma (k)&= (\\Delta (g)(v_g),\\lambda (g))(\\Delta (k)(v_k),\\lambda (k))\\\\&= (\\Delta (g)(v_g)\\hat{\\omega }(\\lambda (g))(\\Delta (k)(v_k))\\hat{f}(\\lambda (g),\\lambda (k)),\\lambda (g)\\lambda (k))\\\\&= (\\Delta (g)(v_g)\\omega (g)(\\Delta (k)(v_k))f(g,k),\\lambda (gk))\\\\&= (\\Delta (g)(v_g)\\Delta (g)(v_g^{-1}\\Delta (k)(v_k)v_g)\\Delta (g)(v_g^{-1})\\Delta (gk)(v_k^{-1}v_{gk}),\\lambda (gk))\\\\&= (\\Delta (g)(\\Delta (k)(v_k))\\Delta (gk)(v_k^{-1}v_{gk}),\\lambda (gk))\\\\&= (\\Delta (gk)(v_{gk}),\\lambda (gk))=\\gamma (gk)$ verifying that $\\gamma $ is a homomorphism.", "Part 5: $\\operatorname{Im}(\\gamma )$ is cocompact.", "We now show $\\gamma (\\Gamma )$ is cocompact.", "Since $\\rho :\\Lambda \\rightarrow E$ is a uniform lattice embedding, there is compact set $K\\subseteq E$ such that $\\rho (\\Lambda )K=E$ .", "Since $(Q,\\lambda )$ is a Schlichting completion of $(\\Gamma ,\\Lambda )$ , there is a compact open subgroup $U\\le Q$ such that $\\lambda ^{-1}(U)=\\Lambda $ .", "We claim $G=\\gamma (\\Gamma )(K, U)$ , which will show $\\gamma (\\Gamma )$ is cocompact.", "Let $(g_1,g_2)\\in G$ .", "Since $\\lambda (\\Gamma )$ is dense in $Q$ , there is some $g\\in \\Gamma $ such that $g_2\\in \\lambda (g)U$ .", "As $\\rho (\\Lambda )K=E$ , there is some $x\\in K$ and $h\\in \\Lambda $ such that $\\rho (h)x=\\Delta (g^{-1})(g_1)\\rho (h_g)^{-1}$ .", "As $g_2\\in \\lambda (g)U=\\lambda (gh)U$ , we can choose $y\\in U$ such that $g_2=\\lambda (gh)y$ .", "We will show $\\gamma (gh)(x,y)=(g_1,g_2)$ .", "Since $\\lambda (\\Lambda )$ is dense in $U$ , we can pick a sequence $(z_i)$ in $\\Lambda $ such that $(\\lambda (z_i))$ converges to $y$ .", "Since $(z_i)$ is a Cauchy sequence, the sequence $\\hat{f}(\\lambda (gh),\\lambda (z_i))$ is eventually constant and thus equal to $\\hat{f}(\\lambda (gh),y)$ for $i$ sufficiently large.", "We can thus pick some $z\\in \\Lambda $ such that $ f(gh,z)=\\hat{f}(\\lambda (gh),y)$ .", "Since $h,z\\in \\Lambda $ , we have $h_z=z$ and $h_{ghz}=h_ghz$ .", "We thus deduce $\\gamma (gh)(x,y)&=(\\Delta (gh)(v_{gh}),\\lambda (gh))\\cdot (x,y)\\\\&=(\\Delta (gh)(v_{gh})\\omega 
(gh)(x)f(gh,z),g_2)\\\\&=(\\Delta (gh)(v_{gh})\\Delta (gh)(v_{gh}^{-1}xv_{gh})\\Delta (gh)(v_{gh}^{-1})\\Delta (ghz)(v_z^{-1}v_{ghz}),g_2)\\\\&=(\\Delta (gh)(x)\\Delta (ghz)(v_{z}^{-1}v_{ghz}),g_2)\\\\&=(\\Delta (g)(\\rho (h)x\\rho (h)^{-1})\\Delta (g)(\\rho (hzz^{-1}h_ghzz^{-1}h^{-1})),g_2)\\\\&=(\\Delta (g)(\\rho (h)x\\rho (h_g)),g_2)\\\\&=(\\Delta (g)(\\Delta (g^{-1})(g_1)),g_2)\\\\&=(g_1,g_2).$ This shows $\\gamma (\\Gamma )(K,U)=G$ as required.", "Part 6: $\\gamma $ is proper.", "We show $\\gamma $ is proper by demonstrating that for every compact $V\\subseteq G$ , $\\gamma ^{-1}(V)$ is finite.", "Since $\\gamma (\\Gamma )$ is cocompact, this will show $\\gamma $ is a virtual uniform lattice embedding.", "As $G$ is identified as a set with $E\\times Q$ and is equipped with the product topology, there is a continuous projection $\\pi :G\\rightarrow E$ , which is typically not a homomorphism.", "Observe that for all $h\\in \\Lambda $ , we have $\\pi (\\gamma (h))=\\Delta (h)(v_h)=\\rho (h)\\rho (h)\\rho (h)^{-1}=\\rho (h)$ , hence $\\pi \\circ \\gamma |_{\\Lambda }=\\rho $ .", "Let $V\\subseteq G$ be compact and let $U\\le Q$ be a compact open subgroup as above such that $\\lambda ^{-1}(U)=\\Lambda $ .", "Let $L=q^{-1}(U)$ , where $q:G\\rightarrow Q$ is the quotient map.", "Then $L$ is an open subgroup of $G$ , so $V$ is contained in finitely many left $L$ cosets.", "Since $\\lambda (\\Gamma )$ is dense, it intersects every coset of $U$ , hence $\\gamma (\\Gamma )$ intersects every left coset of $L$ .", "Thus there are finitely many $g_1,\\dots ,g_n\\in \\Gamma $ such that $V\\subseteq \\bigcup _{i=1}^n\\gamma (g_i)L$ .", "We thus deduce $\\gamma ^{-1}(V)\\subseteq \\bigcup _{i=1}^n g_i \\gamma ^{-1}(L\\cap \\gamma (g_i)^{-1} V).$ Since $\\gamma ^{-1}(L)=\\Lambda $ and $\\pi \\circ \\gamma |_\\Lambda =\\rho $ , we see that $\\rho (\\gamma ^{-1}(L\\cap \\gamma (g_i)^{-1} V))=\\pi (L\\cap \\gamma (g_i)^{-1} V)$ for each $1\\le i\\le n$ .", "As $\\pi (L\\cap \\gamma (g_i)^{-1} V)$ is compact and $\\rho $ is a virtual lattice embedding, each $\\gamma ^{-1}(L\\cap \\gamma (g_i)^{-1} V)\\subseteq \\rho ^{-1}(\\pi (L\\cap \\gamma (g_i)^{-1} V))$ is finite.", "Thus $\\gamma ^{-1}(V)$ is finite.", "Part 7: $\\Delta $ acts by conjugation All that remains is to show if $v\\in E$ and $g\\in \\Gamma $ , we have $p(\\Delta (g)(v))=\\gamma (g)p(v)\\gamma (g)^{-1}$ .", "Indeed, $\\gamma (g)p(v)&=(\\Delta (g)(v_g),\\lambda (g))(v,1)\\\\&=(\\Delta (g)(v_g)\\omega (g)(v) f(g,1),\\lambda (g))\\\\&=(\\Delta (g)(v_g)\\Delta (g)(v_g^{-1}vv_g),\\lambda (g))\\\\&=(\\Delta (g)(vv_g),\\lambda (g)).$ Since $\\gamma (g)^{-1}=(\\Delta (g)^{-1}(f(g,g^{-1})^{-1})v_g^{-1},\\lambda (g)^{-1}),$ we see $\\gamma (g)p(v)\\gamma (g)^{-1}&=(\\Delta (g)(vv_g),\\lambda (g))(\\Delta (g)^{-1}(f(g,g^{-1})^{-1})v_g^{-1},\\lambda (g)^{-1})\\\\&=(\\Delta (g)(vv_g)\\omega (g)(\\Delta (g)^{-1}(f(g,g^{-1})^{-1})v_g^{-1})f(g,g^{-1}),1)\\\\&=(\\Delta (g)(vv_g)\\Delta (g)(v_g^{-1}\\Delta (g)^{-1}(f(g,g^{-1})^{-1}))f(g,g^{-1}),1)\\\\&=(\\Delta (g)(v),1)=p(\\Delta (g)(v))$ as required." 
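, "To keep a concrete instance of this construction in mind (a folklore example recorded here only for illustration, and not needed for the proof), take $\\Gamma =BS(1,2)=\\langle a,t\\mid tat^{-1}=a^2\\rangle $ with commensurated subgroup $\\Lambda =\\langle a\\rangle \\cong \\mathbb {Z}$ , engulfing group $E=\\mathbb {R}$ and $\\Delta (t)$ acting by multiplication by 2, as in the discussion preceding Definition REF .", "In this case the construction above recovers, up to the finite index adjustments appearing in the statement of Theorem REF , the classical uniform lattice embedding of $BS(1,2)\\cong \\mathbb {Z}[1/2]\\rtimes \\mathbb {Z}$ into $G=(\\mathbb {R}\\times \\mathbb {Q}_2)\\rtimes \\mathbb {Z}$ : here $p$ is the inclusion of the factor $\\mathbb {R}$ , the quotient $Q\\cong \\mathbb {Q}_2\\rtimes \\mathbb {Z}$ plays the role of the Schlichting completion, and the compact open subgroup $\\mathbb {Z}_2\\le Q$ pulls back to $\\langle a\\rangle $ ."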
], [ "Applications to low dimensional groups", "In this section we prove Theorems REF and REF using tools from group cohomology.", "We refer the reader to Brown's book, especially Chapter VIII, for the necessary background [5].", "The following lemma is our starting point, which says uniform lattices in connected Lie groups possess desirable cohomological properties.", "Lemma 6.1 ([5]) If $\\Gamma $ is a torsion-free uniform lattice in virtually connected Lie group $G$ , then $\\operatorname{cd}(\\Gamma )=\\dim (G/K)=\\dim (G)-\\dim (K)$ , where $K$ is a maximal compact subgroup of $G$ .", "Moreover, $\\Gamma $ is a Poincaré duality group and so in particular, is of type $FP$ .", "We will prove Theorems REF and REF using the following theorem of the author, generalising work of Kropholler [28], [29]: Theorem 6.2 ([32]) Let $\\Gamma $ be a group containing a commensurated subgroup $\\Lambda \\text{\\;Q\\; }\\Gamma $ such that both $\\Gamma $ and $\\Lambda $ are of type $FP$ .", "If $\\operatorname{cd}(\\Gamma )=\\operatorname{cd}(\\Lambda )$ , then $\\Lambda $ is a finite index subgroup of $\\Gamma $ .", "If $\\operatorname{cd}(\\Gamma )=\\operatorname{cd}(\\Lambda )+1$ , then $\\Gamma $ splits as a finite graph of groups in which every vertex and edge group is commensurable to $\\Lambda $ .", "In order to show the group in question is of type $FP$ , we appeal to the following result of Kropholler.", "Proposition 6.3 ([29]) Let $n\\ge 1$ be a natural number.", "Let $\\Gamma $ be a finitely generated group of cohomological dimension at most $n+1$ .", "Let $\\Lambda \\text{\\;Q\\; }\\Gamma $ be a commensurated subgroup that is a Poincaré duality group of cohomological dimension $n$ .", "Then $\\Gamma $ is of type $FP$ .", "We also recall Thurston's definition of a model geometry: Definition 6.4 ([45]) A model geometry in the sense of Thurston is a pair $(G,X)$ , where $X$ is a manifold and $G$ is a Lie group of diffeomorphisms of $X$ such that: $X$ is simply-connected; $G$ acts transitively on $X$ with compact point stabilisers; $G$ is not contained in a larger group of diffeomorphisms of $X$ with compact point stabilisers; there is at least one compact manifold modelled on $(G,X)$ .", "If $(G,X)$ is a model geometry in the sense of Thurston, we equip $X$ with a $G$ -invariant Riemannian metric.", "Although this metric is not unique, the isometry group of this Riemannian manifold is $G$ and hence is independent of the metric.", "By a slight abuse of notation, we refer to $X$ equipped with such a Riemannian metric as a model geometry, noting that two different metrics on $X$ may give rise to the same model geometry.", "The following proposition is necessary to obtain the sharp conclusions of Theorem REF and REF without passing to finite index subgroups.", "Proposition 6.5 Let $\\Gamma $ be a finitely generated group containing a finite index torsion-free subgroup $\\Gamma ^{\\prime }$ acting geometrically on a model geometry $X$ in the sense of Thurston.", "Then the $\\Gamma ^{\\prime }$ action on $X$ extends to a geometric action of $\\Gamma $ on $X$ .", "Let $G=\\operatorname{Isom}(X)$ .", "Then $(G,X)$ is a model geometry in the sense of Definition REF , and $X$ can be identified with the homogeneous space $G/K$ where $K$ is a maximal compact subgroup.", "As $\\Gamma ^{\\prime }$ is torsion-free and acts geometrically on $X$ , $\\Gamma ^{\\prime }$ is a uniform lattice in $G$ .", "A result of Grunewald–Platonov says that $\\Gamma $ is a uniform lattice in a Lie group $G_\\Gamma $ with 
finitely many components [20].", "Moreover, the group $G_\\Gamma $ is a finite extension of $G$ ; see [20].", "By Theorem REF , there exists a maximal compact subgroup $K_\\Gamma \\le G_\\Gamma $ that meets every component of $G_\\Gamma $ .", "Thus the inclusion $G\\hookrightarrow G_\\Gamma $ induces a diffeomorphism $G/K\\rightarrow G_\\Gamma /K_\\Gamma $ .", "This gives an action of $G_\\Gamma $ on $X$ by diffeomorphisms with compact point stabilisers that extends the action of $G$ on $X$ .", "By Definition REF , $G$ is a maximal such group of diffeomorphisms of $X$ , and so any $G$ -invariant metric on $X$ is also $G_\\Gamma $ -invariant.", "Since $\\Gamma $ is a uniform lattice in $G_\\Gamma $ , it follows the $\\Gamma ^{\\prime }$ action on $X$ extends to a geometric action of $\\Gamma $ on $X$ as required.", "Model geometries of dimensions two and three were listed by Thurston: Theorem 6.6 ([45]) Suppose $(G,X)$ is a model geometry in the sense of Thurston.", "If $\\dim (X)=2$ , then $X$ is either $\\mathbb {E}^2$ , $\\mathbb {H}^2$ or $S^2$ .", "If $\\dim (X)=3$ , then $X$ is either $\\mathbb {E}^3$ , $\\mathbb {H}^3$ , $\\mathbb {H}^2\\times \\mathbb {E}^1$ , $\\operatorname{Nil}$ , $\\widetilde{\\operatorname{SL}(2,\\mathbb {R})}$ , $\\operatorname{Sol}$ , $S^3$ or $S^2\\times \\mathbb {E}^1$ .", "Moreover, $\\mathbb {H}^2$ and $\\mathbb {H}^3$ are the only symmetric spaces of non-compact type of dimension two or three.", "Combining Lemma REF with Theorem REF , we can explicitly describe the commensurated subgroups of cohomological dimension at most three that may occur in the statement of Theorem REF .", "This may also be deduced from the classification of symmetric spaces [21].", "Corollary 6.7 Let $\\Lambda $ be a torsion-free group that is either a finite rank free abelian group, or is a uniform lattice in a connected centre-free semisimple Lie group with no compact factors.", "Then $\\Lambda $ is a Poincaré duality group.", "Moreover, the following hold.", "If $\\operatorname{cd}(\\Lambda )=1$ , then $\\Lambda \\cong \\mathbb {Z}$ .", "If $\\operatorname{cd}(\\Lambda )=2$ , then either $\\Lambda \\cong \\mathbb {Z}^2$ or $\\Lambda $ acts geometrically on $\\mathbb {H}^2$ .", "If $\\operatorname{cd}(\\Lambda )=3$ , then either $\\Lambda \\cong \\mathbb {Z}^3$ or $\\Lambda $ acts geometrically on $\\mathbb {H}^3$ .", "We recall groups of finite cohomological dimension are torsion-free, and that if $\\Lambda \\le \\Gamma $ , then $\\operatorname{cd}(\\Lambda )\\le \\operatorname{cd}(\\Gamma )$ .", "Moreover, any non-trivial group $\\Lambda $ satisfies $\\operatorname{cd}(\\Lambda )>0$ .", "These facts will be used implicitly throughout the proofs of Theorems REF and REF .", "$\\Rightarrow $ : Suppose $\\Gamma $ has a model geometry that is not dominated by a locally finite vertex-transitive graph.", "Then Theorem REF and Corollary REF imply $\\Gamma $ contains an infinite commensurated Poincaré duality subgroup $\\Lambda $ such that is either isomorphic to $\\mathbb {Z}$ or $\\mathbb {Z}^2$ , or acts geometrically on $\\mathbb {H}^2$ .", "Since $1\\le \\operatorname{cd}(\\Lambda )\\le \\operatorname{cd}(\\Gamma )=2$ , Proposition REF ensures $\\Gamma $ is of type FP.", "If $\\Lambda \\cong \\mathbb {Z}^2$ or $\\Lambda $ acts geometrically on $\\mathbb {H}^2$ , then $\\operatorname{cd}(\\Lambda )=\\operatorname{cd}(\\Gamma )$ , so Theorem REF implies $\\Lambda $ is a finite index subgroup of $\\Gamma $ .", "As $\\Lambda $ acts freely and cocompactly on either $\\mathbb {E}^2$ or 
$\\mathbb {H}^2$ , Proposition REF implies the torsion-free group $\\Gamma $ acts freely and cocompactly on either $\\mathbb {E}^2$ or $\\mathbb {H}^2$ , hence is a surface group and we are done.", "Now suppose $\\Lambda \\cong \\mathbb {Z}$ .", "Theorem REF ensures $\\Gamma $ splits as a non-trivial graph of groups in which all vertex and edge groups are commensurable to $\\Lambda $ .", "Let $T$ be the associated Bass–Serre tree.", "If the tree $T$ is infinite-ended, then $\\Gamma $ is a generalised Baumslag–Solitar group of rank one and we are done.", "If $T$ is two-ended, then $\\Gamma $ surjects onto $\\mathbb {Z}$ or $D_\\infty $ with kernel $K$ commensurable to $\\Lambda $ .", "As $\\Gamma $ is torsion-free, $K\\cong \\mathbb {Z}$ .", "Thus $\\Gamma $ is virtually $\\mathbb {Z}^2$ and so Proposition REF implies $\\Gamma $ acts geometrically on $\\mathbb {E}^2$ and hence is a surface group.", "$\\Leftarrow $ : If $\\Gamma $ acts geometrically on $\\mathbb {E}^2$ or $\\mathbb {H}^2$ , it has a model geometry not dominated by a locally finite graph.", "If $\\Gamma $ is a generalised Baumslag–Solitar group, it contains a commensurated infinite cyclic subgroup, so Theorem REF implies $\\Gamma $ has a model geometry not dominated by a locally finite graph.", "The following lemma describes the structure of certain Schlichting completions, and will arise in the proof of Theorem REF .", "Lemma 6.8 Let $\\Gamma $ be a finitely generated group that splits as a finite graph of groups with all vertex and edge groups commensurable to $\\Lambda $ .", "Then $\\Lambda $ is commensurated.", "Moreover, if $Q$ is the Schlichting completion of $\\Gamma $ with respect to $\\Lambda $ , then $Q$ acts geometrically on a locally finite tree quasi-isometric to the Bass–Serre tree of $\\Gamma $ .", "Let $T$ be the Bass–Serre tree associated to the given graph of groups $\\mathcal {G}$ of $\\Gamma $ , noting that the action of $\\Gamma $ on $T$ is cocompact.", "Since all vertex and edge groups of $\\mathcal {G}$ are commensurable to $\\Lambda $ , the tree $T$ is locally finite, hence every vertex and edge stabiliser of $T$ is commensurable to $\\Lambda $ .", "Thus $\\Lambda $ is commensurated.", "Fix a finite symmetric generating set $S$ of $\\Gamma $ .", "Recall, as in [33], that the quotient space $\\Gamma /\\Lambda $ is the set of left $\\Lambda $ -cosets equipped with the relative word metric.", "By [33], we see that as $\\Lambda $ is commensurable to a vertex stabiliser of $T$ , the quotient space $\\Gamma /\\Lambda $ is quasi-isometric to $T$ .", "As there is a bijective correspondence between $\\Lambda $ -cosets and cosets of a compact open subgroup of $Q$ containing $\\Lambda $ as a dense subgroup, the quotient space $\\Gamma /\\Lambda $ coincides precisely with the Cayley–Abels graph of the Schlichting completion $Q$ .", "Therefore, $Q$ is quasi-isometric to $T$ .", "A result of Cornulier, synthesising earlier work of Stallings, Abels and Mosher–Sageev–Whyte, shows $Q$ either acts geometrically on a locally finite tree, or admits a transitive geometric action on the real line [12].", "Since $Q$ is a Schlichting completion, hence totally disconnected, the latter cannot occur.", "$\\Rightarrow $ : Suppose $\\Gamma $ has a model geometry that is not dominated by a locally finite vertex-transitive graph.", "Then Theorem REF implies $\\Gamma $ contains an infinite commensurated subgroup $\\Lambda $ that is one of the groups in the statement of Corollary REF .", "In the case where
$\\operatorname{cd}(\\Lambda )=1$ , $\\Lambda \\cong \\mathbb {Z}$ and we are done.", "We thus suppose $\\operatorname{cd}(\\Lambda )>1$ .", "Then $2\\le \\operatorname{cd}(\\Lambda )\\le \\operatorname{cd}(\\Gamma )=3$ .", "Since $\\Lambda $ is a Poincaré duality group, Proposition REF ensures $\\Lambda $ is of type FP.", "Suppose $\\operatorname{cd}(\\Lambda )=3$ .", "Theorem REF ensures $\\Lambda $ is a finite index subgroup of $\\Gamma $ .", "Moreover, Corollary REF says either $\\Lambda \\cong \\mathbb {Z}^3$ hence acts geometrically on $\\mathbb {E}^3$ , or $\\Lambda $ acts geometrically on $\\mathbb {H}^3$ .", "Proposition REF implies $\\Gamma $ acts geometrically on $\\mathbb {E}^3$ or $\\mathbb {H}^3$ as required.", "Now suppose $\\operatorname{cd}(\\Lambda )=2$ .", "By Corollary REF , $\\Lambda $ is either isomorphic to $\\mathbb {Z}^2$ or acts geometrically on $\\mathbb {H}^2$ .", "Moreover, in the case $\\Lambda $ acts geometrically on $\\mathbb {H}^2$ , it is uniformly commensurated.", "Theorem REF says $\\Gamma $ splits as a non-trivial finite graph of groups in which all vertex and edge groups are commensurable to $\\Lambda $ .", "Let $T$ be the associated Bass–Serre tree.", "The action of $\\Gamma $ on $T$ is cocompact and without loss of generality, may assumed to be minimal.", "There are four cases to consider, depending on whether $\\Lambda $ is isomorphic to $\\mathbb {Z}^2$ or acts geometrically on $\\mathbb {H}^2$ , and on whether $T$ is two-ended or infinite-ended.", "In the case $\\Lambda \\cong \\mathbb {Z}^2$ and $T$ is infinite-ended, $\\Gamma $ is a generalised Baumslag–Solitar group of rank two and we are done.", "In the case $T$ is two-ended, $\\Gamma $ surjects onto $\\mathbb {Z}$ or $D_\\infty $ with kernel commensurable to $\\Lambda $ hence virtually $\\mathbb {Z}^2$ .", "It follows that $\\Gamma $ has a subnormal series with all factors infinite cyclic or finite.", "Thus $\\Gamma $ is virtually polycyclic, hence some finite index subgroup $\\Gamma ^{\\prime }\\le \\Gamma $ is a lattice in a simply-connected solvable Lie group $G$ [41].", "If $K\\le G$ is a maximal compact subgroup, the associated homogeneous space $G/K$ can be endowed with a left-invariant metric making it a three-dimensional model geometry.", "This model geometry must be one of $\\mathbb {E}^3$ , $\\operatorname{Nil}$ or $\\operatorname{Sol}$ ; see Theorem REF and [45].", "Since $\\Gamma ^{\\prime }$ acts geometrically on one of $\\mathbb {E}^3$ , $\\operatorname{Nil}$ or $\\operatorname{Sol}$ , Proposition REF ensures $\\Gamma $ does also.", "If $\\Gamma $ acts geometrically on $\\mathbb {E}^3$ or $\\operatorname{Sol}$ we are done.", "If $\\Gamma $ acts geometrically on $\\operatorname{Nil}$ , then $\\Gamma $ contains a normal infinite cyclic subgroup and we are also done.", "We now consider the case $\\Lambda $ acts geometrically on $\\mathbb {H}^2$ .", "Since $\\Lambda $ is uniformly commensurated, Proposition REF ensures it has a semisimple Lie engulfing group, which may be assumed to be $\\operatorname{Isom}(\\mathbb {H}^2)$ .", "Theorem REF implies there is a uniform lattice embedding $\\gamma :\\Gamma \\rightarrow G$ , where $G$ is a locally compact group that fits into a short exact sequence $1\\rightarrow \\operatorname{Isom}(\\mathbb {H}^2)\\rightarrow G\\rightarrow Q\\rightarrow 1,$ with $Q$ the Schlichting completion of $(\\Gamma ,\\Lambda _0)$ for some finite index subgroup $\\Lambda _0\\le \\Lambda $ .", "By observing that $\\operatorname{Isom}(\\mathbb 
{H}^2)=\\operatorname{Aut}(\\operatorname{Isom}(\\mathbb {H}^2)^\\circ )$ , we use Proposition REF to deduce there is a copci homomorphism $G\\rightarrow \\operatorname{Isom}(\\mathbb {H}^2)\\times Q$ .", "By Lemma REF , there is a copci map $\\psi :Q\\rightarrow \\operatorname{Aut}(T^{\\prime })$ for some locally finite tree $T^{\\prime }$ quasi-isometric to $T$ .", "Thus there is a uniform lattice embedding $\\Gamma \\rightarrow \\operatorname{Isom}(\\mathbb {H}^2)\\times \\operatorname{Aut}(T^{\\prime })$ , hence $\\Gamma $ acts geometrically on $\\mathbb {H}^2\\times T^{\\prime }$ .", "In the case $T^{\\prime }$ is infinite-ended, we are done.", "In the case $T^{\\prime }$ is two-ended, $\\Gamma $ surjects onto $\\mathbb {Z}$ or $D_\\infty $ with kernel commensurable to $\\Lambda $ .", "Corollary REF implies $\\Gamma $ contains a two-ended normal subgroup, hence an infinite cyclic normal subgroup as required.", "$\\Leftarrow $ : If $\\Gamma $ acts geometrically on $\\mathbb {E}^3$ , $\\mathbb {H}^3$ , $\\operatorname{Sol}$ or $\\mathbb {H}^2\\times T$ , it clearly has a model geometry not dominated by a locally finite graph.", "In the remaining cases, $\\Gamma $ contains a commensurated subgroup isomorphic to $\\mathbb {Z}$ or $\\mathbb {Z}^2$ , so Theorem REF implies $\\Gamma $ has a model geometry not dominated by a locally finite graph." ] ]
2207.10509
[ [ "The effect of nutation angle on the flow inside a precessing cylinder\n and its dynamo action" ], [ "Abstract The effect of the nutation angle on the flow inside a precessing cylinder is experimentally explored and compared with numerical simulations.", "The focus is laid on the typical breakdown of the directly forced m=1 Kelvin mode for increasing precession ratio (Poincar\\'e number), and the accompanying transition between a laminar and turbulent flow.", "Compared to the reference case with a 90{\\deg} nutation angle, prograde rotation leads to an earlier breakdown, while in the retrograde case the forced mode continues to exist also for higher Poincar\\'e numbers.", "Depending largely on the occurrence and intensity of an axisymmetric double-roll mode, a kinematic dynamo study reveals a sensitive dependency of the self-excitation condition on the nutation angle and the Poincar\\'e number.", "Optimal dynamo conditions are found for 90{\\deg} angle which, however, might shift to slightly retrograde precession for higher Reynolds numbers." ], [ "Introduction", "Precession-induced fluid motion is essential in various phenomena and applications, including fuel payloads of rotating spacecrafts, atmospheric vortices like tornadoes and hurricanes, and the flow in the Earth's liquid outer core.", "The suggestion that precessional forcing can act as a potential power source for Earth's magnetic field [1] had initiated a debate on the non-trivial issue of the energy budget required for the geodynamo.", "The early argument that a precessing laminar flow cannot convert enough energy to maintain Earth's magnetic field [2], [3] changes completely for turbulent flows which can dissipate more energy, thereby making it possible to sustain the geomagnetic field [4].", "Meanwhile precession is also believed to be responsible for the dynamos of the ancient moon [5], [6] and the asteroid Vesta [7], and several numerical studies have evidenced that magnetic fields indeed can be generated via precession-driven flows [8], [9], [10].", "In the meantime, a variety of experiments focusing on precession-driven flows were conducted in numerous laboratories [11], [12], [13], [14], [15], [16], [17], [18].", "The closest to a hydromagnetic dynamo was that of Gans[19] who performed a precession-driven liquid metal experiment in 1971, and achieved a magnetic field amplification by a factor of 3.", "This experiment motivated the development of a large-scale precession dynamo experiment, which is presently under construction at Helmholtz-Zentrum Dresden-Rossendorf (HZDR) within the framework of the DRESDYN project [20].", "After the successes of the pioneering liquid sodium experiments in Riga [21], Karlsruhe[22], and Cadarache [23], the DRESDYN experiment aims at achieving truly homogeneous dynamo action without the use of any propellers, pumps, guiding tubes or magnetic materials.", "Prior studies had indeed demonstrated that precession can act as a efficient mechanism for driving an intense flow in a homogeneous fluid [24].", "The DRESDYN precession experiment consists of a cylinder with a radius of R = 1 m and a height of H = 2 m. 
The cylinder rotates around its symmetry axis at the cylinder frequency $f_c \\le $ 10 Hz and precesses around another axis at the precession frequency $f_p \\le $ 1 Hz.", "In this experiment, liquid sodium will be used as a working fluid to accomplish dynamo action [20], [25].", "To understand the dynamics of the flow for the large-scale experiment, a 1:6 down-scaled water test experiment with the same aspect ratio and rotation rates has been in operation at HZDR for many years [18], [26], [27].", "Precession produces complex three-dimensional flow structures as a consequence of the interaction between inertial modes, boundary layers, and the directly driven base flow [25], [18], [28].", "In cylindrical geometry, the inertial modes interact to form global modes known as Kelvin modes [29].", "Each mode has its own eigen-frequency, which is determined by the radial ($n$ ), axial ($k$ ), and azimuthal ($m$ ) wave numbers.", "The Reynolds number $Re$ , the precession ratio or Poincare number ($Po=\\Omega _p/\\Omega _c$ ), the geometric aspect ratio ($\\Gamma =$ H/R), and the nutation angle $ \\pm \\alpha $ (angle between the axis of precession and the axis of rotation) determine the flow inside the precessing cylinder.", "Here, $+\\alpha $ represents prograde precession and $-\\alpha $ represents retrograde precession (prograde/retrograde precession occurs when the projection of the turntable rotation on the cylinder rotation is positive/negative).", "The present study examines the influence of different nutation angles on the dominant flow modes inside the precessing cylinder for these rotation configurations.", "For this purpose we conduct water experiments and compare their results with numerical simulations[28].", "Focusing on the inertial modes with the largest energy fractions we use direct measurements of the axial velocity with ultrasound Doppler velocimetry (UDV) and simulation data from a three dimensional numerical model.", "Finally, we utilize the obtained flow fields in a kinematic dynamo model, in order to identify the most promising parameter range for dynamo action in the upcoming DRESDYN experiment.", "In this kinematic dynamo model the flow is prescribed using the full information obtained from numerical simulations of the hydrodynamic problem, whereas at this stage any feedback of the field on the flow via the Lorentz force is still ignored.", "The paper is structured as follows: In section 2, we briefly describe the setup and the employed measurement procedure for the down-scaled water experiment.", "The theory and numerics is described in section 3, including both the hydrodynamic and the kinematic dynamo problem.", "Our findings from hydrodynamic experiments and simulations are described and compared in section 4.", "The results on dynamo action for different configurations are then discussed in section 5.", "In the final section, the results are summarized and some prospects for future work and the large-scale liquid sodium experiment are discussed.", "Figure 1(a) gives a schematic view of the 1:6 down-scaled water precession experiment.", "The experiment comprises a water-filled acrylic cylindrical vessel with radius $R = 163$   mm and height $H = 326$   mm.", "The vessel is connected with an asynchronous 3 kW motor via a transmission chain, which allows to adjust the rotation rate of the cylinder.", "The rotational frequency of the cylinder $f_c$ can reach a maximum of 10 Hz.", "The cylinder's end caps are joined axially by eight rods to keep their alignment in 
parallel, as shown in Fig.", "1(a).", "This entire system is mounted on a turntable powered by a second 2.2 kW asynchronous motor, which can rotate up to a frequency $f_p$ of 1 Hz.", "Both rotation rates, i.e.", "$f_c$ and $f_p$ , are continuously measured by two tachometers and recorded by a data acquisition system.", "The nutation angle $\\alpha $ formed by the cylinder rotation axis and the turntable rotation axis can be varied between 60$^{\\circ }$ and 90$^{\\circ }$ .", "The present study conducts experiments for the three different nutation angles of $\\alpha $ = 60$^{\\circ }$ , 75$^{\\circ }$ and 90$^{\\circ }$ .", "Figure: (a) Schematic setup of the 1:6 down-scaled water precession experiment;(b) Sketch of the precessing cylinder with the container Ω c =2πf c \\Omega _{c}=2 \\pi f_c and the precession angular velocity Ω p =2πf p \\Omega _{p}=2 \\pi f_p .", "Here α\\alpha is the nutation angle., ." ], [ "Measurement procedure", "To determine the flow field inside the cylinder, ultrasonic transducers (TR0408SS, Signal Processing SA, Lausanne) are placed at one end cap of the cylinder (Fig.", "1(a)).", "These transducers are connected with an ultrasound Doppler velocimeter (UDV) (Dop 3010, Signal Processing SA, Lausanne), which records the velocity profiles with a temporal resolution of about 10 Hz.", "Each transducer emits an ultrasonic pulse and receives echoes reflected from particles in the path of the ultrasonic beam on a regular basis.", "The velocimeter then infers the axial flow velocity in front of the sensor from the Doppler shift of the recorded echoes.", "The tracer particles used in the water inside the cylindrical vessel consist of a mixture of Griltex 2A P1 particles with sizes of 50 $\\mu $ m (60 % by weight) and 80 $\\mu $ m (40 % by weight).", "These tracer particles have a density of 1.02 g/cm$^3$ .", "Before commencing the experimental run, the water is vigorously mixed at a high rotation rate to ensure that the tracer particles are distributed evenly throughout the cylindrical vessel.", "The vessel is then set into rotation at $f_c$ until the fluid co-rotates with the vessel, which is indicated by a vanishing velocity on the UDV channel.", "We then set the turntable in motion at $f_p$ and wait until a statistically steady state is attained.", "The velocity profile is then recorded for approximately 52 rotations of the cylinder.", "Finally, we increase the precession rate to the next value of $f_p$ and wait until the fluid motion at the increased precession rate has reached a steady state.", "Initially, the measurement started at a value of $Po = 0.035$ (i.e.", "$f_p = 0.002$   Hz), which was then gradually increased in steps up to $Po = 0.198$ (i.e.", "$f_p = 0.011$  Hz).", "The flow measurements were conducted with the ultrasonic transducer at radius $r = 150$  mm and at a constant rotation rate of the cylinder $f_c = 0.058$   Hz, corresponding to $Re \\approx 10^4$ .", "These experiments were carried out with two different rotation directions, i.e.", "prograde and retrograde precession.", "During the experiments, the room temperature was kept as constant as possible using air-conditioning, thereby eliminating the temperature-dependent viscosity as a possible influence on the flow." 
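As a quick consistency check of the quoted Reynolds number, the short sketch below evaluates $Re$ from the parameters given in this section. The kinematic viscosity of water is not quoted in the text; the value used here, about 1.0e-6 m^2/s near room temperature, is an assumption, and the small precession correction to the effective rotation rate (introduced in the next section) is neglected.

```python
import math

# Parameters quoted in this section (1:6 down-scaled water experiment)
R = 0.163        # cylinder radius in m
f_c = 0.058      # cylinder rotation frequency in Hz
nu = 1.0e-6      # kinematic viscosity of water in m^2/s (assumed value)

Omega_c = 2.0 * math.pi * f_c
Re = R**2 * Omega_c / nu
print(f"Re ~ {Re:.1e}")   # ~1e4, consistent with the value quoted above
```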
], [ "Hydrodynamics", "We consider an incompressible fluid of kinematic viscosity $\\nu $ enclosed in a cylinder of radius $R$ and height $H$ .", "The container rotates and precesses with angular velocities ${\\Omega _{c}}$ and ${\\Omega _{p}}$ ($-{\\Omega _{p}}$ ) for prograde (retrograde) motion, with $\\alpha $ denoting the nutation angle, as illustrated in Fig.", "REF (b).", "Another option is considering $\\alpha $ to run between $0^{\\circ }$ and $180^{\\circ }$ , however we use here the range between $0^{\\circ }$ and $90^{\\circ }$ , and distinguish between pro- and retrograde precession.", "The fluid motion inside the precessing cylinder is governed by the Navier-Stokes equation $\\frac{\\partial {u}}{\\partial t} + {u} \\cdot {\\nabla u} = - {\\nabla }P + \\frac{1}{Re} {\\nabla }^{2}{u} - 2 {\\Omega } \\times {u} + \\frac{d{\\Omega }}{dt} \\times {r} \\:,$ together with the incompressibility condition ${\\nabla } \\cdot {u} = 0$ .", "Here ${u}$ is the velocity flow field, ${\\Omega }={\\Omega _{c}}+{\\Omega _{p}}$ is the total rotation vector and ${r}$ is the position vector with respect to the center of the cylinder.", "$P$ is the reduced pressure which comprises the hydrostatic pressure and other gradient terms, e.g.", "the centrifugal force, that do not change the dynamical behavior of the flow.", "The last two terms on the right-hand side are the Coriolis and the Poincaré forces, respectively.", "The cylindrical coordinates are the axial ($z$ ), radial ($r$ ) and azimuthal ($\\varphi $ ) ones, respectively.", "Equation (REF ) is complemented by no-slip boundary conditions at the walls.", "More specifically the axial $u_{z}$ and azimuthal $u_{\\varphi }$ velocities vanish on the sidewall while on the endwalls the radial $u_{r}$ and azimuthal velocities are zero.", "In order to non-dimensionalize the Navier-Stokes equation we use the radius $R$ as length scale and $\\left| \\Omega _{c}+\\Omega _{p}\\cos \\alpha \\right|^{-1}$ as time scale.", "The latter choice relies on using the projection of total angular velocity on the cylinder axis, i.e $({\\Omega _{c}}+{\\Omega _{p}}) \\cdot {\\widehat{z}}$ .", "The key dimensionless parameters governing precession-driven flows, the Reynolds number $Re$ , the Poincaré number $Po$ and the aspect ratio of the container $\\Gamma $ , are defined as $Re = \\frac{R^{2}\\left| \\Omega _{c}+\\Omega _{p}\\cos \\alpha \\right|}{\\nu }, \\quad Po=\\frac{\\Omega _{p}}{\\Omega _{c}}, \\quad \\Gamma =\\frac{\\textrm {H}}{\\textrm {R}} \\: .$" ], [ "Magnetohydrodynamics", "The governing equation for the spatio-temporal evolution of the magnetic field ${B}$ is the (dimensionless) induction equation: $\\frac{\\partial {B}}{\\partial t} = {\\nabla } \\times \\left( \\langle {u} \\rangle \\times {B} - \\frac{\\nabla \\times {B}}{Rm} \\right) \\: .$ For the time-averaged velocity field $\\langle {u} \\rangle $ considered constant, the solution of the linear evolution equation has the form ${B}={B}_{0} \\exp \\left( \\sigma \\: t\\right)$ with $\\sigma $ representing the eigenvalue.", "In Eq.", "(3) $Rm$ is the magnetic Reynolds number, defined as: $Rm=\\frac{R^{2}\\left| \\Omega _{c}+\\Omega _{p}\\cos \\alpha \\right|}{\\eta },$ where $\\eta $ is the magnetic diffusivity of the liquid metal.", "Figure: From left to right: velocity contours over time and depth for increasing Po (appr.", "between 0.049 and 0.16).", "First row (a1-a4) for α=60 ∘ \\alpha = 60^{\\circ } (prograde); Second row (b1-b4) for α=75 ∘ \\alpha = 75^{\\circ } (prograde); Third row 
(c1-c4) for α=90 ∘ \\alpha = 90^{\\circ }; Fourth row (d1-d4) for α=75 ∘ \\alpha = 75^{\\circ } (retrograde); Fifth row (e1-e4) for α=60 ∘ \\alpha = 60^{\\circ } (retrograde) at Re≈10 4 Re \\approx 10^4." ], [ "Numerical methods", "Precession-driven flows in cylinders can be simulated in one of the following two frames of reference: ($i$ ) the mantle frame (attached to the cylinder wall) or ($ii$ ) the turntable frame in which the cylinder walls rotate at $\\Omega _{c}$ and the total vector $\\Omega $ is fixed.", "We select the former for which $\\partial \\Omega /\\partial t = 0$ so that the Poincare force disappears and both the rotation vector $\\Omega _{c}$ as well the precession vector $\\Omega _{p}$ become stationary.", "While for the hydrodynamic problem we use a spectral element-Fourier code [31] (meshing the domain 300 quadrilateral elements to mesh the meridional half plane and 128 Fourier modes in azimuthal direction) with no slip boundary condition, we solve the induction equation through a finite volume scheme with constraint transport in order to ensure $\\nabla \\cdot {B}=0$ .", "For the dynamo simulations we apply pseudo-vacuum boundary condition for the magnetic field (only tangential components vanish at the wall).", "The simulation protocol is the following: we start at $t=0$ with a pure solid body rotation motion i.e ${u} = (\\Omega _{c} \\: r) \\widehat{{\\varphi }}$ imposing a certain forcing.", "Once the statistically steady regime is achieved by the hydrodynamic flow field we average in time and we put this flow structure in the induction equation Eq.", "(REF ).", "The kinematic dynamo simulations are run till a diffusion time $t_{b} \\ge Rm$ .", "The phase space investigated for the kinematic dynamo problem ranges between $Re \\in [1000,10000] $ and the Poincaré number in the range $\\pm [0.010 , 0.20]$ .", "The nutation angle is in between $\\alpha \\in [60^{\\circ }, 75^{\\circ }]$ , both for prograde and retrograde precession.", "The aspect ratio will be fixed at $\\Gamma = 2$ , quite close to the resonance point $\\Gamma = 1.989$ of the first inertial mode.", "The collection of all simulations in the parameter space $(Re, Po)$ will be later shown in Figs.", "REF and REF ." 
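For orientation, the sketch below evaluates the magnetic Reynolds number of Eq. (4) for the large-scale sodium experiment described in the introduction (R = 1 m, cylinder frequency up to 10 Hz, nutation angle 90 degrees). The magnetic diffusivity of liquid sodium is not given in the text; the value of about 0.09 m^2/s assumed here corresponds to sodium at roughly 130 degrees Celsius and is only an estimate. The result is of the order of the Rm of 700 quoted in the conclusions.

```python
import math

# Large-scale experiment parameters from the introduction
R = 1.0                      # cylinder radius in m
f_c, f_p = 10.0, 1.0         # maximum rotation and precession frequencies in Hz
alpha = math.radians(90.0)   # nutation angle (cos(90 deg) = 0)
eta = 0.09                   # magnetic diffusivity of liquid sodium in m^2/s (assumed)

omega_eff = abs(2.0 * math.pi * f_c + 2.0 * math.pi * f_p * math.cos(alpha))
Rm = R**2 * omega_eff / eta
print(f"Rm ~ {Rm:.0f}")      # ~700 for this choice of eta
```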
], [ "Typical flow patterns", "In this subsection, we present the results of the precession water experiment.", "The contour plots of the flow structures obtained during the experiment are shown in Fig.", "REF .", "These plots illustrate the evolution of the axial velocity profile $u_z$ over time $t$ and depth $z$ (the depth indicates the distance along the transducer's axis from the transducer).", "The first to fifth rows show the results for $\\alpha $ = 60$^{\\circ }$ , 75$^{\\circ }$ and 90$^{\\circ }$ respectively, for both prograde and retrograde precession.", "The dominating oscillatory pattern of the velocity profile, defined by the rotational frequency $\\Omega _c$ of the cylinder, represents the standing inertial mode with $(m,k)=(1,1)$ as recorded by the rotating UDV sensor mounted on the vessel wall.", "As we increase $f_p$ , the response of the fluid begins to change.", "At lower values of $Po$ (see Fig.", "REF , first column), the flow pattern exhibits a stable flow structure and is almost vertical in both cases (prograde and retrograde).", "However, as $Po$ exceeds a certain higher value for $\\alpha $ = 60$^{\\circ }$ (prograde), $\\alpha $ = 75$^{\\circ }$ (prograde) and 90$^{\\circ }$ , as shown in the third column of Fig.", "REF (a3, b3 and c3), the flow pattern changes and exhibits a significant tilt with respect to the vertical axis.", "By contrast, in the retrograde case (see Fig.", "REF (d3) and (e3)), it remains almost unchanged up to this $Po$ value, and the occurrence of the tilt is shifted to a higher value of $Po$ .", "The tilt indicates a flow state transition at a critical value of the precession ratio ($Po^c$ ), implying the presence of other inertial modes [25].", "In addition, the transition that occurred at the critical $Po$ has a considerable effect on the amplitudes of the inertial modes, as we will demonstrate in the following." 
], [ "Quantitative results and comparison with numerics", "In a more quantitative analysis, the mode amplitudes are calculated by decomposing the measured axial velocity field in the axial and azimuthal directions (for a comprehensive overview of the calculation of the amplitudes of inertial modes, we refer the reader to a publication [25]).", "While, in principle, various modes can be observed inside the precessing cylinder, in our analysis we examined only those modes that have substantial amplitudes and are most relevant for dynamo action[27], i.e., $(m, k) = (1, 1)$ and $(m, k) = (0, 2)$ .", "Figure REF shows the amplitudes of the prominent modes versus the precession ratio for both cases (prograde and retrograde) at 60$^{\\circ }$ , 75$^{\\circ }$ , and 90$^{\\circ }$ , respectively.", "In order to compare with the experimental data, the simulation results at $Re = 6500$ were linearly extrapolated to the experimental value $Re = 10^4$ .", "For prograde 60$^{\\circ }$ , 75$^{\\circ }$ and 90$^{\\circ }$ (see Fig.", "REF (a), REF (b) and REF (c)), we observe that the directly forced mode $(m, k) = (1, 1)$ rises up to $Po \\approx 0.08$ , beyond which there is an abrupt transition of the flow state due to the breakdown of the $(m, k) = (1, 1)$ mode.", "Simultaneously, an axially symmetric mode $(m, k) = (0, 2)$ appears in a narrow range of $Po$ , which corresponds to the double roll structure that was previously shown to be most relevant for dynamo action [25].", "Remarkably, the nutation angle influences the critical $Po$ , such that as the angle increases (60$^{\\circ }$ , 75$^{\\circ }$ and 90$^{\\circ }$ ), so does the critical $Po$ (0.083, 0.087 and 0.01).", "In other words, for smaller nutation angles the transition sets in earlier.", "In contrast, the data for the retrograde $60^{\\circ }$ and $75^{\\circ }$ cases show no clear breakdown of the directly forced mode $(m, k) = (1, 1)$ which has a gradual decrease in amplitude.", "At the same time, we observe a smoother increase of the axially symmetric mode $(m, k) = (0, 2)$ within the considered range of $Po$ , as shown in Figs.", "REF (d) and REF (e).", "In comparison to all other cases, $\\alpha = 75^{\\circ }$ (retrograde) acquires the largest amplitude of the $(m, k) = (0, 2)$ mode, and $\\alpha = 60^{\\circ }$ (retrograde) has the smallest.", "As compared to the prograde case the critical Po values are shifted to larger values for retrograde case.", "The slight offsets of the experimental data with respect to the numerical data along the x-axis is probably due to the difference in Re: as Re increases, its critical $Po$ decreases slightly[28].", "In general, however, the experimental values at $\\alpha = 60^{\\circ }, 75^{\\circ }$ for two different configurations (prograde and retrograde) and at $90^{\\circ }$ are in good agreement with the results of the numerical simulations.", "Figure: Comparison of amplitudes of the directly forced mode (m,k)=(1,1)(m, k) = (1, 1) and the axisymmetric mode (m,k)=(0,2)(m, k) = (0, 2) between numerically calculated (red) and experimentally measured flow (blue) for a nutation angle of 60 ∘ ^{\\circ }, 75 ∘ ^{\\circ } and 90 ∘ ^{\\circ }." 
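The amplitudes compared above are obtained by decomposing the measured axial velocity field; the snippet below is only a minimal illustration of such a decomposition for a UDV record u_z(t, z) taken in the co-rotating frame. The sin(k*pi*z/H) axial profiles, the lock-in at the cylinder frequency for the (m,k)=(1,1) mode and the plain time average for the (0,2) mode are simplifying assumptions made here for illustration; the actual procedure, including normalisation and the radial structure of the Kelvin modes, follows Ref. [25].

```python
import numpy as np

def mode_amplitudes(u_z, z, t, f_c):
    """Illustrative amplitudes of the (1,1) and (0,2) modes from UDV data
    u_z[time, depth] recorded along the axis in the co-rotating frame."""
    H = z[-1] - z[0]
    # project each profile onto simple axial shapes sin(k*pi*z/H), k = 1, 2
    a1 = 2.0 / H * np.trapz(u_z * np.sin(np.pi * (z - z[0]) / H), z, axis=1)
    a2 = 2.0 / H * np.trapz(u_z * np.sin(2.0 * np.pi * (z - z[0]) / H), z, axis=1)
    # the directly forced (1,1) mode oscillates at f_c in this frame: lock-in
    A11 = 2.0 * np.abs(np.mean(a1 * np.exp(-2j * np.pi * f_c * t)))
    # the axisymmetric (0,2) double roll is quasi-steady: time average
    A02 = np.abs(np.mean(a2))
    return A11, A02
```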
], [ "Dynamo results", "In this section we present the results of the kinematic dynamo code applied to the flow fields as obtained in the previous section.", "The analysis will focus on two main points: $(i)$ the influence of the nutation angle $\\alpha $ on the ability to drive dynamo action; $(ii)$ the impact of the Reynolds number for a fixed angle $\\alpha =90^{\\circ }$ .", "Figure: Regime diagram of the kinematic dynamo simulations in the $(Po,\\alpha )$ parameter space with fixed $Re=6500$ .", "Prograde (retrograde) cases correspond to the positive (negative) region $Po>0$ ($Po<0$ ).", "Red symbols indicate dynamo action; black symbols show missing dynamo action and the blue diamonds show the strongest dynamo for each angle.", "Figure: Growth rate $\\gamma $ of the magnetic energy as a function of the magnetic Reynolds number for five nutation angles.", "Various curves represent the different precession ratios and the arrows mark the dynamo onset at the critical $Rm$ ." ], [ "The role of the nutation angle", "Figure REF shows the regime diagram in the $(\\alpha , Po)$ space at fixed $Re=6500$ .", "We find dynamo action (red symbols) for all the angles except for $\\alpha =60^{\\circ }$ prograde.", "The range of the precession ratio where dynamo action occurs changes with the nutation angle: for prograde cases dynamos occur at $Po \\approx 0.1$ while for retrograde they appear at $Po > 0.15$ with a more extended range.", "For each angle, the blue diamonds indicate the dynamo with the largest growth rate.", "We plot the growth rate $\\gamma = 2\\: \\Re \\left( \\sigma \\right)$ of the magnetic field in Fig.", "REF .", "As already highlighted, the $\\alpha =60^{\\circ }$ prograde case shows no positive growth rate even at the largest magnetic Reynolds number considered here.", "The lowest critical magnetic Reynolds number occurs for $\\alpha =90^{\\circ }$ which, therefore, turns out to be the most promising case for the later dynamo experiment.", "Also the magnetic field structure, in this case the azimuthal component $B_{\\varphi }$ , depends on $\\alpha $ (Fig.", "REF ).", "The three snapshots are taken between $t=300$ and $t=380$ .", "Both cases present contours elongated along the axis and the final field shows a change in sign during the evolution.", "This feature could be either a rotation or reversal in the sense of `active longitudes' that can be found in some astrophysical objects.", "Figure: Three snapshots of the azimuthal magnetic field $B_{\\varphi }$ at $Rm=700$ : top row $\\alpha =90^{\\circ }$ and $Po=0.105$ ; bottom row: $\\alpha =75^{\\circ }$ retrograde and $Po=0.175$ .", "Blue colour denotes negative values and red colour positive with the levels of translucency denoting $30\\%$ , $50\\%$ , $70\\%$ of the field."
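Because the kinematic induction equation is linear, the magnetic energy eventually grows or decays exponentially, and the growth rates discussed above can be read off from the late-time slope of its logarithm. The sketch below does exactly that; the variable names and the choice of fitting window are illustrative only.

```python
import numpy as np

def growth_rate(t, E_B, fit_fraction=0.5):
    """Estimate gamma = d ln(E_B)/dt from the late-time part of a magnetic-energy
    time series; gamma > 0 signals kinematic dynamo action."""
    n0 = int(len(t) * (1.0 - fit_fraction))      # keep only the last part of the record
    slope, _ = np.polyfit(t[n0:], np.log(E_B[n0:]), 1)
    return slope
```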
], [ "The role of Reynolds number", "In this subsection we fix the nutation angle $\\alpha =90^{\\circ }$ to investigate the impact of the hydrodynamic Reynolds number on the flow regime and the dynamo action.", "We select this angle since presently it appears the best angle for dynamo action, with the lowest critical magnetic Reynolds number.", "We start by showing the regime diagram in the $(Re,Po)$ space where the meaning of the symbols is consistent with that of Fig.", "REF : black squares denote no dynamo action, red triangles indicate dynamo action, and blue diamonds signify the strongest dynamo action.", "The blue curve is a fit marking the scaling $Po^{c} \\approx Re^{-1/4}$ .", "Notice that for $Re < 3500$ we observe dynamos also significantly above the threshold curve; by contrast, for larger $Re$ the dynamo action is restricted to a quite narrow range.", "In the next step we select the best precession ratio for every $Re$ (the blue diamonds) and show the growth rate $\\gamma $ as a function of $Rm$ in Fig.", "REF (a).", "The slopes of the curves seem to converge for the highest Reynolds number considered here.", "Collecting the points where the lines cross $\\gamma =0$ , we plot the critical magnetic Reynolds number in Fig.", "REF (b).", "The trend is not monotonic, showing a flat maximum in the range $4000<Re<8000$ .", "The smallest critical magnetic Reynolds number is found for $Re=2000$ .", "This might be the case since at small Reynolds numbers the flow tends to remain well organized in large scale structures rather than become turbulent with the presence of small scales.", "Figure: $(Po, Re)$ parameter space with fixed $\\alpha =90^{\\circ }$ .", "Black symbols represent no dynamo effect and red symbols dynamo action found in the range of $0 < Rm \\le 10^{3}$ .", "Blue diamonds highlight the best dynamo action for each $Re$ and the corresponding blue line is the scaling law $Po^{(c)} \\sim Re^{-1/4}$ .", "Figure: Analysis for $\\alpha =90^{\\circ }$ .", "(a) Plot of the growth rate of magnetic energy for different Reynolds numbers taken at the best $Po$ ; (b) Critical magnetic Reynolds number in terms of dynamo action."
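The critical magnetic Reynolds numbers reported above are the zero crossings of the growth-rate curves gamma(Rm). A minimal way to extract them from a handful of sampled points is linear interpolation between the last negative and the first positive growth rate, as sketched below; the variable names are hypothetical and the curve is assumed to be sampled on both sides of the onset.

```python
import numpy as np

def critical_Rm(Rm_values, gamma_values):
    """Dynamo onset from sampled (Rm, gamma) points by linear interpolation
    at the first sign change of gamma (illustrative sketch)."""
    Rm = np.asarray(Rm_values, dtype=float)
    g = np.asarray(gamma_values, dtype=float)
    i = np.where(np.diff(np.sign(g)) > 0)[0][0]   # first crossing from gamma<0 to gamma>0
    return Rm[i] - g[i] * (Rm[i + 1] - Rm[i]) / (g[i + 1] - g[i])
```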
], [ "Conclusion and Prospects", "In this study, we investigated the effects of different nutation angles (60$^{\\circ }$ , 75$^{\\circ }$ and 90$^{\\circ }$ ) on a precession-driven flow in cylindrical geometry for both prograde and retrograde motion.", "We compared experimental results to direct numerical simulations.", "Experimentally, the axial flow $u_z$ was measured by an UDV sensor mounted on the end cap of the cylinder.", "These velocities were decomposed into several $(m, k)$ modes.", "We chose $(m, k) = (1, 1)$ and $(m, k) = (0, 2)$ modes for our study because they have significant amplitudes and are relevant for dynamo studies.", "In all cases, the experimental results agreed well with the numerical findings.", "For prograde precession with $\\alpha $ = 60$^{\\circ }$ , 75$^{\\circ }$ and 90$^{\\circ }$ the flow abruptly transitions from a laminar to a turbulent regime, which goes along with the sudden decrease of the directly forced flow.", "By contrast retrograde motion does not show a clear breakdown of the directly forced mode $(m, k) = (1, 1)$ , but rather a smooth increase of the axisymmetric mode $(m, k) = (0, 2)$ .", "The tendency of retrograde precession to provide a stronger large scale flow amplitude without a breakdown of the directly forced mode should be interesting for dynamo purposes, because in principle this allows an injection of more energy into the flow without breaking the base flow.", "With this question in mind, we further investigated whether the (time-averaged) flow fields obtained from the hydrodynamic simulations are capable of driving a dynamo.", "We conducted kinematic dynamo simulations, which can be summarized according to the following points: The nutation angle $\\alpha $ is crucial both for the hydrodynamic flow structure and the resulting dynamo action.", "In the phase diagram Fig.", "REF we have shown that (at the present state) the most efficient dynamo (with the highest growth rate $\\gamma $ ) is found at $\\alpha =90^{\\circ }$ .", "The reason for that lies in the rich and optimal flow structure for this nutation angle [28], [32].", "With view on Fig.", "REF it is tempting to assume that a slightly retrograde motion might provide an even lower threshold for the onset of dynamo action.", "For the particular case $\\alpha =90^{\\circ }$ the hydrodynamic Reynolds number slightly affects the best precession ratio range where dynamo is found.", "This precession ratio scales as $Po^{c} \\sim Re^{-1/4}$ .", "At low $Re$ the dynamos occur in a range of $Po$ more extended than for larger Reynolds, e.g $0.120< Po < 0.200$ for $Re=2000$ .", "The critical magnetic Reynolds number shows a weak dependence on $Re$ with a slight increase around $Re \\approx 6000$ , but approaches the previously known value of 430 when going to $Re=10000$ .", "Given that the real dynamo experiment can achieve an Rm value of 700, there seems to be a reasonable safety margin to reach dynamo action.", "However the extrapolation to the hydrodynamic regime of the DRESDYN precession experiment must be considered with a grain of salt and has to be carefully checked in the larger experiment.", "The structure of the (azimuthal) magnetic field depends on the nutation angle $\\alpha $ , too.", "The present work can be extended in several directions.", "The possibility of the down-scaled water experiment to reach Reynolds numbers of up to 2 million should be utilized to confirm the -1/4 scaling of the critical precession ratio also for nutation angles different from 90$^{\\circ }$ 
.", "From the numerical point of view there is the possibility to use stress-free boundary condition for the velocity on the endcaps in order to check the specific impact of those endwall's boundary layers.", "The kinematic dynamo code should be extended to the use of vacuum boundary conditions which might still lead to some changes of the critical $Rm$ when compared to the presently used vertical field conditions.", "Finally, in a more advanced study the fully coupled system of induction and Navier-Stokes equations including the back-reaction of the Lorentz forces should be investigated.", "For our precession system, with its very sensitive dependence on various parameters, this fully non-linear system promises to show particularly interesting effects." ], [ "DATA AVAILABILITY", "The data that support the findings of this study are available from the corresponding author upon reasonable request.", "This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No.", "787544)." ] ]
2207.10508
[ [ "Parity and singlet-triplet high fidelity readout in a silicon double\n quantum dot at 0.5 K" ], [ "Abstract We demonstrate singlet-triplet readout and parity readout allowing to distinguish T0 and the polarized triplet states.", "We achieve high fidelity spin readout with an average fidelity above $99.9\\%$ for a readout time of $20~\\mu$s and $99\\%$ for $4~\\mu$s at a temperature of $0.5~K$.", "We initialize a singlet state in a single dot with a fidelity higher than $99\\%$ and separate the two electrons while keeping the same spin state with a $\\approx 95.6\\%$ fidelity." ], [ "Introduction", "The system size of today's semiconductor quantum dots remains in the few-qubit regime [1], [2], but the community already works on scalable designs for qubit processors[3].", "Scaling up qubit systems goes along with an increased interest in cointegration of control electronics[4], [5].", "This would result in higher power dissipation at the quantum chip level and the necessity to work at elevated temperatures (beyond dilution fridge base temperature), where the cooling power is typically in the hundreds of ${}{}$ range [6].", "Important progress in this direction has been made, in particular the realisation of high-fidelity single-qubit and two-qubit gates [7], [8], [9] above 1 K.", "However, spin readout fidelity and initialisation are often limiting processes, due to thermal broadening of reservoirs or the presence of a low-lying excited valley state.", "In this context, the three steps of qubit operation, namely initialization, manipulation, and readout, all need to be fast compared to the decoherence rate in order to perform error correction protocols.", "For Si spin qubits, this requires ${}{}$ readout frequency with a fidelity above the $99.9\\%$ threshold to ensure that readout is not the bottleneck in the operation of a quantum processor.", "Additionally, the readout should ideally come with a small footprint and gate overhead, enabling large scale architectures.", "Typical spin readout in quantum dots requires a spin-to-charge conversion mechanism.", "Common techniques are energy selective readout [10] and Pauli spin blockade (PSB) [6], [11], [12].", "Pauli spin blockade showed so far the highest readout fidelity, achieving fidelities $>99\\%$ [13], [14].", "Moreover, PSB does not require a nearby reservoir and was demonstrated at temperatures as high as 4.5 K [8].", "PSB requires two spins, giving rise to one singlet and three triplet states.", "As PSB just provides one bit of information, further readout is required to determine the system state completely.", "Two different Pauli spin blockade readouts have been observed [15].", "The so-called ST-readout allows to distinguish the singlet $S_0$ state from the three triplets $T_-$ , $T_0$ , and $T_+$ .", "Fast $T_0$ relaxation, e.g.", "through $S_0/T_0$ mixing, can yield the same signal as $S_0$ .", "This so-called parity readout allows to distinguish the polarized spin states from the non-polarized ones.", "Using a combination of both of these readouts allows to distinguish $T_0$ , $S_0$ and $T_-$ ($T_+$ ), a requirement for the full tomography of a two spin-1/2 system [16].", "The charge sensor of choice in many devices is a single electron transistor or quantum point contact, with at least three electrodes[17].", "This large gate overhead poses scalability challenges, especially if one wants to have local charge sensors in 2D-arrays [18].", "A solution is RF-reflectometry, not relying on a current measurement, but a
capacitive measurement.", "This allows to reduce the readout device to a single gate electrode to form an ancillar quantum dot[19], [20], [13].", "In this work we combine scalable fabrication technology, PSB and RF-reflectometry to demonstrate rapid and high fidelity single shot readout of spins in a double quantum dot.", "We work at a temperature of ${0.5}{}$ , use a single lead quantum dot as a charge sensor and perform PSB with an average fidelity and visibility exceeding ${99.5}{}$ in a device fabricated in a ${300}{}$ foundry." ], [ "Device fabrication and experimental setup", "The device used in this work, similar to the one depicted in REF (a), is fabricated on a 300-mm silicon-on-insulator substrate.", "A ${80}{}$ wide silicon channel is defined by mesa patterning and is separated from the substrate by a buried oxide of ${145}{}$ .", "A ${6}{}$ thermally grown SiO$_2$ is used to separate the gates from the nanowire.", "The gates are made by atomic layer deposition of TiN of ${5}{}$ and ${50}{}$ of poly-Si.", "A bilayer hard mask of ${30}{}$ SiN and ${25}{}$ SiO$_2$ is on top of the metallic gates.", "Using a hybrid deep-UV-electron-beam gate-patterning scheme allows to transfer the gate structure into the hard mask by an alternation of lithography and etch steps.", "A final etch step separates the gates, resulting in a total of 6 gates ($2\\times 3$ ) of ${40}{}$ in gate width, longitudinal and lateral and transverse spacing.", "Before doping of the reservoirs, Si$_3$ N$_4$ spacers of ${35}{}$ are deposited to cover the space between the gates.", "Then, the source and drain reservoirs, labeled $S$ and $D$ respectively, are n-type doped using ion implantation, followed by an activation step using an $N_2$ spike anneal.", "Finally, the device is encapsulated and the gates are connected to Al bond pads.", "The device chip is glued to a PCB and all gates except the gate B2 are connected to DC wires.", "The gate B2 is connected to a DC wire and a coaxial-cable through a bias-T with a ${20}{}$ DC cut-off.", "Connecting the source reservoir to a Nb spiral inductor with $L={69}{}$ forms together with the parasitic capacitance $C_p$ of the circuit an LC-resonance at $f_{res} = {1.2}{}$ .", "We extract from the resonance frequency a parasitic capacitance of $C_p \\approx {0.25}{}$ .", "The amplitude and phase of the reflection of an RF-tone close to the resonance of the LC-circuit changes if the resonance frequency of the circuit is changed.", "A shift of the resonance frequency occurs when the electrochemical potential $\\mu $ of the reservoir is aligned with an energy level of a nearby quantum dot and electron tunneling can occur[21], [22].", "We use IQ demodulation to sense the signal change.", "Thus, source reflectometry allows sensing of the nearby quantum dots defined by gates T1 and B1 (see suppl.", "mat.", "A).", "The device is cooled down in a dilution fridge with a variable temperature control.", "The present experiment is achieved at a base temperature of ${500}{}$ .", "At this temperature, applying a positive voltage on the gates results in the accumulation of quantum dots at the Si/SiO$_2$ interface.", "We can hence form an array of up to $2 \\times 3$ quantum dots.", "In this paper we will only present data using the leftmost $2 \\times 2$ array." 
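The parasitic capacitance quoted above follows from the standard LC resonance relation f_res = 1/(2*pi*sqrt(L*C_p)). The unit macros for L, f_res and C_p were lost in this text extraction; the sketch below assumes nH, GHz and pF, which is the standard combination consistent with the quoted C_p of about 0.25.

```python
import math

L = 69e-9       # inductance of the Nb spiral inductor (69, assumed to be in nH)
f_res = 1.2e9   # LC resonance frequency (1.2, assumed to be in GHz)

C_p = 1.0 / (L * (2.0 * math.pi * f_res) ** 2)
print(f"C_p ~ {C_p * 1e12:.2f} pF")   # ~0.25 pF, matching the value quoted above
```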
], [ "Charge sensing using RF-reflectometry", "We start tuning the sensor which is controlled by gate B1.", "To minimize the tunnel coupling between the sensor and further probed dot we try to work with the smallest amount of electron possible in the dot.", "However, if the number of electrons is too small the tunnel coupling between the dot and the lead is too weak to give a reflectometry signal [17], [22].", "We find that the optimal operation point is the degeneracy between 4 and 5 electrons in B1 (see suppl.", "mat.", "A for more detail on the tuning of the sensor dot).", "Operating the sensor at this degeneracy point gives a strong capacitive coupling to nearby quantum dots which allows the detection of the first electron in the double quantum dots formed by B2 and T2[19], [20].", "We now move to the tuning of the double quantum dot formed by B2 and T2.", "The gates B3 and T3 are set to 0 to isolate the double quantum dot from the drain reservoir.", "To determine the charge configuration space of the double quantum dot system, we measure stability diagrams of B2-T2 and scan in a third dimension the voltage of the sensor B1 .", "The charge degeneracies of the sensor become visible in the B2-T2 stability diagram as broadened Coulomb peaks (see suppl.", "mat.", "A).", "Loading a single electron in QD$_{B2}$ leads to a splitting of the sensor signal along B2 voltage (see orange lines in Fig.", "REF (c)).", "The transitions of the QD defined by T2 are detected as almost vertical cuts of the sensor signal (see red line in Fig.", "REF (c)).", "Thus we can probe the charge occupation of QD$_{B2}$ and QD$_{T2}$ using QD$_{B1}$ and the source-reflectometry signal." ], [ "Pauli Spin Blockade for Singlet-Triplet and Parity Readout", "We start by identifying the region where Pauli spin blockade can occur.", "For this we tune the sensor to probe both dots simultaneously and their interdot transition corresponding to the $(2|0)-(1|1)$ charge states, depicted in REF (d).", "Under a magnetic field $B_z = {300}{}$ , the $T_-$ state is the ground spin state in the (1|1) regime, whereas $S_0$ remains the ground state in the (2|0) regime as sketched in Fig.", "REF (a).", "Therefore, by scanning over the interdot transition starting from (1|1), we can identify the Pauli spin blockade region as dashed extension of the (1|1) charge state beyond the interdot transition on the (2|0) side, see REF (d).", "This PSB area has a finite width as the blockade is lifted when the detuning energy surpasses the valley or orbital energy separating the $T_-$ (2,0) and $T_-$ (1,1) states.", "After evaluation of the lever-arm to be $\\approx {0.05}{}/{}{}$ (see suppl.", "mat.", "F), we can estimate the valley splitting to be around ${130}{}$ , in agreement with measurements performed in similar devices [23].", "By performing pulsed-measurement in the PSB area, we can resolve spin blockade lifting in real time as presented in Fig.", "REF (b) .", "To investigate further the single-shot spin readout we perform two different measurements.", "We refer to these as singlet-triplet (S-T) readout and parity readout.", "The S-T readout is used to distinguish the singlet state from all triplet states.", "The parity readout is used to distinguish polarized spin states (or even states $T_-$ and $T_+$ ) from unpolarized spin states (or odd states $T_0$ and $S_0$ )[7].", "We start with the standard S-T readout which allows to distinguish between the singlet and the three triplet states.", "We initialise in (2|0) where the system relaxes to 
the ground singlet state and make a pulse to (1|1) where the singlet and triplet can mix (see suppl.", "mat.", "G).", "Then, we pulse to the S-T readout position located just across the interdot transition using a non-adiabatic pulse.", "At zero magnetic field and at the S-T readout position the three triplets are degenerated leading to a single exponential decay to the ground singlet state as observed on the blue curve Fig.", "REF (c) with a characteristic $T_1 = {0.9}{}$ .", "At finite magnetic field, we initialise in (2|0) where the system relaxes to the ground singlet state and ramp to (1|1).", "Using a Landau-Zener experiment at the $S-T_-$ anticrossing (see suppl.", "mat.", "E) combined with $S-T_0$ mixing (see suppl.", "mat.", "G), we set the ramp sweep rate to obtain an initial state which contains a similar fraction of $T_-$ and $T_0$ with $S_0$ .", "We obtain the blue curves on Fig.", "REF (d) for $B_Z = {0.3}{}$ .", "At finite magnetic field the $T_-$ and $T_0$ are split in energy leading to different relaxation characteristic times as shown by the double exponential decay on the blue curve with characteristic times $T_1 = {0.9}{}$ and $T_1 = {32}{}$ .", "These signatures show that it is possible to distinguish between $S_0$ and the triplet states.", "To obtain further information on the different triplet populations we can rely on their different relaxation dynamics and readout at different timescales.", "For instance probing the spin state at short time allows to distinguish between singlet and triplet and probing ${3}{}$ later when $T_0$ has relaxed allows to get information on the remaining $T_-$ population.", "However, such measurement leads to poor fidelity in $T_-$ readout, $\\approx {95}{}$ , due to the small contrast between $T_0 \\rightarrow S_0$ and $T_- \\rightarrow S_0$ relaxation rates(see suppl.", "mat.", "D).", "To improve this fidelity we propose to move to a second measurement point where the $T_0$ relaxation rate is drastically increased to perform a parity readout.", "We perform the same initialisation for the parity readout.", "The measurement position is now further in the (2|0) regime, where the $T_0$ state relaxes much faster.", "In contrast to the S-T readout, at zero magnetic field, we cannot observe any blocked state leading to a flat relaxation curve at the measurement point, see the orange curve in Fig.", "REF (c).", "At finite magnetic field, see the orange curve in Fig.", "REF (d), similarly, the rapid exponential decay has disappeared, leaving only a slow relaxation attributed to $T_-$ population.", "In both cases, there is no signature of a $T_0$ relaxation which leads us to the conclusion that the $T_0$ has relaxed prior to any measurement due to mixing with the excited $S_0$ state followed by charge relaxation[15]." 
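The relaxation curves described above are characterised by fitting exponential decays; at finite field the blocked-state signal decays with two characteristic times, a fast one attributed to T0 and a slow one attributed to T-. The snippet below sketches such a double-exponential fit on synthetic data; the generated curve and its parameters are purely illustrative (the characteristic times are merely of the same order as the 0.9 and 32 quoted above, whose units were lost in this extraction), and the model is a generic choice, not necessarily the authors' exact analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a, t_fast, b, t_slow, c):
    # blocked-state probability with a fast (T0-like) and a slow (T- -like) channel
    return a * np.exp(-t / t_fast) + b * np.exp(-t / t_slow) + c

# Synthetic example data (NOT measured values), in arbitrary time units
t = np.geomspace(0.05, 120.0, 300)
rng = np.random.default_rng(0)
p = double_exp(t, 0.3, 0.9, 0.3, 32.0, 0.1) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(double_exp, t, p, p0=[0.2, 1.0, 0.2, 20.0, 0.1])
print("fitted characteristic times:", popt[1], popt[3])
```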
], [ "Fidelity benchmarking of parity readout", "In the following section, we want to discuss the optimization of our parity readout, which could as well be used to optimize the ST-readout.", "The readout time is constrained by the relaxation time $T_1$ at the measurement position.", "Another parameter to optimize is the RF-power of the readout, affecting the back action on the double dot, that can drive relaxation[25].", "We benchmark our readout fidelity by varying the RF power as well as the integration time.", "Performing experiments where we prepare approximately ${50}{}$ even/odd state ratio by relaxation to the respective ground state, we measure the signal distribution for 1000 repetitions.", "We follow Barthel et al.", "[24] to calculate the fidelities by fitting a normal distribution to the signal distribution of the odd state and a normal distribution with a decay term to the signal distribution of the even state (accounting for relaxation).", "Plotting the SNR as a function of power and integration time (see suppl.", "mat.", "C), we find an optimal power at around ${-91}{dBm}$ .", "We accumulate 10000 single shot traces for each integration time $\\tau _m$ to ensure sufficient sampling of the signal distribution.", "A histogram of the signal distribution for $\\tau _m = {20}{}$ is shown in FIG.", "REF (a).", "FIG.", "REF (b) depicts a plot of the fidelities and visibility as a function of integration time $\\tau _m$ .", "We find an optimal integration time of $\\tau _{m,opt} = {20}{}$ with an odd (even) fidelity of ${99.98}{}$ (${99.83}{}$ ).", "This results in a visibility of ${99.79}{}$ .", "Reducing the integration time to ${4}{}$ , which is the limit of our measurement bandwidth, the fidelities are slightly lower with ${99.57}{}$ (${99.56}{}$ ) for odd (even) and a visibility of ${99.13}{}$ , still being above ${99}{}$ .", "The lower fidelity at short integration times is in good agreement with the expected noise broadening, decreasing the SNR by $\\propto \\sqrt{\\tau _m}$ .", "The exponential decrease of fidelities, found for longer integration times, agrees as well with the expected exponential growth of leaked even state into odd state due to relaxation." 
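For two noise-broadened signal levels, the threshold fidelities follow from the overlap of the two fitted distributions. The sketch below gives this textbook estimate while neglecting the relaxation (decay) term of the even-state distribution, which the full analysis of Barthel et al. [24] and the supplementary material does include; variable names are hypothetical, and the odd-state level is assumed to lie below the threshold and the even-state level above it.

```python
import math

def threshold_fidelities(v_odd, v_even, sigma, threshold):
    """Readout fidelities for two Gaussian signal distributions of equal width sigma,
    separated by a threshold (relaxation during the integration window neglected)."""
    f_odd = 0.5 * (1.0 + math.erf((threshold - v_odd) / (math.sqrt(2.0) * sigma)))
    f_even = 0.5 * (1.0 + math.erf((v_even - threshold) / (math.sqrt(2.0) * sigma)))
    visibility = f_odd + f_even - 1.0
    return f_odd, f_even, visibility
```

Since the separation of the two levels relative to the noise improves roughly as the square root of the integration time, this simple estimate also reproduces the trend of increasing fidelity with longer integration, until relaxation of the even state takes over.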
], [ "State preparation and error analysis", "The total fidelity of state preparation and measurement (SPAM) sums up two DiVincenco criteria.", "We break down the error contributions into three different types: initialization, transfer to the regime of single qubit operation (1|1) and readout.", "The latter has been characterized in the previous section.", "To identify the initialization error, we start by initializing in (1|0) and load a second electron into QD$_{B2}$ by pulsing into (2|0), where we allow relaxation to the ground state by waiting for ${10}{}$ .", "Then we pulse after this initialization phase to the readout measurement position and measure the spin state.", "We find the signal distribution depicted in FIG.", "REF (a), indicating a $S_0$ initialization fidelity of ${99.6}{}$ .", "We now investigate errors due to transfer in the (1|1) where the two electrons can be decoupled.", "We initialize a singlet state in (0|2) as previously described.", "We then transfer one electron by performing first a non-adiabatic pulse to avoid the $S-T_-$ anti-crossing (see suppl.", "mat.", "E) followed by an adiabatic ramp deep in (1|1) to avoid mixing S with $T_0$ .", "Without waiting, we pulse from this position to the measurement position.", "Considering the time per instruction in our sequence, the total time spent in (1|1) is $\\approx {20}{}$ , negligible with regards to the relaxation time $T_1 > {1}{}$ in (1|1) (see suppl.", "mat.", "D).", "We find a $S_0$ population of $\\approx {95.6}{}$ for the transfer measurement by using the signal distribution depicted in REF (b).", "We attribute the $\\approx {4}{}$ difference between these two experiments to leakage during the transfer through the $S_0$ -$T_-$ anti-crossing.", "This assumption is supported by a Landau-Zener type of experiment where the transfer from (0|2) to (1|1) is performed at different rate (see suppl.", "mat.", "E).", "The presence of the anti-crossing leads to a sweep-rate-dependent return singlet probability." ], [ "Conclusion", "We have shown how we can operate a triple quantum dot to perform high fidelity single shot readout of a double dot system.", "Thanks to a strong capacitive coupling and reflectometry method we have been able to achieve spin readout fidelity above ${99.9}{}$ (${99}{}$ ) in ${20}{}$ (${4}{}$ ).", "Using the spin readout we have characterized the different error sources during initialization and displacement of electrons between the (2|0) and (1|1).", "Finally, by adjusting the measurement position in detuning we can alternatively use singlet-triplet or parity readout.", "Combining sequentially these two readouts can be of strong interest in order to extract the full spin information of a 2-qubit system.", "As proposed by [26], to achieve such complete readout, we could start with a S-T readout to distinguish singlet from all triplets.", "Followed by a parity readout, it would allow to distinguish the unpolarized triplet ($T_0$ ) from the two polarized ones($T_-$ and $T_+$ ).", "Finally, an adiabatic transfer which swaps the $T_-$ and $S_0$ population followed by a ST or parity readout will allow to differentiate between the two polarized triplets.", "Note added.", "During the preparation of this manuscript, we became aware of a recent experimental observation of a PSB in a similar device [27]." ], [ "Acknowledgment", "We acknowledge support for the cryogenic apparatus from W. Wernsdorfer, E. Bonet and E. Eyraud.", "We acknowledge technical support from L. Hutin, D. Lepoittevin, I. Pheng, T. 
Crozes, L. Del Rey, D. Dufeu, J. Jarreau, C. Hoarau and C. Guttin.", "D.J.N.", "acknowledges the GreQuE doctoral programs (grant agreement No.754303).", "The device fabrication is funded through the Mosquito project (Grant agreement No.688539).", "This work is supported by the Agence Nationale de la Recherche through the CRYMCO project and the CMOSQSPIN project.", "This project receives as well funding from the project QuCube(Grant agreement No.810504) and the project QLSI (Grant agreement No.951852)."
], [ "Charge state of double quantum dot system", "We measure a stability diagram of the two potential sensing dots to decide on which one we use as a sensing dot.", "The stability diagram is depicted in FIG. REF (a) with a zoom in the respective region of interest in (b) and (c) for QD$_{B1}$ and QD$_{T1}$ .", "We use QD$_{B1}$ as a sensing dot as it shows a strong signal at a low number of electrons.", "Using the charge degeneracy points of the sensing dot as a charge sensor for the two center quantum dots allows us to determine the number of charges in the quantum dots.", "FIG. REF depicts three stability diagrams for different sensor voltages.", "The transitions of QD$_{B2}$ (QD$_{T2}$ ) are indicated as orange (red) lines.", "FIG. REF (d) allows us to map out all transitions of the sensed dots by overlaying 20 stability diagrams.", "Using multiple degeneracy points of the sensor allows us to identify the different charge regimes in a single stability diagram, as depicted in FIG. REF (e) (here T1 is used as sensor).", "In the main text, we use the first degeneracy point of the QD defined by B1 as sensor.", "In this configuration, the sensor is weakly tunnel coupled to the QD of T2, reducing lifting of PSB by co-tunneling through the sensor dot."
], [ "Fidelity/Visibility definitions", "Following the analysis of Barthel et al.", "[24], we fit the signal distribution of the PSB measurement using: $n_S(V_{rf}) &= \\frac{1-{P_T}}{\\sqrt{2\\pi }\\sigma }e^{-\\frac{V_{rf}-V^S_{rf}}{2\\sigma ^2}} ,\\\\n_T(V_{rf}) &= \\frac{{P_T}}{\\sqrt{2\\pi \\sigma }}e^{-\\frac{\\tau _m}{T_1}}e^{-\\frac{(V_{rf}-V^T_{rf})^2}{2\\sigma ^2}} \\nonumber \\\\&+ \\int _{V_{rf}^S} ^{V_{rf}^T} \\frac{\\tau _m}{T_1}\\frac{{P_T}}{\\Delta V_{rf}}e^{-\\frac{V-V_{rf}^S}{\\Delta V_{rf}}\\frac{\\tau _m}{T_1}}e^{-\\frac{(V_{rf}-V)^2}{2\\sigma ^2}} \\frac{dV}{\\sqrt{2\\pi }\\sigma },$ where ${P_T}$ is the triplet probability, $V^S_{rf}$ ($V^T_{rf}$ ) is the signal expectation value for the non-blocked (blocked) state, $\\sigma $ is the standard deviation of the Gaussian signal distribution, $\\tau _m$ is the measurement integration time, and $T_1$ is the lifetime at the measurement position.", "While equation REF is a Gaussian distribution, describing the non-blocked state signal distribution, the excited state is given by equation , the convolution of a Gaussian distribution with an exponential decay.", "We use these functions to fit the signal distribution of our PSB measurements.", "An example is given in FIG.", "REF (a).", "The definition of fidelities allows a simple metric to estimate the error of the signal assignment as blocked (non-blocked).", "The singlet and triplet fidelities are defined as $F_S &= 1 - \\int _{V_T}^\\infty n_S (V)dV, \\\\F_T &= 1 - \\int _{-\\infty }^{V_T} n_T (V)dV, ,$ where $V_T$ is the threshold that separated the identification as blocked/non-blocked state.", "The visibility is then defined as $V = F_S + F_T -1.$ To optimize the threshold, one calculates the maximum of the visibility.", "FIG.", "REF (b) depicts the error of the fidelities and visibility for the fits from (a) close to the maximal visibility." ], [ "Reflectometry backaction", "We investigate the backaction of the sensing mechanism on the spin state by performing fidelity measurements as a function of the RF-power.", "We find that for $> {-100}{dBm}$ , the lifetime at the measurement position is strongly decreased by one order of magnitude (see FIG.", "REF (a) and (b)).", "However, increasing the RF-power goes along with an increase in signal strength.", "We map the SNR as a function of RF-power and integration time in FIG.", "REF (c).", "The best SNR is found at around ${-90}{dBm}$ , where the lifetime $T_1$ at the measurement position is $\\approx {3}{}$ ." 
], [ "Lifetime of $S_0$ in (1|1)", "While we initialize in the $S_0$ state, spin operations take place in the (1|1) regime where $T_-$ is the ground state.", "Therefore, the relaxation of $S_0$ in (1|1) must be much slower than the spin manipulation.", "We measure the $S_0$ relaxation by preparing a $S_0$ state, followed by pulsing in the (1|1) regime.", "Deep in the (1|1) regime where we can assume that the quantum dots are completely decoupled, we wait for a given time $\\tau $ ranging from ${0.1}{}$ to ${3}{}$ .", "After, we pulse to the parity readout position and measure the spin state.", "We fit the resulting signal distribution from 2000 data traces.", "The $T_- (T_+)$ population from these measurements is depicted in FIG.", "REF .", "We fit an exponential function with a decay time $T_1 \\approx {1.6}{}$ .", "This relaxation time is much longer than typical times of operation in (1|1) which are typically not longer than a few ${}$ , around six orders of magnitude shorter than the relaxation time.", "The high temperature of operation compared to the magnetic field of ${150}{}$ leads to a relatively high residual $S_0$ population in this experiment." ], [ "Landau-Zener experiment", "The transfer from the (2|0) regime to the (1|1) involves the passage of the $S_0$ - $T_-$ anti-crossing for $B_z \\ne 0$ .", "We perform a Landau-Zener experiment to estimate the fraction of non-adiabatic transfer.", "We initialize in $S_0$ and ramp with amplitude $\\Delta \\epsilon $ from (2|0) to (1|1) and return non-adiabatically to the measurement position.", "The pulse schematic is depicted in the inset in FIG.", "REF .", "We perform this experiment with different ramp speeds and calculate the transfer speed as $\\nu = \\frac{\\alpha e \\Delta \\epsilon }{\\tau }$ , with $\\alpha $ and $e$ the gate lever arm and elementary charge, respectively.", "The experiment was performed at a base temperature of ${100}{}$ and a magnetic field $B_z = {300}{}$ .", "We find the expected monotonous increase of singlet conservation with higher transfer speed and extract a $S_0$ - $T_-$ avoided crossing of ${120}{}$ .", "For very slow transfer, the population tends towards more and more population of the triplet ground state.", "The residual $S_0$ population could arise from charge noise which induces rapid fluctuations in the vicinity of the anticrossing reducing the maximum transfer probability [28]." 
], [ "Measuring gate lever arm", "We determine the lever arm using the method proposed by Rossi et al.", "[29].", "We average traces along the charge transition of T2 indicated in FIG.", "REF (a).", "The conversion between gate voltage and dot potential is given by $E-E_0 = \\alpha e(V_{G}-V_0),$ where $E$ is the energy, $E_0$ is the energy of the system at the charge degeneracy point, $\\alpha $ is the gate lever arm, $e$ is the elementary charge, and $V_G$ is the gate voltage with respect to the voltage of the degeneracy point of the charge transition $V_0$ .", "For this transition the condition $E_C \\gg k_B T_e \\gtrsim \\Delta \\epsilon $ , with $E_C$ the charging energy, $k_B$ the Boltzmann constant, $T_e$ the electron temperature and $\\Delta \\epsilon $ the single-particle level separation, is fulfilled.", "We can thus approximate the transition broadening using the Fermi-Dirac distribution, $f(E-E_0) &= \\frac{1}{1+e^{-(E-E_0)/k_B T_e}} \\nonumber \\\\&= \\frac{1}{1+e^{\\alpha e(V_G - V_0)/k_B T_e}} .$ We extract from the charge transition fits in FIG.", "REF (b) and (c) a gate lever arm of $\\alpha = 0.05 \\pm 0.002$ ." ], [ "S-$T_0$ mixing", "We perform an exchange experiment similar to [31], [30] by preparing a singlet state through relaxation to the ground state in (0|2).", "After we pulse the T2 and B2 gates to the measurement position of ST-readout, followed by a square pulse on the B2 gate to move electrons deep in (1|1).", "After the AWG pulse, we measure the spin state using Pauli spin blockade at the S-T readout.", "We plot the singlet population as a function of pulse duration in FIG.", "REF .", "The singlet population as a function of pulse duration can be fitted with an exponential decay, indicating a characteristic time scale of $18.5 \\pm {2.5}{}$ .", "The exponential decay and the convergence towards a population of ${50}{}$ is an indication of the $S_0$ (1|1) and $T_0$ (1|1) occurs.", "However, no spin-orbit induced coherent oscillations are observed.", "This could be explained by a quenched difference of g factor when the magnetic field is applied perpendicular to the nanowire axis as found in planar MOS silicon double quantum dot with perpendicular to the plane magnetic field [32]." ] ]
2207.10523
[ [ "DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using\n Unsupervised Domain-Classifier Guided Network" ], [ "Abstract Shadow removal from a single image is generally still an open problem.", "Most existing learning-based methods use supervised learning and require a large number of paired images (shadow and corresponding non-shadow images) for training.", "A recent unsupervised method, Mask-ShadowGAN, addresses this limitation.", "However, it requires a binary mask to represent shadow regions, making it inapplicable to soft shadows.", "To address the problem, in this paper, we propose an unsupervised domain-classifier guided shadow removal network, DC-ShadowNet.", "Specifically, we propose to integrate a shadow/shadow-free domain classifier into a generator and its discriminator, enabling them to focus on shadow regions.", "To train our network, we introduce novel losses based on physics-based shadow-free chromaticity, shadow-robust perceptual features, and boundary smoothness.", "Moreover, we show that our unsupervised network can be used for test-time training that further improves the results.", "Our experiments show that all these novel components allow our method to handle soft shadows, and also to perform better on hard shadows both quantitatively and qualitatively than the existing state-of-the-art shadow removal methods." ], [ "Introduction", "Shadow removal from a single image can benefit many applications, such as image editing, scene relighting, etc., [19], [17], [16].", "Unfortunately, in general, removing shadows from a single image is still an open problem.", "Existing physics-based methods for shadow removal [7], [6], [10] are based on entropy minimization that can capture the invariant features of shadow and non-shadow regions belong to the same surfaces in the log-chromaticity space.", "These methods, however, tend to fail, particularly when the image surfaces are close to achromatic (e.g.", "gray or white surfaces), and are not designed to handle soft shadow images.", "Unlike physics-based methods, deep-learning methods, e.g.", "[24], [27], [14], [20], [1], [21], are more robust to different conditions of image surfaces and lighting.", "However, most of these methods are based on fully-supervised learning, which means that for training, they require pairs of shadow and their corresponding non-shadow images.", "To collect these image pairs in a large amount, particularly for images containing diverse scenes and shadows can be considerably expensive.", "Recently, Hu propose an unsupervised method, Mask-ShadowGAN [13], the network architecture of which is based on CycleGAN [34].", "To remove shadows, the method mainly relies on adversarial training that employs a discriminator to check the quality of the generated output.", "Unfortunately, due to the absence of ground truth, the discriminator relies solely on unpaired non-shadow images, which can cause the generator to produce incorrect outputs.", "Moreover, the method uses a binary mask to represent shadow regions present in the input image, making it inapplicable to soft shadow images.", "Fig.", "REF shows an example where for the given soft-shadow input image, the output generated by the method [13] is improper.", "In this paper, our goal is to remove both hard and soft shadows from a single image.", "To achieve this, we propose DC-ShadowNet, an unsupervised network guided by the shadow/shadow-free domain classifier.", "Specifically, we integrate a domain classifier (that classifies the input image to either 
shadow or shadow-free domain) into our generator and its corresponding discriminator.", "This allows our generator and discriminator to focus on shadow regions and thus perform better shadow removal.", "Unlike the existing unsupervised method [13], which only relies on adversarial training based on an unpaired discriminator (i.e.", "using unpaired non-shadow images as reference images), our method uses additional novel unsupervised losses that enable our method to achieve better shadow removal results.", "Our new losses are based on physics-based shadow-free chromaticity, shadow-robust perceptual features, and boundary smoothness.", "Our physics-based shadow-free chromaticity loss employs a shadow-free chromaticity image, which is obtained from the input shadow image by performing entropy minimization in the log-chromaticity space [7].", "Our shadow-robust perceptual features loss uses shadow-robust features obtained from the input shadow image using the pre-trained VGG-16 network [15].", "We also add a boundary smoothness loss to ensure that our output shadow-free image has smoother transitions in the regions that contained shadow boundaries.", "All these ideas enable our method to better deal with hard and soft shadow images compared to existing methods like [13] (see Fig.", "REF for an example showing the better performance of our method).", "Furthermore, we show that our method being unsupervised can be used for test-time training to further improve the performance of our method.", "As a summary, here are our contributions: We introduce DC-ShadowNet, a new unsupervised single-image shadow removal network guided by a domain classifier to focus on shadow regions.", "We propose novel unsupervised losses based on physics-based shadow-free chromaticity, shadow-robust perceptual features, and boundary smoothness losses for robust shadow removal.", "To our knowledge, our method is the first unsupervised method to perform shadow removal robustly for both hard and soft shadow in a single image." 
], [ "Related work", "Physics-based shadow removal methods (e.g.", "[4], [3], [5], [7], [6]) are based on the physics models of illumination and surface colors.", "These methods assume that the surface colors in the input image are chromatic, and hence they are erroneous when this assumption does not hold.", "These methods are designed to remove hard shadows only.", "In contrast, our method is based on unsupervised learning and is designed to handle both hard and soft shadows.", "Also, our method is more robust in dealing with achromatic surfaces.", "Some other non-learning-based methods rely on user interaction.", "Gryka [9] propose a regression model to learn a mapping function of shadow image regions and their corresponding shadow mattes.", "However, they need the user to provide brush strokes to relight shadow regions.", "Guo [10], [11] use annotated ground truth to learn the appearances of shadow regions.", "Unlike these methods, our method is learning-based and does not rely on hand-crafted feature descriptors, making it more robust.", "Moreover, our method does not need any annotated ground truth and user interaction; hence, it is more practical and efficient.", "To address the aforementioned limitations of non-deep learning methods, many deep learning methods are proposed.", "Wang  [27] use a stacked conditional GAN (ST-CGAN) to detect and remove shadows jointly.", "Le  [20], [21] propose SP+M-Net do shadow removal using image decomposition.", "Hu  [14], [12] propose to add global and direction-aware context into the direction-aware spatial context (DSC) module.", "Ding  [2] introduce an LSTM-based attentive recurrent GAN (ARGAN) to detect and remove shadows.", "All these methods are trained on paired data using supervised learning.", "Hence, training them using various soft shadows and complex scenes is difficult, since obtaining the ground truths is intractable.", "In contrast, our method is based on unsupervised learning and does not need any paired data.", "Figure: Network Architecture of Our DC-ShadowNet.We have two domains: shadow, ss, and shadow-free, sfsf.", "Our shadow removal generator is represented by G s G_s.", "It consists of an encoder F s g F^g_s, a decoder H s g H^g_s, and a domain classifier Φ s g \\Phi ^g_s.We also use a discriminator D sf D_{sf} that consists of its own encoder F sf d F^d_{sf}, a classifier C sf d C^d_{sf} and a domain classifier Φ sf d \\Phi ^d_{sf}.For the input shadow image 𝐈 s \\mathbf {I}_s, its corresponding output shadow-free image is represented by 𝐙 sf \\mathbf {Z}_{sf}.Also, for the unpaired input shadow-free image 𝐈 sf \\mathbf {I}_{sf}, G s G_{s} reconstruct the image back.The domain classifiers, Φ s g \\Phi ^g_s and Φ sf d \\Phi ^d_{sf}, are used to classify whether the inputs to their respective networks, G s G_s and D sf D_{sf}, belong to shadow (ss) or shadow-free (sfsf) domain.To guide our generator G s G_s to do shadow removal, other than adversarial loss from the discriminator D sf D_{sf}, we include novel losses: shadow-free chromaticity loss ℒ chroma \\mathcal {L}_{\\text{chroma}} (purple) guided by the physics-based shadow-free chromaticity σ sf phy \\sigma _{sf}^\\text{phy} obtained from 𝐈 s \\mathbf {I}_s; shadow-robust feature loss ℒ feature \\mathcal {L}_{\\text{feature}} (red) guided by the shadow-robust perceptual features V(𝐈 s )V(\\mathbf {I}_s) obtained from 𝐈 s \\mathbf {I}_s, and boundary smoothness loss ℒ smooth \\mathcal {L}_{\\text{smooth}} (orange) guided by the boundary detection of our generated soft shadow 
mask 𝐌 s \\mathbf {M}_s.Recently, Hu  [13] propose an unsupervised deep-learning method Mask-ShadowGAN.", "Unfortunately, since it mainly relies on adversarial training for shadow removal, it cannot guarantee that the generated output images are shadow-free since there is no strong guidance for the network to do so.", "Moreover, it cannot handle soft shadows due to the use of binary masks.", "In contrast, our method DC-ShadowNet uses new additional unsupervised losses and domain-classifier guided network that helps our method to more effectively deal with hard and soft shadows." ], [ "Proposed Method", "Fig.", "REF shows the architecture of our network, DC-ShadowNet.", "Given a shadow input image, $\\mathbf {I}_s$ , we use a generator, $G_s$ , to transform it into a shadow-free output image $\\mathbf {Z}_{sf}$ .", "Also, given an unpaired shadow-free input image, $\\mathbf {I}_{sf}$ , we expect the generator, $G_{s}$ , to simply reconstruct the image back.", "Therefore, the generator $G_s$ , whether its input is a shadow or shadow-free image, always generates a shadow-free output image.", "Note that, in our method, we have two domains: shadow, $s$ , and shadow-free, $sf$ .", "Our generator $G_s$ consists of an encoder ($F^g_s$ ), decoder ($H^g_s$ ) and a domain classifier ($\\Phi ^g_s$ ).", "We use a discriminator $D_{sf}$ to assess the quality of the shadow removal output.", "It consists of an encoder ($F^d_{sf}$ ), a classifier ($C^d_{sf}$ ) and a domain classifier ($\\Phi ^d_{sf}$ ).", "Both the domain classifiers, $\\Phi ^g_s$ and $\\Phi ^d_{sf}$ , are used to classify the inputs of their respective modules, $G_s$ and $D_{sf}$ , belonging to either shadow or shadow-free domain.", "However, unlike $\\Phi ^g_s$ , which is trained together with $G_s$ , $\\Phi ^d_{sf}$ is pre-trained, and its weights are kept frozen while training $D_{sf}$ .", "The underlying idea of integrating the domain classifier into our generator and its discriminator is to guide our network to focus on shadow regions.", "The reference images of our discriminator are the unpaired shadow-free real images.", "Our discriminator's classifier, $C^d_{sf}$ , outputs the real/fake binary label, where real refers to the label given to an image that belongs to the reference images.", "While not shown in Fig.", "REF , for the sake of clarity, we employ another generator $G_{sf}$ and the shadow mask to transform the shadow-free output image back to a shadow image, in order to enforce reconstruction consistency [34] and locate the shadow regions.", "Also, another discriminator $D_{s}$ is used to distinguish whether the generated shadow image is real or not.", "Our method, DC-ShadowNet, is trained in an unsupervised manner using our losses, which are described in the following sections." 
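A minimal PyTorch sketch of the wiring just described is given below: an encoder, a decoder, and a CAM-style domain classifier whose global-average-pooling weights also produce an attention map. The layer sizes, the specific backbone, and the way the attention modulates the decoder input are our own simplifications, not the exact DC-ShadowNet architecture.

```python
# Minimal sketch of a generator with an integrated domain classifier, in the
# spirit of DC-ShadowNet. Layer choices are simplified placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 7, 1, 3), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch * 2, 3, 2, 1), nn.InstanceNorm2d(ch * 2), nn.ReLU(True),
        )
    def forward(self, x):
        return self.net(x)                 # feature maps Pi: (B, 2*ch, H/2, W/2)

class DomainClassifier(nn.Module):
    """Classifies shadow vs shadow-free from GAP features; its weights give a CAM."""
    def __init__(self, ch):
        super().__init__()
        self.fc = nn.Linear(ch, 1, bias=False)
    def forward(self, feats):
        gap = F.adaptive_avg_pool2d(feats, 1).flatten(1)   # (B, C)
        logit = self.fc(gap)                               # shadow / shadow-free logit
        # Class-activation-style attention: average of weighted feature maps.
        w = self.fc.weight.view(1, -1, 1, 1)               # (1, C, 1, 1)
        attention = (w * feats).mean(dim=1, keepdim=True)  # (B, 1, h, w)
        return logit, attention

class Decoder(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, 3, 7, 1, 3), nn.Tanh(),
        )
    def forward(self, feats):
        return self.net(feats)

class ShadowRemovalGenerator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.encoder, self.decoder = Encoder(ch), Decoder(ch)
        self.domain_classifier = DomainClassifier(ch * 2)
    def forward(self, x):
        feats = self.encoder(x)
        logit, attention = self.domain_classifier(feats)
        # Letting the attention modulate the features is our simplification.
        out = self.decoder(feats * torch.sigmoid(attention))
        return out, logit, attention

if __name__ == "__main__":
    g = ShadowRemovalGenerator()
    img = torch.randn(2, 3, 128, 128)
    out, logit, attn = g(img)
    print(out.shape, logit.shape, attn.shape)   # (2,3,128,128) (2,1) (2,1,64,64)
```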
], [ "Shadow-Free Chromaticity Loss", "Given a shadow input image $\\mathbf {I}_s$ , we obtain a physics-based shadow-free chromaticity image $\\sigma _{sf}^\\text{phy}$ , which is used to guide our shadow removal generator $G_s$ , through our shadow-free chromaticity loss function.", "Obtaining $\\sigma _{sf}^\\text{phy}$ from $\\mathbf {I}_s$ requires two steps: (1) Entropy Minimization, and (2) Illumination Compensation.", "Figure: Shadow-Free Chromaticity Loss.", "The upper part is the physics-based pipeline where we use entropy minimization followed by illumination compensation to generate the shadow-free chromaticity image σ sf phy \\sigma _{sf}^\\text{phy} from the input image 𝐈 s \\mathbf {I}_s.The lower part shows our shadow removal generator G s G_s guided by σ sf phy \\sigma _{sf}^\\text{phy} through our shadow-free chromaticity loss ℒ chroma \\mathcal {L}_\\text{chroma}.Figure: (a) Input shadow image 𝐈 s {\\mathbf {I}_{s}}, (b) Shadow-free chromaticity after entropy minimization σ sf ent \\sigma _{sf}^{\\text{ent}}, (c) Shadow-free chromaticity after illumination compensation σ sf phy \\sigma _{sf}^{\\text{phy}}, (d) Output shadow-free image 𝐙 sf \\mathbf {Z}_{sf}, and (e) Chromaticity map the of output image σ sf 𝐙 \\sigma _{sf}^\\textbf {Z}.", "Our shadow-free chromaticity loss constrains (e) to be similar to (c) facilitating better shadow removal.Entropy Minimization Following [6], as shown in Fig.", "REF , we plot the input shadow image $\\mathbf {I}_s$ onto the log-chromaticity space, calculate the entropy, and use the entropy minimization to find the projection direction $\\theta $ , which is specific to $\\mathbf {I}_s$ .", "From $\\theta $ , we can obtain a shadow-free chromaticity map $\\sigma _{sf}^{\\text{ent}}$ that no longer contains any shadows (see Figs.", "REF and REF ).", "However, owing to the projection, there is a color shift present in $\\sigma _{sf}^{\\text{ent}}$ , which can be corrected by using the illumination compensation procedure.", "Illumination Compensation To correct the color of the shadow-free chromaticity map $\\sigma _{sf}^{\\text{ent}}$ , following [3], we add back the original illumination color of the non-shadow regions to the map.", "For this, we use uniformly sampled $30\\%$ of the brightest pixels from the input image $\\mathbf {I}_s$ based on the assumption that these pixels are located in the non-shadow regions of $\\mathbf {I}_s$ .", "Once we reinstate the illumination color, we can obtain a new shadow-free chromaticity map $\\sigma _{sf}^\\text{phy}$ , (see Figs.", "REF and REF ).", "Having obtained our shadow-free chromaticity, $\\sigma _{sf}^\\text{phy}$ , for the output shadow-free image $\\mathbf {Z}_{sf}$ , we compute its chromaticity map $\\sigma _{sf}^\\mathbf {Z}$ by: $\\sigma _{{sf}_c}^\\mathbf {Z}\\!= \\!\\frac{\\mathbf {Z}_{{sf}_c}}{(\\mathbf {Z}_{{sf}_r} + \\mathbf {Z}_{{sf}_g} + \\mathbf {Z}_{{sf}_b})},$ where $c\\in \\lbrace r,g,b\\rbrace $ represents a color channel, $\\mathbf {Z}_{sf} = [\\mathbf {Z}_{{sf}_r}, \\mathbf {Z}_{{sf}_g}, \\mathbf {Z}_{{sf}_b}]$ , and $\\sigma _{sf}^\\mathbf {Z} = [\\sigma _{{sf}_r}^\\mathbf {Z}, \\sigma _{{sf}_g}^\\mathbf {Z}, \\sigma _{{sf}_b}^\\mathbf {Z}]$ .", "We can now define our shadow-free chromaticity loss as: $\\mathcal {L}_{\\rm chroma}(G_s) = \\mathbb {E}_{\\mathbf {I}_s}\\big [||\\sigma _{sf}^\\mathbf {Z} - \\sigma _{sf}^\\text{phy}||_{1}\\big ].$ Using the loss function expressed in Eq.", "(REF ), we enforce the chromaticity of the output shadow-free image, $\\sigma 
_{sf}^\\mathbf {Z}$ , to be the same as our physics-based shadow-free chromaticity $\\sigma _{sf}^\\text{phy}$ , which can be observed in the results shown in Fig.", "REF for both hard shadow and soft shadow imagesFor surfaces that are close to being achromatic, the entropy minimization can fail, which can lead to the improper recovery of the shadow-free chromaticity map.", "However, due to the presence of our other unsupervised losses, our method can still generate proper shadow removal results..", "Figure: (a) Input shadow image 𝐈 s \\mathbf {I}_{s}, (b) Sample feature map for 𝐈 s \\mathbf {I}_{s}, (c) Output shadow-free image 𝐙 sf \\mathbf {Z}_{sf}, and (d) Sample feature map for 𝐙 sf \\mathbf {Z}_{sf}.", "We can observe that features in (b) for the input shadow images are less affected by shadows, and they are similar to the features in (d) owing to our shadow-robust feature loss." ], [ "Shadow-Robust Feature Loss", "Our shadow-robust feature loss is based on the perceptual features obtained from the pre-trained VGG-16 network [15], [26].", "Since we do not have ground truth to obtain the correct shadow-free features, to guide the shadow-free output, we use features from the input shadow image itself.", "Our underlying idea is that, since with some degree of shadows and lighting conditions, object classification using the pre-trained VGG-16 is known to be robust [28], there should be some features in the pre-trained VGG-16 that are less affected by shadows.", "Based on this, we perform a calibration experiment and find that the Conv22 layer in the VGG-16 network provides features that are least affected by shadows.", "Hence, from the input shadow image, we obtain the shadow-robust features and use them to guide our shadow-free output image.", "Specifically, given an input shadow image $\\mathbf {I}_s$ and the corresponding shadow-free output image $\\mathbf {Z}_{sf}$ , we define our shadow-robust feature loss as: $\\mathcal {L}_{\\rm feature}(G_s) = \\mathbb {E}_{\\mathbf {I}_s}[\\big \\Vert V(\\mathbf {Z}_{sf}) - V(\\mathbf {I}_s)\\big \\Vert _1],$ where $V(\\mathbf {I}_s)$ and $V(\\mathbf {Z}_{sf})$ denote the feature maps extracted from the Conv22 layer of the pre-trained VGG-16 network for $\\mathbf {I}_s$ and $\\mathbf {Z}_{sf}$ respectively.", "Fig.", "REF shows some examples where we can observe that the features $V(\\mathbf {I}_s)$ are less affected by shadows and represent more of structural information (like edges).", "Figure: Domain Classification and Shadow Attention.In the generator G s G_s, its encoder F s g F^g_s extracts feature maps Π s g \\mathbf {\\Pi }^g_s from the input shadow image 𝐈 s \\mathbf {I}_s.", "As in , using global average pooling (GAP), the domain classifier Φ s g \\Phi ^g_s is trained to learn the weights 𝐰 s g \\mathbf {w}^g_s of the feature maps.", "Averaging the weighted feature maps generates an attention map 𝐀 𝐬 𝐠 \\mathbf {A^g_{s}}, i.e.", "𝐀 s g =1 n∑ i=1 n 𝐰 s g i Π s g i \\mathbf {A}^g_s = \\frac{1}{n}\\sum _{i=1}^{n}{\\mathbf {w}^g_s}_i {\\mathbf {\\Pi }^g_s}_i (nn being the total number of feature maps), which clearly shows that the network is focusing on shadow regions." 
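Before turning to the domain classification loss, the sketch below implements the two guidance terms introduced above: the L1 chromaticity loss against a precomputed physics-based chromaticity map, and the L1 loss between VGG-16 relu2_2 (Conv22) features of the input and the output. The physics-based map is assumed to be produced by the separate entropy-minimization step, which is not reimplemented here, and the usual ImageNet input normalization is omitted for brevity.

```python
# Sketch of the shadow-free chromaticity loss and the shadow-robust (VGG)
# feature loss. `sigma_phy` is assumed to come from the physics-based
# entropy-minimisation pipeline, which is not reimplemented here.
import torch
import torch.nn as nn
import torchvision.models as models

def chromaticity(img, eps=1e-6):
    """Per-pixel chromaticity: each channel divided by the sum of R, G, B."""
    return img / (img.sum(dim=1, keepdim=True) + eps)

class VGGFeatureLoss(nn.Module):
    """L1 distance between relu2_2 features, which are comparatively shadow-robust."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:9]
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg.eval()
        self.l1 = nn.L1Loss()
    def forward(self, output, inp):
        return self.l1(self.vgg(output), self.vgg(inp))

l1 = nn.L1Loss()

def chromaticity_loss(output, sigma_phy):
    return l1(chromaticity(output), sigma_phy)

if __name__ == "__main__":
    shadow_img = torch.rand(1, 3, 128, 128)                 # input shadow image, in [0,1]
    output_img = torch.rand(1, 3, 128, 128)                 # generator output
    sigma_phy = chromaticity(torch.rand(1, 3, 128, 128))    # stand-in for the physics map
    print(float(chromaticity_loss(output_img, sigma_phy)))
    print(float(VGGFeatureLoss()(output_img, shadow_img)))
```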
], [ "Domain Classification Loss", "We incorporate an attention mechanism that allows our DC-ShadowNet to know the shadow removal/restoration regions [33], [23], [18].", "To achieve this, we create a domain classifier $\\Phi ^g_s$ and integrate it with the generator $G_s$ .", "We train $\\Phi ^g_s$ to classify whether the input to $G_s$ is from the shadow or shadow-free domain.", "Fig.", "REF shows the integration of $\\Phi ^g_s$ into $G_s$ to obtain an attention map $\\mathbf {A^g_{s}}$ that highlights shadow regions.", "We also add a similar domain classifier $\\Phi ^d_{sf}$ to the discriminator $D_{sf}$ .", "This allows our network to selectively focus on important shadow regions and generate better shadow removal results (see Fig.", "REF ).", "Since the generator can accept either a shadow or shadow-free image as input, it allows us to train it together with its domain classifier.", "However, for the discriminator, the domain of its input image, which is the output of the generator, can be ambiguousIn the early stage of training, shadow removal can be improper, and the output of the generator can still have shadows.", "Hence, it is difficult to ensure that the domain of the output is always shadow-free.. For this reason, we pre-train the domain classifier of the discriminator using the following classification loss: $\\mathcal {L}_{\\rm domcls}(D_{sf}) =& \\mathbb {E}_{\\mathbf {I}_{s}} \\Big [-\\log \\big (\\Phi ^d_{sf}(F^d_{sf}(\\mathbf {I}_{s}))\\big )\\Big ]+\\nonumber \\\\& \\mathbb {E}_{\\mathbf {I}_{sf}} \\Big [-\\log \\big (1-\\Phi ^d_{sf}(F^d_{sf}(\\mathbf {I}_{sf}))\\big )\\Big ],$ and after pre-training, we freeze its weights during the main training cycle that trains our entire network (see Fig.", "REF ).", "To train the domain classifier of the generator, we use a similar classification loss: $\\mathcal {L}_{\\rm domcls}(G_s) =& \\mathbb {E}_{\\mathbf {I}_s} \\Big [-\\log \\big (\\Phi ^g_{s}(F^g_s(\\mathbf {I}_s))\\big )\\Big ]+\\nonumber \\\\& \\mathbb {E}_{\\mathbf {I}_{sf}} \\Big [-\\log (1-\\Phi ^g_{s}(F^g_s(\\mathbf {I}_{sf})\\big )\\Big ].$ Figure: (a) Input shadow image 𝐈 s \\mathbf {I}_s, (b) Attention map 𝐀 s g \\mathbf {A}^g_s, and (c) Output shadow-free image 𝐙 sf \\mathbf {Z}_{sf}.", "The attention maps clearly indicate the shadow regions of the input shadow images." 
], [ "Boundary Smoothness Loss", "To ensure that the output shadow-free image $\\mathbf {Z}_{sf}$ have smoother transitions in the boundaries defined by the shadow regions of the input shadow image $\\mathbf {I}_s$ , we also use a boundary smoothness loss: $\\mathcal {L}_{\\rm smooth}(G_s) =\\mathbb {E}_{\\mathbf {I}_s}\\Big [\\big \\Vert \\text{B}(\\mathbf {M}_s) *|\\nabla (\\mathbf {Z}_{sf})|\\Vert _1\\Big ],$ where $\\nabla $ is the gradient operation, $\\text{B}$ is a noise-robust function [29], [25], [31] to compute the boundaries of the shadow regions from our shadow mask $\\mathbf {M}_s$ .", "To obtain $\\mathbf {M}_s$ , we compute the difference between the input shadow image $\\mathbf {I}_s$ and output shadow-free image $\\mathbf {Z}_{sf}$ , and apply our mask detection function $\\text{F}$ on the difference: $\\mathbf {M}_s = \\text{F}\\bigl ({\\mathbf {I}_s}_c - {\\mathbf {Z}_{sf}}_c\\bigr )\\!=\\!\\sum _{c\\in \\lbrace r,g,b\\rbrace }\\frac{1}{3}\\Big \\vert \\bigl (\\text{N}({\\mathbf {I}_s}_c - {\\mathbf {Z}_{sf}}_c)\\bigr )\\Big \\vert ,$ where the function $\\text{N}$ is a normalization function defined as $\\text{N}(\\mathbf {I}) = (\\mathbf {I}- \\mathbf {I}_\\text{min})/(\\mathbf {I}_\\text{max} - \\mathbf {I}_\\text{min})$ , where $\\mathbf {I}_\\text{max}$ and $\\mathbf {I}_\\text{min}$ are the maximum and minimum values of $\\mathbf {I}$ , respectively.", "Note that, our shadow mask $\\mathbf {M}_s$ is a soft map and have the values in the range of $[0, 1]$ .", "See Fig.", "REF for some examples.", "The noise-robust function $\\text{B}$ is defined as: $\\text{B}(\\mathbf {M}_s) =$ ${\\mathbf {B}_{s}}_x + {\\mathbf {B}_{s}}_y$ where ${\\mathbf {B}_{s}}_x(\\text{p}) = \\big \\vert \\sum _{\\text{q}\\in \\mathbf {R}_\\text{p}}g_{\\text{p},\\text{q}}\\partial _x(\\mathbf {M}_s(\\text{q}))\\big \\vert $ and ${\\mathbf {B}_{s}}_y(\\text{p}) = \\big \\vert \\sum _{\\text{q}\\in \\mathbf {R}_\\text{p}}g_{\\text{p},\\text{q}}\\partial _y(\\mathbf {M}_s(\\text{q}))\\big \\vert $ , $\\partial _x$ and $\\partial _y$ are partial derivatives in horizontal and vertical directions respectively, $\\text{p}$ defines a pixel, $\\mathbf {R}_\\text{p}$ is a 3$\\times $ 3 window around $\\text{p}$ , and $g_{\\text{p},\\text{q}}$ is a weighing function measuring spatial affinity defined as $g_{\\text{p},\\text{q}}=\\exp \\big (\\frac{-(\\text{p}-\\text{q})^2}{2\\tau ^2}\\big )$ , where $\\tau $ is set to 0.01 by default.", "See Fig.", "REF (c) for some examples of our soft boundary detection.", "Figure: (a) Input shadow image, (b) Soft shadow mask 𝐌 s \\mathbf {M}_s, (c) Detected shadow boundaries, and (d) Our output shadow-free results.", "We can observe that our boundary smoothness loss helps in having smoother outputs in the shadow boundary regions." 
], [ "Adversarial, Consistency and Identity Losses", "For shadow removal, we use the generator $G_s$ , which is coupled with a discriminator $D_{sf}$ .", "To ensure reconstruction consistency, we use another generator $G_{sf}$ coupled with its own discriminator $D_{s}$ .", "We use adversarial losses to train our DC-ShadowNet: $\\mathcal {L}_{\\rm adv}(G_s, D_{sf})=\\mathbb {E}_{\\mathbf {I}_{sf}} \\big [&\\log \\big (D_{sf}(\\mathbf {I}_{sf})\\big )\\big ]+\\\\\\mathbb {E}_{\\mathbf {I}_s} \\big [&\\log \\big (1-D_{sf}(G_s(\\mathbf {I}_s))\\big )\\big ],\\nonumber \\\\\\mathcal {L}_{\\rm adv}(G_{sf}, D_s)=\\mathbb {E}_{\\mathbf {I}_{s}} \\big [&\\log \\big (D_s(\\mathbf {I}_{s})\\big )\\big ]+ \\\\ \\mathbb {E}_{\\mathbf {I}_{sf}}\\big [&\\log \\big (1-D_s(G_{sf}(\\mathbf {I}_{sf}, \\mathbf {M}_s))\\big )\\big ].\\nonumber $ During training, the losses expressed in Eqs.", "(REF ) and () are actually minimized as $\\min _{G_s}\\max _{D_{sf}}$ $(\\mathcal {L}_{\\rm adv}(G_s, D_{sf}))$ and $\\min _{G_{sf}}\\max _{D_s}$ $(\\mathcal {L}_{\\rm adv}(G_{sf}, D_s))$ respectively.", "Note that, unlike generator $G_s$ , the generator $G_{sf}$ takes the mask $\\mathbf {M}_s$ (from Eq.", "REF ) as input to help render more proper shadow images [13].", "Following [34], [30], we define our reconstruction consistency losses by: $\\mathcal {L}_{\\rm cons}(G_s)&= \\mathbb {E}_{\\mathbf {I}_s}\\big [||G_{sf}\\big (G_s(\\mathbf {I}_s), \\mathbf {M}_s\\big )-\\mathbf {I}_{s}||_{1}\\big ],\\\\\\mathcal {L}_{\\rm cons}(G_{sf})& = \\mathbb {E}_{\\mathbf {I}_{sf}}\\big [||G_s\\big (G_{sf}(\\mathbf {I}_{sf}, \\mathbf {M}_s)\\big )-\\mathbf {I}_{sf}||_{1}\\big ].$ While our $G_s$ is designed to remove shadows from shadow input image $\\mathbf {I}_s$ , we also encourage it to output the same image as input, if the input is a shadow-free image $\\mathbf {I}_{sf}$ .", "We achieve this by using the following identity losses [34]: $\\mathcal {L}_{\\rm iden}(G_s) &=\\mathbb {E}_{\\mathbf {I}_{sf}} \\big [||(G_s(\\mathbf {I}_{sf}))-\\mathbf {I}_{sf}||_{1}\\big ],\\\\\\mathcal {L}_{\\rm iden}(G_{sf}) &=\\mathbb {E}_{\\mathbf {I}_{s}} \\big [||(G_{sf}(\\mathbf {I}_s,\\mathbf {M}_0)-\\mathbf {I}_s)||_{1}\\big ].$ where $\\mathbf {M}_0$ represents a mask with all zero values.", "Overall Loss We multiply each loss function with its respective weight, and sum them together to obtain our overall loss function.", "The weights of the losses, {$\\mathcal {L}_{\\rm chroma}$ ,$\\mathcal {L}_{\\rm feature}$ , $\\mathcal {L}_{\\rm smooth}$ , $\\mathcal {L}_{\\rm domcls}$ , $\\mathcal {L}_{\\rm adv}$ , $\\mathcal {L}_{\\rm cons}$ , $\\mathcal {L}_{\\rm iden}$ }, are represented by {$\\lambda _\\text{chroma}$ , $\\lambda _\\text{feat}$ , $\\lambda _\\text{sm}$ , $\\lambda _\\text{dom}$ , $\\lambda _\\text{adv}$ , $\\lambda _\\text{cons}$ , $\\lambda _\\text{iden}$ }.", "Table: RMSE results on the SRD dataset.", "All, S and NS represent entire, shadow and non-shadow regions respectively.Table: RMSE results on the AISTD dataset.", "All, S and NS represent entire, shadow and non-shadow regions respectively.", "M shows that ground truth shadow masks are also used in training." 
], [ "Experiments", "To evaluate our method, we use five datasets: SRD [24], adjusted ISTD (AISTD) [20], ISTD [27], USR [13] and LRSS [9], where LRSS is a soft shadow dataset.", "To ensure fair comparisons, all the unsupervised baselines, including ours are trained and tested on the same datasets.", "For the SRD dataset, for Table REF and Fig.", "REF rows 2-4, we use 2680 shadow images and 2680 shadow-free images for training.", "We use 408 shadow images that have shadow-free ground truth for testing.", "Similarly, for Table REF , we use 1330 training and 540 testing AISTD images; Fig.", "REF row 1, we use 1330 training and 540 testing ISTD images.", "For the USR dataset, we use 1956 shadow, 1770 shadow-free images for training, 489 shadow images for testing.", "However, for testing, the USR dataset does not provide paired shadow and shadow-free images.", "Our DC-ShadowNet is trained in an unsupervised manner (Sec. ).", "The weights of our losses $\\lbrace \\lambda _\\text{chroma},\\lambda _\\text{feat},\\lambda _\\text{sm},\\lambda _\\text{iden}, \\lambda _\\text{adv},\\lambda _\\text{cons},\\lambda _\\text{dom}\\rbrace $ are set empirically to $\\lbrace 1,1,1,10,1,10,1\\rbrace $ .", "Following the baselines [11], [13], to evaluate shadow removal performanceResults of [13], [14], [27], [8], [20], [1] are taken from their official implementations.", "Results of [9], [11] are obtained from their project website: http://visual.cs.ucl.ac.uk/pubs/softshadows/.", "The quantitative results are taken from the paper [21]., we use root mean squared error (RMSE) between the ground truth and the predicted shadow-free imageAs mentioned in [22], the default RMSE evaluation code used by all methods (including ours) actually computes mean absolute error (MAE)..", "Hence, lower numbers show better performance.", "Table: RMSE (lower is better) and PSNR (higher is better) results on the LRSS dataset (soft shadow dataset).", "M and S respectively show that ground truth shadow masks and synthetic paired data are used in training.", "P and UP denote paired and unpaired training, respectively.Figure: Comparison results on the ISTD (top row) and SRD (bottom three rows) datasets.", "(a) Input image, (b) Our method, unsupervisedmethod (c) Mask-ShadowGAN , weakly-supervised method (d) Param+M+D-Net  (top row), supervised methods DSC , (e) ST-CGAN  (top row), DeshadowNet , and traditional method (f) Gong  .", "Our method trained using unsupervised learning provides the best performance.Results on Hard Shadows We conduct quantitative evaluations on the SRD and AISTD datasets, and the corresponding results are shown in Table REF and Table REF , respectively.", "For comparisons, we use the state-of-the-art unsupervised shadow removal method Mask-ShadowGAN [13], weakly-supervised method Param+M+D-Net [21], supervised methods DSC [14], DeshadowNet [24], ST-CGAN [27], and traditional methods Gong .", "[8], Guo .", "[11], and Yang .", "[32].", "From Tables REF and REF , our DC-ShadowNet trained in an unsupervised manner achieves the best performance compared to the baseline methods.", "Compared to the state-of-the-art unsupervised method Mask-ShadowGAN [13], our results for the shadow regions are better by $\\sim $ 33% and $\\sim $ 18% on the SRD and AISTD datasets, respectively.", "The qualitative results for the SRD (rows 2-4) and ISTD (top row) datasets are shown in Fig.", "REF , which include challenging conditions and diverse objects.", "For example, the shadow image contains shadows casted on semantic objects (i.e., 
building, wall).", "In Fig.", "REF , the method [13] alters the colors of the non-shadow regions and cannot properly handle shadow boundaries.", "For the method [8], the recovery of the shadow-free images is unsatisfactory.", "In comparison, our DC-ShadowNet performs better, showing the effectiveness of our domain classification network and our novel unsupervised losses.", "Table: Ablation experiments of our method using the SRD dataset.", "All, S and NS represent entire, shadow and non-shadow regions, respectively.", "The numbers represent RMSE.Figure: Comparison results on the soft shadow LRSS dataset (a) Input image, (b) Our result, (c) Unsupervised method Mask-ShadowGAN , Supervised methods (d) SP+M-Net  and (e) DHAN .", "(f)∼\\sim (h) are the results of the traditional methods (auto means automatic detection).", "Our method, trained using unsupervised learning, generates better shadow-free results.Results on Soft Shadows The LRSS dataset has 134 shadow images, mainly contains soft-shadow images.", "We pre-trained our DC-ShadowNet on the SRD training set, then we use 100 LRSS images for training it in an unsupervised manner.", "The remaining 34 LRSS images with their corresponding shadow-free images are used for testing.", "The quantitative results are shown in Table REF .", "We compare our DC-ShadowNet with the following methods: unsupervised method Mask-ShadowGAN [13], supervised methods SP+M-Net [20] and DHAN [1], automatic method Guo [11], and interactive method [9] which requires user-annotations of shadow regions.", "As shown in Table REF , our method achieves the lowest RMSE and highest PSNR.", "The qualitative results covering a diverse set of images such as indoor/outdoor scenes, shadow regions, etc., are shown in Fig.", "REF .", "While the state-of-the-art methods can remove shadows to some extent, the results are still improper.", "Mask-ShadowGAN [13] fails to handle soft-shadows since it uses binary masks to represent shadow regions.", "Moreover, it mainly relies on adversarial training that cannot guarantee proper shadow removal.", "Supervised methods like DHAN [1] and SP+M-Net [20] have artifacts in the shadow regions as they suffer from the domain gap problem.", "Guo [11] fails due to the difficulty in automatically identifying soft shadow regions.", "Compared to all the baseline methods, our results are more proper, and the image surfaces are better-restored.", "Figure: (a) Input image, (b) and (c) show our results without and with test-time-training, (d) Result of Mask-ShadowGAN .Test-Time Training We show that our method being unsupervised can be used for test-time training to further improve the results on the test images.", "For this, we use the 34 shadow images from the test set used in the soft shadow evaluation above, and employ our unsupervised losses to train our method.", "To evaluate shadow removal performance, we use the corresponding shadow-free images; and the performance in terms of RMSE and PSNR improves from 3.48 and 31.01 to 3.36 and 31.31, respectively.", "See Fig.", "REF for a qualitative example showing the effectiveness of test-time training." 
], [ "Ablation Study", "We conduct ablation studies to analyze the effectiveness of different components of our method such as the shadow-invariant chromaticity loss $\\mathcal {L}_{\\rm chroma}$ , shadow-robust feature loss $\\mathcal {L}_{\\rm feature}$ , boundary-smoothness loss $\\mathcal {L}_{\\rm smooth}$ , and the domain classifier $\\Phi ^g_s$ and $\\Phi ^d_{sf}$ .", "We use the SRD dataset for our experiments and the corresponding quantitative results are shown in Table REF .", "Each component of our method is important and contributes to the better performance of our method." ], [ "Conclusion", "We have proposed DC-ShadowNet, an unsupervised learning-based shadow removal method guided by domain classification network, shadow-free chromaticity, shadow-robust feature and boundary smoothness losses.", "Our method can robustly handle both hard and soft shadow images.", "We integrate a domain classifier with our generator and its corresponding discriminator, enabling our method to focus on shadow regions.", "To train DC-ShadowNet, we use novel unsupervised losses that enable it to directly learn from unlabeled (no ground truth) real shadow images.", "We also showed that we could employ test-time refinement that can further improve our performance.", "Experimental results have confirmed that our method is effective and outperforms the state-of-the-art shadow removal methods." ] ]
2207.10434
[ [ "Strongly symmetric homeomorphisms on the real line with uniform\n continuity" ], [ "Abstract We investigate strongly symmetric homeomorphisms of the real line which appear in harmonic analysis aspects of quasiconformal Teichm\\\"uller theory.", "An element in this class can be characterized by a property that it can be extended quasiconformally to the upper half-plane so that its complex dilatation induces a vanishing Carleson measure.", "However, differently from the case on the unit circle, strongly symmetric homeomorphisms on the real line are not preserved under either the composition or the inversion.", "In this paper, we present the difference and the relation between these two cases.", "In particular, we show that if uniform continuity is assumed for strongly symmetric homeomorphisms of the real line, then they are preserved by those operations.", "We also show that the barycentric extension of uniformly continuous one induces a vanishing Carleson measure and so do the composition and the inverse of those quasiconformal homeomorphisms of the upper half-plane." ], [ "Introduction and statement of the main results", "The universal Teichmüller space and its subspaces are regarded as the spaces consisting of quasiconformal mappings on the complex plane.", "By introducing various particular properties to these mappings from view points of complex analysis and harmonic analysis, we can study those concepts through such subspaces reflecting their properties.", "For instance, studies on Teichmüller spaces of integrable complex dilatations with Weil–Petersson metrics are in Cui [9], Takhtajan and Teo [33], and Shen [30], those of BMO and VMO functions are in Astala and Zinsmeister [3] and Shen and Wei [31], and those of $C^{1+\\alpha }$ -diffeomorphisms are in [23].", "In the conformally invariant formulation, Teichmüller spaces defined on the upper half-plane $\\mathbb {U}$ are the same as those defined on the unit disk $\\mathbb {D}$ .", "However, if we consider subspaces of the universal Teichmüller space by imposing certain conditions on quasiconformal and quasisymmetric mappings, the theory can differ greatly depending on whether the conditions are placed on the compact set (the unit circle $\\mathbb {S}$ ) or on the non-compact set (the real line $\\mathbb {R}$ ).", "In this paper, we study the class of strongly symmetric homeomorphisms in the non-compact setting.", "A sense-preserving homeomorphism $h$ of $\\mathbb {R}$ is called strongly symmetric if $h$ is locally absolutely continuous, $h^{\\prime }$ is an $A_\\infty $ -weight, and $\\log h^{\\prime }$ is a VMO function.", "We denote the set of all strongly symmetric homeomorphisms on $\\mathbb {R}$ by ${\\rm SS}(\\mathbb {R})$ .", "The set ${\\rm SS}(\\mathbb {S})$ of those on $\\mathbb {S}$ , which defines the original VMO Teichmüller space as in [31], is a natural counterpart to $\\rm SS(\\mathbb {R})$ .", "The situation on $\\mathbb {R}$ is more complicated than that on $\\mathbb {S}$ because one has to worry about behavior at $\\infty $ .", "Typically, we have found a phenomenon that $\\rm SS(\\mathbb {R})$ does not constitute a group by the composition of mappings in [37] whereas $\\rm SS(\\mathbb {S})$ is a group.", "This causes a trouble in the theory of Teichmüller spaces.", "The VMO Teichmüller space $T_v(\\mathbb {R})$ is defined as the set of all equivalence classes of ${\\rm SS}(\\mathbb {R})$ by affine transformations, and its complex analytic structure is studied in [30] and [38].", "On the contrary, homogeneity 
of Teichmüller space is important to consider the group of automorphisms of the Teichmüller space and also to introduce an invariant metric with respect to the analytic structure; $T_v(\\mathbb {R})$ lacks this nature.", "In this paper, we consider a condition under which ${\\rm SS}(\\mathbb {R})$ is preserved by the composition, and prove the following result.", "Theorem 1 If $g, h \\in {\\rm SS}(\\mathbb {R})$ and $h^{-1}$ is uniformly continuous on $\\mathbb {R}$ , then $g \\circ h^{-1} \\in {\\rm SS}(\\mathbb {R})$ .", "Let ${\\rm SS}_{\\rm uc}(\\mathbb {R})$ denote a subset of ${\\rm SS}(\\mathbb {R})$ consisting of all elements $h$ such that both $h$ and $h^{-1}$ are uniformly continuous.", "Then, ${\\rm SS}_{\\rm uc}(\\mathbb {R})$ becomes a group by Theorem REF .", "Every element $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ acts on $T_v(\\mathbb {R})$ as an automorphism that maps the equivalence class of $h$ to the origin of $T_v(\\mathbb {R})$ (see Section 3).", "Hence, ${\\rm SS}_{\\rm uc}(\\mathbb {R})$ is embedded into the group ${\\rm Aut}(T_v(\\mathbb {R}))$ of biholomorphic automorphisms of $T_v(\\mathbb {R})$ , which plays the role of the Teichmüller modular group of $T_v(\\mathbb {R})$ .", "In order to see that the action of $h$ is biholomorphic, the following investigation on quasiconformal extension is necessary.", "The complex dilatation $\\mu _F$ of a quasiconformal homeomorphism $F$ is defined by $\\mu _F = F_{\\bar{z}}/F_z$ .", "It satisfies $\\Vert \\mu _F \\Vert _\\infty <1$ .", "Let $\\mathcal {M}(\\mathbb {U})$ be the set of all measurable functions $\\mu $ on $\\mathbb {U}$ such that $\\Vert \\mu \\Vert _\\infty <1$ and $|\\mu (z)|^2dxdy/y$ is a Carleson measure on $\\mathbb {U}$ .", "In addition, if $|\\mu (z)|^2dxdy/y$ is a vanishing Carleson measure, then the subset of all such $\\mu $ is denoted by $\\mathcal {M}_0(\\mathbb {U})$ .", "The following chain rule of complex dilatations is obtained by refinement of the argument in Cui and Zinsmeister [10] who showed the first statement in the case that $G$ is the identity map.", "Theorem 2 Let $G$ and $H$ be quasiconformal homeomorphisms of $\\mathbb {U}$ onto itself, and assume that $H$ is bi-Lipschitz with respect to the hyperbolic metric on $\\mathbb {U}$ .", "Then, $(1)$ $\\mu _{G \\circ H^{-1}}$ belongs to $\\mathcal {M}(\\mathbb {U})$ if $\\mu _G,\\, \\mu _H \\in \\mathcal {M}(\\mathbb {U})$ ; $(2)$ $\\mu _{G \\circ H^{-1}}$ belongs to $\\mathcal {M}_0(\\mathbb {U})$ if $\\mu _G,\\, \\mu _H \\in \\mathcal {M}_0(\\mathbb {U})$ and in addition if the boundary extension $h^{-1}$ of $H^{-1}$ to $\\mathbb {R}$ is uniformly continuous.", "We remark that if the uniform continuity is dropped then statement (2) is no longer valid due to the lack of group structure of $\\rm SS(\\mathbb {R})$ .", "The relation between $\\mathcal {M}_0(\\mathbb {U})$ and $\\rm SS(\\mathbb {R})$ will be given in Proposition REF .", "By statement (2) of Theorem REF , we can define a biholomorphic automorphism of $\\mathcal {M}_0(\\mathbb {U})$ induced by some quasiconformal extension of $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ .", "Suppose that $h$ extends to a bi-Lipschitz quasiconformal homeomorphism $H$ of $\\mathbb {U}$ such that $\\mu _H \\in \\mathcal {M}_0(\\mathbb {U})$ , which is known to be always the case independently of Theorem REF below (see Proposition REF ).", "Then, by representing any element of $\\mathcal {M}_0(\\mathbb {U})$ by $\\mu _G$ for a quasiconformal homeomorphism $G$ of $\\mathbb {U}$ , we have the right 
translation $r_H:\\mathcal {M}_0(\\mathbb {U}) \\rightarrow \\mathcal {M}_0(\\mathbb {U})$ by the correspondence $\\mu _G \\mapsto \\mu _{G \\circ H^{-1}}$ .", "Standard arguments show that $r_H$ is biholomorphic; moreover, this action is projected down to $T_v(\\mathbb {R})$ under the Teichmüller projection to induce a biholomorphic automorphism $R_h:T_v(\\mathbb {R}) \\rightarrow T_v(\\mathbb {R})$ that is well defined for each $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ (Theorem REF ).", "There are several ways to extend quasisymmetric homeomorphisms of $\\mathbb {S}$ and $\\mathbb {R}$ to quasiconformal homeomorphisms.", "The classical one is due to Beurling and Ahlfors [4], and its variants and modified versions were also introduced by Semmes [28] and by Fefferman, Kenig and Pipher [14].", "Including the barycentric extension introduced by Douady and Earle [12], all of them have the property that the extension map is a bi-Lipschitz diffeomorphism with respect to the hyperbolic metric.", "In addition, the conformal naturality of the barycentric extension is useful in the theory of quasiconformal mappings, in particular when we consider a Möbius group action on the Teichmüller space.", "However, it is so far unknown whether the complex dilatation of the barycentric extension $e(h)$ of a strongly symmetric homeomorphism $h$ of $\\mathbb {R}$ induces a vanishing Carleson measure on $\\mathbb {U}$ .", "In this paper, we prove that it does if a strongly symmetric homeomorphism $h$ and its inverse $h^{-1}$ are uniformly continuous on $\\mathbb {R}$ , as stated in the following result.", "Theorem 3 If $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ , then $\\mu _{e(h)} \\in \\mathcal {M}_0 (\\mathbb {U})$ .", "This result is a consequence of Theorem REF , and implies that the biholomorphic automorphism $R_h$ of $T_v(\\mathbb {R})$ for $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ is lifted canonically to the biholomorphic automorphism $r_H$ of $\\mathcal {M}_0 (\\mathbb {U})$ by the barycentric extension $H=e(h)$ .", "We end this introduction (Section 1) by describing the organization of the rest of this paper.", "(Section 2): Definitions and a review of basic results are given.", "These concern strongly symmetric and quasisymmetric homeomorphisms, BMO and VMO functions, the Muckenhoupt weights, the Carleson measures, the spaces $\\mathcal {M}(\\mathbb {U})$ and $\\mathcal {M}_0(\\mathbb {U})$ of Beltrami coefficients, and their Teichmüller spaces.", "(Section 3): Under the assumption of uniform continuity, the group structure is considered.", "The proof of Theorem REF is given.", "We also prove a similar result in Theorem REF for symmetric homeomorphisms of $\\mathbb {R}$ , a vanishing class of quasisymmetric homeomorphisms, since it has a property parallel to Theorem REF .", "(Section 4): The composition of quasiconformal homeomorphisms whose complex dilatations satisfy the Carleson measure condition is considered.", "The uniform continuity condition is applied in the case of the vanishing Carleson measure condition.", "The proof of Theorem REF is given.", "(Section 5): The barycentric extension defined on $\\mathbb {R}$ is considered.", "The proof of Theorem REF is given.", "We also give another proof of Theorem REF based on Theorem REF .", "(Section 6): Comparisons between strongly symmetric homeomorphisms on $\\mathbb {R}$ and those on $\\mathbb {S}$ under conjugation by the Cayley transformation are addressed." 
], [ "Quasisymmetric homeomorphisms", "As background knowledge, the definitions of quasisymmetric homeomorphisms and the universal Teichmüller space are given.", "Including the concept of quasiconformal mapping, a basic reference of those is [1].", "Definition An increasing homeomorphism $h$ of the real line $\\mathbb {R}$ onto itself is said to be quasisymmetric if there exists a constant $M \\ge 1$ such that $\\frac{1}{M} \\le \\frac{h(x+t)-h(x)}{h(x)-h(x-t)}\\le M$ for all $x\\in \\mathbb {R}$ and $t>0$ .", "The least possible value of such $M$ is called the quasisymmetry constant of $h$ .", "If we define a measure $m_h$ by $m_h(E)=|h(E)|$ for a measurable subset $E \\subset \\mathbb {R}$ with respect to the Lebesgue measure $|\\cdot |$ , then the boundedness of the quasisymmetry quotient of $h$ is equivalent to that $m_h$ is a doubling measure, i.e., there is some constant $M^{\\prime } \\ge 1$ such that $m_h(2I) \\le M^{\\prime } m_h(I)$ for every bounded closed interval $I=[x-t,x+t]$ and its double $2I=[x-2t,x+2t]$ .", "Beurling and Ahlfors [4] proved the following theorem.", "Proposition 4 An increasing homeomorphism $h$ of the real line $\\mathbb {R}$ onto itself is quasisymmetric if and only if there exists some quasiconformal homeomorphism of the upper half-plane $\\mathbb {U}$ onto itself that is continuously extendable to the boundary map $h$ .", "This quasiconformal extension is explicitly written in terms of $h$ , and is called the Beurling–Ahlfors extension in the literature.", "Later, Douady and Earle [12] gave a quasiconformal extension of a quasisymmetric homeomorphism, called the barycentric extension, in a conformally natural way.", "We will explain this extension in Section 4.", "Definition Let $\\rm QS(\\mathbb {R})$ denote the group of all quasisymmetric homeomorphisms of $\\mathbb {R}$ .", "The universal Teichmüller space $T$ is defined as the group $\\rm QS(\\mathbb {R})$ modulo the left action of the group $\\rm Aff(\\mathbb {R})$ of all real affine mappings $z \\mapsto az + b, \\, a > 0, b \\in \\mathbb {R}$ , i.e., $T = {\\rm Aff}(\\mathbb {R})\\backslash {\\rm QS}(\\mathbb {R})$ .", "See monographs [21], [24] for comprehensive introduction on Teichmüller spaces.", "Let $M(\\mathbb {U})$ denote the open unit ball of the Banach space $L^{\\infty }(\\mathbb {U})$ of essentially bounded measurable functions on $\\mathbb {U}$ .", "An element in $M(\\mathbb {U})$ is called a Beltrami coefficient.", "By the measurable Riemann mapping theorem, a Beltrami coefficient $\\mu \\in M(\\mathbb {U})$ determines uniquely a quasiconformal homeomorphism $F$ on $\\mathbb {U}$ with its complex dilatation $\\mu _F=F_{\\bar{z}}/F_{z}$ equal to $\\mu $ up to post composition with conformal mappings.", "See [1].", "By Proposition REF with the measurable Riemann mapping theorem, the universal Teichmüller space $T$ can be also defined as the set of all equivalence classes $[\\mu ]$ of $\\mu \\in M(\\mathbb {U})$ , where $\\mu , \\mu ^{\\prime } \\in M(\\mathbb {U})$ are equivalent if they produce quasiconformal homeomorphisms of $\\mathbb {U}$ onto itself having the same boundary extension to $\\mathbb {R}$ .", "We call the quotient map $\\pi :M(\\mathbb {U}) \\rightarrow T$ the Teichmüller projection." 
], [ "BMO, VMO, and $A_\\infty $ -weights", "The functions in $\\rm BMO(\\mathbb {R})$ are characterized by the boundedness of their mean oscillations over intervals.", "The functions in $\\rm VMO(\\mathbb {R})$ are those with additional property that their mean oscillations over small intervals are small.", "To be precise: Definition A locally integrable function $u$ on $\\mathbb {R}$ belongs to BMO if $\\Vert u \\Vert _{ \\rm BMO(\\mathbb {R})} = \\sup _{I \\subset \\mathbb {R}}\\frac{1}{|I|} \\int _I |u(x)-u_I| dx <\\infty ,$ where the supremum is taken over all bounded intervals $I$ on $\\mathbb {R}$ and $u_I$ denotes the integral mean of $u$ over $I$ .", "The set of all BMO functions on $\\mathbb {R}$ is denoted by ${\\rm BMO}(\\mathbb {R})$ .", "This is regarded as a Banach space with the BMO-norm $\\Vert \\cdot \\Vert _{ \\rm BMO(\\mathbb {R})}$ by ignoring the difference of constant functions.", "It is said that $u \\in {\\rm BMO}(\\mathbb {R})$ belongs to VMO if $\\lim _{|I| \\rightarrow 0}\\frac{1}{|I|} \\int _I |u(x)-u_I| dx=0,$ and the set of all such functions is denoted by ${\\rm VMO}(\\mathbb {R})$ .", "The spaces $\\rm BMO(\\mathbb {S})$ and $\\rm VMO(\\mathbb {S})$ can be defined in the same way.", "Sarason [26] proved that $\\rm VMO(\\mathbb {R})$ is the closure of ${\\rm BMO}(\\mathbb {R}) \\cap {\\rm UC}(\\mathbb {R})$ in the BMO-norm, where ${\\rm UC}(\\mathbb {R})$ denotes the set of uniformly continuous functions on $\\mathbb {R}$ .", "In particular, $\\rm VMO(\\mathbb {R})$ is a closed subspace of ${\\rm BMO}(\\mathbb {R})$ .", "Remark The space ${\\rm VMO}(\\mathbb {S})$ is the closure of $C^\\infty (\\mathbb {S})$ in the BMO-norm.", "Differently from the case of $\\mathbb {S}$ , however, it was shown by Martell, Mitrea et al.", "[22] that $L^\\infty (\\mathbb {R}) \\cap {\\rm UC}(\\mathbb {R})$ is not dense in ${\\rm VMO}(\\mathbb {R})$ , while ${\\rm BMO}(\\mathbb {R}) \\cap C^\\alpha (\\mathbb {R})$ is dense in $\\rm VMO(\\mathbb {R})$ in the BMO-norm.", "Here, $C^\\alpha (\\mathbb {R})$ denotes the set of Hölder continuous functions on $\\mathbb {R}$ of order $\\alpha \\in (0,1)$ .", "BMO functions satisfy the following John–Nirenberg inequality (see [18], [32]).", "Proposition 5 There exists two universal positive constants $C_1, C_2>0$ such that for any BMO function $u$ on $\\mathbb {R}$ $($ or on $\\mathbb {S}$$)$ , any bounded closed interval $J \\subset I$ on any interval $I \\subset \\mathbb {R}$ $($ or $I \\subset \\mathbb {S}$$)$ , the inequality $\\frac{1}{|J|}|\\lbrace z \\in J: |u(x) - u_J|> t \\rbrace |\\le C_1 {\\rm exp}\\left(\\frac{-C_2t}{\\Vert u \\Vert _{{\\rm BMO}(I)}}\\right)$ holds for all $t > 0$ , where $\\Vert u \\Vert _{{\\rm BMO}(I)}$ is the BMO-norm of $u$ on $I$ .", "The exponentials of BMO functions are closely related to the Muckenhoupt weights.", "There are several equivalent definitions of $A_{\\infty }$ -weights (see [7]), and the following is one of them.", "Definition A locally integrable non-negative measurable function $\\omega \\ge 0$ on $\\mathbb {R}$ is called a weight.", "We say that $\\omega $ is an $A_{\\infty }$ -weight if there exist two positive constants $C$ and $\\alpha $ such that $\\frac{\\int _E\\omega (x)dx}{\\int _I\\omega (x)dx}\\le C\\left(\\frac{|E|}{|I|}\\right)^{\\alpha }$ whenever $I\\subset \\mathbb {R}$ is a bounded closed interval and $E\\subset I$ a measurable subset.", "A weight $\\omega $ is called doubling if the measure $\\omega (x)dx$ is doubling.", "If we use the above definition, it is easy to 
see that an $A_{\\infty }$ -weight is doubling.", "But, the converse is not.", "Fefferman and Muckenhoupt [15] provided an example of a weight that satisfies the doubling condition but not $A_{\\infty }$ .", "If $\\omega $ is an $A_{\\infty }$ -weight, then $\\log \\omega $ is a BMO function.", "Conversely, if $\\log \\omega $ is a real-valued BMO function, then $\\omega ^{\\delta }$ is an $A_{\\infty }$ -weight for some small $\\delta > 0$ .", "This is a consequence from the John–Nirenberg inequality, but $\\omega $ itself need not be even locally integrable, and thus need not be an $A_{\\infty }$ -weight (see [16]).", "However, if $\\log \\omega $ is a real-valued VMO function on the unit circle $\\mathbb {S}$ , then by the John–Nirenberg inequality, $\\omega $ is an $A_{\\infty }$ -weight on $\\mathbb {S}$ (see [16]).", "Namely, the local BMO norm of a VMO function $\\log \\omega $ can be made so small on a small interval that $\\omega $ is a local $A_{\\infty }$ -weight on this interval.", "Then, this is in fact an $A_{\\infty }$ -weight on $\\mathbb {S}$ by its compactness." ], [ "Beltrami coefficients inducing Carleson measures", "We define the spaces of Beltrami coefficients characterizing particular quasiconformal homeomorphisms considered in our research.", "Definition Let $\\lambda $ be a positive Borel measure on the upper half-plane $\\mathbb {U}$ .", "We say that $\\lambda $ is a Carleson measure if $\\Vert \\lambda \\Vert _c = \\sup _{I \\subset \\mathbb {R}} \\frac{\\lambda (I \\times (0,|I|])}{|I|} < \\infty ,$ where the supremum is taken over all bounded closed interval $I \\subset \\mathbb {R}$ and $I \\times (0,|I|] \\subset \\mathbb {U}$ is a Carleson box.", "The set of all Carleson measures on $\\mathbb {U}$ is denoted by ${\\rm CM}(\\mathbb {U})$ .", "A Carleson measure $\\lambda \\in {\\rm CM}(\\mathbb {U})$ is called vanishing if $\\lim _{|I| \\rightarrow 0}\\frac{\\lambda (I \\times (0,|I|])}{|I|} = 0.$ The set of all vanishing Carleson measures on $\\mathbb {U}$ is denoted by ${\\rm CM}_0(\\mathbb {U})$ .", "Definition Let $\\mathcal {L}(\\mathbb {U})$ be the Banach space of all essentially bounded measurable functions $\\mu $ on $\\mathbb {U}$ such that $\\lambda _{\\mu } \\in {\\rm CM}(\\mathbb {U})$ for $d\\lambda _{\\mu }(z) = |\\mu (z)|^2\\rho _{\\mathbb {U}}(z)dxdy$ .", "Here, $\\rho _{\\mathbb {U}}$ is the hyperbolic density on $\\mathbb {U}$ .", "The norm of $\\mathcal {L}(\\mathbb {U})$ is given by $\\Vert \\mu \\Vert _\\infty +\\Vert \\lambda _\\mu \\Vert _c^{1/2}$ .", "Let $\\mathcal {L}_0(\\mathbb {U})$ be the subspace of $\\mathcal {L}(\\mathbb {U})$ consisting of all elements $\\mu $ such that $\\lambda _{\\mu } \\in {\\rm CM}_0(\\mathbb {U})$ .", "Moreover, we set the corresponding spaces of Beltrami coefficients as $\\mathcal {M}(\\mathbb {U}) = M(\\mathbb {U}) \\cap \\mathcal {L}(\\mathbb {U})$ , and $\\mathcal {M}_0(\\mathbb {U}) = M(\\mathbb {U}) \\cap \\mathcal {L}_0(\\mathbb {U})$ .", "On the unit disk $\\mathbb {D}$ , the corresponding spaces of Carleson measures $\\rm CM(\\mathbb {D})$ and vanishing Carleson measures $\\rm CM_0(\\mathbb {D})$ are defined in the same way (see [18]), and so are $\\mathcal {M}(\\mathbb {D})$ and $\\mathcal {M}_0(\\mathbb {D})$ ." 
], [ "Strongly quasisymmetric homeomorphism", "The notion of strongly quasisymmetric homeomorphisms was introduced by Semmes [28].", "This subclass is much related with and also has wide application to some important problems in real and harmonic analysis (see [11]).", "Definition An increasing homeomorphism $h$ of $\\mathbb {R}$ onto itself is said to be strongly quasisymmetric if $h$ is locally absolutely continuous and $h^{\\prime }$ belongs to the class of $A_{\\infty }$ -weights.", "Let ${\\rm SQS}(\\mathbb {R})$ denote the set of all strongly quasisymmetric homeomorphisms of $\\mathbb {R}$ onto itself.", "In particular, if $h \\in \\rm SQS(\\mathbb {R})$ then $\\log h^{\\prime }$ belongs to $\\rm BMO(\\mathbb {R})$ .", "By the definition of $A_{\\infty }$ -weight, we see that ${\\rm SQS}(\\mathbb {R})$ is preserved under the composition of elements.", "In addition, by [7], the inverse operation also preserves ${\\rm SQS}(\\mathbb {R})$ .", "Thus, ${\\rm SQS}(\\mathbb {R})$ is a group.", "Moreover, since an $A_{\\infty }$ -weight defines a doubling measure, $\\rm SQS(\\mathbb {R})$ is a subgroup of $\\rm QS(\\mathbb {R})$ .", "Definition We say that a strongly quasisymmetric homeomorphism $h \\in \\rm SQS(\\mathbb {R})$ is strongly symmetric if $\\log h^{\\prime }$ belongs to $\\rm VMO(\\mathbb {R})$ .", "Let $\\rm SS(\\mathbb {R})$ denote the set of all strongly symmetric homeomorphisms of $\\mathbb {R}$ .", "On the unit circle $\\mathbb {S}$ , strongly quasisymmetric and symmetric homeomorphisms are defined similarly.", "The sets of those are denoted by ${\\rm SQS}(\\mathbb {S})$ and ${\\rm SS}(\\mathbb {S})$ respectively.", "These classes were investigated in [13], [31], [35], [40] during their study of BMO theory on Teichmüller spaces.", "In particular, it is known that ${\\rm SS}(\\mathbb {S})$ is a subgroup of ${\\rm SQS}(\\mathbb {S})$ .", "Concerning the quasiconformal extension of strongly quasisymmetric homeomorphisms to $\\mathbb {U}$ , and conversely the boundary extension of quasiconformal homeomorphisms with complex dilatations in $\\mathcal {M}(\\mathbb {U})$ , the results in Fefferman, Kenig and Pipher [14] imply the following claim adapted to our purpose.", "Proposition 6 An increasing homeomorphism $h$ of $\\mathbb {R}$ onto itself belongs to ${\\rm SQS}(\\mathbb {R})$ if and only if $h$ continuously extends to some quasiconformal homeomorphism of $\\mathbb {U}$ onto itself whose complex dilatation belongs to $\\mathcal {M}(\\mathbb {U})$ .", "Precisely, a variant of the Beurling–Ahlfors extension by the heat kernel introduced in [14] gives an appropriate extension (quasiconformal diffeomorphism) for strongly quasisymmetric homeomorphisms, while a variant of the Beurling–Ahlfors extension constructed by Semmes [28] is also valid in this case under the assumption that $\\log h^{\\prime }$ has small BMO norm.", "Due to the conformal invariance of Carleson measures (see [18]), the corresponding statement to Proposition REF for ${\\rm SQS}(\\mathbb {S})$ and $\\mathcal {M}(\\mathbb {D})$ holds true.", "However, for strongly symmetric homeomorphisms, the situation is different.", "In [31], the corresponding claim for strongly symmetric homeomorphisms on $\\mathbb {S}$ is proved though this only asserts the existence of desired quasiconformal extension.", "Later, under this existence result, it is shown that the barycentric extension is in fact such an extension (see Remark in Section 5).", "Proposition 7 A sense-preserving homeomorphism $\\varphi $ of $\\mathbb {S}$ onto 
itself belongs to $\\rm SS(\\mathbb {S})$ if and only if $\\varphi $ continuously extends to some quasiconformal homeomorphism of $\\mathbb {D}$ onto itself whose complex dilatation belongs to $\\mathcal {M}_0(\\mathbb {D})$ .", "This result uses a fact that $C^\\infty (\\mathbb {S})$ is dense in ${\\rm VMO}(\\mathbb {S})$ , and does not directly imply the corresponding statement for ${\\rm SS}(\\mathbb {R})$ and $\\mathcal {M}_0(\\mathbb {U})$ .", "Nevertheless, the result itself is satisfied and we obtain the following characterization of strongly symmetric homeomorphisms on $\\mathbb {R}$ by their quasiconformal extensions.", "Proposition 8 An increasing homeomorphism $h$ of $\\mathbb {R}$ onto itself belongs to ${\\rm SS}(\\mathbb {R})$ if and only if $h$ continuously extends to some quasiconformal homeomorphism of $\\mathbb {U}$ onto itself whose complex dilatation belongs to $\\mathcal {M}_0(\\mathbb {U})$ .", "Indeed, it was proved in [38] that the variant of the Beurling–Ahlfors extension by the heat kernel yields such a quasiconformal extension.", "The fact that the above boundary extension is strongly symmetric was obtained in [29].", "Definition The quotient space $T_b = {\\rm Aff}(\\mathbb {R}) \\backslash {\\rm SQS}(\\mathbb {R})$ is called the BMO Teichmüller space.", "This can be also defined by $T_b=\\pi (\\mathcal {M}(\\mathbb {U}))$ .", "This space was introduced by Astala and Zinsmeister [3].", "By the conformal invariance, we can also define this by $T_b = \\mbox{Möb}(\\mathbb {S}) \\backslash {\\rm SQS}(\\mathbb {S})$ , where $\\mbox{Möb}(\\mathbb {S})$ is the group of Möbius transformations keeping $\\mathbb {S}$ invariant.", "Definition The quotient space $T_v = \\mbox{Möb}(\\mathbb {S}) \\backslash {\\rm SS}(\\mathbb {S})$ is called the VMO Teichmüller space, and $T_v(\\mathbb {R})= {\\rm Aff}(\\mathbb {R}) \\backslash {\\rm SS}(\\mathbb {R})$ the VMO Teichmüller space on the real line.", "These can be also defined by $T_v=\\pi (\\mathcal {M}_0(\\mathbb {D}))$ and $T_v(\\mathbb {R})=\\pi (\\mathcal {M}_0(\\mathbb {U}))$ .", "These two VMO Teichmüller spaces are different.", "Considering all of them on $\\mathbb {R}$ by taking the conjugate, we have the strict inclusion relations $T_v \\subset T_v(\\mathbb {R}) \\subset T_b$ (see Theorem REF ).", "The VMO Teichmüller space on the real line was introduced by Shen [29].", "The Teichmüller spaces $T_b$ and $T_v$ possess the group structure inherited from those of ${\\rm SQS}(\\mathbb {S})$ and ${\\rm SS}(\\mathbb {S})$ ." 
], [ "Uniform continuity: Proofs of Theorem ", "In this section, we give the real-variable proof of Theorem REF for strongly symmetric homeomorphisms.", "We also show that symmetric homeomorphisms defined below have a very similar property to that in Theorem REF .", "It is known that each strongly quasisymmetric homeomorphism $h$ induces a bounded linear isomorphism $P_h: \\rm BMO(\\mathbb {R})\\rightarrow BMO(\\mathbb {R})$ by $P_h(u) = u \\circ h$ (see Jones [20]).", "We note that the operator $P_h$ does not necessarily maps $\\rm VMO(\\mathbb {R})$ into itself.", "For example, there exist strongly symmetric homeomorphisms $g$ and $h$ (see Section 6 for specific constructions) such that $\\log (g\\circ h)^{\\prime } = \\log g^{\\prime } \\circ h + \\log h^{\\prime } \\notin \\rm VMO(\\mathbb {R})$ .", "Since $\\log h^{\\prime } \\in \\rm VMO(\\mathbb {R})$ , we see that $P_h(\\log g^{\\prime })= \\log g^{\\prime } \\circ h \\notin \\rm VMO(\\mathbb {R})$ .", "However, under the uniform continuity of $h$ , the operator $P_h$ maps $\\rm VMO(\\mathbb {R})$ properly.", "Namely, we have the following: Proposition 9 Let $h \\in {\\rm SQS}(\\mathbb {R})$ such that $h$ is uniformly continuous on $\\mathbb {R}$ .", "Then, the operator $P_h$ maps $\\rm VMO(\\mathbb {R})$ into $\\rm VMO(\\mathbb {R})$ .", "We follow the proof of Anderson, Becker and Lesley [2].", "To see that this is composed by real analytic arguments, we show their proof.", "Let $v=P_h(u)$ for any $u \\in {\\rm BMO}(\\mathbb {R})$ , which is also in $\\rm BMO(\\mathbb {R})$ .", "For every bounded interval $I \\subset \\mathbb {R}$ , we have $\\int _I |v(x)-v_I| dx \\le \\int _I |v(x)-u_{h(I)}| dx+ \\int _I |v_I-u_{h(I)}|dx \\le 2 \\int _I |v(x)-u_{h(I)}| dx.$ Here, for $E_t=\\lbrace y \\in h(I): |u(y) - u_{h(I)}|> t \\rbrace $ , Proposition REF implies that $\\frac{|E_t|}{|h(I)|}\\le C_1 {\\rm exp}\\left(\\frac{-C_2t}{\\Vert u \\Vert _{{\\rm BMO}(h(I))}}\\right).$ Since the inverse $h^{-1}$ also belongs to ${\\rm SQS}(\\mathbb {R})$ , there are positive constants $C$ and $\\alpha $ for the $A_\\infty $ -weight $(h^{-1})^{\\prime }$ such that $\\frac{|h^{-1}(E_t)|}{|I|} \\le C\\left(\\frac{|E_t|}{|h(I)|}\\right)^\\alpha .$ We consider the distribution function $\\lambda (t)=|h^{-1}(E_t)|$ , where $h^{-1}(E_t)=\\lbrace x \\in I: |u\\circ h(x)-u_{h(I)}|> t \\rbrace .$ Then, we have $&\\quad \\int _I |v(x)-u_{h(I)}| dx=\\int _I |u \\circ h(x)-u_{h(I)}| dx=\\int _0^\\infty \\lambda (t)dt\\\\&\\le CC_1^\\alpha |I| \\int _0^\\infty {\\rm exp}\\left(\\frac{-\\alpha C_2 t}{\\Vert u \\Vert _{{\\rm BMO}(h(I))}}\\right)dt=\\frac{CC_1^\\alpha }{\\alpha C_2}|I| \\Vert u \\Vert _{{\\rm BMO}(h(I))}.$ Therefore, we conclude that $\\frac{1}{|I|} \\int _I |P_h(u)(x)-(P_h(u))_I|dx \\le \\frac{2CC_1^\\alpha }{\\alpha C_2} \\Vert u \\Vert _{{\\rm BMO}(h(I))}.$ Since $h$ is uniformly continuous on $\\mathbb {R}$ , we have $|h(I)|\\rightarrow 0$ as $|I| \\rightarrow 0$ .", "Hence, the BMO norm $\\Vert u \\Vert _{{\\rm BMO}(h(I))}$ tends uniformly to 0 as $|I| \\rightarrow 0$ owning to $u \\in \\rm VMO(\\mathbb {R})$ .", "This implies that $P_h(u) \\in \\rm VMO(\\mathbb {R})$ .", "Theorem REF follows from this proposition.", "Since $\\rm SQS(\\mathbb {R})$ is a group, we have $g \\circ h^{-1} \\in \\rm SQS(\\mathbb {R})$ by the condition $g, h \\in \\rm SS(\\mathbb {R}) \\subset \\rm SQS(\\mathbb {R})$ .", "Noting that $\\log (g\\circ h^{-1})^{\\prime } = (\\log g^{\\prime } - \\log h^{\\prime })\\circ h^{-1} = P_{h^{-1}}(\\log g^{\\prime } - \\log 
h^{\\prime }),$ we conclude by Proposition REF that $\\log (g\\circ h^{-1})^{\\prime } \\in \\rm VMO(\\mathbb {R})$ , and thus $g\\circ h^{-1} \\in \\rm SS(\\mathbb {R})$ .", "Theorem REF implies the following as well: Corollary 10 The following statements hold: If $h \\in \\rm SS(\\mathbb {R})$ and $h^{-1}$ is uniformly continuous on $\\mathbb {R}$ , then $h^{-1} \\in \\rm SS(\\mathbb {R})$ ; If $g, h \\in \\rm SS(\\mathbb {R})$ and $h, h^{-1}$ are uniformly continuous on $\\mathbb {R}$ , then $g\\circ h \\in \\rm SS(\\mathbb {R})$ ; The set of elements $h \\in \\rm SS(\\mathbb {R})$ such that both $h$ and $h^{-1}$ are uniformly continuous on $\\mathbb {R}$ is a subgroup of ${\\rm SQS}(\\mathbb {R})$ .", "Definition The subgroup of ${\\rm SQS}(\\mathbb {R})$ consisting of all elements $h \\in \\rm SS(\\mathbb {R})$ such that both $h$ and $h^{-1}$ are uniformly continuous on $\\mathbb {R}$ is denoted by ${\\rm SS}_{\\rm uc}(\\mathbb {R})$ .", "As mentioned in Section 1, ${\\rm SS}_{\\rm uc}(\\mathbb {R})$ acts on the VMO Teichmüller space $T_v(\\mathbb {R})={\\rm Aff}(\\mathbb {R}) \\backslash {\\rm SS}(\\mathbb {R})$ as a group of its automorphisms.", "For any $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ , this action is defined by $h_*([g])=[g \\circ h^{-1}]$ for every $[g] \\in T_v(\\mathbb {R})$ , where $[g]$ denotes the equivalence class represented by $g \\in {\\rm SS}(\\mathbb {R})$ .", "Next, we consider a similar problem for symmetric homeomorphisms of $\\mathbb {R}$ .", "Definition A quasisymmetric homeomorphism $h$ of $\\mathbb {R}$ is said to be symmetric if $\\lim _{t\\rightarrow 0}\\,\\frac{h(x+t)-h(x)}{h(x)-h(x-t)}=1$ uniformly for all $x\\in \\mathbb {R}$ .", "Let $\\rm S(\\mathbb {R})$ denote the subset of $\\rm QS(\\mathbb {R})$ consisting of all symmetric homeomorphisms of $\\mathbb {R}$ .", "It is known that $h$ is symmetric if and only if $h$ can be extended to an asymptotically conformal homeomorphism $H$ of the upper half-plane $\\mathbb {U}$ onto itself (see [6], [17]).", "Here, we say that $H$ is asymptotically conformal if its complex dilatation $\\mu _H = H_{\\bar{z}}/H_z$ satisfies that ${\\rm ess}\\!\\!\\!\\!\\!\\!\\!\\sup _{0<y < t\\qquad } \\!\\!\\!\\!\\!\\!\\!", "|\\mu _H(x+iy)| \\rightarrow 0 \\quad (t \\rightarrow 0).$ In fact, the Beurling–Ahlfors extension of $h$ is asymptotically conformal when $h$ is symmetric.", "The class $\\rm S(\\mathbb {R})$ was first studied by Carleson [6] when he discussed absolute continuity of quasisymmetric homeomorphisms.", "It was investigated in depth later by Gardiner and Sullivan [17] in their study of the asymptotic Teichmüller space $T_0=\\mbox{Möb}(\\mathbb {S}) \\backslash {\\rm S}(\\mathbb {S})$ by using ${\\rm S}(\\mathbb {S})$ similarly defined on the unit circle $\\mathbb {S}$ .", "Recently, Hu, Wu and Shen [19] introduced the Teichmüller space $T_0(\\mathbb {R})={\\rm Aff}(\\mathbb {R}) \\backslash \\rm S(\\mathbb {R})$ on the real line.", "This has been further generalized in [36].", "The inclusion relation $\\rm SS(\\mathbb {R}) \\subset \\rm S(\\mathbb {R})$ is seen from the characterization of VMO functions in Sarason [26].", "In more detail, by the John–Nirenberg inequality, the local $A_2$ -constant for the exponential of a VMO function tends to 1 when the interval gets small.", "This can be applied to show the above inclusion.", "See [29].", "In [37], we constructed counter-examples for showing that the class $\\rm S(\\mathbb {R})$ does not constitute a group under the composition.", "To be precise, we 
have proved that neither the composition nor the inverse preserves this class.", "However, we have the following result similar to Theorem REF for strongly symmetric homeomorphisms.", "This can be regarded as its prototype in the non-compact setting.", "Theorem 11 If $g, h \\in \\rm S(\\mathbb {R})$ and $h^{-1}$ is uniformly continuous on $\\mathbb {R}$ , then $g\\circ h^{-1} \\in \\rm S(\\mathbb {R})$ .", "Suppose that $g\\circ h^{-1}$ is not symmetric.", "Then, for some $\\delta >0$ , there are consecutive bounded closed intervals $J_n$ and $J_n^{\\prime }$ in $\\mathbb {R}$ such that $|J_n|=|J_n^{\\prime }| \\rightarrow 0$ $(n \\rightarrow \\infty )$ and $\\max \\left\\lbrace \\frac{|g\\circ h^{-1}(J_n)|}{|g\\circ h^{-1}(J_n^{\\prime })|},\\frac{|g\\circ h^{-1}(J_n^{\\prime })|}{|g\\circ h^{-1}(J_n)|}\\right\\rbrace \\ge 1+\\delta .$ Without loss of generality, we may assume that $|g\\circ h^{-1}(J_n)| \\ge (1+\\delta )|g\\circ h^{-1}(J_n^{\\prime })|$ .", "Let $I_n = h^{-1}(J_n)$ and $I_n^{\\prime } = h^{-1}(J_n^{\\prime })$ .", "Since $h^{-1}$ is uniformly continuous, we have $|I_n|\\rightarrow 0$ and $|I_n^{\\prime }|\\rightarrow 0$ .", "By the symmetry of $g$ with $|g(I_n)| \\ge (1+\\delta )|g(I_n^{\\prime })|$ , we see that there exists $\\varepsilon > 0$ such that $|I_n| \\ge (1+\\varepsilon )|I_n^{\\prime }|$ for all sufficiently large $n$ .", "We choose $\\widetilde{I}_n \\subset I_n$ so that $\\widetilde{I}_n$ and $I_n^{\\prime }$ are consecutive intervals in $\\mathbb {R}$ with $|\\widetilde{I}_n|=|I_n^{\\prime }|$ .", "Then, $|I_n| \\ge (1+\\varepsilon )|\\widetilde{I}_n|$ but $\\lim _{n \\rightarrow \\infty } \\frac{|h(I_n)|}{|h(\\widetilde{I}_n)|} =\\lim _{n \\rightarrow \\infty } \\frac{|h(I_n)|}{|h(I^{\\prime }_n)|}=\\frac{|J_n|}{|J^{\\prime }_n|}=1.$ This contradicts that $h$ is a symmetric homeomorphism.", "As an immediate consequence from Theorem REF , we have: Corollary 12 The following statements hold: If $h \\in \\rm S(\\mathbb {R})$ and $h^{-1}$ is uniformly continuous on $\\mathbb {R}$ , then $h^{-1} \\in \\rm S(\\mathbb {R})$ ; If $g, h \\in \\rm S(\\mathbb {R})$ and $h, h^{-1}$ are uniformly continuous on $\\mathbb {R}$ , then $g\\circ h \\in \\rm S(\\mathbb {R})$ ; The set of elements $h \\in \\rm S(\\mathbb {R})$ such that both $h$ and $h^{-1}$ are uniformly continuous on $\\mathbb {R}$ is a subgroup of ${\\rm QS}(\\mathbb {R})$ .", "Thus, the group of those $h \\in {\\rm S}(\\mathbb {R})$ with both $h$ and $h^{-1}$ being uniformly continuous acts on the Teichmüller space $T_0(\\mathbb {R})$ ." 
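The requirement that both $h$ and $h^{-1}$ be uniformly continuous in the definitions above is not redundant, since uniform continuity of a homeomorphism of $\mathbb{R}$ does not pass to its inverse in general. An elementary illustration (not taken from the paper, and not claimed to lie in ${\rm S}(\mathbb{R})$ or ${\rm SS}(\mathbb{R})$): the increasing homeomorphism
\[
h(x)={\rm sgn}(x)\,|x|^{1/3}, \qquad |h(a)-h(b)|\le 2\,|a-b|^{1/3}\quad(a,b\in\mathbb{R}),
\]
is uniformly continuous (indeed H\"older continuous of order $1/3$), while its inverse $h^{-1}(x)=x^{3}$ satisfies
\[
h^{-1}\!\bigl(n+\tfrac1n\bigr)-h^{-1}(n)=3n+\tfrac3n+\tfrac1{n^{3}}\ge 3 \qquad (n\ge 1),
\]
although the arguments differ only by $1/n$; hence $h^{-1}$ is not uniformly continuous on $\mathbb{R}$.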
], [ "The chain rule of complex dilatations: Proof of Theorem ", "The idea of the proof of Theorem REF originally appeared in Cui and Zinsmeister [10] (and in Semmes [28] for a different statement), and we supply necessary ingredients for it to make our proof more understandable.", "These include stability of quasi-geodesics, the Carleson embedding theorem, and certain properties of $A_\\infty $ -weights in the Muckenhoupt theory.", "First, we prepare two lemmas for the proof of this theorem.", "For every $z=(x,y) \\in \\mathbb {U}$ , we define a closed interval $I_z \\subset \\mathbb {R}$ by $I_z=[x-y,x+y]$ .", "Conversely, for a closed interval $I=[a,b] \\subset \\mathbb {R}$ , we define a point $q(I) \\in \\mathbb {U}$ by $q(I)=(\\frac{a+b}{2},\\frac{b-a}{2})$ .", "Let $\\gamma _{a,b}$ denote the hyperbolic geodesic line in the hyperbolic plane $\\mathbb {U}$ joining two points $a$ and $b$ on the boundary at infinity ${\\mathbb {R}} \\cup \\lbrace \\infty \\rbrace $ .", "Then, $q([a,b])$ is the intersection of $\\gamma _{a,b}$ and $\\gamma _{(a+b)/2,\\infty }$ .", "Hereafter, we use a convenient notation $A \\asymp B$ , which means that there is a constant $C \\ge 1$ satisfying that $C^{-1}A \\le B \\le CA$ uniformly with respect to certain circumstances obvious from the context.", "Lemma 13 Let $F:\\mathbb {U} \\rightarrow \\mathbb {U}$ be a bi-Lipschitz quasiconformal homeomorphism with respect to the hyperbolic metric that extends to a quasisymmetric homeomorphism $f:\\mathbb {R} \\rightarrow \\mathbb {R}$ with $f(\\infty )=\\infty $ .", "Then, there is a constant $C \\ge 1$ depending only on the bi-Lipschitz constant $L=L(F) \\ge 1$ of $F$ such that $C^{-1} \\frac{|f(I_z)|}{|I_z|} \\le \\frac{{\\rm Im}\\,F(z)}{{\\rm Im}\\,z} \\le C \\frac{|f(I_z)|}{|I_z|}$ for every $z \\in \\mathbb {U}$ .", "For $I_z=[a,b]$ , the point $z$ is the intersection of the geodesic lines $\\gamma _{a,b}$ and $\\gamma _{(a+b)/2,\\infty }$ .", "Then, $F(z)$ is the intersection of the quasi-geodesic lines $F(\\gamma _{a,b})$ and $F(\\gamma _{(a+b)/2,\\infty })$ .", "By the stability of quasi-geodesics (see [5] and [8]), we see that $F(\\gamma _{a,b})$ is within a bounded hyperbolic distance of $\\gamma _{f(a),f(b)}$ and that $F(\\gamma _{(a+b)/2,\\infty })$ is within a bounded hyperbolic distance of $\\gamma _{f((a+b)/2),\\infty }$ , where the bounds depend only on the bi-Lipschitz constant $L$ .", "This shows that the hyperbolic distance between the intersections $F(z)$ and $\\gamma _{f(a),f(b)} \\cap \\gamma _{f((a+b)/2),\\infty }$ is bounded from above by a constant depending only on $L$ .", "On the other hand, by the quasisymmetry of $f$ on $\\mathbb {R}$ , there exists some constant $M \\ge 1$ which also depends only on $L$ such that $M^{-1} \\le \\frac{f(b)-f((a+b)/2)}{f((a+b)/2)-f(a)} \\le M.$ This shows that the hyperbolic distance between the intersections $\\gamma _{f(a),f(b)} \\cap \\gamma _{f((a+b)/2),\\infty }$ and $\\gamma _{f(a),f(b)} \\cap \\gamma _{(f(a)+f(b))/2,\\infty }=q(f(I_z))$ is bounded from above by a constant depending only on $L$ .", "Combining the boundedness from above of the two hyperbolic distances mentioned above, we conclude that the hyperbolic distance between $F(z)$ and $q(f(I_z))$ is bounded from above by a constant depending only on $L$ .", "By the formula of the hyperbolic distance $d_{H}$ on the upper half-plane $\\mathbb {U}$ , for any two points $z, w \\in \\mathbb {U}$ , it holds that $\\sinh \\left(\\frac{d_H(z, w)}{2}\\right) = \\frac{|z - w|}{2 ({\\rm Im}\\,z\\, 
{\\rm Im}\\,w)^{\\frac{1}{2}}}\\ge \\frac{1}{2}\\left|\\left(\\frac{{\\rm Im}\\,z}{{\\rm Im}\\,w}\\right)^{\\frac{1}{2}}-\\left(\\frac{{\\rm Im}\\,w}{{\\rm Im}\\,z}\\right)^{\\frac{1}{2}}\\right|.$ Thus, $C^{-1}{\\rm Im}\\,F(z) \\le {\\rm Im}\\, q(f(I_z)) \\le C {\\rm Im}\\,F(z)$ for some constant $C \\ge 1$ depending only on $L$ .", "Consequently, we have $C^{-1}\\frac{{\\rm Im}\\, F(z)}{{\\rm Im}\\,z} \\le \\frac{|f(I_z)|}{|I_z|} =\\frac{{\\rm Im}\\, q(f(I_z))}{{\\rm Im}\\,z} \\le C \\frac{{\\rm Im}\\, F(z)}{{\\rm Im}\\,z},$ which is the required inequalities.", "Lemma 14 Let $F:\\mathbb {U} \\rightarrow \\mathbb {U}$ and $f:\\mathbb {R} \\rightarrow \\mathbb {R}$ be as in Lemma REF .", "There is a constant $\\alpha \\ge 1$ depending only on the bi-Lipschitz constant $L=L(F)$ such that for every bounded closed interval $I \\subset \\mathbb {R}$ , the image $F(Q_I)$ of the Carleson box $Q_I=I \\times (0,|I|] \\subset \\mathbb {U}$ is contained in the Carleson box $Q_{\\alpha f(I)}$ associated with the interval $\\alpha f(I) \\subset \\mathbb {R}$ with the same center as $f(I)$ and with length $|\\alpha f(I)|=\\alpha |f(I)|$ .", "We take any point $z$ on the upper side $I \\times \\lbrace |I|\\rbrace $ of the Carleson box $Q_I$ and consider $F(z)$ .", "Lemma REF shows that ${\\rm Im}\\, F(z) \\asymp \\frac{{\\rm Im}\\, z \\cdot |f(I_z)|}{|I_z|}=\\frac{1}{2} |f(I_z)|$ is satisfied with the comparability constant $C \\ge 1$ depending only on $L$ .", "Since $I \\subset I_z \\subset 3I$ , the quasisymmetry of $f$ with a constant $M \\ge 1$ depending only on $L$ implies that $|f(I_z)| \\le (2M+1)|f(I)|$ .", "Hence, ${\\rm Im}\\, F(z)$ is bounded by $C(M+\\frac{1}{2})|f(I)|$ .", "By the stability of quasi-geodesics, the images of the left and the right sides of $Q_I$ under $F$ are within a bounded hyperbolic distance of the hyperbolic geodesic lines towards $\\infty $ from the left and the right end points of the interval $f(I)$ , respectively.", "Combining this with the above estimate on the image of the upper side of $Q_I$ , we can find the required constant $\\alpha \\ge 1$ depending only on $L$ such that $F(Q_I) \\subset Q_{\\alpha f(I)}$ .", "Statement (1) for the case of $G={\\rm id}$ was proved by Cui and Zinsmeister [10].", "We will give the proof of the other statements.", "Our argument also gives a detailed exposition of their proof simultaneously.", "We remark in advance that in virtue of Proposition REF we can use the strongly quasisymmetric properties of the boundary extension $h$ of $H$ to $\\mathbb {R}$ in the proof.", "Let $I \\subset \\mathbb {R}$ be a bounded closed interval and $Q_I = I \\times (0, |I|]$ the associated Carleson box.", "Then, by change of variables $\\zeta =H(z)$ , we obtain that $\\begin{split}\\mathcal {I} & = \\iint _{Q_I} |\\mu _{G \\circ H^{-1}}(\\zeta )|^2 \\rho _{\\mathbb {U}}(\\zeta )d\\xi d\\eta \\\\& = \\iint _{H^{-1}(Q_I)} \\left|\\frac{\\mu (z) - \\nu (z)}{1-\\overline{\\nu (z)}\\mu (z)}\\right|^2\\rho _{\\mathbb {U}}(H(z)) J_{H}(z) dxdy ,\\\\\\end{split}$ where $J_{H}$ denotes the Jacobian of $H$ .", "We set $J=h^{-1}(I)$ , where $h$ is the boundary extension of $H$ to $\\mathbb {R}$ .", "By Lemma REF , there is a constant $\\alpha \\ge 1$ depending only on the bi-Lipschitz constant $L=L(H)=L(H^{-1}) \\ge 1$ such that $H^{-1}(Q_I) \\subset Q_{\\alpha J}$ , where $\\alpha J$ is the interval with the same center as $J$ , but with length $\\alpha |J|$ .", "Then, for some constant $K>0$ depending only on $\\Vert \\mu \\Vert _\\infty $ and $\\Vert \\nu 
\\Vert _\\infty $ , we have that $\\mathcal {I} \\le K \\iint _{Q_{\\alpha J}} |\\mu (z)-\\nu (z)|^2\\rho _{\\mathbb {U}}(z)\\frac{{\\rm Im}\\,z}{{\\rm Im}\\,H(z)}J_{H}(z) dxdy.$ Since $H$ is bi-Lipschitz with respect to the hyperbolic metric, we see that $\\left(\\frac{{\\rm Im}\\,z}{{\\rm Im}\\,H(z)}\\right)^2J_{H}(z) \\asymp 1.$ Moreover, Lemma REF implies that $\\frac{{\\rm Im}\\,z}{{\\rm Im}\\,H(z)} \\asymp \\frac{|I_z|}{|h(I_z)|}.$ Both comparabilities as above are given by constants depending only on the bi-Lipschitz constant $L$ .", "Hence, there is a constant $\\widetilde{L} \\ge 1$ depending only on $L$ such that $\\widetilde{L}^{-1}\\frac{|h(I_z)|}{|I_z|} \\le \\frac{{\\rm Im}\\,z}{{\\rm Im}\\,H(z)}J_{H}(z) \\le \\widetilde{L}\\, \\frac{|h(I_z)|}{|I_z|}$ for every $z \\in \\mathbb {U}$ .", "By setting $\\widetilde{\\omega }(z)=|h(I_z)|/|I_z|$ , we have $\\mathcal {I} \\le K \\widetilde{L} \\iint _{Q_{\\alpha J}} \\widetilde{\\omega }(z) |\\mu (z)-\\nu (z)|^2\\rho _{\\mathbb {U}}(z) dxdy.$ Now we apply the following Carleson embedding theorem (see [27]).", "It says that assuming $\\lambda \\in {\\rm CM}(\\mathbb {U})$ is a Carleson measure on the upper half-plane $\\mathbb {U}$ and $F: \\mathbb {U} \\rightarrow [0, \\infty )$ is a non-negative Borel measurable function in general, we have $\\iint _{\\mathbb {U}}F(z) d\\lambda (z) \\le A \\Vert \\lambda \\Vert _{c}\\int _{\\mathbb {R}}F^{\\ast }(t) dt,$ where $F^{\\ast }(t) = \\sup _{z \\in \\Gamma (t)}F(z)$ denotes the non-tangential maximal function of $F$ at $t \\in \\mathbb {R}$ with a cone $\\Gamma (t) = \\lbrace z=(x,y) \\in \\mathbb {U}\\mid |x-t| \\le y\\rbrace $ , and $A>0$ is an absolute constant.", "By the assumption $\\mu , \\nu \\in \\mathcal {M}(\\mathbb {U})$ , the measure $\\lambda _{\\mu -\\nu }$ defined by $d\\lambda _{\\mu -\\nu }=|\\mu (z)-\\nu (z)|^2\\rho _{\\mathbb {U}}(z)dxdy$ belongs to $\\rm CM(\\mathbb {U})$ .", "Then, we obtain that $\\begin{split}\\mathcal {I} & \\le K \\widetilde{L} \\iint _{\\mathbb {U}} \\widetilde{\\omega }(z)1_{Q_{\\alpha J}}(z)|\\mu (z)-\\nu (z)|^2\\rho _{\\mathbb {U}}(z) dxdy \\\\& \\le AK \\widetilde{L}\\,\\Vert \\lambda _{\\mu -\\nu } 1_{Q_{\\alpha J}} \\Vert _c \\int _{\\mathbb {R}} (\\widetilde{\\omega }1_{Q_{\\alpha J}})^{\\ast }(t) dt = C(J)\\int _{3\\alpha J} (\\widetilde{\\omega }1_{Q_{\\alpha J}})^{\\ast }(t) dt,\\\\\\end{split}$ where the constant $C(J)=AK \\widetilde{L}\\,\\Vert \\lambda _{\\mu -\\nu } 1_{Q_{\\alpha J}} \\Vert _c>0$ depends also on the interval $J$ , but is bounded due to $\\lambda _{\\mu -\\nu } \\in \\rm CM(\\mathbb {U})$ .", "The boundary extension $h$ of $H$ to $\\mathbb {R}$ is strongly quasisymmetric by Proposition REF .", "We consider the $A_\\infty $ -weight $\\omega =h^{\\prime }$ on $\\mathbb {R}$ and set $\\varphi = \\omega 1_{3\\alpha J}$ .", "Let $M\\varphi (t) = \\sup _{t \\in I}\\frac{1}{|I|}\\int _{I}\\varphi (s) ds$ denote the Hardy–Littlewood (uncentered) maximal function of $\\varphi $ .", "Then, we can show that $(\\widetilde{\\omega }1_{Q_{\\alpha J}})^{\\ast }(t)\\le M\\varphi (t)$ for any $t\\in 3 \\alpha J$ .", "Indeed, $t \\in I_z \\subset 3 \\alpha J$ for every $z \\in \\Gamma (t) \\cap Q_{\\alpha J}$ , and hence $(\\widetilde{\\omega }1_{Q_{\\alpha J}})^{\\ast }(t) =\\sup _{z \\in \\Gamma (t) \\cap Q_{\\alpha J}} \\widetilde{\\omega }(z)&=\\sup _{z \\in \\Gamma (t) \\cap Q_{\\alpha J}}\\frac{|h(I_z)|}{|I_z|}\\\\&\\le \\sup _{t \\in I_z \\subset 3 \\alpha J} \\frac{1}{|I_z|}\\int _{I_z} \\omega (t)dt \\le M\\varphi (t).$ 
Therefore, $\\mathcal {I} \\le C(J)\\int _{3\\alpha J} M\\varphi (t) dt.$ By the reverse Hölder inequality in the Muckenhoupt theory (see [7]), there exist $C>0$ and $p >1$ such that for any bounded closed interval $I \\subset \\mathbb {R}$ , we have $\\frac{1}{|I|}\\int _{I}\\omega (t)^p dt \\le C\\left(\\frac{1}{|I|}\\int _{I}\\omega (t) dt\\right)^p.$ It follows that $\\begin{split}\\mathcal {I} & \\le C(J)|3\\alpha J|^{\\frac{1}{p^{\\prime }}} \\left(\\int _{3\\alpha J} (M\\varphi )^p\\right)^{\\frac{1}{p}}\\le C_1(J)|3 \\alpha J|^{\\frac{1}{p^{\\prime }}} \\left(\\int _{\\mathbb {R}} \\varphi ^p\\right)^{\\frac{1}{p}} \\\\& = C_1(J)|3\\alpha J|^{\\frac{1}{p^{\\prime }}} \\left(\\int _{3\\alpha J} \\omega ^p\\right)^{\\frac{1}{p}}\\le C_2(J)\\int _{3\\alpha J}\\omega = C_2(J)|h(3\\alpha J)| \\\\\\end{split}$ for $1/p+1/p^{\\prime }=1$ , where $C_1(J)$ and $C_2(J)$ are some constant multiples of $C(J)$ .", "We have used the Hölder inequality in the first inequality, the strong $L^p$ -estimate for maximal functions (see [7], [18]) in the second inequality, and the reverse Hölder inequality mentioned above in the last inequality.", "Finally, we estimate $|h(3\\alpha J)|$ .", "Let $M \\ge 1$ be the quasisymmetry constant of $h$ .", "Then, $|h(3\\alpha J)| \\le \\frac{M^{3\\alpha }-1}{M-1}|h(J)|=\\widetilde{M}|I|,$ where the constant involving $M$ and $\\alpha $ is replaced with $\\widetilde{M} \\ge 1$ .", "This yields an estimate $\\frac{\\mathcal {I}}{|I|} =\\frac{1}{|I|}\\iint _{Q_I} |\\mu _{G \\circ H^{-1}}(\\zeta )|^2 \\rho _{\\mathbb {U}}(\\zeta )d\\xi d\\eta \\le \\widetilde{M} C_2(J).$ Since $C_2(J)$ is bounded independent of $J$ , we see that $\\mu _{G \\circ H^{-1}}$ belongs to $\\mathcal {M}(\\mathbb {U})$ .", "This proves statement (1).", "We further assume that $\\mu , \\nu \\in \\mathcal {M}_0(\\mathbb {U})$ .", "Then, $\\lambda _{\\mu -\\nu } \\in \\rm CM_0(\\mathbb {U})$ .", "Since $C_2(J) \\asymp \\Vert \\lambda _{\\mu -\\nu } 1_{Q_{\\alpha J}} \\Vert _c,$ this tends uniformly to 0 as $|J| \\rightarrow 0$ .", "Moreover, by the assumption that $h^{-1}$ is uniformly continuous on $\\mathbb {R}$ , we see that $|J| \\rightarrow 0$ uniformly as $|I| \\rightarrow 0$ .", "This shows that $\\mu _{G \\circ H^{-1}}$ belongs to $\\mathcal {M}_0(\\mathbb {U})$ , which proves statement (2)." 
], [ "The barycentric extension: Proof of Theorem ", "In this section, after introducing the barycentric extension, we prove Theorem REF , and provide a complex-variable proof for Theorem REF .", "We also show the fact that the group ${\\rm SS}_{\\rm uc}$ acts on the Teichmüller space $T_v(\\mathbb {R})$ as biholomorphic automorphisms.", "We first recall the barycentric extension for a homeomorphism of the unit circle $\\mathbb {S}$ introduced by Douady and Earle [12], and then we translate it to the setting of the real line $\\mathbb {R}$ .", "For $\\varphi \\in {\\rm QS}(\\mathbb {S})$ , the average of $\\varphi $ taken at $w \\in \\mathbb {D}$ is defined by $\\xi _\\varphi (w)=\\frac{1}{2\\pi } \\int _{\\mathbb {S}}\\gamma _w(\\varphi (\\zeta ))|d\\zeta |$ where the Möbius transformation $\\gamma _w(z)=\\frac{z-w}{1-\\bar{w} z} \\in \\mbox{\\rm {Möb}}(\\mathbb {D})$ maps $w$ to the origin 0.", "The barycenter of $\\varphi $ is the unique point $w_0 \\in \\mathbb {D}$ such that $\\xi _\\varphi (w_0)=0$ .", "The value of the barycentric extension $e(\\varphi )$ at the origin 0 is defined to be the barycenter $w_0=w_0(\\varphi )$ , that is, $e(\\varphi )(0)=w_0(\\varphi )$ .", "For an arbitrary point $z \\in \\mathbb {D}$ , the barycentric extension $e(\\varphi )$ is defined by ${e(\\varphi )(z)=e(\\varphi \\circ \\gamma _z^{-1})(0)},$ which satisfies the conformal naturality such that ${e(\\gamma _1 \\circ \\varphi \\circ \\gamma _2)=e(\\gamma _1) \\circ e(\\varphi ) \\circ e(\\gamma _2)}$ for any $\\gamma _1, \\gamma _2 \\in \\mbox{\\rm {Möb}}(\\mathbb {S})$ .", "Moreover, $e(\\varphi )$ is a quasiconformal diffeomorphism of $\\mathbb {D}$ onto itself and is even bi-Lipschitz under the hyperbolic metric on $\\mathbb {D}$ .", "We fix a Cayley transformation $T:\\mathbb {U} \\rightarrow \\mathbb {D}$ given by $w = T(z) = (z - i)/(z + i)$ .", "For a quasisymmetric homeomorphism $h \\in {\\rm QS}(\\mathbb {R})$ , we set $\\varphi = T \\circ h \\circ T^{-1}$ , and define $e(h) =T^{-1} \\circ e(\\varphi ) \\circ T$ , where $e(\\varphi )$ is the barycentric extension of $\\varphi \\in {\\rm QS}(\\mathbb {S})$ .", "We also call $e(h)$ the barycentric extension of $h$ .", "The complex dilatation of $e(h)$ is denoted by $\\mu _{e(h)}$ .", "The barycentric extension $e(h)$ on $\\mathbb {U}$ also satisfies the conformal naturality and bi-Lipschitz continuity as in the case of $\\mathbb {D}$ .", "Suppose that $\\varphi \\in {\\rm QS}(\\mathbb {S})$ has a quasiconformal extension to $\\mathbb {D}$ with complex dilatation $\\mu $ .", "We denote by $e^{-1}(\\varphi ^{-1})$ the inverse mapping of the barycentric extension $e(\\varphi ^{-1})$ .", "By checking the proof of [34], we see that $\\mu _{e^{-1}(\\varphi ^{-1})} \\in \\mathcal {M}(\\mathbb {D})$ if $\\mu \\in \\mathcal {M}(\\mathbb {D})$ , and $\\mu _{e^{-1}(\\varphi ^{-1})} \\in \\mathcal {M}_{0}(\\mathbb {D})$ if $\\mu \\in \\mathcal {M}_0(\\mathbb {D})$ .", "Furthermore, the results on the barycentric extension $e(\\varphi )$ itself were deduced from this by using [3] or [28] (which are generalized to Theorem REF ).", "Namely, $\\mu _{e(\\varphi )} \\in {\\mathcal {M}}(\\mathbb {D})$ if $\\mu \\in {\\mathcal {M}}(\\mathbb {D})$ , and $\\mu _{e(\\varphi )} \\in {\\mathcal {M}}_0(\\mathbb {D})$ if $\\mu \\in {\\mathcal {M}}_0(\\mathbb {D})$ .", "This claim is also true for ${\\mathcal {M}}(\\mathbb {U})$ on the upper half-plane $\\mathbb {U}$ by the conformal invariance, but is not known to be true for ${\\mathcal {M}}_0(\\mathbb {U})$ .", "The 
former result on $\\mu _{e^{-1}(\\varphi ^{-1})}$ was translated to the setting on $\\mathbb {U}$ in [30] as follows.", "Lemma 15 Let $H$ be a quasiconformal homeomorphism of $\\mathbb {U}$ onto itself whose complex dilatation is in ${\\mathcal {M}}_0(\\mathbb {U})$ .", "Then, for the boundary extension $h$ of $H$ to $\\mathbb {R}$ , the complex dilatation $\\mu _{e^{-1}(h^{-1})}$ is also in ${\\mathcal {M}}_0(\\mathbb {U})$ .", "This lemma implies that once we obtain such a quasiconformal homeomorphism $H$ , we can replace it with the bi-Lipschitz diffeomorphism given by means of the barycentric extension with the same boundary extension $h$ as $H$ and with its complex dilatation in the same class ${\\mathcal {M}}_0(\\mathbb {U})$ as $H$ .", "For a given $h \\in {\\rm SS}(\\mathbb {R})$ , the existence of the appropriate quasiconformal extension $H$ is guaranteed by Proposition REF .", "Thus, we can prepare the following claim for the proof of Theorem REF .", "Proposition 16 Any $h \\in {\\rm SS}(\\mathbb {R})$ extends continuously to a bi-Lipschitz diffeomorphism $e^{-1}(h^{-1})$ of $\\mathbb {U}$ onto itself whose complex dilatation belongs to ${\\mathcal {M}}_0(\\mathbb {U})$ .", "Remark On the unit disk $\\mathbb {D}$ , the barycentric extension $e(\\varphi )$ satisfies $\\mu _{e(\\varphi )} \\in {\\mathcal {M}}(\\mathbb {D})$ for $\\varphi \\in {\\rm SQS}(\\mathbb {S})$ and $\\mu _{e(\\varphi )} \\in {\\mathcal {M}}_0(\\mathbb {D})$ for $\\varphi \\in {\\rm SS}(\\mathbb {S})$ .", "These facts follow from the above arguments combined with Proposition REF (applied to $\\mathbb {D}$ ) and Proposition REF , which originally appeared in [10] and [34].", "Since $h \\in \\rm SS(\\mathbb {R})$ and $h^{-1}$ is uniformly continuous on $\\mathbb {R}$ , we conclude by Corollary REF that $h^{-1} \\in \\rm SS(\\mathbb {R})$ .", "Then by Proposition REF , we have $\\mu _{e^{-1}(h)} \\in \\mathcal {M}_0 (\\mathbb {U})$ .", "Since $e^{-1}(h)$ is bi-Lipschitz on $\\mathbb {U}$ and $h$ is uniformly continuous on $\\mathbb {R}$ , Theorem REF implies that $\\mu _{e(h)} \\in \\mathcal {M}_0 (\\mathbb {U})$ .", "This completes the proof.", "The complex-variable proof of Theorem REF is also instructive.", "We give the argument by using the results on quasiconformal extensions in Theorem REF , Proposition REF , and Proposition REF .", "For $g, h \\in \\rm SS(\\mathbb {R})$ , we set $G=e^{-1}(g^{-1})$ and $H=e^{-1}(h^{-1})$ (only for $H$ , this particular construction is necessary).", "By Proposition REF , we have $\\mu _G,\\, \\mu _H \\in \\mathcal {M}_0 (\\mathbb {U})$ .", "Since $H=e^{-1}(h^{-1})$ is bi-Lipschitz with respect to the hyperbolic metric on $\\mathbb {U}$ and $h^{-1}$ is uniformly continuous on $\\mathbb {R}$ by assumption, we conclude by Theorem REF that $\\mu _{G \\circ H^{-1}} \\in \\mathcal {M}_0 (\\mathbb {U})$ .", "Then, the boundary extension $g \\circ h^{-1}$ of $G \\circ H^{-1}$ belongs to ${\\rm SS}(\\mathbb {R})$ by Proposition REF .", "Finally, we mention the action of the group ${\\rm SS}_{\\rm uc}(\\mathbb {R})$ on $T_v(\\mathbb {R})$ .", "For each $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ , this can be simply defined by $h_*([g])=[g \\circ h^{-1}]$ for $[g] \\in T_v(\\mathbb {R})$ , but to see that this gives a biholomorphic automorphism of $T_v(\\mathbb {R})$ , we have to extend $h$ quasiconformally to $\\mathbb {U}$ and consider its action on ${\\mathcal {M}}_0(\\mathbb {U})$ .", "Theorem 17 For any $h \\in {\\rm SS}_{\\rm uc}(\\mathbb {R})$ , let $H=e(h)$ .", "For any $\\mu 
\\in {\\mathcal {M}}_0(\\mathbb {U})$ , let $G$ be a quasiconformal homeomorphism of $\\mathbb {U}$ onto itself whose complex dilatation is $\\mu $ .", "Then, the correspondence $\\mu \\mapsto \\mu _{G \\circ H^{-1}}$ defines a biholomorphic automorphism $r_H:{\\mathcal {M}}_0(\\mathbb {U}) \\rightarrow {\\mathcal {M}}_0(\\mathbb {U})$ .", "Moreover, this map descends down to a biholomorphic automorphism $R_h:T_v(\\mathbb {R}) \\rightarrow T_v(\\mathbb {R})$ that coincides with $h_*$ and satisfies $\\pi \\circ r_H=R_h \\circ \\pi $ for the Teichmüller projection $\\pi :{\\mathcal {M}}_0(\\mathbb {U}) \\rightarrow T_v(\\mathbb {R})$ .", "Indeed, in the proof of Theorem REF , we obtain that $\\Vert \\lambda _{\\mu _{G \\circ H^{-1}}} \\Vert _c \\lesssim \\Vert \\lambda _{\\mu _{G}-\\mu _{H}} \\Vert _c\\lesssim \\Vert \\lambda _{\\mu _{G}} \\Vert _c+\\Vert \\lambda _{\\mu _{H}} \\Vert _c,$ from which we see that $r_H$ is locally bounded.", "Then, the remaining arguments for showing holomorphy are carried out in a standard way.", "See [31] and [39].", "The properties of the inverse maps are clear by $(r_{H})^{-1}=r_{H^{-1}}$ and $(R_h)^{-1}=R_{h^{-1}}$ ." ], [ "Comparisons of $\\rm SS (\\mathbb {R})$ and {{formula:910d8253-9d44-4b3c-bd72-19135b2ab9f4}} ", "The conformal invariance of strongly quasisymmetric homeomorphisms is well understood.", "However, this is not the case for strongly symmetric homeomorphisms.", "The problem comes from the uniformity of vanishing quantities related to VMO and vanishing Carleson measures.", "In this section, we will clarify the relationship between strongly symmetric homeomorphisms on $\\mathbb {R}$ and those on $\\mathbb {S}$ along with some observations which may be of independent interests.", "We switch the definition of Carleson measure to measuring the intersection of a disk with $\\mathbb {U}$ or $\\mathbb {D}$ instead of measuring a Carleson box or sector.", "More precisely, a positive Borel measure $\\lambda $ on $\\mathbb {D}$ (similarly on $\\mathbb {U}$ ) is a Carleson measure if $\\sup _{\\Delta (\\xi ,r)} \\frac{\\lambda (\\Delta (\\xi ,r) \\cap \\mathbb {D})}{r} <\\infty ,$ where the supremum is taken over all closed disks $\\Delta (\\xi ,r)$ with center $\\xi \\in \\mathbb {S}$ and radius $r \\in (0,2)$ .", "A vanishing Carleson measure $\\lambda \\in \\rm CM_0(\\mathbb {D})$ is defined by verifying a uniform vanishing limit of the above quantity as $r \\rightarrow 0$ .", "These definitions are equivalent to the previous ones.", "We begin with considering the correspondence between $\\rm CM_0(\\mathbb {U})$ and $\\rm CM_0(\\mathbb {D})$ under the Cayley transformation $T:\\mathbb {U} \\rightarrow \\mathbb {D}$ defined by $T(z) = (z - i)/(z + i)$ .", "The following is a basic fact.", "Lemma 18 If $\\lambda \\in \\rm CM_0(\\mathbb {D})$ , then the pull-back measure $T^*\\lambda $ on $\\mathbb {U}$ satisfying $d(T^*\\lambda )= |T^{\\prime }|^{-1} d\\lambda \\circ T $ belongs to $\\rm CM_0(\\mathbb {U})$ .", "Let $\\Delta (x,r)$ denote a closed disk with center $x \\in \\mathbb {R}$ and radius $r>0$ .", "Then, $&\\quad \\ \\frac{1}{r} \\iint _{\\Delta (x,r) \\cap \\mathbb {U}} d(T^*\\lambda )(z)= \\frac{1}{r} \\iint _{\\Delta (x,r) \\cap \\mathbb {U}} |T^{\\prime }(z)|^{-1} d\\lambda \\circ T(z)\\\\&=\\frac{1}{{\\rm rad}(T(\\Delta (x,r)))} \\iint _{T(\\Delta (x,r)) \\cap \\mathbb {D}}\\frac{{\\rm rad}(T(\\Delta (x,r)))}{{\\rm rad}(\\Delta (x,r))}|(T^{-1})^{\\prime }(w)| d\\lambda (w).$ Here, ${\\rm rad}(T(\\Delta (x,r)))|(T^{-1})^{\\prime 
}(w)|/{\\rm rad}(\\Delta (x,r))$ is uniformly bounded by some constant $C>0$ on $T(\\Delta (x,r))$ for all sufficiently small $r>0$ .", "Since $\\lambda \\in \\rm CM_0(\\mathbb {D})$ , for every $\\varepsilon >0$ , there is some $\\delta >0$ such that if ${\\rm rad}(T(\\Delta (x,r)))<2\\delta $ then $\\frac{C}{{\\rm rad}(T(\\Delta (x,r)))}\\iint _{T(\\Delta (x,r)) \\cap \\mathbb {D}} d \\lambda (w) <\\varepsilon .$ Moreover, ${\\rm rad}(T(\\Delta (x,r))) \\le 2\\, {\\rm rad}(\\Delta (x,r))$ by $|T^{\\prime }(x)| \\le 2$ for all $x \\in \\mathbb {R}$ .", "Hence, if $r<\\delta $ then $\\frac{1}{r} \\iint _{\\Delta (x,r) \\cap \\mathbb {U}} d(T^*\\lambda )(z) <\\varepsilon $ for all $x \\in \\mathbb {R}$ .", "Therefore, we have $T^*\\lambda \\in \\rm CM_0(\\mathbb {U})$ .", "The following is the main result in this section.", "This also compares $T_v$ with $T_v(\\mathbb {R})$ .", "Theorem 19 If $\\varphi \\in \\rm SS(\\mathbb {S})$ , then $h = T^{-1}\\circ \\varphi \\circ T \\in \\rm SS(\\mathbb {R})$ .", "The converse does not necessarily hold.", "In other words, there exists $h \\in \\rm SS (\\mathbb {R})$ such that $\\varphi = T \\circ h \\circ T^{-1} \\notin \\rm SS(\\mathbb {S})$ .", "Remark We point out that there is another way of constructing a strongly symmetric homeomorphism on $\\mathbb {R}$ from that on $\\mathbb {S}$ .", "This is done by taking a lift against the universal covering projection $\\mathbb {R} \\rightarrow \\mathbb {S}$ defined by $x \\mapsto e^{ix}$ .", "Namely, for each sense-preserving homeomorphism $\\varphi $ of $\\mathbb {S}$ onto itself, there exists a strictly increasing homeomorphism $\\hat{h}$ of $\\mathbb {R}$ onto itself that satisfies $\\varphi (e^{ix}) = e^{i\\hat{h}(x)}$ .", "Then, $\\hat{h}(x + 2\\pi ) - \\hat{h}(x) \\equiv 2\\pi $ and $\\hat{h}^{\\prime }(x) = |\\varphi ^{\\prime }(e^{ix})|$ .", "If $\\varphi \\in \\rm SS(\\mathbb {S})$ , we have by Partyka [25] that $\\hat{h} \\in {\\rm SS}(\\mathbb {R})$ .", "By Proposition REF , any $\\varphi \\in \\rm SS(\\mathbb {S})$ extends to a quasiconformal homeomorphism $\\Phi $ of $\\mathbb {D}$ onto itself whose complex dilatation $\\mu _{\\Phi }$ induces a vanishing Carleson measure $\\lambda _{\\mu _{\\Phi }} \\in {\\rm CM}_0(\\mathbb {D})$ .", "Then, $H = T^{-1}\\circ \\Phi \\circ T$ is a quasiconformal extension of $h$ to $\\mathbb {U}$ whose complex dilatation $\\mu _{H}$ induces a measure $\\lambda _{\\mu _{H}}$ on $\\mathbb {U}$ such that $\\begin{split}d\\lambda _{\\mu _{H}}(z) & = |\\mu _{H}(z)|^2\\rho _{\\mathbb {U}}(z)dxdy \\\\& = |\\mu _{\\Phi }(T(z))|^2\\rho _{\\mathbb {D}}(T(z))|T^{\\prime }(z)| dxdy\\\\& = |T^{\\prime }(z)|^{-1} d\\lambda _{\\mu _{\\Phi }}\\circ T(z) =d(T^*\\lambda _{\\mu _{\\Phi }})(z).\\\\\\end{split}$ It follows from Lemma REF that $\\lambda _{\\mu _{H}} \\in \\rm CM_0(\\mathbb {U})$ , and thus $h \\in \\rm SS(\\mathbb {R})$ by Proposition REF .", "In order to prove the second assertion, we first recall two functions $h$ and $g$ constructed in [37] (the roles of $h$ and $g$ are exchanged here).", "The function $h$ is simply defined as follows: $h(x) = {\\left\\lbrace \\begin{array}{ll}(x+1)^2-1, & x \\ge 0\\\\-(x-1)^2+1, & x \\le 0.\\end{array}\\right.", "}$ To construct the function $g$ , we consider a function $g_1(x)=x^2/24$ on the interval $[1,12] \\subset \\mathbb {R}$ .", "We draw the graph of $y=g_1(x)$ on the $xy$ -plane and its $\\pi $ -rotating copy around the point $O=(1,g_1(1))$ .", "The union of these two curves is denoted by $\\mathcal {G}_1$ .", "Its end 
points are $E=(12, g_1(12))$ and the antipodal point $E^{\\prime }$ on the copy.", "We move $\\mathcal {G}_1$ by parallel translation so that $E^{\\prime }$ coincides with the origin $(0,0)$ of the $xy$ -plane.", "In the positive direction, we put each $\\mathcal {G}_1$ from one to another so that $E^{\\prime }$ coincides with $E$ .", "The resulting curve that is a graph on $\\lbrace x \\ge 0\\rbrace $ is denoted by $\\mathcal {G}_+$ .", "We also make its $\\pi $ -rotating copy around the origin $(0,0)$ , which is denoted by $\\mathcal {G}_-$ .", "Then, we set $\\mathcal {G}=\\mathcal {G}_+ \\cup \\mathcal {G}_-$ .", "This curve $\\mathcal {G}$ on the $xy$ -plane defines a function $y=g(x)$ for $x \\in \\mathbb {R}$ that has $\\mathcal {G}$ as its graph.", "We have shown in [37] that $g, h \\in \\rm SS(\\mathbb {R})$ but $g \\circ h \\notin \\rm SS(\\mathbb {R})$ .", "Suppose that both $T \\circ g \\circ T^{-1}$ and $T \\circ h \\circ T^{-1}$ are in $\\rm SS(\\mathbb {S})$ .", "Since $\\rm SS(\\mathbb {S})$ is a group, we have that $T \\circ g \\circ h \\circ T^{-1}$ is in $\\rm SS(\\mathbb {S})$ .", "Then, we conclude by the above argument that $g \\circ h \\in \\rm SS(\\mathbb {R})$ .", "This is a contradiction, and thus either $T \\circ g \\circ T^{-1}$ or $T \\circ h \\circ T^{-1}$ is not in $\\rm SS(\\mathbb {S})$ .", "Remark As an immediate consequence of Theorem REF , we see that there exists $\\tilde{\\lambda }\\in \\rm CM_0(\\mathbb {U})$ such that the push-forward measure $T_*{\\tilde{\\lambda }}$ on $\\mathbb {D}$ satisfying $d(T_*{\\tilde{\\lambda }})=d((T^{-1})^*{\\tilde{\\lambda }}) = |(T^{-1})^{\\prime }|^{-1} d\\tilde{\\lambda }\\circ T^{-1}$ is not in $\\rm CM_0(\\mathbb {D})$ .", "Next, we consider the correspondence between VMO functions on $\\mathbb {R}$ and on $\\mathbb {S}$ .", "Similar results to Theorem REF can be obtained; the boundedness of $|T^{\\prime }(x)|$ also transforms $\\rm VMO(\\mathbb {S})$ into $\\rm VMO(\\mathbb {R})$ under the Cayley transformation $T$ .", "Proposition 20 If $v \\in \\rm VMO(\\mathbb {S})$ , then $u = v \\circ T \\in \\rm VMO(\\mathbb {R})$ .", "The converse does not necessarily hold.", "In other words, there exists $u \\in \\rm VMO(\\mathbb {R})$ such that $v = u \\circ T^{-1} \\notin \\rm VMO(\\mathbb {S})$ .", "It is easy to see that $\\frac{1}{|I|}\\int _{I}|u(x) - u_I| dx \\le \\frac{2}{|I|}\\int _{I}|u(x) - c| dx$ for any bounded closed interval $I \\subset \\mathbb {R}$ and any $c \\in \\mathbb {R}$ .", "For $c = v_J$ , $J = T(I)$ and $x = T^{-1}(\\xi )$ , the right side term in the above inequality turns out to be $\\frac{2}{|J|}\\int _{J}|v(\\xi ) - v_J| \\frac{|J|}{|I|}|(T^{-1})^{\\prime }(\\xi )| |d\\xi |.$ Here, $|J||(T^{-1})^{\\prime }(\\xi )|/|I|$ is bounded from above by an absolute constant $C>0$ for all sufficiently small intervals $I$ .", "We see that $|J|\\rightarrow 0$ as $|I|\\rightarrow 0$ by $|T^{\\prime }(z)|\\le 2$ .", "Hence, $\\frac{1}{|I|}\\int _{I}|u(x) -u_I| dx \\le \\frac{2C}{|J|}\\int _{J}|v(\\xi ) - v_J| |d\\xi | \\rightarrow 0$ as $|I| \\rightarrow 0$ by the condition $v \\in \\rm VMO(\\mathbb {S})$ .", "This implies that $u \\in \\rm VMO(\\mathbb {R})$ .", "For the converse direction, we set $v(\\xi )=\\log |1-\\xi |$ .", "This does not belong to $\\rm VMO(\\mathbb {S})$ .", "However, $u(x)=v \\circ T(x) = -\\log |x+i|+\\log 2$ belongs to $\\rm VMO(\\mathbb {R})$ because it is in $\\rm BMO(\\mathbb {R})$ and is uniformly continuous on $\\mathbb {R}$ (see [26]).", "Acknowledgments The authors would 
like to thank the referee for a very careful reading of the manuscript and for several suggestions that greatly improved the presentation of the paper." ] ]
2207.10468
[ [ "Spectral Variational Multi-Scale method for parabolic problems.\n Application to 1D transient advection-diffusion equations" ], [ "Abstract In this work, we introduce a Variational Multi-Scale (VMS) method for the numerical approximation of parabolic problems, where sub-grid scales are approximated from the eigenpairs of associated elliptic operator.", "The abstract method is particularized to the one-dimensional advection-diffusion equations, for which the sub-grid components are exactly calculated in terms of a spectral expansion when the advection velocity is approximated by piecewise constant velocities on the grid elements.", "We prove error estimates that in particular imply that when Lagrange finite element discretisations in space are used, the spectral VMS method coincides with the exact solution of the implicit Euler semi-discretisation of the advection-diffusion problem at the Lagrange interpolation nodes.", "We also build a feasible method to solve the evolutive advection-diffusion problems by means of an offline/online strategy with reduced computational complexity.", "We perform some numerical tests in good agreement with the theoretical expectations, that show an improved accuracy with respect to several stabilised methods." ], [ "Introduction", "The Variational Multi-Scale is a general methodology to deal with the instabilities arising in the Galerkin discretisation of PDEs (Partial Differential Equations) with terms of different derivation orders (see Hughes (cf.", "[18], [19], [20])).", "The VMS formulation is based upon the formulation of the Galerkin method as two variational problems, one satisfied by the resolved and another satisfied by the sub-grid scales of the solution.", "To build a feasible VMS method, the sub-grid scales problem is approximately solved by some analytic or computational procedure.", "In particular, an element-wise diagonalisation of the PDE operator leads to the Adjoint Stabilised Method, as well as to the Orthogonal Sub-Scales (OSS) method, introduced by Codina in [4].", "Within these methods, the effects of the sub-grid scales is modelled by means of a dissipative interaction of operator terms acting on the resolved scales.", "The VMS methods have been successfully applied to many flow problems, and in particular to Large Eddy Simulation (LES) models of turbulent flows (cf.", "[21], [22], [9]).", "The application of VMS method to evolution PDEs dates back to the 1990s, when the results from [18] were extended to nonsymmetric linear evolution operators, see [19].", "The papers [12], [13] deal with the spurious oscillations generated in the Galerkin method for parabolic problems due to very small time steps.", "The series of articles [14], [15], [16] deal with transient Galerkin and SUPG methods, transient subgrid scale (SGS) stabilized methods and transient subgrid scale/gradient subgrid scale (SGS/GSGS), making a Fourier analysis for the one-dimensional advection-diffusion-reaction equation.", "A stabilised finite element method for the transient Navier-Stokes equations based on the decomposition of the unknowns into resolvable and subgrid scales is considered in [5], [6].", "Further, [1] compares the Rothe method with the so-called Method of Lines, which consists on first, discretise in space by means of a stabilized finite element method, and then use a finite difference scheme to approximate the solution.", "More recently, [7] introduced the use of spectral techniques to model the sub-grid scales for 1D steady advection-diffusion 
equations.", "The basic observation is that the eigenpairs of the advection-diffusion operator may be calculated analitycally on each grid element.", "A feasible VMS-spectral discretization is then built by truncation of this spectral expansion to a finite number of modes.", "An enhanced accuracy with respect to preceding VMS methods is achieved.", "In [3], the spectral VMS method is extended to 2D steady advection-diffusion problems.", "It is cast for low-order elements as a standard VMS method with specific stabilised coefficients, that are anisotropic in the sense that they depend on two grid Péclet numbers.", "To reduce the computing time, the stabilised coefficients are pre-computed at the nodes of a grid in an off-line step, and then interpolated by a fast procedure in the on-line computation.", "The present paper deals with the building of the spectral VMS numerical approximation to evolution advection-diffusion equations.", "We construct an abstract spectral VMS discretisation of parabolic equations, that is particularised to 1D advection-diffusion equations.", "The sub-grid components are exactly calculated in terms of spectral expansions when the driving velocity is approximated by piecewise constant velocities on the grid elements.", "We prove error estimates that in particular imply that when Lagrange finite element discretisations in space are used, the solution provided by the spectral VMS method coincides with the exact solution of the implicit Euler semi-discretisation at the Lagrange interpolation nodes.", "We also build a feasible method to solve the evolutive advection-diffusion problem by means of an offline/online strategy that pre-computes the action of the sub-grid scales on the resolved scales.", "This allows to dramatically reduce the computing times required by the method.", "We further perform some numerical tests for strongly advection dominated flows.", "The spectral VMS method is found to satisfy the discrete maximum principle, even for very small time steps.", "A remarkable increase of accuracy with respect to several stabilised methods is achieved.", "The outline of the paper is as follows.", "In Section , we describe the abstract spectral VMS discretisation to linear parabolic problems, which is applied to transient advection-diffusion problems in Section .", "A feasible method is built in Section , based upon an offline/online strategy.", "We present in Section our numerical results, and address some conclusions in Section ." 
], [ "Spectral VMS method", "In this section, we build the spectral VMS discretisation to abstract linear parabolic equation.", "Let $\\Omega $ a bounded domain in $\\mathbb {R}^d$ and $T>0$ a final time.", "Let us consider two separable Hilbert spaces on $\\Omega $ , $X$ and $ H$ , so that $X \\subset H$ with dense and continuous embedding.", "We denote $(\\cdot ,\\cdot )$ the scalar product in $X$ ; $X^{\\prime }$ and $H^{\\prime }$ are the dual topological spaces of $X$ and $H$ , respectively, and $\\langle \\cdot , \\cdot \\rangle $ is the duality pairing between $X^{\\prime }$ and $X$ .", "We identify $H$ with its topological dual $H^{\\prime }$ so that $X \\subset H \\equiv H^{\\prime } \\subset X^{\\prime }$ .", "Denote by ${\\cal L}(X)$ the space of bilinear bounded forms on $X$ and consider $b \\in L^1(0,T; {\\cal L}(X))$ uniformly bounded and $X$ -elliptic with respect to $t\\in (0,T)$ .", "Given the data $f \\in L^2(0,T;X^{\\prime })$ and $u_0 \\in H$ , we consider the following variational parabolic problem: $\\left\\lbrace \\begin{array}{l}\\mbox{Find } u \\in L^2((0,T);X) \\cap C^0([0,T];H) \\mbox{ such that,}\\\\[0,2cm]\\displaystyle \\frac{d}{dt}(u(t),v) \\,+\\, b(t;u(t),v) \\,=\\, \\langle f(t),v\\rangle \\quad \\forall \\, v \\in X,\\quad \\mbox{in } {\\cal D}^{\\prime }(0,T),\\\\[0,4cm]u(0) \\,=\\, u_0 \\quad \\mbox{in } H.\\end{array}\\right.$ It is well known that this problem is well posed and, in particular, admits a unique solution [11].", "To discretize this problem, we proceed through the so-called Horizontal Method of Lines [1], [2], [13].", "First, we discretise in time by the Backward Euler scheme and then we apply a steady spectral VMS method to the elliptic equations appearing at each time step.", "Consider a uniform partition of the interval $[0,T]$ , $\\lbrace 0=t_0<t_1<...<t_N=T\\rbrace $ , with time-step size $\\Delta t=T/N$ .", "The time discretization of problem (REF ) by the Backward Euler scheme gives the following family of stationary problems: given the initialization $u^0 \\,=\\, u_0$ , $\\left\\lbrace \\begin{array}{l}\\mbox{Find } u^{n+1}\\in X \\mbox{ such that,}\\\\[0,2cm]\\left( \\displaystyle \\frac{u^{n+1}-u^n}{\\Delta t}, v\\right) + b^{n+1}(u^{n+1},v) \\,=\\, \\Delta t\\, \\langle f^{n+1},v \\rangle \\quad \\forall \\, v\\in X,\\\\[0,4cm]\\forall \\, n=0,1, \\hdots ,N-1,\\end{array}\\right.$ where $b^{n+1}$ and $f^{n+1}$ are some approximations of $b(t;\\cdot ,\\cdot )$ and $f(t)$ , respectively, at $t=t_{n+1}$ .", "To discretise in space problem (REF ), we assume that $\\Omega $ is polygonal (when $d=2$ ) or polyhedric (when $d=3$ ), and consider a family of conforming and regular triangulations of $\\overline{\\Omega }$ , $\\lbrace {\\cal T}_{h}\\rbrace _{h>0}$ , formed by simplycial elements, where the parameter $h$ denotes the largest diameter of the elements of the triangulation ${\\cal T}_{h}$ .", "The VMS method is based on the decomposition, $X=X_h\\oplus \\tilde{X},$ where $X_h$ is a continuous finite element sub-space of $X$ constructed on the grid ${\\cal T}_{h}$ , and $\\tilde{X}$ is a complementary, infinite-dimensional, sub-space of $X.$ Notice that this is a multi-scale decomposition of the space $X$ , being $X_h$ the large or resolved scale space and $\\tilde{X}$ the small or sub-grid scale space.", "This decomposition defines two projection operators $P_h: X \\mapsto X_h$ and $\\tilde{P}: X \\mapsto \\tilde{X}$ , by $P_h (v)= v_h ,\\quad \\tilde{P}(v)=\\tilde{v}, \\quad \\forall \\, v\\in X,$ where $v_h$ and $\\tilde{v}$ 
are the unique elements belonging to $X_h$ and $\\tilde{X}$ , respectively, such that $v= v_h + \\tilde{v}$.", "Hence, one can decompose the solution of problem (REF ) as $u^{n+1}=u_{h}^{n+1}+\\tilde{u}^{n+1},$ where $u_{h}^{n+1}=P_h (u^{n+1})$ and $\\tilde{u}^{n+1}=\\tilde{P}(u^{n+1})$ satisfy the coupled problem, $ \\nonumber \\left\\lbrace \\begin{array}{ll}\\displaystyle \\left( \\frac{u^{n+1}_h-u^n_h}{\\Delta t},v_h \\right) + \\left( \\frac{\\tilde{u}^{n+1}-\\tilde{u}^n}{\\Delta t}, v_h \\right) + b^{n+1}(u^{n+1}_h,v_h) + b^{n+1}(\\tilde{u}^{n+1},v_h)= \\langle f^{n+1},v_h \\rangle & (\\ref {VMS}.1)\\\\ [0,6cm]\\displaystyle \\left( \\frac{u^{n+1}_h-u^n_h}{\\Delta t},\\tilde{v} \\right) + \\left( \\frac{\\tilde{u}^{n+1}-\\tilde{u}^n}{\\Delta t}, \\tilde{v} \\right) + b^{n+1}(u^{n+1}_h,\\tilde{v}) + b^{n+1}(\\tilde{u}^{n+1},\\tilde{v})= \\langle f^{n+1},\\tilde{v} \\rangle & (\\ref {VMS}.2)\\\\ [0,5cm]\\forall v_h \\in X_h, \\, \\forall \\tilde{v}\\in \\tilde{X},\\end{array}\\right.$ for all $ \\, n=0,1, \\hdots ,N-1$ .", "The small scales component $\\tilde{u}^{n+1}$ thus satisfies, $(\\tilde{u}^{n+1}, \\tilde{v}) + \\Delta t \\ b^{n+1}(\\tilde{u}^{n+1},\\tilde{v})= \\langle R^{n+1}(u_h^{n+1}),\\tilde{v}\\rangle $ where $\\langle R^{n+1}(u_h^{n+1}),\\tilde{v}\\rangle $ is the residual of the large scales component, defined as, $\\begin{array}{r}\\langle R^{n+1}(u_{h}^{n+1}), \\tilde{v}\\rangle := (u^{n}_h+ \\tilde{u}^{n},\\tilde{v}) + \\Delta t \\ \\langle f^{n+1},\\tilde{v}\\rangle - (u^{n+1}_h, \\tilde{v}) - \\Delta t \\ b^{n+1}(u^{n+1}_h,\\tilde{v}),\\,\\forall \\, \\tilde{v}\\in \\tilde{X}.\\end{array}$ In condensed notation, this may be written as, $\\tilde{u}^{n+1}=\\Pi ^{n+1}(R^{n+1}(u_{h}^{n+1})),$ where $\\begin{array}{cccl}\\Pi ^{n+1}: & \\tilde{X} & \\rightarrow & \\tilde{X} \\\\& g & \\mapsto & \\Pi ^{n+1}(g) = \\tilde{G}\\end{array}$ is the static condensation operator on $\\tilde{X}$ defined as, $( \\tilde{G} , \\tilde{v}) +\\Delta t \\ b^{n+1}(\\tilde{G},\\tilde{v})=\\langle g, \\tilde{v}\\rangle \\quad \\forall \\, \\tilde{v}\\in \\tilde{X},\\mbox{ for any } g \\in \\tilde{X}^{\\prime }.$ Inserting expression (REF ) in the large scales equation (REF .1), leads to the condensed VMS formulation of problem (REF ): $\\left\\lbrace \\begin{array}{l}\\mbox{Find } u_{h}^{n+1}\\in X_h \\mbox{ such that}\\\\[0,3cm](u^{n+1}_h, v_h) + \\Delta t \\, b^{n+1}(u^{n+1}_h,v_h) + (\\Pi ^{n+1}(R^{n+1}(u_{h}^{n+1})), v_h)+\\Delta t \\,b^{n+1}(\\Pi ^{n+1}(R^{n+1}(u_{h}^{n+1})), v_h)\\\\[0,4cm]\\quad \\quad =\\Delta t\\, \\langle f^{n+1},v_h \\rangle \\,+\\, (u^{n}_h+ \\Pi ^n ( R^n ( u^n_h) ),v_h)\\\\[0,3cm]\\forall \\, v_h\\in X_h,\\,\\,\\forall \\, n=0,1, \\hdots ,N-1,\\end{array}\\right.$ with $u_h^0=P_h(u_0)$ .", "This problem is an augmented Galerkin formulation, where the additional terms represents the effect of the small scales component of the solution $(\\tilde{u}^{n+1})$ on the large scales component $(u_{h}^{n+1})$ .", "To build an approximation of the sub-grid scales, we use a spectral decomposition of the operator associated to the variational formulation on each grid element, at each discrete time.", "To apply this approximation to problem (REF ), the small scales space $\\tilde{X}$ is approximated by the “bubble\" sub-spaces, $\\tilde{X}\\simeq \\tilde{X}_h = \\bigoplus _{K\\in \\mathcal {T}_h}\\tilde{X}_K,\\quad \\mbox{with } \\tilde{X}_K=\\lbrace \\tilde{v}\\in \\tilde{X}, \\mbox{ such that } \\mbox{supp}(\\tilde{v})\\subset K\\rbrace .$ Hence, we approximate 
$\\tilde{u}^{n+1}\\simeq \\tilde{u}^{n+1}_h = \\sum _{K\\in \\mathcal {T}_h}\\tilde{u}^{n+1}_K, \\quad \\mbox{with }\\tilde{u}^{n+1}_K \\in \\tilde{X}_K,\\quad \\forall \\, n=0,1, \\hdots ,N-1.$ Then, problem (REF ) is approximated by the following family of decoupled problems, $\\begin{array}{l}(\\tilde{u}_K^{n+1},\\tilde{v}_K) + \\Delta t \\ b^{n+1}(\\tilde{u}_K^{n+1},\\tilde{v}_K)=\\langle R^{n+1}(u_h^{n+1}),\\tilde{v}_K\\rangle ,\\quad \\forall \\, \\tilde{v}_K\\in \\tilde{X}_K, \\quad \\forall \\, K\\in \\mathcal {T}_h.\\end{array}$ Let $\\mathcal {L}^{n+1}: X \\mapsto X^{\\prime }$ be the operator defined by $\\langle \\mathcal {L}^{n+1} w,v \\rangle =b^{n+1}(w,v), \\quad \\forall \\, w, v \\in X,$ and let $\\mathcal {L}^{n+1}_K$ be the restriction of this operator to $\\tilde{X}_K$ .", "Let us also consider the weighted $L^2$ space, $L^2_p(K)=\\lbrace w:K\\rightarrow \\mathbb {R} \\mbox{ measurable such that } p|w|^2\\in L^1(K)\\rbrace ,$ where $p$ is some measurable real function defined on $K,$ which is positive a.e.", "on $K$ .", "This is a Hilbert space endowed with the inner product $(w,v)_p=\\int _K p(x) w(x) v(x) dx.$ We denote by $\\Vert \\cdot \\Vert _{p}$ the norm on $L^2_p(K)$ induced by this inner product.", "Now, we can state the following result, which allows to compute the small scales on each grid element by means a spectral expansion.", "Theorem 2.1 Let us assume that there exists a complete sub-set $\\lbrace \\tilde{z}_j^{n,K}\\rbrace _{j\\in \\mathbb {N}}$ on $\\tilde{X}_K$ formed by eigenfunctions of the operator $\\mathcal {L}^{n}_K$ , which is an orthonormal system in $L^2_{p^{n,K}}(K)$ for some weight function $p^{n,K}\\in C^1(\\bar{K}).$ Then, $\\begin{array}{l}\\tilde{u}_K^{n}=\\displaystyle \\sum _{j=1}^{\\infty }\\beta _j^{n,K} \\, r^{n,K}_j \\, \\tilde{z}_j^{n,K}, \\quad \\forall \\, n=1,\\ldots ,N,\\end{array}$ where $\\beta _j^{n,K} = (\\Lambda _j^{n,K})^{-1}$ , with $\\Lambda _j^{n,K}=1+\\Delta t \\, \\lambda _j^{n,K}$ being $\\lambda _j^{n,K}$ the eigenvalue of $\\mathcal {L}^{n}_K$ associated to $\\tilde{z}_j^{n,K}$ , and $r^{n,K}_j = \\langle R^{n}(u_h^{n}), p^{n,K}\\,\\tilde{z}_j^{n,K}\\rangle .$ This is a rather straightforward application of Theorem 1 in [7], that we do not detail for brevity.", "Once the eigenpairs $(\\tilde{z}_j^{n+1,K},\\lambda _j^{n+1,K})$ are known, the previous procedure allows us to directly compute $u_h^{n+1}$ from problem (REF ), approximating the sub-grid component $\\tilde{u}^{n+1}$ by expressions (REF ) and (REF ).", "This gives the spectral VMS method to fully discretize problem (REF ).", "Namely, $\\left\\lbrace \\begin{array}{l}\\mbox{Find } u_{h}^{n+1}\\in X_h \\mbox{ such that}\\\\[0,3cm](u^{n+1}_h, v_h) + \\Delta t \\, b^{n+1}(u^{n+1}_h,v_h) + (\\tilde{u}^{n+1}_h, v_h) + \\Delta t \\, b^{n+1}(\\tilde{u}^{n+1}_h ,v_h)\\\\[0,4cm]\\qquad = \\Delta t\\, \\langle f^{n+1},v_h \\rangle + (u^{n}_h,v_h) + (\\tilde{u}_h^{n}, v_h)\\\\ [0,3cm]\\forall \\, v_h \\in X_h, \\quad \\forall \\, n=0,1,\\hdots ,N-1,\\end{array}\\right.$ where, $\\tilde{u}^{n+1}_h = \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, \\langle R^{n+1}_h(u_h^{n+1}), p^{n+1,K} \\,\\tilde{z}_j^{n+1,K}\\rangle \\, \\tilde{z}_j^{n+1,K},\\quad \\forall \\, n=0, \\hdots ,N-1,$ with $\\langle R^{n+1}_h(u_{h}^{n}), \\tilde{v}\\rangle :=(u^{n}_h+\\tilde{u}_h^{n},\\tilde{v}) + \\Delta t \\, \\langle f^{n+1},\\tilde{v}\\rangle - (u^{n+1}_h, \\tilde{v}) - \\Delta t \\ b^{n+1}(u^{n+1}_h,\\tilde{v}),\\quad \\forall \\, \\tilde{v}\\in 
\\tilde{X},$ $u_h^0 \\,=\\, P_h(u_0)$ and $\\tilde{u}_h^0 \\in \\tilde{X}_h$ some approximation of $\\tilde{u}^0$ ." ], [ "Application to transient advection-diffusion problems", "In this section, we apply the abstract spectral VMS method introduced in the previous section to transient advection-diffusion equations, that we state with homogeneous boundary conditions, $\\left\\lbrace \\begin{array}{ll}\\partial _t u +\\mathbf {a}\\cdot \\nabla u-\\mu \\Delta u = f &\\mbox{in }\\Omega \\times (0,T),\\\\[0,2cm]u=0 & \\mbox{on }\\partial \\Omega \\times (0,T),\\\\[0,2cm]u(0)=u_0 & \\mbox{on } \\Omega ,\\end{array}\\right.$ where $\\mathbf {a} \\in L ^\\infty (0,T;W^{1,\\infty }(\\Omega ))^d$ is the advection velocity field, $\\mu >0$ is the diffusion coefficient, $f\\in L^2((0,T);L^2(\\Omega ))$ is the source term and $u_0\\in L^2(\\Omega )$ is the initial data.", "Different boundary conditions may be treated as well, as these also fit into the general spectral VMS method introduced in the previous section.", "The weak formulation of problem (REF ) reads, $\\left\\lbrace \\begin{array}{l}\\mbox{Find }u\\in L^2((0,T);H^1_0(\\Omega ))\\cap C^0([0,T];L^2(\\Omega )) \\mbox{ such that,}\\\\[0,2cm]\\displaystyle \\frac{d}{dt}(u(t),v) + (\\mathbf {a}\\cdot \\nabla u(t),v) + \\mu (\\nabla u(t),\\nabla v)=\\langle f(t),v\\rangle \\quad \\forall \\, v\\in H_0^1(\\Omega ),\\\\[0,3cm]u(0)=u_0.\\end{array}\\right.$ Problem (REF ) admits the abstract formulation (REF ) with $H=L^2(\\Omega )$ , $X=H^1_0(\\Omega )$ and $b(w,v)=(\\mathbf {a}\\cdot \\nabla w,v)+\\mu (\\nabla w,\\nabla v),\\quad \\forall \\, w,v\\in H^1_0(\\Omega ).$ In practice, we replace the velocity field $\\mathbf {a}$ by $\\mathbf {a}_h$ , the piecewise constant function defined a. e. on $\\overline{\\Omega }$ such that $\\mathbf {a}_h =\\mathbf {a}_K$ on the interior of each element $K\\in {\\cal T}_{h}$ .", "Then, we apply the spectral VMS method to the approximated problem, $\\left\\lbrace \\begin{array}{ll}\\mbox{Find } U^{n+1}\\in H^1_0(\\Omega )\\mbox{ such that}\\\\[0,3cm]\\begin{array}{r}\\left( \\displaystyle \\frac{U^{n+1}-U^n}{\\Delta t}, v\\right) + (\\mathbf {a}_h^{n+1}\\cdot \\nabla U^{n+1},v) + \\mu (\\nabla U^{n+1},\\nabla v)=\\langle f^{n+1},v\\rangle , \\;\\forall \\, v\\in H^1_0(\\Omega ),\\end{array}\\\\[0,5cm]\\forall \\, n=0,1, \\hdots ,N-1,\\end{array}\\right.$ with $u^0=u_0$ .", "In this case, $\\mathcal {L}^n w =\\mathbf {a}_h^n \\cdot \\nabla w - \\mu \\Delta w$ is the advection-difusion operator.", "Proposition 1 in [7] proved that the eigenpairs $(\\tilde{w}_j^{n,K}, \\lambda _j^{n,K})$ of operator $\\mathcal {L}^n_K$ can be obtained from the eigenpairs $(\\tilde{W}_j^K, \\sigma _j^K)$ of the Laplace operator in $H_0^1(K)$ , in the following way: $\\begin{array}{l}\\tilde{w}_j^{n,K} = \\psi ^{n,K} \\, \\tilde{W}_j^K, \\quad \\psi ^{n,K}(x)= \\exp \\left(\\frac{1}{2\\mu } \\,\\mathbf {a}_K^n\\cdot x \\right)\\\\[0,2cm]\\lambda _j^{n,K} = \\mu \\, \\left( \\sigma _j^K + \\displaystyle \\frac{|\\mathbf {a}_K^n|^2}{4\\mu ^2} \\right),\\quad \\forall \\, j\\in \\mathbb {N}.\\end{array}$ Moreover, for the weight function $\\displaystyle p^{n,K}(x)= (\\psi ^{n,K})^{-2}= \\exp \\left(-\\frac{1}{\\mu }\\,\\mathbf {a}_K\\cdot x\\right)$ the sequence $\\tilde{z}_j^{n,K} = \\displaystyle \\frac{\\tilde{w}_j^{n,K}}{\\Vert \\tilde{w}_j^{n,K}\\Vert _{p^{n,K}}}, \\quad \\forall \\, j\\in \\mathbb {N},$ is a complete and orthonormal system in $L_{p^{n,K}}^2(K)$ (see Theorem 2 in [7]).", "Then, Theorem REF holds and it is 
possible to apply the method (REF ) to problem (REF )." ], [ "One dimension problems", "The eigenpairs of the Laplace operator can be exactly computed for grid elements with simple geometrical forms, as it is the case of parallelepipeds.", "In the 1D case, the elements $K\\in {\\cal T}_{h}$ are closed intervals, $K=[a,b]$ .", "The eigenpairs $(\\tilde{W}_j^K, \\sigma _j^K)$ are solutions of the problem $\\left\\lbrace \\begin{array}{l}-\\partial _{xx} \\tilde{W}^K = \\sigma ^K \\, \\tilde{W}^K \\,\\, \\mbox{in } K,\\\\[0,2cm]\\tilde{W}^K(a)=\\tilde{W}^K(b)=0.\\end{array}\\right.$ Solutions of this problem are $\\tilde{W}_j^K = \\sin \\big ( \\sqrt{\\sigma _j^K} \\, (x- a) \\big ), \\quad \\sigma _j^K= \\left(\\frac{j\\pi }{h_K} \\right)^2, \\,\\, \\mbox{with } h_K=b-a,\\,\\, \\mbox{for any } j\\in \\mathbb {N}.$ As the function $p^{n,K}$ defined in (REF ) is unique up to a constant factor, to express the eigenpairs in terms of non-dimensional parameters, we replace $p^{n,K}$ by (we still denote it in the same way), $p^{n,K}(x)= \\exp \\left(-2\\, P_{n,K}\\, \\frac{x-a}{h_K}\\right),$ where $P_{n,K}=\\displaystyle \\frac{|\\mathbf {a}_K^n|\\,h_K}{2\\mu }$ is the element Péclet number.", "Then, from expressions (REF ) and (REF ), $\\tilde{z}_j^{n,K}=\\sqrt{\\frac{2}{h_K}} \\exp \\left(P_{n,K} \\frac{x- a}{h_K}\\right) \\,\\sin \\left(j\\pi \\frac{x- a}{h_K}\\right), \\quad \\lambda _j^K= \\mu \\, \\left(\\frac{j\\pi }{h_K} \\right)^2 + \\displaystyle \\frac{|\\mathbf {a}_K^n|^2}{4\\mu }.$ It follows $\\displaystyle \\beta _j^{n,K} = \\frac{1}{1+S_K(P_{n,K}^2+\\pi ^2j^2)} \\quad \\mbox{for any } j\\in \\mathbb {N},$ where $S_K=\\displaystyle \\frac{\\Delta t\\, \\mu }{h_K^2}$ is a non-dimensional parameter that represents the relative strength of the time derivative and diffusion terms in the discrete equations, at element $K$ ." 
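As a concrete illustration of the element-wise quantities just derived, the following minimal Python sketch (an illustration, not part of the method itself) collects the sub-grid data of a single 1D grid element: the element Peclet number P_{n,K}, the parameter S_K, the eigenvalues lambda_j^{n,K}, the factors beta_j^{n,K} = 1/(1 + Delta t * lambda_j^{n,K}) and the eigenfunctions z_j^{n,K} in the local coordinate xhat = (x-a)/h_K. The truncation level is an illustrative choice (the tests below use between 10 and 150 modes), and the formulas are taken literally from the expressions above, written with P_{n,K} = |a_K| h_K/(2 mu); all the tests reported later use a > 0.

```python
import numpy as np

def subgrid_data_1d(a_K, h_K, mu, dt, n_modes=10):
    """Sub-grid spectral data on one 1D element K of length h_K.

    Returns the element Peclet number P, the parameter S = dt*mu/h_K**2,
    the eigenvalues lambda_j of L_K, the factors beta_j = 1/(1 + dt*lambda_j),
    and a callable evaluating z_j at the local coordinate xhat in [0, 1].
    """
    P = abs(a_K) * h_K / (2.0 * mu)                   # element Peclet number
    S = dt * mu / h_K ** 2                            # time-step / diffusion parameter
    j = np.arange(1, n_modes + 1)
    lam = mu * (j * np.pi / h_K) ** 2 + a_K ** 2 / (4.0 * mu)
    beta = 1.0 / (1.0 + S * (P ** 2 + np.pi ** 2 * j ** 2))   # = 1/(1 + dt*lambda_j)

    def z(xhat):
        """Eigenfunctions z_j(xhat); returns an array of shape (len(xhat), n_modes)."""
        xhat = np.atleast_1d(xhat)[:, None]
        return np.sqrt(2.0 / h_K) * np.exp(P * xhat) * np.sin(j[None, :] * np.pi * xhat)

    return P, S, lam, beta, z
```

For instance, with the data of the first part of Test 2 below (a = 1000, mu = 1, h = 0.02, Delta t = 10^{-3}), this returns P = 10 and S = 2.5, the values quoted there.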
], [ "Error analysis", "We afford in this section the error analysis for the solution of the 1D evolutive convection-diffusion problem by the spectral VMS method (REF ).", "Let $\\lbrace \\alpha _i\\rbrace _{i=0}^I \\in \\bar{\\Omega }$ be the Lagrange interpolation nodes of space $X_h$ .", "Let $\\omega _i=(\\alpha _{i-1},\\alpha _i)$ , $i=1,\\ldots , I$ .", "Setting $\\tilde{X}_i=H^1_0(\\omega _i)$ , it holds, $H^1_0(\\Omega )= X_h \\oplus \\tilde{X},\\quad \\mbox{with }\\, \\tilde{X}=\\bigoplus _{i=1}^I \\tilde{X}_i.$ Observe that this decomposition generalises (REF ) with $\\tilde{X}_h=\\tilde{X}$ .", "Moreover, when operator in (REF ) is $\\mathcal {L}^n w =\\mathbf {a}_h^n\\cdot \\nabla w - \\mu \\Delta w$ , problem (REF ) can be exactly decoupled into the family of problems (REF ).", "In particular, if the projection operator $P_h$ in (REF ) is the Lagrange interpolate on $X_h$ , then $U_h^n =P_h(U^n)$ , $\\tilde{U}_h^n= U^n- U_h^n \\in \\tilde{X}$ and consequently, $U_h^n \\in X_h$ satisfies method (REF ).", "Notice that thanks to the spectral expansion, the sub-grid scales contribution in method (REF ), when the advection velocity is element-wise constant, is exactly computed, and then, the discretisation error only is due to the time discretisation and the approximation of the advection velocity $\\mathbf {a}$ , but not to the space discretisation.", "Therefore, to analize the discretisation error we compare the solution of problem (REF ) to the solution of the implicit Euler time semi-discretisation of problem (REF ), $\\left\\lbrace \\begin{array}{l}\\mbox{Find } u^{n+1}\\in H^1_0(\\Omega )\\mbox{ such that}\\\\[0,3cm]\\left( \\displaystyle \\frac{u^{n+1}-u^n}{\\Delta t}, v\\right) + (\\mathbf {a}^{n+1}\\partial _x u^{n+1},v) + \\mu \\,(\\partial _x u^{n+1},\\partial _x v)=\\langle f^{n+1},v\\rangle \\quad \\forall \\, v\\in H^1_0(\\Omega ),\\\\[0,3cm]\\forall \\, n=0,1, \\hdots ,N-1,\\end{array}\\right.$ with $u^0=u_0$ .", "We assume that $\\mathbf {a}_h$ restricted to each $K$ is extended by continuity to $\\partial K$ .", "Given a sequence $b=\\lbrace b^n,\\,n=1,\\cdots , N\\rbrace $ of elements of a normed space $Y$ , let us denote, $\\Vert b\\Vert _{l^p(Y)}=\\left(\\Delta t \\,\\sum _{n=1}^N \\Vert b^n\\Vert _Y^p \\right)^{1/p},\\quad \\Vert b\\Vert _{l^\\infty (Y)}=\\max _{n=1,\\cdots ,N} \\Vert b^n\\Vert _Y.$ We shall use the following discrete Gronwall's lemma, whose proof is standard, and so we omit it.", "Lemma 3.1 Let $\\alpha _n$ , $\\beta _n$ , $\\gamma _n$ , $n=1,2,...$ be non-negative real numbers such that $ (1-\\sigma \\,\\Delta t) \\,\\alpha _{n+1} + \\beta _{n+1} \\le (1+\\tau \\,\\Delta t) \\,\\alpha _n + \\gamma _{n+1}$ for some $\\sigma \\ge 0$ , $\\tau \\ge 0$ .", "Assume that $\\sigma \\, \\Delta t \\le 1-\\delta $ for some $\\delta >0$ .", "Then it holds $ \\alpha _n \\le e^{\\rho \\,t_n}\\,\\alpha _0 + \\frac{1}{\\delta } \\,\\sum _{l=1}^n e^{\\rho \\, (t_n-t_l)}\\, \\gamma _l,$ and $ \\sum _{l=1}^n\\beta _l \\le \\left(1+\\frac{\\tau }{\\sigma } +(\\sigma +\\tau )\\,e^{\\rho \\, t_{n-1}}\\,t_{n-1} \\right)\\,\\alpha _0 +\\frac{1}{\\delta } \\, \\left( 1+ (\\sigma +\\tau )\\,e^{\\rho \\, t_{n-1}}\\,t_{n-1} \\right)\\sum _{l=1}^n\\,\\gamma _l,$ with $\\displaystyle \\rho =(\\sigma +\\tau )/\\delta $ .", "Let $e=\\lbrace e^n,\\,n=0,1,\\cdots , N\\rbrace \\subset H^1_0(\\Omega )$ be the sequence of errors $e^n=u^n - U^n \\in H^1_0(\\Omega )$ , where we recall that $U^n$ is the solution of the discrete problem (REF ), and denote $\\delta _t 
e^{n+1}=\\displaystyle \\frac{e^{n+1}-e^n}{\\Delta t}$ .", "It holds the following result.", "Proposition 3.2 Assume that $\\mathbf {a}\\in L^\\infty (\\Omega \\times (0,T))^d$ , $f\\in L^2(\\Omega \\times (0,T))$ , $\\displaystyle \\Delta t \\le (1-\\varepsilon )\\, \\frac{\\mu }{\\Vert \\mathbf {a}\\Vert _{L^\\infty (\\Omega \\times (0,T))}^2}$ for some $\\varepsilon \\in (0,1)$ and $\\Vert \\mathbf {a}_h\\Vert _{L^\\infty (\\Omega \\times (0,T))} \\le D\\, \\Vert \\mathbf {a}\\Vert _{L^\\infty (\\Omega \\times (0,T))}$ for some constant $D>0$ .", "Then, $ \\Vert \\delta _t e\\Vert _{l^2(L^2(\\Omega ))}+ \\mu \\Vert e\\Vert _{l^\\infty (H^1_0(\\Omega ))} \\le C\\, \\Vert \\mathbf {a}_h -\\mathbf {a}\\Vert _{l^2(L^\\infty (\\Omega ))},$ for some constant $C>0$ independent of $h$ , $\\Delta t$ and $\\mu $ .", "Let us substract (REF ) from (REF ) with $v=v_h\\in X_h$ .", "This yields $\\begin{array}{l}\\left( \\displaystyle \\frac{e^{n+1}-e^n}{\\Delta t}, v_h\\right) + (\\mathbf {a}_h^{n+1}\\partial _x e^{n+1},v_h) + \\mu \\,(\\partial _x e^{n+1},\\partial _x v_h)=((\\mathbf {a}_h^{n+1}-\\mathbf {a}^{n+1})\\partial _x u^{n+1},v_h) .\\end{array}$ Setting $v_h = \\delta _t e^{n+1}$ , and using the identity $2(b,b-a)=\\Vert b\\Vert _{L^2(\\Omega )}^2-\\Vert a\\Vert _{L^2(\\Omega )}^2+\\Vert b-a\\Vert _{L^2(\\Omega )}^2$ for any $a,\\, b \\in {L^2(\\Omega )}^d$ yields $\\Delta t \\,\\Vert \\delta _t e^{n+1}\\Vert ^2_{L^2(\\Omega )} +\\Delta t \\, (\\mathbf {a}_h^{n+1}\\partial _x e^{n+1},\\delta _t e^{n+1})&+&\\frac{\\mu }{2}\\, \\left(\\Vert \\partial _x e^{n+1}\\Vert ^2_{L^2(\\Omega )}- \\Vert \\partial _x e^n\\Vert ^2_{L^2(\\Omega )} \\right)\\nonumber \\\\&\\le &\\Delta t \\,((\\mathbf {a}_h^{n+1}-\\mathbf {a}^{n+1})\\partial _x u^{n+1},\\delta _t e^{n+1}).$ It holds $|(\\mathbf {a}_h^{n+1}\\,\\partial _x e^{n+1},\\delta _t e^{n+1})| &\\le & \\Vert \\mathbf {a}_h^{n+1}\\Vert _{L^\\infty (\\Omega )}\\,\\Vert \\partial _x e^{n+1}\\Vert _{L^2(\\Omega )}\\, \\Vert \\delta _t e^{n+1}\\Vert _{L^2(\\Omega )}\\nonumber \\\\&\\le &\\frac{1}{2} \\Vert \\delta _t e^{n+1}\\Vert ^2_{L^2(\\Omega )}+\\frac{ \\Vert \\mathbf {a}\\Vert _{L^\\infty (\\Omega \\times (0,T))}^2}{2}\\,\\Vert \\partial _x e^{n+1}\\Vert ^2_{L^2(\\Omega )}.$ As $\\mathbf {a}\\in L^\\infty (\\Omega \\times (0,T))^d$ , $f\\in L^2(\\Omega \\times (0,T))$ , then the $u^n$ are uniformly bounded in $L^\\infty (0,T;H^1_0(\\Omega ))$ , due to the standard estimates for the implicit Euler method in strong norms.", "Then, for some constant $C>0$ , $((\\mathbf {a}_h^{n+1}-\\mathbf {a}^{n+1})\\partial _x u^{n+1},\\delta _t e^{n+1})&\\le & \\Vert \\mathbf {a}_h^{n+1}-\\mathbf {a}^{n+1}\\Vert _{L^\\infty (\\Omega )}\\,\\Vert \\partial _x u^{n+1}\\Vert _{L^2(\\Omega )}\\, \\Vert \\delta _t e^{n+1}\\Vert _{L^2(\\Omega )}\\nonumber \\\\&\\le &C \\, \\Vert \\mathbf {a}_h^{n+1}-\\mathbf {a}^{n+1}\\Vert _{L^\\infty (\\Omega )}^2 + \\frac{1}{4}\\, \\Vert \\delta _t e^{n+1}\\Vert ^2_{L^2(\\Omega )}.$ Hence, combining (REF ) and (REF ) with (REF ), $\\frac{\\Delta t}{4} \\,\\Vert \\delta _t e^{n+1}\\Vert ^2_{L^2(\\Omega )} +\\frac{\\mu }{2}\\, ( 1- \\sigma \\,\\Delta t )\\, \\Vert \\partial _x e^{n+1}\\Vert ^2_{L^2(\\Omega )}\\le \\frac{\\mu }{2}\\, \\Vert \\partial _x e^n\\Vert ^2_{L^2(\\Omega )} + C \\, \\Delta t\\,\\Vert \\mathbf {a}_h^{n+1}-\\mathbf {a}^{n+1}\\Vert _{L^\\infty (\\Omega )}^2,$ with $\\sigma =\\displaystyle \\frac{\\Vert \\mathbf {a}\\Vert _{l^\\infty (L^\\infty (\\Omega ))}^2}{\\mu }$ .", "Applying the discrete Gronwall's lemma REF , 
estimate (REF ) follows.", "Corollary 3.3 Under the hypotheses of Proposition REF , it holds $ \\mu \\,\\Vert e^n\\Vert _{l^\\infty (L^\\infty (\\Omega ))}\\le C\\, \\Vert \\mathbf {a}_h -\\mathbf {a}\\Vert _{l^2(L^\\infty (\\Omega ))}$ for some constant $C>0$ .", "Moreofer, if $\\mathbf {a}$ is constant, then the solution $U_h^n$ of the spectral VMS method (REF ) coincides with the solution $u^n$ of the implicit Euler time semi-discretisation (REF ) at the Lagrange interpolation nodes of space $X_h$ .", "In one space dimension $H^1(\\Omega ) $ is continuously injected in $L^\\infty (\\Omega )$ .", "Then estimate (REF ) follows from estimate (REF ).", "If $\\mathbf {a}$ is constant obviously $U^n = u^n$ for all $n=0,1,\\cdots , N$ .", "As $U^n_h(\\alpha _i) = U^n(\\alpha _i)$ at the Lagrange interpolation nodes $\\alpha _i$ , $i=1,\\ldots ,I$ , then $U_h^n$ coincides with $u^n$ at these nodes." ], [ "Feasible method: offline/online strategy", "Building the spectral VMS method using the formulation (REF ) requires quite large computing times, due to the summation of the spectral expansions that yield the coefficients of the matrices that appear in the algebraic expression of the method.", "In order to reduce this time, we shall neglect the dependency of method (REF ) w.r.t.", "$\\tilde{u}^{n-1}$ .", "Then, our current discretization of problem (REF ) is the following, $\\left\\lbrace \\begin{array}{l}\\mbox{Find } u_{h}^{n+1}\\in X_h \\mbox{ such that}\\\\[0,3cm](u^{n+1}_h, v_h) + \\Delta t \\, b^{n+1}(u^{n+1}_h,v_h) + (\\tilde{u}^{n+1}_h, v_h) + \\Delta t \\, b^{n+1}(\\tilde{u}^{n+1}_h ,v_h)\\\\[0,4cm]\\qquad = \\Delta t\\, \\langle f^{n+1},v_h \\rangle + (u^{n}_h,v_h) + (\\tilde{u}_h^{n}, v_h)\\\\ [0,3cm]\\forall \\, v_h \\in X_h, \\quad \\forall \\, n=0,1,\\hdots ,N-1,\\end{array}\\right.$ where $\\tilde{u}^{n+1}_h$ is given by (REF ), but $\\tilde{u}^{n}_h$ is defined from an approximated residual: $\\tilde{u}^{n}_h = \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n,K} \\, \\langle \\hat{R}^n_h(u_h^{n}), p^{n,K} \\,\\tilde{z}_j^{n,K}\\rangle \\, \\tilde{z}_j^{n,K}$ with $\\langle \\hat{R}^n_h({u}^n_h), \\tilde{v} \\rangle = ({u}^{n-1}_h,\\tilde{v})+ \\Delta t \\, \\langle f^n,\\tilde{v}\\rangle - ({u}^n_h,\\tilde{v})-\\Delta t\\, b^n({u}^n_h,\\tilde{v}),\\quad \\forall \\tilde{v} \\in \\tilde{X}.$ Neglecting the dependency of method (REF ) w.r.t.", "$\\tilde{u}^{n-1}$ allows to eliminate the recurrence in time of the sub-grid scales.", "Thanks to this fact, problem (REF ) is equivalent to a linear system (that we describe in detail in Appendix), whose coefficients only depend on non-dimensional parameters." 
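Once the recurrence of the sub-grid scales in time is removed, the element-level quantities needed at each step are the residual coefficients r_j^{n,K} of Theorem 2.1, now computed from the approximated residual above. For the 1D advection-diffusion case specialised in the next subsection, a minimal sketch of this computation for piecewise-affine resolved scales is the following; note that, because p^{n,K} z_j^{n,K} vanishes at the element endpoints and the slope of u_h^n is constant on K, the diffusive part of b^n(u_h^n, p^{n,K} z_j^{n,K}) integrates to zero and only the advective part remains. The element-wise constant source, the quadrature rule and the function name are illustrative assumptions.

```python
import numpy as np

def residual_coeffs_1d(u_prev, u_curr, a_K, h_K, mu, dt, f_K=0.0,
                       n_modes=20, n_quad=401):
    """Coefficients r_j = <R_hat^n(u_h^n), p^{n,K} z_j^{n,K}>, j = 1..n_modes, on one element.

    u_prev, u_curr: local nodal values (left, right) of the piecewise affine
    u_h^{n-1} and u_h^n on K; f^n is replaced by the constant f_K on K.
    Trapezoidal quadrature on the reference element [0, 1]; a_K >= 0 assumed,
    consistently with the expressions used for z_j.
    """
    P = abs(a_K) * h_K / (2.0 * mu)
    xh = np.linspace(0.0, 1.0, n_quad)
    w = np.full(n_quad, 1.0 / (n_quad - 1)); w[0] *= 0.5; w[-1] *= 0.5    # trapezoid weights
    j = np.arange(1, n_modes + 1)[None, :]
    pz = np.sqrt(2.0 / h_K) * np.exp(-P * xh)[:, None] * np.sin(j * np.pi * xh[:, None])
    uh_prev = u_prev[0] * (1.0 - xh) + u_prev[1] * xh      # u_h^{n-1} restricted to K
    uh_curr = u_curr[0] * (1.0 - xh) + u_curr[1] * xh      # u_h^{n}   restricted to K
    dx_u = (u_curr[1] - u_curr[0]) / h_K                   # constant slope of u_h^n on K
    # (u^{n-1}-u^n, p z_j) + dt <f, p z_j> - dt (a_K d_x u^n, p z_j); the mu-term drops out
    g = uh_prev - uh_curr + dt * f_K - dt * a_K * dx_u
    return h_K * ((w * g) @ pz)                            # reference-element integrals times h_K
```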
], [ "Application to 1D transient advection-diffusion problems", "In this case the coefficients of the linear system equivalent to problem (REF ) only depend on two non-dimensional parameters, as we confirm below.", "As we can see in Appendix, if $\\left\\lbrace \\varphi _m\\right\\rbrace _{m=1}^{L+1}$ is a basis of the space $X_h$ associated to a partition $\\lbrace x_1 < x_2 < \\ldots < x_{L+1}\\rbrace $ of $\\Omega $ , the solution $u_{h}^{n+1}$ of (REF ) can be written as $u_h^{n+1} = \\displaystyle \\sum _{m=1}^{L+1} u_m^{n+1}\\varphi _m.$ Then, the unknown vector $\\mathbf {u}^{n+1}=(u_1^{n+1},u_2^{n+1},\\hdots ,u_L^{n+1},u_{L+1}^{n+1})^t\\in \\mathbb {R}^{L+1} $ is the solution of the linear system $\\mathbf {A}^{n+1} \\, \\mathbf {u}^{n+1} = \\mathbf {b}^{n+1},$ where the matrix and second term are defined in (REF ) from matrices $A_i^{n+1}$ and $B_i^{n+1}$ given by (REF )-() and (REF )-().", "We focus, for instance, on the coefficients of matrix $A^n_1$ : $(A^n_1)_{lm}= \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n,K}(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K})(\\tilde{z}_j^{n,K}, \\varphi _l ).$ Let $K=[x_{l-1},x_l]\\in {\\cal T}_{h}$ .", "From expressions (REF ) and (REF ), $p^{n,K}$ and $\\tilde{z}_j^{n,K}$ depend on the element non-dimensional parameters $P_{n,K}$ and $S_K$ and the non-dimensional variable $\\displaystyle \\hat{x}=\\frac{x-x_{l-1}}{h_K}$ .", "The change of variable $\\hat{x} \\in [0,1] \\mapsto x\\in K$ from the reference element $[0,1]$ to element $K$ in the integral expressions $(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K})=\\int _K \\varphi _m\\, p^{n,K}(x) \\tilde{z}_j^{n,K}(x) \\, dx,\\quad (\\tilde{z}_j^{n,K}, \\varphi _l ) = \\int _K \\tilde{z}_j^{n,K}(x)\\, \\varphi _l(x)\\, dx$ readily proves that these expressions (up to a factor depending on $h$ ) can be written as functions of $S_K$ and $P_K$ .", "Further, by (REF ) the coefficients $\\beta _j^{n,K}$ also depend on $P_{n,K}$ and $S_K$ .", "Then, for each $K\\in {\\cal T}_{h}$ the spectral expansion that determines the element contribution to coefficient $(A^n_1)_{lm}$ , that is, $\\sum _{j=1}^{\\infty } \\beta _j^{n,K}(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K})(\\tilde{z}_j^{n,K}, \\varphi _l ),$ is a function of $P_{n,K}$ and $S_K$ , up to a factor depending on $h$ .", "This also holds for the coefficients of all other matrices that defines the linear system (REF ), $A^n_i$ and $B^n_i$ , as these are built from the basic values $(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K})$ , $(\\tilde{z}_j^{n,K}, \\varphi _l )$ , $b^n(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K})$ and $b^n(\\tilde{z}_j^{n,K}, \\varphi _l )$ .", "We take advantage of this fact to compute these matrices in a fast way, by means of an offline/online computation strategy." 
], [ "Offline stage", "In the offline stage we compute the element contribution to the coefficients of all matrices appearing in system (REF ) as a function of the two parameters $P$ and $S$ , that take values at the nodes of a uniform grid, between minimum and maximum feasible values of these parameters.", "That is, $\\left\\lbrace (P_i,S_j)=( \\Delta \\, i , \\Delta \\,j ), \\quad \\forall \\, i,j = 1,2,\\ldots M \\right\\rbrace , \\quad \\mbox{with } \\Delta >0.$ In order to set these values, we consider the piecewise affine finite element functions associated to a uniform partition of $\\Omega $ with step $h$ .", "In practical applications the advection dominates and $P$ takes values larger than 1.", "Also, taking usual values of diffusion coefficient and $h \\simeq \\Delta t$ , $S$ takes low positive values.", "Moreover, when we compute the spectral series that determines the coefficients of the system matrices as functions of $P$ and $S$ , we observe that these values are nearly constant as $P$ and $S$ approaches 20.", "For instance, we can see in figures REF , REF and REF how the spectral series for the diagonal coefficient of $A_3$ matrix tend to a constant value as $P$ or $S$ increase to 20.", "Therefore, in numerical tests, we will consider a step $\\Delta =0.02$ and $M=1000$ in (REF ).", "Figure: Values of the spectral series to compute the diagonal coefficient of matrix A 3 A_3 for each pair (P,S)(P,S).Figure: Values of the spectral series to compute the diagonal coefficient of matrix A 3 A_3 for(P,S)∈(0,1)×(0,1)(P,S) \\in (0,1)\\times (0,1).To do the computations in this stage, in order to avoid computational roundoff problems due to large velocities, we express the eigenfunctions of the advection-diffusion operator given in (REF ) in terms of the midpoint of the grid elements $\\displaystyle x_{\\frac{l,l+1}{2}} = \\displaystyle \\frac{x_l+x_{l+1}}{2}$ .", "That is, we consider $\\tilde{z}_j^K=\\sqrt{\\frac{2}{h_K}} \\exp \\left(\\frac{|a_K|}{2\\mu }(x- x_{\\frac{l,l+1}{2}})\\right)\\sin \\left(j\\pi \\frac{x- x_{\\frac{l,l+1}{2}}}{h_K}\\right), \\quad \\mbox{for any } j\\in \\mathbb {N}.$ We further truncate the spectral series neglecting all the terms following to the first term that reaches an absolute value less than a prescribed threshold $\\varepsilon $ .", "Actually, we have taken $\\varepsilon =10^{-10}$ .", "In Figure REF we represent the number of these summands needed to reach a first term with absolute value smaller than this $\\varepsilon $ for the series defining the diagonal coefficient of $A_3$ matrix.", "As we can see, more terms are needed as $P$ increases and as $S$ decreases to 0.", "Figure: Number of summands needed to reach a first term with absolute value lower than ε=10 -10 \\varepsilon =10^{-10}for the series defining the diagonal coefficient of matrix A 3 A_3, in terms of (P,S)(P,S).In the online stage, for each grid element $K$ we compute the contribution of this element to the coefficients of all matrices appearing in system (REF ).", "Then, we sum up over grid elements, to calculate these coefficients.", "For that, we determine $P_K$ and $S_K$ and find the indices $i,\\, j \\in {1,\\ldots , M}$ such that $(P_K,\\, S_K)$ belongs to $[P_i , P_{i+1}] \\times [S_j , S_{j+1}]$ .", "In other case, if $P_k<\\Delta $ we set $i=1$ and if $P_K >\\Delta M$ we set $i=M-1$ , and similarly for $j$ in terms of $S_K$ .", "As we see above, each element contribution is a function of $P_K$ and $S_K$ that we denote $C(P_K,S_K)$ in a generic way.", "For instance, for 
matrix $A^n_1$ , $C(P_K,S_K)= \\sum _{j=1}^{\\infty } \\beta _j^{n,K}(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K})(\\tilde{z}_j^{n,K}, \\varphi _l ).$ Then, we compute $C(P_k,S_K)$ by the following second-order interpolation formula: $C(P_K,S_K) \\simeq \\sum _{k=1}^4 \\frac{Q_k}{Q} \\, C(\\alpha _k),$ where the $\\alpha _k$ are the four corners of the cell $[P_i , P_{i+1}] \\times [S_j, S_{j+1}]$ , $Q=\\Delta ^2$ is its area and the $Q_k$ are the areas of the four rectangles in which the cell is split by $(P_k,S_K)$ (see Figure REF ).", "Figure: Splitting of interpolation cell for online computation of matrices coefficients." ], [ "Numerical Tests", "In this section, we present the numerical results obtained with the spectral method to solve 1D advection-diffusion problems.", "Our purpose, on the one hand, is to confirm the theoretical results stated in Corollary REF for the spectral VMS method and, on the other hand, test the accuracy of the spectral VMS and feasible spectral VMS methods for problems with strong advection-dominance, in particular by comparison with several stabilised methods." ], [ "Test 1: Accuracy of spectral VMS method for constant advection velocity", "To test the property stated in Corollary REF , we consider the following advection-diffusion problem: $\\left\\lbrace \\begin{array}{ll}\\partial _t u +a \\, \\partial _x u-\\mu \\, \\partial ^2_{xx} u = 0 &\\mbox{in }(0,1)\\times (0,T),\\\\[0,2cm]u(0,t)=\\exp ((\\mu -a)t),\\quad u(1,t)=\\exp (1+(\\mu -a)t) & \\mbox{on }(0,T), \\\\[0,2cm]u(x,0)=\\exp (x) & \\mbox{on } (0,1),\\end{array}\\right.$ whose exact solution is given by $\\exp (x+(\\mu -a)t).$ We set $T=0.1$ , $a=1$ and $\\mu =20$ .", "We apply the spectral VMS method (REF ) to solve this problem with time step $\\Delta t=0.01$ and piecewise affine finite element space on a uniform partition of interval $(0,1)$ with steps $h=0.05/(2^i)$ for $i=2,3,...,7$ .", "We have truncated the spectral expansions that yield the small scales $\\tilde{u}_h^n$ to 10 eigenfunctions.", "The errors in $l^\\infty (L^2)$ and $l^2(H^1)$ norms computed at grid nodes are represented in Figure REF .", "We observe that, indeed, the errors quite closely do not depend on the space step $h$ .", "Moreover, we have computed the convergence orders in time, obtaining very closely order 1 in $l^2(H^1)$ norm and order 2 in $l^\\infty (L^2)$ norm, as could be expected.", "Figure: Test 1. l ∞ (L 2 )l^\\infty (L^2) and l 2 (H 1 )l^2(H^1) errors for the spectral VMS solution of problem ().", "In the following numerical experiments we consider the 1D problem (REF ) setting $\\Omega =(0,1)$ , with constant velocity field $a$ , source term $f=0$ and the hat-shaped initial condition $u_0 = \\left\\lbrace \\begin{array}{ll} 1 & \\text{if } |x-0.45|\\le 0.25 , \\\\ 0 & \\text{otherwise.}", "\\end{array} \\right.$ We also set $X_h$ to be the piecewise affine finite element space constructed on a uniform partition of interval $(0,1)$ with step size $h$ ." 
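Before turning to the remaining tests, it is worth returning to the online-stage lookup described at the end of the previous section: the area-weighted formula C(P_K, S_K) ~ sum_k (Q_k/Q) C(alpha_k) is ordinary bilinear interpolation on the uniform (P, S) grid of the offline table. A minimal sketch is given below; the table layout, the clamping of the local coordinates for out-of-range parameters and the function name are illustrative assumptions (the text only prescribes the choice of cell in that case).

```python
import numpy as np

def lookup_contribution(table, P_K, S_K, delta=0.02):
    """Online-stage bilinear interpolation of a tabulated element contribution C(P, S).

    table[i-1, j-1] is assumed to store C(delta*i, delta*j) for i, j = 1..M.
    Cell indices are chosen as in the text: i = 1 if P_K < delta, i = M-1 if
    P_K > delta*M (and likewise for j); clamping the local coordinates to
    [0, 1] in those cases is an additional assumption.
    """
    M = table.shape[0]
    i = int(np.clip(np.floor(P_K / delta), 1, M - 1))     # cell [P_i, P_{i+1}]
    j = int(np.clip(np.floor(S_K / delta), 1, M - 1))     # cell [S_j, S_{j+1}]
    tP = float(np.clip(P_K / delta - i, 0.0, 1.0))        # local coordinate in P
    tS = float(np.clip(S_K / delta - j, 0.0, 1.0))        # local coordinate in S
    c00, c10 = table[i - 1, j - 1], table[i, j - 1]       # values at the cell corners
    c01, c11 = table[i - 1, j], table[i, j]
    return ((1.0 - tP) * (1.0 - tS) * c00 + tP * (1.0 - tS) * c10
            + (1.0 - tP) * tS * c01 + tP * tS * c11)      # weights Q_k / Q
```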
], [ "Test 2: Accuracy of spectral VMS method", " Very large Péclet numbers In this test we examine the accuracy of spectral VMS method for very high Péclet numbers.", "To do that, we set $a=1000$ and $\\mu = 1$ , and solve this problem by the spectral VMS method (REF ), truncating to 150 spectral basis functions the series (REF ) that yield the sub-grid components.", "The solution interacts with the boundary condition at $x=1$ in times of order $1/a$ , that is, $10^{-3}$ .", "We then set a time-step $\\Delta t = 10^{-3}$ .", "Moreover, we set $h=0.02$ that corresponds to $P=10$ and $S=2.5$ .", "We present the results obtained in Figure REF , where we represent the Galerkin solution (in red) on the left panels and the spectral solution (in cyan) on the right panels, both with the exact solution (in blue): in $(a)$ the first 4 time-steps, in $(b)$ time-steps from 5 to 7 and in $c$ times-steps 8 and 9.", "By Corollary REF the discrete solution coincides at the grid nodes with the exact solution of the implicit Euler semi-discretisation, the expected errors at grid nodes are of order $\\Delta t=10^{-3}$ .", "We can see that the spectral solution indeed is very close to the exact solution at grid nodes.", "Figure: Solution of problem () for a=1000,μ=1,f=0a=1000, \\mu = 1, f=0 and u 0 u_0 given by () withΔt=10 -3 \\Delta t=10^{-3} and h=0.02h=0.02 (P=10P = 10, S=2.5S = 2.5).", "The spectral VMS solution is compared to the exact solution and the Galerkin solution.", "The results for time-steps numbers 1 to 4, 5 to 7 and 8 to 9 are respectively represented in figures (a)(a), (b)(b) and (c)(c).As the discrete solution coincides at the grid nodes with the exact solution of the implicit Euler semi-discretisation and $u^0$ is exact, then $u^1_h$ should coincide with the exact solution at grid nodes.", "This can already be observed in Figure REF $(a)$ .", "We also test this result with different discretisation parameters.", "We actually set $\\Delta t = 10^{-5}$ and $h=0.02$ that corresponds to $P=10$ and $S=0.025$ .", "The solution in the first time-step is represented in Figure REF $(a)$ and a zoom around $x=0.7$ in depicted in $(b)$ .", "Indeed the discrete solutions coincides with the exact one at grid nodes.", "Figure: Solution of problem () for a=1000,μ=1,f=0a=1000, \\mu = 1, f=0 and u 0 u_0 given by ()with Δt=10 -5 \\Delta t=10^{-5} and h=0.02h=0.02 (P=10P=10, S=0.025S=0.025).", "The spectral VMS solution is compared to the exact solution and the Galerkin solution at first time step.", "Figures (a)(a) and (b)(b) respectively show these solutions in the whole domain and a zoom around x=0.7x=0.7.Very small time steps We test here the arising of spurious oscillations due to extra small time-steps.", "These spurious oscillations occur in the solutions provided by the Galerkin discretisation when $CFL <CFL_{bound} = P/(3(1-P))$ (see [13]).", "For that, we consider the same problem as in this section but with $a=20$ , $h=0.01$ and the time-step $\\Delta t$ is chosen such that $CFL/CFL_{bound}= 1/2$ .", "We obtain the results shown in Figure REF , where we have represented the first five time-steps.", "As one can see the spectral solution does not present any oscillation.", "Figure: Solution of problem () for a=20,μ=1,f=0a=20, \\mu = 1, f=0 and u 0 u_0 given by ()with h=0.01h=0.01 and Δt\\Delta t such that CFL/CFL bound =1/2CFL/CFL_{bound}= 1/2 (P=0.1P=0.1, S=0.0926S=0.0926).", "Red lines represent Galerkin solution and cyan lines represent spectral VMS solution in each step-time." 
], [ "Test 3: Accuracy of the feasible spectral VMS method. Comparison with other stabilised methods", "We next proceed to compare the results obtained with the feasible spectral VMS method (REF ) with those obtained by several stabilised methods.", "Stabilised methods add specific stabilising terms to the Galerkin discretisation, generating the following matrix scheme, $(M+\\Delta t \\, R^n + \\Delta t \\, a^2 \\, \\tau \\, M_{s}) \\, \\mathbf {u}^{n+1} = M \\, \\mathbf {u}^n,$ where $M$ and $R^n$ are, respectively, mass and stiffness matrices, while $M_{s}$ is a tridiagonal matrix defined by $({M_s})_{i,i} = \\frac{2}{h}, \\quad ({M_s})_{i+1,i}=({M_s})_{i,i+1} = -\\frac{1}{h}$ .", "Each stabilised method is determined by the stabilised coefficient $\\tau $ .", "In particular, we consider: The optimal stabilisation coefficient for 1D steady advection-diffusion equation [10], [23], $\\tau _{1D} = \\displaystyle \\frac{\\mu }{|a|^2} (P\\coth (P)-1).$ The stabilisation coefficient based on orthogonal sub-scales proposed by Codina in [4], $\\tau _{C}= \\displaystyle \\left( \\left( 4 \\frac{\\mu }{h^2} \\right)^2 + \\left( 2 \\frac{|a|}{h} \\right)^2 \\right)^{-1/2}.$ The stabilisation coefficient based on $L_2$ proposed by Hauke et.", "al.", "in [17], $\\tau _H = \\displaystyle \\min \\left\\lbrace \\frac{h}{\\sqrt{3}|a|} , \\frac{h^2}{24.24\\mu } , \\Delta t \\right\\rbrace .$ The stabilisation coefficient separating the diffusion-dominated from the convection-dominated regimes proposed by Franca in [8], $\\tau _{F}= \\displaystyle \\frac{h}{|a|}\\, \\min \\lbrace P,\\tilde{P}\\rbrace ,$ where $\\tilde{P}>0$ is a threshold separating the diffusion dominated ($P \\le \\tilde{P}$ ) to the advection dominated ($Pe > \\tilde{P}$ ) regimes.", "In figures REF , REF and REF , we show the solutions of each method for different values of $P$ and $S$ , always for advection-dominated regime $P>1$ .", "We also display the errors in $l^{\\infty }(L^2)$ and $l^2(H^1)$ norms for the solutions of these problems in tables 1, 2 and 3.", "As it can be observed in the three tables, spectral method reduces the error between 10 and 100 times compared to the stabilised methods, without presenting oscillations.", "Figure: Comparison of different stabilised methods to solve problem () when P=3P=3, S=25S=25 with Δt=10 -2 \\Delta t=10^{-2} and h=0.02h=0.02.", "Solutions in the three first time-steps.Table: l ∞ (L 2 )l^{\\infty }(L^2) and l 2 (H 1 )l^2(H^1) errors for the solutions represented in Figure .Figure: Comparison of different stabilised methods to solve problem () when P=1P=1 and S=5S=5 with Δt=10 -3 \\Delta t=10^{-3} and h=10 -2 h=10^{-2}.", "Solutions in the three first time-steps.", "Right: zoom around x=0.7x=0.7.Table: l ∞ (L 2 )l^{\\infty }(L^2) and l 2 (H 1 )l^2(H^1) errors for the solutions represented in Figure .Figure: Comparison of different stabilised methods to solve problem () when P=3.5P=3.5 and S=100S=100 with Δt=10 -2 \\Delta t = 10^{-2} and h=10 -2 h=10^{-2}.", "Solutions in the three first time-steps.", "Right: zoom around x=0.99x=0.99.Table: l ∞ (L 2 )l^{\\infty }(L^2) and l 2 (H 1 )l^2(H^1) errors for the solutions represented in Figure .Next, we consider the same tests performed in Section REF , but applying the feasible spectral VMS method.", "Firstly, we check the behaviour of the feasible spectral VMS method (REF ) for very large Péclet numbers.", "In Figure REF we represent the solution of same problem as in Figure REF obtained with this method.", "We show solutions in time-steps 1 
to 4 in $(a)$ , time-steps 5 to 7 in $(b)$ and time-steps 8 and 9 in $(c)$ .", "As we can observe, the spectral method is the closest to the reference solution, without presenting any spurious oscillations.", "Figure: Solution of problem () for $a=1000$, $\\mu = 1$, $f=0$ and $u_0$ given by () with $\\Delta t=10^{-3}$ and $h=0.02$ ($P=10$, $S=2.5$).", "The feasible spectral VMS is compared with different stabilised methods.", "The results for time-steps 1 to 4, 5 to 7 and 8 to 9 are respectively represented in figures $(a)$, $(b)$ and $(c)$.", "Secondly, Figure REF is analogous to Figure REF , but compares the feasible spectral VMS with different stabilised methods.", "Although Hauke's solution is closer to the exact solution than the spectral one, we can see in the right figure that this approximation does not satisfy the Maximum Principle.", "Figure: First time-step solution of problem () for $a=1000$, $\\mu = 1$, $f=0$ and $u_0$ given by () with $\\Delta t=10^{-5}$ and $h=0.02$ ($P=10$, $S=0.025$).", "The feasible spectral VMS is compared with different stabilised methods in the whole domain $\\Omega =(0,1)$ in $(a)$ and in a zoom around $x=0.7$ in $(b)$.", "Finally, we illustrate the fact that the feasible spectral VMS method is the only method among those studied that does not present oscillations for small time steps, when $CFL <CFL_{bound}$ .", "In Figure REF , we can see the first five time-step solutions obtained with each method using a time-step that verifies $CFL/CFL_{bound} = 1/2$ .", "Figure: Solution of problem () for $a=20$, $\\mu = 1$, $f=0$ and $u_0$ given by () with $h=0.01$ and $\\Delta t$ such that $CFL/CFL_{bound}= 1/2$ ($P=0.1$, $S=0.0926$).", "The feasible spectral VMS is compared with different stabilised methods.", "Regarding computing times, the feasible spectral VMS method, implemented by means of the offline/online strategy, requires somewhat larger computing times than the remaining stabilised methods, due to the interpolation step needed to build the system matrices." ], [ "Conclusions", "In this paper we have extended to parabolic problems the spectral VMS method developed in [7] for elliptic problems.", "We have constructed a feasible method to solve the evolutive advection-diffusion problem by means of an offline/online strategy that pre-computes the effect of the sub-grid scales on the resolved scales.", "We have proved that when Lagrange finite element discretisations in space are used, the solution obtained by the fully spectral VMS method (REF ) coincides with the exact solution of the implicit Euler semi-discretisation of the advection-diffusion problem at the Lagrange interpolation nodes.", "We have performed some numerical tests that confirm this property of the fully spectral VMS method for very large Péclet numbers and very small time steps.", "Some additional tests show that the feasible spectral VMS method (REF ) improves the accuracy with respect to several stabilised methods, with a moderate increase of computing times.", "The methodology introduced here may be extended to multi-dimensional advection-diffusion equations, by parameterising the sub-grid scales in an off-line step.", "This research is at present in progress." ], [ "Acknowledgements", "The research of T. Chacón and I. Sánchez has been partially funded and that of D. Moreno fully funded by Programa Operativo FEDER Andalucía 2014-2020 grant US-1254587.", "The research of S.
Fernández has been partially funded by AEI - Feder Fund Grant RTI2018-093521-B-C31.", "Problem (REF ) is equivalent to a linear system with a particular structure that we describe next.", "If $\\left\\lbrace \\varphi _m\\right\\rbrace _{m=1}^{L+1}$ is a basis of the space $X_h$ , the solution $u_{h}^{n+1}$ is obtained as $u_{h}^{n+1} = \\displaystyle \\sum _{m=1}^{L+1} u_m^{n+1}\\varphi _m,$ where $\\mathbf {u}^{n+1}=(u_1^{n+1},\\hdots ,u_{L+1}^{n+1})^t\\in \\mathbb {R}^{L+1} $ is the unknown vector.", "Taking $v_h=\\varphi _l$ , with $l=1\\ldots L$ , each term in (REF ) can be written in the following way: $\\begin{array}{l}(u^{n+1}_h, \\varphi _l) = \\displaystyle \\sum _{m=1}^L (\\varphi _m,\\varphi _l) \\, u_m^{n+1} = \\Big ( M \\, \\mathbf {u}^{n+1} \\Big )_l\\\\[0,5cm]b^{n+1}(u^{n+1}_h,\\varphi _l) = \\displaystyle \\sum _{m=1}^L b^{n+1}(\\varphi _m,\\varphi _l) \\, u_m^{n+1} = \\Big ( R^{n+1} \\, \\mathbf {u}^{n+1} \\Big )_l\\\\[0,5cm](f^{n+1}, \\varphi _l) = \\Big ( \\mathbf {F}^{n+1} \\Big )_l\\end{array}$ where $(M)_{lm} =(\\varphi _m,\\varphi _l),$ $(R^{n+1})_{lm} =b^{n+1}(\\varphi _m,\\varphi _l),$ $\\begin{array}{l}(\\tilde{u}^{n+1}_h, \\varphi _l) =\\\\[0,2cm]\\qquad \\quad \\displaystyle \\sum _{m=1}^L \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\varphi _m,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, (\\tilde{z}_j^{n+1,K},\\varphi _l) \\, u_m^{n}\\\\[0,5cm]\\qquad +\\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\tilde{u}_h^n,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, (\\tilde{z}_j^{n+1,K},\\varphi _l)\\\\[0,5cm]\\qquad + \\, \\Delta t \\, \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (f^{n+1},p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, (\\tilde{z}_j^{n+1,K},\\varphi _l)\\\\[0,5cm]\\qquad -\\displaystyle \\sum _{m=1}^L \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\varphi _m,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, (\\tilde{z}_j^{n+1,K},\\varphi _l) \\, u_m^{n+1}\\\\[0,5cm]\\qquad - \\, \\Delta t \\, \\displaystyle \\sum _{m=1}^L \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, b^{n+1} (\\varphi _m,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, (\\tilde{z}_j^{n+1,K},\\varphi _l) \\, u_m^{n+1}\\\\[0,6cm]\\qquad =\\Big ( A^{n+1}_1 \\, \\mathbf {u}^{n} + \\mathbf {G}^{n+1}_1 + \\Delta t \\, \\mathbf {F}^{n+1}_1-A^{n+1}_1 \\, \\mathbf {u}^{n+1} - \\Delta t \\, A^{n+1}_2 \\, \\mathbf {u}^{n+1} \\Big )_l,\\end{array}$ with $&& (A^{n+1}_1)_{lm} = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\,(\\varphi _m, p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\,(\\tilde{z}_j^{n+1,K}, \\varphi _l ),\\\\[0,2cm]&& (A^{n+1}_2)_{lm} = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\,b^{n+1}(\\varphi _m, p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, (\\tilde{z}_j^{n+1,K}, \\varphi _l ),\\\\[0,2cm]&& (\\mathbf {F}^{n+1}_1)_l =\\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, \\langle f^{n+1},p^{n+1,K} \\tilde{z}_j^{n+1,K} \\rangle \\, (\\tilde{z}_j^{n+1,K},\\varphi _l), \\nonumber \\\\[0,2cm]&& (\\mathbf {G}^{n+1}_1)_l = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\tilde{u}_h^n,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l),\\nonumber $ and $\\begin{array}{l}b^{n+1}(\\tilde{u}^{n+1}_h, v_h) =\\\\[0,2cm]\\qquad \\quad \\displaystyle \\sum _{m=1}^L \\sum _{K\\in \\mathcal 
{T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\varphi _m,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l) \\, u_m^{n}\\\\[0,5cm]\\qquad + \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\tilde{u}_h^n,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l)\\\\[0,5cm]\\qquad + \\, \\Delta t \\, \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (f^{n+1},p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l)\\\\[0,5cm]\\qquad -\\displaystyle \\sum _{m=1}^L \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\varphi _m,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l) \\, u_m^{n+1}\\\\[0,5cm]\\qquad - \\, \\Delta t \\, \\displaystyle \\sum _{m=1}^L \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, b^{n+1} (\\varphi _m,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l) \\, u_m^{n+1} \\\\[0,6cm]\\qquad =\\Big (A^{n+1}_3 \\, \\mathbf {u}^{n} + \\mathbf {G}^{n+1}_2+ \\Delta t \\, \\mathbf {F}^{n+1}_2 -A^{n+1}_3 \\, \\mathbf {u}^{n+1} - \\Delta t \\,A^{n+1}_4 \\, \\mathbf {u}^{n+1} \\Big )_l,\\end{array}$ where $&& (A^{n+1}_3)_{lm} = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\,(\\varphi _m, p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K}, \\varphi _l ),\\\\[0,2cm]&& (A^{n+1}_4)_{lm} =\\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\,b^{n+1}(\\varphi _m, p^{n+1,K} \\tilde{z}_j^{n+1,K} \\, b^{n+1}(\\tilde{z}_j^{n+1,K}, \\varphi _l ),\\\\[0,2cm]&& (\\mathbf {F}^{n+1}_2)_l =\\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, \\langle f^{n+1},p^{n+1,K} \\tilde{z}_j^{n+1,K} \\rangle \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l),\\nonumber \\\\[0,2cm]&&(\\mathbf {G}^{n+1}_2)_l = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, (\\tilde{u}_h^n,p^{n+1,K} \\tilde{z}_j^{n+1,K}) \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l).\\nonumber $ Taking in account the definition of $\\tilde{u}_h^n$ in (REF ), the second terms $\\mathbf {G}^{n+1}_1$ and $\\mathbf {G}^{n+1}_2$ can be expressed in the following way: $\\begin{array}{l}\\Big (\\mathbf {G}^{n+1}_1 \\Big )_l = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\,\\beta _j^{n,K} \\, \\langle \\hat{R}^n_h({u}^n_h), p^{n,K} \\, \\tilde{z}_j^{n,K} \\rangle (\\tilde{z}_j^{n+1,K},\\varphi _l \\Big )\\\\ [0,6cm]\\qquad = \\Big ( B_1^{n+1} \\, \\mathbf {u}^{n-1} + \\Delta t \\, \\mathbf {F}^{n+1}_3- B_1^{n+1} \\, \\mathbf {u}^{n} - \\Delta t \\, B_2^{n+1} \\, \\mathbf {u}^{n}\\Big )_l,\\\\ [0,5cm]\\Big (\\mathbf {G}^{n+1}_2 \\Big )_l = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\,\\beta _j^{n,K} \\, \\langle \\hat{R}^n_h({u}^n_h), p^{n,K} \\, \\tilde{z}_j^{n,K} \\rangle \\, b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l \\Big )\\\\ [0,6cm]\\qquad = \\Big ( B_3^{n+1} \\, \\mathbf {u}^{n-1} + \\Delta t \\, \\mathbf {F}^{n+1}_4- B_3^{n+1} \\, \\mathbf {u}^{n} - \\Delta t \\, B_4^{n+1} \\, \\mathbf {u}^{n}\\Big )_l,\\end{array}$ where $&&(B^{n+1}_1)_{lm} = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\, \\beta _j^{n,K} \\,(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K}) \\,(\\tilde{z}_j^{n+1,K}, \\varphi _l 
),\\\\[0,2cm]&&(B^{n+1}_2)_{lm} = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\, \\beta _j^{n,K} \\,b^n(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K}) \\, (\\tilde{z}_j^{n+1,K}, \\varphi _l ),\\\\[0,2cm]&& (B^{n+1}_3)_{lm} = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\, \\beta _j^{n,K} \\,(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K}) \\, b^{n+1} (\\tilde{z}_j^{n+1,K}, \\varphi _l ),\\\\[0,2cm]&& (B^{n+1}_4)_{lm} = \\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty } \\beta _j^{n+1,K} \\, \\beta _j^{n,K} \\,b^n(\\varphi _m, p^{n,K} \\tilde{z}_j^{n,K}) \\, b^{n+1} (\\tilde{z}_j^{n+1,K}, \\varphi _l ),\\\\[0,2cm]&&(\\mathbf {F}^{n+1}_3)_l =\\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, \\beta _j^{n,K} \\, \\langle f^{n},p^{n,K} \\tilde{z}_j^{n,K} \\rangle \\, (\\tilde{z}_j^{n+1,K},\\varphi _l),\\nonumber \\\\[0,2cm]&& (\\mathbf {F}^{n+1}_4)_l =\\displaystyle \\sum _{K\\in \\mathcal {T}_h} \\sum _{j=1}^{\\infty }\\beta _j^{n+1,K} \\, \\beta _j^{n,K} \\, \\langle f^{n},p^{n,K} \\tilde{z}_j^{n,K} \\rangle \\,b^{n+1}(\\tilde{z}_j^{n+1,K},\\varphi _l).\\nonumber $ Here we are neglecting the interaction between different eigenfunctions in two consecutive time steps.", "Obviously, this occurs when the operator is time independent.", "Thus, problem (REF ) is equivalent to the lineal system $\\mathbf {A}^{n+1} \\, \\mathbf {u}^{n+1} = \\mathbf {b}^{n+1},$ where $\\mathbf {A}^{n+1}\\in \\mathbb {R}^{(L+1)\\times (L+1)}$ and $\\mathbf {b}^{n+1}\\in \\mathbb {R}^{(L+1)}$ are given by $\\begin{array}{l}\\mathbf {A}^{n+1}= M + \\Delta t \\, R^{n+1} - {\\cal A}^{n+1},\\\\[0,5cm]\\mathbf {b}^{n+1} =\\Big ( M - (A^{n+1}_1 + \\Delta t \\, A^{n+1}_3) - (A^{n}_1 + \\Delta t \\, A^{n}_2) - {\\cal B}^{n+1} \\Big ) \\mathbf {u}^n\\nonumber \\\\[0,4cm]\\qquad + \\Big ( A^{n}_1 - B_1^{n+1} - \\Delta t \\, B_3^{n+1} \\Big ) \\, \\mathbf {u}^{n-1}\\\\[0,5cm]\\qquad + \\Delta t \\, \\mathbf {F}^{n+1} - \\Delta t \\, \\mathbf {F}^{n+1}_1 - \\Delta t^2 \\, \\mathbf {F}^{n+1}_2 +\\Delta t \\, \\mathbf {F}^{n}_1 - \\Delta t \\, \\mathbf {F}^{n+1}_3 - \\Delta t^2 \\, \\mathbf {F}^{n+1}_4,\\end{array}$ with ${\\cal A}^{n+1} = A^{n+1}_1 + \\Delta t \\, A^{n+1}_2 + \\Delta t \\, A^{n+1}_3 + \\Delta t^2 \\, A^{n+1}_4, \\qquad {\\cal B}^{n+1} = B_1^{n+1} - \\Delta t \\, B_2^{n+1} - \\Delta t \\, B_3^{n+1} - \\Delta t^2 \\, B_4^{n+1}.$ Here, $M$ and $R^{n+1}$ are, respectively, the mass and stiffness matrices from the Galerkin formulation and $A_i^{n+1}$ and $B_i^{n+1}$ are the matrices that represent the effect of the small scales component of the solution on the large scales component." ] ]
2207.10449
[ [ "Simplicial volume of manifolds with amenable fundamental group at\n infinity" ], [ "Abstract We show that for $n \\neq 1,4$ the simplicial volume of an inward tame triangulable open $n$-manifold $M$ with amenable fundamental group at infinity at each end is finite; moreover, we show that if also $\\pi_1(M)$ is amenable, then the simplicial volume of $M$ vanishes.", "We show that the same result holds for finitely-many-ended triangulable manifolds which are simply connected at infinity." ], [ "Introduction", "The simplicial volume is a homotopy invariant of manifolds introduced by Gromov in his pioneering article [8] (see Section for the precise definition).", "Although the definition of this invariant is completely homotopic in nature, Gromov himself remarked that it is strongly related to the geometric structures that a manifold can carry: for example, he proved that the simplicial volume of an oriented closed connected hyperbolic manifold is strictly positive, and it is proportional to its Riemannian volume.", "On the other hand, the vanishing of the simplicial volume is implied by amenability conditions on the manifold: in particular, if an oriented closed connected manifold $M$ has an amenable fundamental group, then it has vanishing simplicial volume (this follows from the vanishing of the bounded cohomology of amenable groups, see e.g.", "[8], [5]).", "In the context of open manifolds the situation is different; in fact, an open manifold with an amenable fundamental group has either vanishing or infinite simplicial volume ([12]).", "A finiteness criterion for the simplicial volume of tame manifolds, i.e.", "manifolds which are homeomorphic to the interior of a compact manifold with boundary, was given in [12], [13].", "It states that, given a compact manifold with boundary $(W, \\partial W)$ , the finiteness of the simplicial volume of $W^{\\circ }$ is equivalent to the $\\ell ^1$ -invisibility of $\\partial W$ (i.e., to the fact that the fundamental class of $\\partial W$ is null in $\\ell ^1$ -homology).", "A consequence of this result is that if each connected component of $\\partial W$ has an amenable fundamental group, then the simplicial volume of $W^{\\circ }$ is finite.", "The fundamental group at infinity (see Section for the definition) is an algebraic tool that is useful to study open manifolds.", "If $(W, \\partial W)$ is an oriented compact connected manifold with connected boundary, the fundamental group at infinity of $W^{\\circ }$ is isomorphic to $\\pi _1(\\partial W)$ .", "This concept allows us to generalize the result of Löh to manifolds which are not necessarily the interior of a compact manifold with boundary.", "Since the fundamental group at infinity is defined as an inverse limit, it is endowed with a natural topology.", "We relate the amenability of this group as a topological group to the finiteness/vanishing of the simplicial volume.", "In particular, we focus on two classes of manifolds: inward tame manifolds, i.e.", "manifolds in which the complement of each compact subset can be shrunk into a compact within itself (see Section for the precise definition); manifolds which are simply connected at infinity, i.e.", "such that for any compact subset $C \\subset M$ there exists a larger compact subset $D \\supset C$ such that every loop in $M \\setminus D$ is trivial in $M \\setminus C$ .", "The main purpose of this paper is to prove the following results.", "Theorem 1 Let $n \\ge 5$ be a natural number and let $M$ be a connected triangulable inward tame 
$n$ -manifold with amenable fundamental group at infinity at each end.", "Then the simplicial volume of $M$ is finite.", "Theorem 2 Let $n \\ge 5$ be a natural number and let $M$ be a connected triangulable inward tame $n$ -manifold with amenable fundamental group and amenable fundamental group at infinity at each end.", "Then the simplicial volume of $M$ vanishes.", "Theorem 3 Let $n\\ge 5$ be a natural number, and let $M$ be an open triangulable finitely-many-ended $n$ -manifold.", "Let us suppose that $M$ is simply connected at infinity.", "Then the simplicial volume of $M$ is finite.", "Moreover, if $\\pi _1(M)$ is amenable, we have that the simplicial volume of $M$ vanishes.", "Remark 4 In dimension $n=2$ there are no inward tame manifolds which are not tame; remarkably, combining [18] with the Poincaré Conjecture ([15], [17], [16], [2], [14], [1]), the same holds in dimension $n=3$ .", "In particular this implies that Theorem REF , Theorem REF and Theorem REF hold also if $n=2,3$ thanks to previous results of Löh [12], [13].", "The main ingredients for the proofs are two results of [4].", "These results give sufficient conditions for an open cover to imply the finiteness (respectively, the vanishing) of the simplicial volume.", "Under our hypotheses, we deduce the existence of such covers from the amenability of the fundamental group at infinity at each end.", "In order to do this, we use a result from [10], which is stated at the end of Section ." ], [ "Plan of the paper", "In Section , we recall the algebra of inverse sequences, we define the fundamental group at infinity and we specify the terminology regarding open manifolds, in particular we focus on the classes of tame manifolds, inward tame manifolds and manifolds which are simply connected at infinity.", "In Section , we define the simplicial volume and we state the results from [4] that will be the key for the proof of Theorems REF , REF and REF , to which Section is devoted.", "I thank my advisor Roberto Frigerio for helpful comments and discussions.", "I thank Dario Ascari and Francesco Milizia for useful conversations." 
], [ "Fundamental group at infinity", "Following [10], we start by recalling the algebra of inverse sequences, that will be useful for the definition of the fundamental group at infinity.", "Let $G_0 \\overset{f_1}{\\longleftarrow } G_1 \\overset{f_2}{\\longleftarrow } G_2 \\overset{f_3}{\\longleftarrow } \\ldots $ be an inverse sequence of groups and homomorphisms (such a sequence will be denoted by $\\lbrace G_i, f_i\\rbrace $ henceforth).", "Let $i,j$ be two natural numbers such that $i>j$ ; after denoting the composition $f_{j}\\circ f_{j+1} \\ldots \\circ f_{i}$ by $f_{ij}$ , we can naturally define a subsequence of $\\lbrace G_i, f_i\\rbrace $ as follows: $G_{i_0} \\overset{f_{i_1i_0}}{\\longleftarrow } G_{i_1} \\overset{f_{i_2i_1}}{\\longleftarrow } G_{i_2} \\overset{f_{i_3i_2}}{\\longleftarrow } \\ldots \\ .$ Two sequences $\\lbrace G_i, f_i\\rbrace $ and $\\lbrace H_i, g_i\\rbrace $ are said to be pro-isomorphic if there exists a commuting diagram of the form Gi0 [ll, \"fi1i0\"] Gi1 [dl] Gi2 [ll, \"fi2i1\"] [dl] ...[l] Hj0 [ul] [ll, \"gj1j0\"] Hj1 [ul] Hj2 [ll, \"gj2j1\"] [ul] ...[l] for some subsequences $\\lbrace G_{i_k}, f_{i_ki_{k-1}}\\rbrace $ and $\\lbrace H_{j_k}, g_{j_kj_{k-1}}\\rbrace $ of $\\lbrace G_i, f_i\\rbrace $ and $\\lbrace H_j, g_j\\rbrace $ respectively.", "The inverse limit of a sequence $\\lbrace G_i, f_i\\rbrace $ is defined as $\\underset{\\longleftarrow }{\\lim }\\lbrace G_i, f_i\\rbrace =\\Big \\lbrace (g_0, g_1, \\ldots ) \\in \\prod _{i \\in \\mathbb {N}}G_i \\big | f_i(g_i)=g_{i-1}\\Big \\rbrace ,$ with the group operation defined componentwise; this turns out to be a subgroup of $\\prod _{i \\in \\mathbb {N}}G_i$ .", "For each $i \\in \\mathbb {N}$ we have a natural projection homomorphism $p_i: \\underset{\\longleftarrow }{\\lim }\\lbrace G_i, f_i\\rbrace \\rightarrow G_i$ .", "It can be easily seen that inverse limits of pro-isomorphic sequences are isomorphic, while we remark that two sequences with isomorphic inverse limit are not necessarily pro-isomorphic; for example the trivial sequence $1 \\longleftarrow 1 \\longleftarrow 1 \\longleftarrow \\ldots $ and the following one $\\mathbb {Z} \\overset{\\cdot 2}{\\longleftarrow }\\mathbb {Z} \\overset{\\cdot 2}{\\longleftarrow }\\mathbb {Z} \\overset{\\cdot 2}{\\longleftarrow } \\ldots $ have both inverse limit isomorphic to the trivial group, but it is straightforward that they are not pro-isomorphic (see e.g.", "[11]).", "We distinguish some particular classes of sequences; denoting by $\\lbrace G, \\operatorname{id}_G\\rbrace $ a sequence $\\lbrace G_i, f_i\\rbrace $ such that $G_i \\simeq G$ and $f_i=\\operatorname{id}_G$ for all $i\\in \\mathbb {N}$ , we say that: a sequence pro-isomorphic to one of the form $\\lbrace 1, \\operatorname{id}\\rbrace $ is called pro-trivial; a sequence pro-isomorphic to one of the form $\\lbrace H, \\operatorname{id}_H\\rbrace $ is called stable; a sequence pro-isomorphic to one whose maps are all surjective is called semistable.", "In the case of stable sequences, we remark that $H$ is well defined up to isomorphism, since the inverse limit of the sequence is isomorphic to $H$ .", "Lemma 2.1 Let $\\lbrace G_i\\rbrace _{i \\in \\mathbb {N}}$ be discrete groups and let $f_i:G_{i}\\rightarrow G_{i-1}$ be a group homomorphism for each $i \\in \\mathbb {N}_{>0}$ .", "Let us suppose that $G= \\underset{\\longleftarrow }{\\lim }\\lbrace G_i, f_i\\rbrace $ equipped with the limit topology is an amenable group (as a topological group) and that for every $i>0$ the 
homomorphism $f_i$ is surjective.", "Then $G_i$ is amenable for each $i\\in \\mathbb {N}$ .", "By definition of inverse limit, if the $f_i$ 's are surjective we have that the projections $p_i:G \\rightarrow G_i$ are surjective too; by definition of the limit topology on $G$ , we have that the projections are also continuous.", "Then each $G_i$ is a continuous image of an amenable group, so it is amenable [20].", "Remark 2.2 As remarked in [7], the limit topology on an inverse limit of a sequence of discrete groups is discrete if and only if the sequence is stable.", "Following again [10], we recall the definition of fundamental group at infinity of a manifold.", "Let $M$ be a triangulable manifold (henceforth, all manifolds will be triangulable) without boundary, and let $A \\subset M$ be any subset; we say that a connected component of $M \\setminus A$ is bounded if it has compact closure, otherwise we say that it is unbounded.", "A neighborhood of infinity is a subset $N \\subset M$ such that $M \\setminus N$ is bounded.", "We say that a neighborhood of infinity is clean if it satisfies the following properties: $N$ is closed in $M$ ; $N$ is a submanifold of codimension 0 with bicollared boundary.", "It can be easily shown that every neighborhood of infinity contains a clean neighborhood of infinity.", "Let $k$ be a natural number.", "A manifold $M$ is said to have $k$ ends if there exists a compact subset $C \\subset M$ such that for any other compact subset $D$ with $C \\subset D$ , $M \\setminus D$ has exactly $k$ unbounded components; this implies that $M$ contains a clean neighborhood of infinity with $k$ connected components.", "If such $k$ does not exist, we say that the manifold $M$ has infinitely many ends.", "Let us suppose that $M$ is a 1-ended manifold without boundary.", "A clean neighborhood of infinity is called a 0-neighborhood of infinity if it is connected and has connected boundary.", "A sequence of nested neighborhoods of infinity $U_0 \\supset U_1 \\supset U_2 \\supset \\ldots $ is cofinal if $\\cap _{i=0}^{+\\infty }U_i= \\varnothing $ .", "Let $U_0 \\supset U_1 \\supset U_2 \\supset \\ldots $ be a cofinal sequence of nested 0-neighborhoods of infinity; a base-ray is a proper map $\\rho :[0,+ \\infty ) \\rightarrow M$ .", "For each $i\\in \\mathbb {N}$ let $p_i:=\\rho (i)$ ; up to reparametrization, we can assume that $p_i\\in U_i$ and $\\rho ([i, +\\infty ))\\subset U_i$ .", "The inclusions $U_{i-1} \\hookleftarrow U_{i}$ along with the change of basepoint maps induced by $\\rho |_{[i-1, i]}$ give rise to the following sequence $\\pi _1(U_0, p_0) \\overset{\\lambda _1}{\\longleftarrow } \\pi _1(U_1, p_1) \\overset{\\lambda _2}{\\longleftarrow } \\pi _1(U_2, p_2) \\overset{\\lambda _3}{\\longleftarrow } \\ldots .$ We denote the inverse limit of this sequence by $\\pi _1^{\\infty }(M, \\rho )$ .", "If the sequence is semistable, it can be shown that its pro-equivalence class depends neither on the base-ray $\\rho $ nor on the choice of the sequence of neighborhoods of infinity (see e.g. [6]).", "In this case, we can define the fundamental group at infinity of $M$ as $\\pi _1^{\\infty }(M):= \\underset{\\longleftarrow }{\\lim }(\\pi _1(U_i, p_i), \\lambda _i).$ If $M$ has more than one end, let $U_0 \\supset U_1 \\supset U_2 \\supset \\ldots $ be a cofinal sequence of nested clean neighborhoods of infinity; we say that two base-rays $\\rho _1, \\rho _2$ determine the same end if their restrictions $\\rho _1|_{\\mathbb {N}}$ and $\\rho _2|_{\\mathbb
{N}}$ are properly homotopic.", "In the case of multiple ends, we need to specify the base-ray to define the fundamental group at infinity of each end; however, if the sequence that defines the fundamental group at infinity at each end is semistable, we have that if $\\rho _1$ and $\\rho _2$ determine the same end, then $\\pi _1^{\\infty }(M, \\rho _1)\\simeq \\pi _1^{\\infty }(M, \\rho _2)$ (see again [6]).", "With a similar procedure, we can define the homology at infinity of a 1-ended manifold $M$ : let $U_0 \\supset U_1 \\supset U_2 \\supset \\ldots $ be a cofinal sequence of nested 0-neighborhoods of infinity and let us denote the map induced by the inclusion $U_{j+1}\\hookrightarrow U_j$ on the $k$ -th homology groups with coefficients in a ring $R$ by $\\mu _j$ .", "Then the $k$ -th homology at infinity of $M$ with $R$ -coefficients is defined as the inverse limit of the following sequence: $H_k(U_0; R) \\overset{\\mu _1}{\\longleftarrow } H_k(U_1; R) \\overset{\\mu _2}{\\longleftarrow } H_k(U_2; R) \\overset{\\mu _3}{\\longleftarrow } \\ldots ,$ and is denoted by $H_k^{\\infty }(M; R)$ .", "If this sequence is semistable, it can be shown that the limit does not depend on the choice of the neighborhoods of infinity.", "In the multiple-ended case, we can define the homology at infinity at each end similarly to what was done for the fundamental group; again, the semistability of the end ensures independence from the choice of the base-ray.", "Example 2.3 Let $M$ be a connected tame manifold without boundary, i.e. a manifold which is homeomorphic to the interior of a compact manifold $N$ with boundary.", "Moreover, let us suppose that $\\partial N$ is connected.", "Then $M$ is a 1-ended open manifold, and taking nested collar neighborhoods of the (missing) boundary as neighborhoods of infinity, we obtain that the fundamental group at infinity of $M$ is isomorphic to $\\pi _1(\\partial N)$ .", "The following definition is a weakening of the notion of tame manifold.", "Definition 2.4 A manifold $M$ is inward tame if for every neighborhood of infinity $U$ there exists a homotopy $H: U \\times [0,1] \\rightarrow U$ such that $H(\\cdot , 0)=\\operatorname{id}_U$ and $\\overline{H(U, 1)}$ is compact.", "In other words, this means that every neighborhood of infinity $U$ can be shrunk into a compact subset within $U$ ; we remark that being inward tame implies that each clean neighborhood of infinity has finitely presented fundamental group [9].", "According to our definitions, it is immediate to check that a tame manifold is also inward tame.", "On the other hand, well-known examples of inward tame manifolds which are not tame are the exotic universal covers of the aspherical manifolds produced by Davis in [3] (see e.g. [11]).", "Being inward tame ensures some nice properties for the topology at infinity of a manifold, as shown by the following proposition.", "Proposition 2.5 ([9]) Let $M$ be an inward tame manifold; then $M$ has finitely many ends; each end has semistable fundamental group at infinity; each end has stable homology at infinity in all degrees.", "Hence, for inward tame manifolds the fundamental group at infinity at each end is well defined (i.e. it does not depend on the base-ray which determines the end or on the chosen sequence of neighborhoods of infinity).", "Definition 2.6 We say that an end of an open manifold has amenable fundamental group at infinity if the fundamental group at infinity of the end is amenable, in the sense of topological groups.", "We remark
that this condition is weaker than being amenable as a discrete group.", "In the introduction we defined manifolds simply connected at infinity.", "In the finitely-many-ended case, this definition can be read in terms of pro-triviality of the sequence that defines the fundamental group at infinity at each end.", "Lemma 2.7 [11] A finitely-many-ended manifold is simply connected at infinity if and only if the sequence which defines the fundamental group at infinity at every end is pro-trivial.", "Remark 2.8 We remark that for a 1-ended manifold having trivial fundamental group at infinity does not imply being simply connected at infinity: for example, it is easy to see that the Whitehead manifold introduced in [19] has trivial fundamental group at infinity but is not simply connected at infinity.", "We stress that a manifold simply connected at infinity need not be inward tame, not even in the 1-ended case ([11]).", "We state a result from [10] that ensures that, in dimension $n\\ge 5$ , a sequence $\\lbrace G_i, f_i\\rbrace $ pro-isomorphic to one that defines the fundamental group at infinity can be realized as a sequence of fundamental groups of neighborhoods of infinity.", "The result is originally stated for inward tame manifolds; however, the author remarks that it is enough to suppose that the fundamental groups of the neighborhoods of infinity are finitely presented ([10]).", "Proposition 2.9 [10] Let $n \\ge 5$ , let $M$ be a 1-ended open $n$ -manifold such that every neighborhood of infinity has finitely presented fundamental group and let $\\mathcal {G}=\\lbrace G_j, f_j\\rbrace _{j \\in \\mathbb {N}}$ be a sequence pro-isomorphic to a sequence that realizes the fundamental group at infinity of $M$ .", "Then there exists a cofinal sequence of 0-neighborhoods of infinity $\\lbrace U_j\\rbrace _{j \\in \\mathbb {N}}$ such that the inverse sequence $\\pi _1(U_0) \\longleftarrow \\pi _1(U_1) \\longleftarrow \\pi _1(U_2) \\longleftarrow \\ldots $ is pro-isomorphic to $\\mathcal {G}$ .", "Remark 2.10 The previous proposition can be generalized to the finitely-many-ended case.", "Let us suppose that $M$ has $k$ ends; by definition, there exists a compact subset $K_0 \\subset M$ such that $M \\setminus K_0$ has exactly $k$ unbounded connected components, so there exists a sequence of nested clean neighborhoods of infinity $U_0 \\supset U_1 \\supset \\ldots $ such that, for each $j\\in \\mathbb {N}$ , $U_j$ has exactly $k$ connected components $U_j^1, \\ldots , U_j^k$ .", "Let us suppose that for each $i=1, \\ldots , k$ the sequence $\\pi _1(U_0^i) \\longleftarrow \\pi _1(U_1^i) \\longleftarrow \\pi _1(U_2^i) \\longleftarrow \\ldots $ is pro-isomorphic to a sequence $\\mathcal {G}^i=\\lbrace G_j^i, f_j^i\\rbrace _{j \\in \\mathbb {N}}$ .", "We can apply Proposition REF to each end separately to obtain a sequence of (disconnected) neighborhoods of infinity $\\lbrace \\widetilde{U_j}\\rbrace _{j \\in \\mathbb {N}}$ with the property that the inverse sequence of the fundamental groups of the $i$ -th connected component of the $\\widetilde{U_j}$ 's is pro-isomorphic to the sequence $\\mathcal {G}^i$ ."
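To make the notions of this section concrete, here is a standard worked example, not taken from the paper. For $n\\ge 2$ , $M=\\mathbb {R}^n$ is 1-ended and the sets $U_i=\\mathbb {R}^n\\setminus B(0,i)$ form a cofinal sequence of 0-neighborhoods of infinity, each of which deformation retracts onto a sphere $S^{n-1}$ . For $n\\ge 3$ every $\\pi _1(U_i)$ is trivial, so the inverse sequence is $1 \\longleftarrow 1 \\longleftarrow \\ldots $ , $\\pi _1^{\\infty }(\\mathbb {R}^n)=1$ , and in fact $\\mathbb {R}^n$ is simply connected at infinity. For $n=2$ , instead, $\\pi _1(U_i)\\simeq \\mathbb {Z}$ and the inclusions induce isomorphisms, so the sequence is stable and $\\pi _1^{\\infty }(\\mathbb {R}^2)\\simeq \\mathbb {Z}$ , which is amenable both as a discrete and as a topological group.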
], [ "Simplicial volume and amenable covers", "Let $X$ be a topological space and let $R$ be either the ring of the integers $\\mathbb {Z}$ or of the real numbers $\\mathbb {R}$ .", "We denote by $S_n(X)$ the set of singular $n$ -simplices on $X$ for a given natural number $n\\in \\mathbb {N}$ .", "A singular $n$ -chain on $X$ with $R$ -coefficients is a sum $\\sum _{i\\in I}a_i \\sigma _i$ , where $I$ is a (possibly infinite) set of indices, $a_i \\in R$ and the $\\sigma _i$ 's are singular simplices.", "A chain is said to be locally finite if for every compact subset $K \\subset X$ we have that there are only finitely many simplices of the chain whose image intersects $K$ .", "We denote by $C_n^{\\text{lf}}(X; R)$ the module of singular locally finite $n$ -chains on $X$ with $R$ -coefficients.", "We observe that the obvious extension of the usual boundary operator sends locally finite chains to locally finite chains, hence it defines a boundary operator on $C_{*}^{\\text{lf}}(X; R)$ ; the homology of this complex with respect to this differential is denoted by $H_{*}^{\\text{lf}}(X;R)$ and is called the locally finite homology of $X$ with $R$ -coefficients.", "We remark that, if $X$ is compact, then the locally finite homology coincides with the usual singular homology.", "From now on, unless otherwise stated, when we omit the coefficients we will understand that $R=\\mathbb {R}$ .", "We can endow the complex of locally finite chains $C_{*}^{\\text{lf}}(X)$ with the $\\ell ^1$ -norm as follows: $||\\sum _{i\\in I}a_i \\sigma _i||_1:=\\sum _{i \\in I}|a_i|.$ We stress that a locally finite chain may have an infinite $\\ell ^1$ -norm.", "This norm induces a seminorm on $H_{*}^{\\text{lf}}(X)$ , which we will again denote by $||\\cdot ||_1$ .", "From standard algebraic topology (see e.g.", "[12]), we have that if $M$ is an oriented connected $n$ -manifold, then $H_n^{\\text{lf}}(M, \\mathbb {Z})\\simeq \\mathbb {Z}$ , and this allows us to define the fundamental class $[M]_{\\mathbb {Z}}\\in H_n^{\\text{lf}}(M, \\mathbb {Z})$ as a generator of this group.", "The image of $[M]_{\\mathbb {Z}}$ under the change of coefficients map $H_n^{\\text{lf}}(M, \\mathbb {Z}) \\rightarrow H_n^{\\text{lf}}(M)$ is called the real fundamental class and is denoted by $[M]\\in H_n^{\\text{lf}}(M)$ .", "If $M$ is an oriented connected $n$ -manifold, the simplicial volume of $M$ is defined as $||M||=||[M]||_1.$ We state two results from [4] that give finiteness or vanishing conditions on the simplicial volume in terms of the existence of amenable covers with small multiplicity.", "First, we give some definitions.", "We say that a subset $U$ of a topological space $X$ is amenable if $i_*(\\pi _1(U,p))$ is an amenable subgroup of $\\pi _1(X,p)$ for any choice of basepoint $p \\in U$ , where $i: U \\hookrightarrow X$ denotes the inclusion.", "Following [4], we give the following definition.", "Definition 3.1 A sequence of subsets $\\lbrace V_j\\rbrace _{j \\in \\mathbb {N}}$ is amenable at infinity if there exists a sequence of neighborhoods of infinity $\\lbrace U_j\\rbrace _{j\\in \\mathbb {N}}$ such that $\\lbrace U_j\\rbrace _{j \\in \\mathbb {N}}$ is locally finite; $V_j \\subset U_j$ for each $j \\in \\mathbb {N}$ ; there exists $j_0 \\in \\mathbb {N}$ such that for every $j \\ge j_0$ it holds that $V_j$ is an amenable subset of $U_j$ .", "Theorem 3.2 ([4]) Let $M$ be an open oriented triangulable manifold of dimension $m$ , let $W$ be a neighborhood of infinity and let $\\mathcal {U}=\\lbrace U_j\\rbrace 
_{j \\in \\mathbb {N}}$ be an open cover of $W$ such that each $U_j$ is relatively compact in $M$ .", "Let us suppose also that $\\mathcal {U}$ is amenable at infinity and that $\\operatorname{mult}(\\mathcal {U})\\le m$ ; then $||M||< + \\infty $ .", "Theorem 3.3 ([4]) Let $M$ be an open oriented triangulable manifold of dimension $m$ and let $\\mathcal {U}=\\lbrace U_j\\rbrace _{j \\in \\mathbb {N}}$ be an amenable open cover of $M$ (i.e.", "each $U_j$ is an amenable subset of $M$ ) such that each $U_j$ is relatively compact in $M$ .", "Let us suppose also that $\\mathcal {U}$ is amenable at infinity and that $\\operatorname{mult}(\\mathcal {U})\\le m$ ; then $||M||=0$ ." ], [ "Proof of Theorem ", "First, let us deal with the 1-ended case.", "Since $M$ is inward tame, by Proposition REF we have that $M$ has semistable fundamental group at infinity.", "Let $\\mathcal {G}=\\lbrace G_0 \\longleftarrow G_1 \\longleftarrow \\ldots \\rbrace $ be a sequence whose maps are all surjective which is pro-isomorphic to a sequence that realizes the fundamental group at infinity of $M$ .", "Let $\\lbrace U_j\\rbrace _{j \\in \\mathbb {N}}$ be a cofinal sequence of clean 0-neighborhoods of infinity given by Proposition REF , i.e.", "such that $\\lbrace n_j\\rbrace _{j\\in \\mathbb {N}}$ is a strictly increasing sequence, $\\pi _1(U_j) \\simeq G_{n_j}$ and that the inclusions $U_{j+1} \\hookrightarrow U_{j}$ induce surjective maps between the fundamental groups for each $j \\in \\mathbb {N}$ .", "By Lemma REF , we have that $\\pi _1(U_j)$ is amenable for each $j \\in \\mathbb {N}$ .", "Let $V_j:= U_j \\setminus \\operatorname{int}(U_{j+1})$ ; for $j \\ge 0$ , $V_j$ is a connected codimension-0 submanifold of $M$ with two boundary components, which are $\\partial U_j$ and $\\partial U_{j+1}$ .", "For every $j \\in \\mathbb {N}$ , let us fix a regular neighborhood $N_j$ in $M$ of the boundary component of $U_j$ and a homeomorphism $N_j \\cong \\partial U_j \\times [-1,1]$ such that $\\partial U_j \\times [-1,0)$ is contained in $U_{j-1} \\setminus U_j$ and $\\partial U_{j+1} \\times (0,1]$ is contained in $U_{j+1} \\setminus U_{j+2}$ .", "Now, for each $j \\in \\mathbb {N}$ let us set $\\widetilde{V_j}:= V_j \\bigcup \\partial U_j \\times (-1/2,0] \\bigcup \\partial U_{j+1} \\times [0, 1/2)$ .", "Moreover, let us set $\\widetilde{V}_{-1}:= \\operatorname{int}(M \\setminus U_0) \\bigcup \\partial U_0 \\times [0, 1/2)$ .", "We have that $\\widetilde{V_j}$ is open and relatively compact, and that the open cover $\\mathcal {V}=\\lbrace \\widetilde{V_j}\\rbrace _{j={-1}}^{+\\infty }$ has multiplicity 2.", "Moreover, for each $j \\in \\mathbb {N}$ we have that $\\widetilde{V_j}$ is contained in $\\widetilde{U_j}:=U_j \\bigcup \\partial U_j \\times (-1,0]$ ; the $\\widetilde{U_j}$ 's are locally finite and homotopy equivalent to the $U_j$ 's, so in particular they have an amenable fundamental group.", "Hence, the image of $\\pi _1(\\widetilde{V_j})$ in $\\pi _1(\\widetilde{U_j})$ under the map induced by the inclusion $\\widetilde{V_j}\\hookrightarrow \\widetilde{U_j}$ is also amenable.", "It follows that $\\mathcal {V}$ is amenable at infinity.", "We can conclude that $||M||<+ \\infty $ thanks to Theorem REF .", "If $M$ has $k$ ends and $k \\in \\mathbb {N}_{\\ge 2}$ , thanks to Remark REF we can simply apply the same procedure to each end separately and get the same conclusion.", "In the 1-ended case, for each $j\\in \\mathbb {N} \\cup \\lbrace -1\\rbrace $ let us take $\\widetilde{V_j}$ as in the proof of 
Theorem REF : since $\\pi _1(M)$ is amenable, we have that the $\\widetilde{V_j}$ 's form an amenable open cover with the same properties as above, so by Theorem REF we have that $||M||=0$ .", "If $M$ has more than one end, the conclusion follows again from Remark REF .", "Remark 4.1 Our proofs of Theorem REF and Theorem REF work even if $M$ is a triangulable open manifold of dimension $n\\ge 5$ with finitely many ends, such that the sequence which defines the fundamental group at infinity at each end is semistable and with amenable fundamental group at infinity at each end.", "We extend Theorem REF and Theorem REF to the case of finitely-many-ended manifolds that are simply connected at infinity (which, as remarked in Section , need not be inward tame).", "As usual, we write the proof in the 1-ended case, noting that it can be generalized to the finitely-many-ended one thanks to Remark REF .", "We are going to prove that if $M$ is simply connected at infinity then the fundamental groups of neighborhoods of infinity of $M$ are finitely presented, so that we can apply Proposition REF .", "Let $U_0 \\supset U_1 \\supset U_2 \\supset \\ldots $ be a cofinal sequence of nested clean 0-neighborhoods of infinity; by Lemma REF , the induced sequence on the fundamental groups is pro-trivial, i.e. (up to taking a subsequence) it fits in a commuting ladder diagram interleaving the sequence $\\pi _1(U_0) \\overset{\\lambda _1}{\\longleftarrow } \\pi _1(U_1) \\overset{\\lambda _2}{\\longleftarrow } \\pi _1(U_2) \\longleftarrow \\ldots $ with the trivial sequence $1 \\longleftarrow 1 \\longleftarrow 1 \\longleftarrow \\ldots $ , where $\\lambda _j: \\pi _1(U_{j+1})\\rightarrow \\pi _1(U_j)$ is the map induced by the inclusion $U_{j+1} \\hookrightarrow U_j$ which, by commutativity of the diagram, factors through the trivial group and is therefore the trivial map.", "Let us fix $j\\in \\mathbb {N}$ and let $K$ be the compact subset $U_j \\setminus \\operatorname{int}(U_{j+1})$ .", "By the Van Kampen Theorem, we have that $\\pi _1(U_j) \\simeq \\pi _1(K) *_{\\pi _1(\\partial U_{j+1})} \\pi _1(U_{j+1}).$ Let $\\mu $ denote the map induced by the inclusion $\\partial U_{j+1} \\hookrightarrow K$ .", "Since the elements of $\\pi _1(U_{j+1})$ are trivial in the amalgamated product, $\\pi _1(U_j)$ coincides with $\\pi _1(K)/\\mu (\\pi _1(\\partial U_{j+1}))$ (we can simplify the standard presentation of the amalgamated product by cancelling the generators and the relations of $\\pi _1 (U_{j+1})$ ).", "From the compactness of $K$ and $\\partial U_{j+1}$ , it follows that $\\pi _1(U_j)$ is a quotient of a finitely presentable group by a finitely presentable group, and so is finitely presentable.", "Therefore we can apply Proposition REF to obtain a sequence of simply connected neighborhoods of infinity, and then we can conclude that $||M||<+\\infty $ as in the proof of Theorem REF .", "If in addition $\\pi _1(M)$ is amenable, the vanishing of $||M||$ is deduced as in the proof of Theorem REF ." ] ]
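As a quick sanity check of Theorem 3, not part of the paper's arguments: for $n\\ge 5$ , $M=\\mathbb {R}^n$ is an open triangulable 1-ended manifold which is simply connected at infinity (complements of closed balls are simply connected for $n\\ge 3$ ), and $\\pi _1(\\mathbb {R}^n)=1$ is amenable, so the theorem yields $||\\mathbb {R}^n||=0$ , consistent with the well-known vanishing of the simplicial volume of Euclidean space.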
2207.10525
[ [ "A Forgotten Danger in DNN Supervision Testing: Generating and Detecting\n True Ambiguity" ], [ "Abstract Deep Neural Networks (DNNs) are becoming a crucial component of modern software systems, but they are prone to fail under conditions that are different from the ones observed during training (out-of-distribution inputs) or on inputs that are truly ambiguous, i.e., inputs that admit multiple classes with nonzero probability in their ground truth labels.", "Recent work proposed DNN supervisors to detect high-uncertainty inputs before their possible misclassification leads to any harm.", "To test and compare the capabilities of DNN supervisors, researchers proposed test generation techniques, to focus the testing effort on high-uncertainty inputs that should be recognized as anomalous by supervisors.", "However, existing test generators can only produce out-of-distribution inputs.", "No existing model- and supervisor-independent technique supports the generation of truly ambiguous test inputs.", "In this paper, we propose a novel way to generate ambiguous inputs to test DNN supervisors and used it to empirically compare several existing supervisor techniques.", "In particular, we propose AmbiGuess to generate ambiguous samples for image classification problems.", "AmbiGuess is based on gradient-guided sampling in the latent space of a regularized adversarial autoencoder.", "Moreover, we conducted what is - to the best of our knowledge - the most extensive comparative study of DNN supervisors, considering their capabilities to detect 4 distinct types of high-uncertainty inputs, including truly ambiguous ones." ], [ "Introduction", "Recently, more and more software systems are Deep Learning based Software Systems (DLS), i.e., they contain at least one Deep Neural Network (DNN), as a consequence of the impressive performance that DNNs achieve in complex tasks, such as image, speech or natural language processing, in addition to the availability of affordable, but highly performant hardware (i.e., GPUs) where DNNs can be executed.", "DNN algorithms can identify, extract and interpret relevant features in a training data set, learning to make predictions about an unknown function of the inputs at system runtime.", "Given the complexity of the tasks for which DNNs are used, predictions are typically made under uncertainty, where we distinguish between epistemic uncertainty, i.e., model uncertainty which may be removed by better training of the model, possibly on better training data, and aleatoric uncertainty, which is model-independent uncertainty, inherent in the prediction task (e.g., the prediction of a non-deterministic event).", "The former uncertainty is due to out-of-distribution (OOD) inputs, i.e., inputs that are inadequately represented in the training set.", "The latter may be due to ambiguity – a major issue often ignored during DNN testing, as recently recognized by Google AI Scientists: \"many evaluation datasets contain items that (...) 
miss the natural ambiguity of real-world context\" [1].", "The existence of uncertainty led to the development of DNN Supervisors (in short, supervisors), which aim to recognize inputs for which the DL component is likely to make incorrect predictions, allowing the DLS to take appropriate countermeasures to prevent harmful system misbehavior [2], [3], [4], [5], [6], [7], [8] (a minimal example of such a supervisor is sketched at the end of this introduction).", "For instance, the supervisor of a self-driving car might safely disengage the auto-pilot when detecting a high-uncertainty driving scene [2], [9].", "Other examples of application domains where supervision is crucial include medical diagnosis [10], [11] and natural hazard risk assessment [12].", "While most recent literature on uncertainty-driven DNN testing is focused on out-of-distribution detection [3], [4], [13], [2], [14], [5], [15], [16], [17], studies considering true ambiguity are lacking, which poses a big practical risk: We cannot expect that supervisors which perform well in detecting epistemic uncertainty are guaranteed to perform well at detecting aleatoric uncertainty.", "Actually, recent literature suggests the opposite [18].", "The lack of studies considering true ambiguity is related to – if not caused by – the unavailability of ambiguous test data for common case studies: While, to create OOD data such as corrupted and adversarial inputs, a variety of precompiled datasets and generation techniques are publicly available [19], [20], [21], and invalid or mislabelled data is trivial to create in most cases, we are not aware of any approach targeting the generation of true ambiguity in a way that is sufficient for reliable and fair supervisor assessment.", "In this paper we aim to close this gap by making the following contributions: Approach: We propose AmbiGuess, a novel approach to generate diverse, labelled, ambiguous images for image classification tasks.", "Our approach is classifier-independent, i.e., it aims to create data which is ambiguous to a hypothetical, perfectly well trained oracle (e.g., a human domain expert), and which does not just appear ambiguous to a specific, suboptimally trained DNN.", "Datasets: Using AmbiGuess, we generated and released two ready-to-use ambiguous datasets for common benchmarks in deep learning testing: MNIST [22], a collection of grayscale handwritten digits, and Fashion-MNIST [23], a more challenging classification task, consisting of grayscale fashion images.", "Supervisor Testing: Equipped with our datasets, we measured the capability of 16 supervisors at detecting different types of high-uncertainty inputs, including ambiguous ones.", "Our results indicate that there is complementarity in the supervisors' capability to detect either ambiguity or corrupted inputs."
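The following minimal Python sketch makes the supervisor idea referenced above concrete; it is purely illustrative (it is not one of the 16 supervisors evaluated in this paper), and the threshold value and toy softmax outputs are arbitrary assumptions. It flags inputs whose predictive entropy exceeds a threshold, so that the surrounding DLS can trigger a countermeasure instead of acting on the prediction.

import numpy as np

def entropy_supervisor(softmax_probs, threshold):
    # Flag inputs whose predictive entropy exceeds the threshold.
    p = np.clip(softmax_probs, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p), axis=-1)
    return entropy > threshold  # True = high uncertainty, escalate or reject

# Toy usage: a confident and an ambiguous 10-class prediction.
p_confident = np.full(10, 0.001); p_confident[3] = 0.991
p_ambiguous = np.full(10, 0.02);  p_ambiguous[[3, 8]] = 0.42
print(entropy_supervisor(np.stack([p_confident, p_ambiguous]), threshold=1.0))
# -> [False  True]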
], [ "Ambiguous Inputs", "In many real-world applications, the data observed at prediction time might not be sufficient to make a certain prediction, even assuming a hypothetical optimal oracle such as a domain expert with exhaustive knowledge: If some information required to make a correct prediction is missing, such missing information can be seen as a random influence, thus introducing aleatoric uncertainty in the prediction process.", "Formally, in a given classification problem, i.e., a machine learning (ML) problem where the output is the class $c$ the input $x$ is predicted to belong to, let $P(c \\mid x)$ denote the ground truth probability that $x$ belongs to $c$ , where observation $x \\in \\mathbb {O}$ and $\\mathbb {O}$ denotes the observable space, i.e., the set of all possibly observable inputs.", "We define true ambiguity as follows: Definition 1 (True Ambiguity in Classification) A data point $x \\in \\mathbb {O}$ is truly ambiguous if and only if $P(c \\mid x) > 0 $ for more than once class $c$ .", "Inputs to a classification problem are considered truly ambiguous if and only if in the (usually unknown) ground truth, such input is truly part of an overlap between two or more classes.", "We emphasize true ambiguity to indicate ambiguity intrinsic to the data and independent from any model and its classification confidence/accuracy.", "In this way we distinguish ours from other papers which also use the term ambiguous with different meaning, such as low confidence inputs, mislabelled inputs, where a label in the training/test set is not consistent with the ground truth [24], or invalid inputs, where no true label exists for a given inputIt can be noticed that the term invalidity is context dependent.", "Dola et.", "al.", "[17] consider an input invalid if it is out-of-distribution w.r.t.", "the training data, while still being an input which clearly belongs to one class, whereas other works consider as invalid input any relevant edge case [19], [20]..", "In simple domains, where humans may have no epistemic uncertainty (i.e., they know the matter perfectly), true ambiguity is equivalent to human ambiguity.", "In the remainder of this paper we focus only on true ambiguity and if not otherwise mentioned we use the term ambiguity as a synonym for true ambiguity.", "A prediction-time input is denoted OOD if it was insufficiently represented at training time, which caused the DNN not to generalize well on such types of inputs.", "This is the primary cause of epistemic uncertainty.", "OOD test data is used extensively to measure supervisor performance in academic studies, e.g.", "by modifying nominal data in a model-independent, realistic and label-preserving way (corrupted data) [14], [19], [20], [2] or by minimally modifying nominal data to fool a specific, given model (adversarial data).", "In practice, both OOD and true ambiguity are important problems when building DLS supervisors [25].", "Much recent literature works on the characterization of the decision frontier of a given model, i.e., its boundary of predictions between two classes in the input space [26], [27], [28], [29].", "It is imporant to note that the decision frontier is not equivalent to the sets of ambiguous inputs: The decision frontier is model specific, while ambiguity depends only on the problem definition and is thus independent of the model.", "I.e., the fact that an input is at a specific model's frontier, does not guarantee that it is indeed ambiguous (it may also be unambiguous, i.e., belong to a 
specific class with probability 1, or invalid, i.e., have 0 probability to belong to any class).", "The decision frontier may thus be considered the “model's ambiguity”, while true ambiguity implies that an input is perceived as ambiguous by a hypothetical, perfectly well trained domain expert (hence matching “human ambiguity” in many classification tasks)." ], [ "Related Work", "The research works that are most related to our approach deal with automated test generation for DNNs [19], [20], [30], [14], [2], [21].", "In these works, some reasons for uncertainty, such as ambiguity, are not considered.", "Hence, automatically generated tests do not allow meaningful evaluations under ambiguity of DNN supervisors, as well as of the DNN behavior, in the absence of supervisors.", "We illustrate this in fig:mnist-nominal-amb: Using an off-the-shelf MNIST [22] classifier, we calculated the predictive entropy to identify the 300 samples with the presumably highest aleatoric uncertainty in the MNIST test set.", "Out of these 300 images, we manually selected the ones we considered potentially ambiguous, and show them in fig:mnist-nominal-amb.", "Clearly, some of them are ambiguous, showing that ambiguity exists and is present in the MNIST test set, but the scarcity of truly ambiguous inputs indicates that supervisors cannot be confidently tested for their capability of handling ambiguity using this test set.", "Figure: The 18 most ambiguous images, manually selected from the 300 (3%) samples with the highest predictive entropy in the MNIST test set.", "Only a few of them are clearly ambiguous, showing that ambiguous data are scarce in existing datasets.", "In the DNN test input generators (TIG) literature [19], [20], [30], [14], [2], [31], with just one notable preprint as an exception [18], we are not aware of any paper aiming to generate true ambiguity directly, while most TIG aim for other objectives.", "Some works [19], [20], [32] propose to corrupt nominal input in predefined, natural and label-preserving ways to generate OOD test data.", "DeepTest [30] applies corruptions to road images, e.g., by adding rain, while aiming to generate data that maximizes neuron coverage.", "Also targeting road images, DeepRoad [14] is a framework using Generative Adversarial Networks (GAN) to change conditions (such as the presence of snow) on nominal images.", "The Udacity Simulator, used by Stocco et al. [2], allows one to dynamically add corruptions, such as rain or snow, when testing self-driving cars.", "Similar to DeepTest, TensorFuzz [33] and DeepHunter [34] generate data with the objective of increasing test coverage.", "Again, aiming to generate diverse and unseen inputs, these approaches will mostly generate OOD inputs and only occasionally – if at all – truly ambiguous data.", "A fundamentally different objective is taken in adversarial input generation [35], where nominal data is changed not in a natural, but in a malicious way.", "Based on the tested model, nominal input data is slightly changed to cause misclassifications.", "Literature and open source tools provide access to a wide range of different specific adversarial attacks [21].", "While very popular, neither input corruptions nor adversarial attacks generate ambiguous data.", "As they rely on the ground truth label of the modified input to remain unchanged, true ambiguity, associated with an ambiguous ground truth label, would imply unsuccessful test data generation.", "Another popular type of test data generators aims to create inputs along
the decision boundary: DeepJanus [29] uses a model-based approach, while SINVAD [27] and MANIFOLD [28] use the generative power of variational autoencoders (VAE) [36].", "Note that we cannot expect inputs along the decision boundary to be always truly ambiguous – they may just as well be OOD, invalid or in rare cases even low-uncertainty inputs.", "In addition, these approaches are by design model-specific, making them unsuitable to generate a generally applicable, model-independent, ambiguous dataset.", "Thus, out of all the approaches discussed above, none aims to generate a truly ambiguous dataset.", "A notable exception is a recent, yet unpublished, preprint by Mukhoti et al. [18].", "In their work, to evaluate the uncertainty quantification approach they propose, they needed an ambiguous MNIST dataset.", "To that end, they used a VAE to generate a vast amount of data (which also contains invalid, OOD and un-ambiguous data) which they then filter and stratify based on two mis-classification prediction (MP) techniques, aiming to end up with a dataset consisting of ambiguous images.", "We argue that, while certainly valuable in the scope of their paper, the so-created dataset is not sufficient as a standard benchmark for DNN supervisors, as the approach itself relies on supervision techniques (MP), hence being circular if used for DNN supervisor assessment.", "In fact, the created ambiguity may be particularly hard (or easy) to be detected by supervisors using different (resp. similar) MP techniques.", "We nevertheless compared their approach to ours empirically and found that it is less successful than ours in generating truly ambiguous test data." ], [ "Uses of Ambiguous Test Sets", "In this paper we focus on the usage of ambiguous test data for the assessment of DNN supervisors, but ambiguous data have also other uses, including the assessment of test input prioritizers." ], [ "Assessment of DNN supervisors", "We cannot assume that results on DNN supervisors' capabilities obtained on nominal and OOD data generalize to ambiguous data.", "Recent studies [37], [5] have shown that there is no clear performance dominance amongst uncertainty quantifiers used as DNN supervisors, but such studies overlook the threats possibly associated with the presence of ambiguity.", "Warnings on such threats in medical machine-learning-based systems were raised already in 2000 [38], with ambiguity in a cancer detection dataset mentioned as a specific example.", "The authors proposed to equip the system with an ambiguity-specific supervisor, to “detect and re-classify as ambiguous” [38] such threatening data.", "To test such supervisors, such as the one proposed by Mukhoti et al. [18], model- and MP-independent, diverse ambiguous data is needed."
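One common way to quantify such a detection capability (an assumption of this sketch, not necessarily the exact metric used later in this paper; the function name is illustrative and scikit-learn is assumed available) is to treat the supervisor's uncertainty scores as a detector of high-uncertainty inputs and compute the area under the ROC curve:

import numpy as np
from sklearn.metrics import roc_auc_score

def supervisor_auroc(scores_nominal, scores_high_uncertainty):
    # scores_* are the supervisor's uncertainty scores on nominal inputs and
    # on high-uncertainty inputs (e.g., the ambiguous samples of AmbiGuess).
    y_true = np.concatenate([np.zeros(len(scores_nominal)),
                             np.ones(len(scores_high_uncertainty))])
    y_score = np.concatenate([scores_nominal, scores_high_uncertainty])
    return roc_auc_score(y_true, y_score)  # 1.0 = perfect separation, 0.5 = random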
], [ "Assessment of DNN input prioritizers", "Test input prioritizers, possibly based on MP, aim to prioritize test cases (inputs) in order to allow developers to detect mis-behaviours (i.e., mis-classifications) as early as possible.", "Hence, they should be able to recognize ambiguous inputs.", "Correspondingly, test input prioritizers should be assessed also on ambiguous inputs.", "On the contrary, when the goal is active learning, an ambiguous input should be given the least priority or excluded at all, as the aleatoric uncertainty causing its mis-classification cannot by definition be avoided using more training data.", "Thus, recognition of ambiguous test data is clearly of high importance when developing a test input prioritizer, be that to make sure that the ambiguous samples are given a high priority (during testing) or a low priority (during active learning)." ], [ "Other Uses of Ambiguous Data", "Ambiguous data is potentially useful in at least two other DLS testing applications: First, it allows to assess an identified decision frontier.", "Much recent literature works on the characterization of the decision boundary of a given model, i.e., its frontier of predictions between two classes in the input space [26], [27], [28], [29].", "While not every point at a models frontier is ambiguous, the inverse should be true: Any truly ambiguous input must be placed - by a sufficiently well trained classifier - at the models decision frontier.", "Thus, our generated datasets may be used as an oracle to evaluate a model's decision frontier, when we expect the model to exhibit low confidence if presented with ambiguous input.", "Second, informing the developers of a DLS about the root causes of uncertainties and mispredictions would greatly facilitate further improvement of the DLS, especially because DNNs are known for their low explainability [39], which makes debugging particularly challenging when dealing with them.", "To develop any DNN debugging technique that supports uncertainty disentanglement or uncertainty reasoning, the availability of ambiguous data in the test set is a strict prerequisite, because ambiguity (aleatoric uncertainty) is an important root cause of mis-behaviours." ], [ "Generating Ambiguous Test Data", "We designed AmbiGuess, a TIG targeting ambiguous data for image classification, based on the following design goals (DG): DG1 (labelled ambiguity): The generated data should be truly ambiguous and have correspondingly probabilistic labels, i.e., each generated data is associated with a probability distribution over the set of labels.", "Probabilistic labels are the most expressive description of true ambiguity and a single or multi-class label can be trivially derived from probabilistic labels.", "DG2 (model independence): To allow universal applicability of the generated dataset, our TIG should not depend on any specific DNN under test.", "DG3 (MP independence): The created dataset should allow fair comparison between different supervisors.", "Since supervisors are often based on MPs (e.g., uncertainty or confidence quantifiers), our TIG should not use any MP as part of the data generation process, to avoid circularity, which might give some supervisor an unfair advantage or disadvantage over another one.", "DG4 (diversity): The approach should be able to generate a high number of diverse images." 
], [ "Interpolation in Autoencoders", "Autoencoders (AEs) are a powerful tool, used in a range of TIG [27], [28], [18], [31].", "AEs follow an encoder-decoder architecture as shown in the blue part of fig:ae-and-raae: An encoder $E$ compresses an input into a smaller latent space (LS), and the decoder $D$ then attempts to reconstruct $x$ from the LS.", "The reconstruction loss, i.e., the difference between input $x$ and reconstruction $\\hat{x}$ is used as the loss to be minimized during training of the AE.", "On a trained AE, sampling arbitrary points in the latent space, and using the decoder to construct a corresponding image, allows for cheap image generation.", "This is shown in fig:lssampling, where the shown images are not part of the training data, being reconstructions based on randomly sampled points in the latent space.", "In the following Section, we leverage the generative capability of AEs, by proposing an architecture that can target ambiguous samples specifically and can label the generated data probabilistically (DG1).", "Figure: Image Sampling in the Latent Space" ], [ "Our TIG AmbiGuess consists of three components: (1) The Regularized LS Generation component, which trains a specifically designed AE to have a LS that facilitates the generation of truly ambiguous samples.", "(2) The Automatic Labelling component, which leverages the AE architecture to support probabilistic labelling of any images produced by the AE's decoder.", "(3) The Heterogenous Sampling component, which chooses samples in the LS in a way that leads to high diversity of the generated images." ], [ "Regularized Latent Space Generation", "Interpolation from one class to another in the latent space, i.e., the gradual perturbation of the reconstruction by moving from one cluster of latent space points to the another one, may produce ambiguous samples between those two classes (satisfying both DG2 and DG3).", "An example of such an interpolation is shown in fig:interpolation.", "Clearly, we want the two clusters to be far from each other, providing a wide range for sampling in between them, and no other cluster should be in proximity, as it would otherwise influence the interpolation.", "However, these two conditions are usually not met by traditional autoencoders used in other TIG approaches.", "For example, fig:lssampling shows the LS of a standard variational autoencoder (a popular architecture in TIG).", "Here, interpolating between classes 0 and 7 would, amongst others, cross the cluster representing class 4, and thus samples taken from the interpolation line would clearly not be ambiguous between 0 and 7, but would be reconstructed as a 4 (or any of the other clusters lying between them).", "We solve these requirements by using 2-class Regularized Adversarial AEs:" ], [ "2-class AE", "Instead of training one AE on all classes, we train multiple AEs, each one with the training data of just two classes.", "This has a range of advantages: First and foremost, it prevents interferences with third classes.", "Then, as the corresponding reduced (2-class) datasets have naturally a lower variability (feature density), 2-class autoencoders are expected to require fewer parameters and show faster convergence during training.", "Further, the fact that the number of combinations of classes ${c}\\atopwithdelims (){2}$ grows exponentially in the number of classes $c$ is of only limited practical relevance: In very large, real-world datasets, ambiguity is much more prevalent between some combinations of classes than 
between others, so not all pairwise combinations are equally interesting for the test generation task.", "For example, let us consider a self-driving car component which classifies vehicles on the road.", "While an image of a vehicle where one cannot say for sure whether it is a pick-up or an SUV (hence having true ambiguity) is clearly a realistic case, an image which is truly ambiguous between an SUV and a bicycle is hard to imagine.", "This phenomenon is well known in the literature, as it leads to heteroscedastic aleatoric uncertainty [40], i.e., aleatoric uncertainty which is more prevalent amongst some classes than amongst others.", "In such a case, using AmbiGuess, one would only construct the 2-class AEs for selected combinations where ambiguity is realistic.", "To guide the training process towards creating two disjoint clusters representing the two classes, with an adequate amount of space between them, we use a Regularized Adversarial Autoencoder (rAAE) [41].", "The architecture of an rAAE is shown in fig:ae-and-raae: Encoder $E$, Decoder $D$ and the LS are those of a standard AE.", "In addition, similar to other adversarial models [42], a discriminator $Disc$ is trained to distinguish labelled, encoded images $z$ from samples drawn from a predefined distribution $p(z|y)$.", "Specifically, we define $p(z|y)$ as a multi-modal (2 classes) multi-variate (number of dimensions in the latent space) Gaussian distribution, consisting of $p(z|c_1)$ and $p(z|c_2)$ for classes $c_1$ and $c_2$, respectively.", "Then, training an rAAE consists of three training steps, which are executed in every training epoch: First, similar to a plain AE, $E$ and $D$ are trained to reduce the reconstruction loss.", "Second, $Disc$ is trained to discriminate encoded images from samples drawn from $p(z|y)$, and third, $E$ is trained to fool $Disc$, i.e., $E$ is trained with the objective that the training set projected onto the latent space matches the distribution $p(z|y)$.", "This last property can be leveraged for ambiguous test generation: Given two classes $c_1$ and $c_2$, to clear up space between them in the latent space we can choose a $p(z|y)$ such that $p(z|c_1) > \epsilon $ on LS points disjoint from the LS points where $p(z|c_2) > \epsilon $, for some small $\epsilon $.", "For example, assume a two-dimensional latent space: Choosing $p(z|c_1) = \mathcal {N}([-3,0], [1,1])$ and $p(z|c_2) = \mathcal {N}([3,0], [1,1])$ will, after successful training, lead to a latent space where points representing $c_1$ are clustered around $(-3,0)$ and points representing $c_2$ around $(3,0)$, with few if any points between them, i.e., around $(0,0)$.", "This makes reconstructions around $(0,0)$ potentially highly ambiguous." ], [ "Probabilistic Labelling of Images", "The $Disc$ of a 2-class rAAE can be used to automatically label the images generated by its decoder: Given a latent space sample $z^*$ on a 2-class rAAE for classes $c_1$ and $c_2$, $Disc(z^*, c_1)$ approximates $p(z^*|c_1)$.", "Assuming $p(c_1) = p(c_2) = 0.5$, we have $p(c_1 | z^*) \propto p(z^*|c_1)$.", "Hence, up to normalization, $Disc(z^*, c_1)$ approximates the probability that $z^*$ belongs to class $c_1$.", "The same holds for $Disc(z^*, c_2)$.", "Normalizing these two values s.t. they add up to 1 thus provides a probability distribution over the classes (thus realizing DG1)."
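The two ingredients just described (the two-mode latent prior and the discriminator-based probabilistic labelling) can be sketched as follows. This is a minimal illustration assuming a 2-D latent space and a callable `disc(z, label)` that returns the discriminator's estimate of the class-conditional density; the prior parameters follow the example above, everything else is our own assumption rather than the released implementation.

```python
import numpy as np

# Two-mode latent prior p(z|y) of a 2-class rAAE: means (-3, 0) and (3, 0),
# unit variance, as in the 2-D example in the text.
PRIOR_MEANS = {"c1": np.array([-3.0, 0.0]), "c2": np.array([3.0, 0.0])}

def sample_prior(label: str, n: int) -> np.ndarray:
    """Draw n samples from p(z | label) = N(mean_label, I)."""
    return np.random.normal(loc=PRIOR_MEANS[label], scale=1.0, size=(n, 2))

def probabilistic_label(disc, z_star: np.ndarray) -> np.ndarray:
    """Normalize the two discriminator scores into a 2-class probability.

    With equal class priors p(c1) = p(c2) = 0.5, the normalized scores
    approximate p(c1 | z*) and p(c2 | z*).
    """
    scores = np.array([disc(z_star, "c1"), disc(z_star, "c2")])
    return scores / scores.sum()
```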
], [ "Selecting Diverse Samples in the LS", "Diversity in a generated dataset (see DG4) is in general hard to achieve when generating a dataset by sampling the LS, as the distance between two points in the LS does not directly translate to a corresponding difference between the generated images.", "While in some parts of the LS, which we denote as high density parts, moving a point slightly in the LS space can lead to clearly visible changes in the decoder's output, in low density parts, large junks of the LS lead to very similar reconstructed images.", "We solve this problem by proposing a novel way to measure the density in the LS, and sample more points from high density regions than from low density regions.", "First, we divide the relevant part of the LS (i.e., the region lying between the distributions $p(z|c_1)$ and $p(z|c_2)$ ) into a predefined (large) number of small, equally sized (multi-dimensional) grid cells, and identify a point at the center of every grid cell, denoted as grid cell anchor.", "Grid cells for which the anchor leads to insufficient ambiguity, i.e., the difference between the two class probabilities according to the probabilistic labelling process is higher than some threshold $\\delta _{max}$ are ignored.", "For the remaining anchors, we then calculate the decoder gradient $\\Delta _s$ for any grid cell $s$ at it's anchor point.", "To calculate the derivatives $\\partial \\hat{x}/\\partial z_i$ (for every latent space dimension $i$ ), we measure the difference between two generated images, $\\partial \\hat{x}$ , using Euclidean distance.", "The norm of the decoder gradients $||\\Delta _s||$ can thus be used as a measurement of the density at the corresponding anchor point (and, by extension, grid cell $s$ ).", "Hence, to achieve diverse sampling, we select a grid cell with a selection probability proportional to $||\\Delta _s||$ and than we choose a point within the selected grid cell uniformly at random.", "Figure: Interpolation between two classes in the latent space of a 2-class Regularized Adversarial Autoencoder.We built and released two ready-to-use ambiguous datasets for mnist [22], the most common dataset used in software testing literature [43], where images of handwritten numbers between 0 and 9 are to be classified, and its more challenging drop-in replacement Fashion mnist (fmnist) [23], consisting of images of 10 different types of fashion items." 
], [ "For each pair of classes, we trained 20 rAAEs to exploit the non-determinism of the training process to generate even more diversified outputs.", "To make sure we only use rAAEs where the distribution in the LS is as expected, we check if the discriminator cannot distinguish LS samples obtained from input images w.r.t.", "LS samples drawn from $p(z|y)$ : the accuracy on this task should be between $0.4$ and $0.6$ .", "At the same time, we check if the discriminator's accuracy in assigning a higher probability to the correct label of nominal samples is above 0.9.", "Otherwise it is discarded.", "Combined, we used the resulting rAAEs to draw 20,000 training and 10,000 test samples for both mnist and fmnist, using $\\delta _{max}=.25$ for test data and $\\delta _{max}=0.4$ for training data.", "We ignored generated samples where the difference between the two label's probabilities was above $\\delta _{max}$ .", "We chose different $\\delta _{max}$ , (the loose upper threshold of difference in the two class probabilities) for train and test set as our test set should be clearly and highly ambiguous, e.g.", "to allow studies that specifically target ambiguity (hence a low $\\delta _{max}$ ).", "In turn, the training set should more continuously integrate with the nominal data, hence we allow for less ambiguous data." ], [ "Evaluation of Generated Data", "The goal of this experimental evaluation is to assess both quantitatively and qualitatively whether AmbiGuess can indeed generate truly ambiguous data.", "We evaluate the ambiguity in our generated datasets first using a quantitative analysis where we analyze the outputs of a standard, well-trained classifier and second by visually inspecting and critically discussing samples created using AmbiGuess." 
], [ "Quantitative Evaluation of ", "We performed our experiments using four different DNN architectures as supervised models: A simple convolutional DNN [44], a similar but fully connected DNN, a model consisting of Resnet-50 [45] feature extraction and three fully connected layers for classification and lastly a Densenet-architecture [46].", "Results are averaged over the four architectures, individual results are reported in the reproduction package.", "We compare the predictions made for our ambiguous dataset to the predictions made on nominal, non-ambiguous data, using the following metrics: Top-1 / Regular Accuracy Percentage of correctly classified inputs.", "We expect this to be considerably lower for ambiguous than for nominal samples, as choosing the correct (i.e., higher probability) class, even using an optimal model, is affected by chance.", "Top-2 Accuracy Percentage of inputs for which the true label is among the two classes with the highest predicted probability.", "For samples which are truly ambiguous between two classes, we expect a well-trained model to achieve much better performance than on Top-1 accuracy (ideally, 100%).", "Top-Pair Accuracy Novel metric for data known to be ambiguous between two classes, measured as the percentage of inputs for which the two most likely predicted classes equal the two true classes between which the input is ambiguous.", "By definition Top-Pair accuracy is lower than or equal to Top-2 accuracy.", "It is an even stronger measure to show that the model is uncertain between exactly the two classes for which the true probabilistic label of the input shows nonzero probability.", "Entropy Average entropy in the Softmax prediction arrays.", "Used as a metric to measure aleatoric uncertainty (and thus ambiguity) in related work [18].", "We focus our evaluations on models trained using a mixed-ambiguous dataset consisting of both nominal and ambiguous data.", "This aims to make sure our ambiguous test sets are not OOD, and that thus the observed uncertainty primarily comes from the ambiguity in the data.", "For completeness, we also run the evaluation on a model trained using only nominal data.", "With this model we expect even lower values of regular (top-1) accuracy on ambiguous data, as these are out-of-distribution, not just ambiguous." 
], [ "Quantitative Results", "The results of our experiments for the model trained on a mixed-ambiguous training set are shown in tab:ambiguityres.", "The results for the models trained on a clean dataset can be found in the replication package; they are in line with those in tab:ambiguityres.", "We noticed that the use of the mixed-ambiguous training sets reduces the model accuracy on nominal data only by a negligible amount: On mnist, the corresponding accuracy is 96.98% (97.42% using a clean training set) and 88.43% on fmnist (88.37% using a clean training set).", "Thus, our ambiguous training datasets can be added to the nominal ones without hesitation.", "Results indicate that our datasets are indeed suitable to induce ambiguity into the prediction process, as the generated data is perceived as ambiguous by the DNN: Top-1 accuracies for both case studies is around 50%, but they increase almost to the levels of the nominal test set when considering Top-2 accuracies.", "Even Top-Pair accuracy, with values of 95.37% and 86.71% (on mnist and fmnist, respectively) are very high, showing that for the vast majority of test inputs, the two classes considered most likely by the well-trained DNN are exactly the classes between which we aimed to create ambiguity.", "Consistently, entropy is substantially higher for ambiguous data than for nominal data.", "Finally, we compared our ambigous mnist dataset against AmbiguousMNIST by Mukhoti et al.", "[18], the only publicly available dataset aiming to provide ambiguous data.", "ResultsAvailable in reproduction package.", "are clearly in favour of our dataset, which has a lower Top-1 accuracy (53.31% vs. 72.50%), indicating that our dataset is harder (more ambiguous) and has a higher Top-2 accuracy (97.99% vs. 90.93%) showing that our dataset contains more samples whose predicted class is amongst the 2 most likely labels.", "Top-Pair accuracy cannot be computed for AmbiguousMNIST, as 37% of its claimed “ambiguous” inputs have non-ambiguous labels.", "Most strikingly, the average softmax entropy for AmbiguousMNIST is 0.88 (ours: 1.22), even though AmbiguousMNIST is created by actively selecting inputs with a high softmax entropy." ], [ "Qualitative Discussion of ", "Some test samples generated using AmbiGuess, for both mnist and fmnist, are shown in fig:gen-amb.", "They have been chosen to highlight different strengths and weaknesses that emerged during our qualitative manual review of 300 randomly selected images in our generated test sets per case study." ], [ "mnist", "AmbiGuess (see Fig.", "REF a-e) is in general capable of combining features of different classes, where possible:  REF a and REF c can both be seen as an 8, but the 8-shape was combined with a 3-shape or 2-shape, respectively.", "For the combination between 0 and 7, shown in  REF b, only the upper (horizontal) part of the 7 was combined with the 0-shape, such that both a 7 and a 0 are clearly visible, making the class of the image ambiguous.", "Fig.", "REF d shows an edge case of an almost invalid image: Knowing that the image is supposed to be ambiguous between 1 and 4, one can identify both numbers.", "However, neither of them is clearly visible and the image may appear invalid to some humans.", "Overall, we considered only few samples generated by AmbiGuess for mnist as bad, i.e., as clearly unambiguous or invalid.", "An example of them is shown in Fig.", "REF e. 
Most humans would recognize this image as an unambiguous 0.", "In fact, there is a barely visible, tilted line within the 0, which apparently was sufficient to trick the rAAE's discriminator into also assigning a high probability to digit 1.", "Realistic true ambiguity is not possible between most classes of fmnist.", "Hence, we assessed how well AmbiGuess performs at creating data that would trigger an ambiguous classification by humans, even though such data might be impossible to experience in the real world.", "Examples are given in Fig. REF f-j.", "In most cases (e.g., Fig. REF f-h), the interpolations created by AmbiGuess show an overlay of two items of the two considered classes, with features combined only where possible.", "We can also observe that some non-common features are removed, giving more weight to common features.", "For instance, in Fig. REF i, the tip of the shoe and the lower angles of the bag are barely noticeable, such that the image indeed has high similarity with both shoes and bags.", "As a negative example, we observe that, in some cases, the overlay between the two considered classes is dominated by one of them (such as Fig. REF j, which would be seen as a non-ambiguous ankle boot by most humans).", "Summary (Evaluation of AmbiGuess datasets): AmbiGuess successfully generated highly ambiguous data sets, with high prediction entropy, top-1 accuracy close to 50% and top-2 accuracy close to 100%, outperforming the ambiguous dataset previously produced by Mukhoti et al. [18].", "Figure: Selected good and bad outputs of AmbiGuess, chosen to demonstrate strengths and weaknesses." ], [ "Testing of Supervisors", "We assess the capability of 16 supervisors (we are inclusive in our notion of supervisor: we also consider prioritization techniques that recognize unexpected inputs, as it is straightforward to adapt them to supervise a model) to discriminate nominal from high-uncertainty inputs for mnist and fmnist, each on 4 distinct test sets representing different root causes of mis-classifications, among which our ambiguous test set."
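In the experiments that follow, each supervisor is treated as a scoring function that assigns higher values to inputs it deems more likely to be misclassified, and its discrimination capability is summarized by the threshold-independent AUC-ROC introduced in the next subsection. A minimal sketch of that computation (the sign convention and the use of scikit-learn are our own assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def supervisor_auc(scores_nominal, scores_unexpected):
    """AUC-ROC of a supervisor: nominal inputs are labelled 0, high-uncertainty
    inputs (ambiguous, adversarial, corrupted or invalid) are labelled 1."""
    y_true = np.concatenate([np.zeros(len(scores_nominal)),
                             np.ones(len(scores_unexpected))])
    y_score = np.concatenate([scores_nominal, scores_unexpected])
    return roc_auc_score(y_true, y_score)
```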
], [ "Experimental Setup", "We performed our experiments using four different DNN architectures (explained in  sec:quantitativeevaluation) as supervised models.", "Our training sets consist of both nominal and ambiguous data, to ensure that the ambiguous test data used later for testing is in-distribution.", "We then measure the capability of different supervisors to discriminate different types of high-uncertainty inputs from nominal data.", "We measure this using the area under the receiver operating characteristic curve (AUC-ROC), a standard, threshold-independent metric.", "We assess the supervisors using the following test sets: Invalid test sets, where we use mnist images as inputs to models trained for fashion-mnist and vice-versa, corrupted test sets available from related work (mnist-c [19] and fmnist-c [32]), adversarial data, created using 4 different attacks [47], [48], [49], [35] and lastly the ambiguous test sets generated by AmbiGuess.", "Adversarial test sets were not used with ensembles, as an ensemble does not rely on the (single) model targeted by the considered adversarial test generation techniques.", "To account for random influences during training, such as initial model weights, we ran the experiments for each DNN architecture 5 times.", "Results reported are the means of the observed results.", "Standard deviations are available in the replication package and indicate that 5 repetitions are enough to ensure sufficient stability of the results." ], [ "Tested Supervisors", "Due to limited available space, our description of the tested supervisors is brief and we refer to the corresponding papers for a detailed presentation.", "Our terminology, implementation and configuration of the first three supervisors described below, i.e., Softmax, MC-Dropout and Ensembles, are based on the material released with a recent empirical study [5].", "Plain Softmax Based solely on the softmax output array of a DNN prediction, these approaches provide very fast and easy to compute supervision: Max.", "Softmax, highest softmax value as confidence [50], Prediction-Confidence Score (PCS), the difference between the two highest softmax values [37], DeepGini, the complement of the softmax vector squared norm [51], and finally the entropy of the values in the predicted softmax probabilities [52].", "Monte-Carlo Dropout (MC-Dropout) [53], [54] Enabling the randomness of dropout layers at prediction time, and sampling multiple randomized samples allows the inference of an output distribution, hence of an uncertainty quantification.", "We use the quantifiers Variation Ratio (VR), Mutual Information (MI), Predictive Entropy (PI), or simply the highest value of the mean of the predicted softmax likelihoods (Mean-Softmax, MS).", "Ensembles [55] Similar to MC-Dropout, uncertainty is inferred from samples, but randomness is induced by training multiple models (under random influences such as initial weights) and collecting predictions from all of them.", "Here, we use the quantifiers MI, PI and MS, on an Ensemble consisting of 20 models.", "Dissector [56] On a trained model, for each layer, a submodel (more specifically, a perceptron) is trained, predicting the label directly from the activations of the given layer.", "From these outputs, the support value for each of the submodels for the prediction made by the final layer is calculated, and the overall prediction validity value is calculated as a weighted average of the per-layer support values.", "Autoencoders AEs can be used as OOD detectors: 
If the reconstruction error of a well-trained AE for a given input is high, it is likely not to be sufficiently represented in the training data.", "Stocco et.", "al., [2] proposed to use such OOD detection technique as DNN supervisor.", "Based on their findings, we use a variational autoencoder [36].", "Surprise Adequacy This approach detects inputs that are surprising, i.e., for which the observed DNN activation pattern is OOD w.r.t.", "the ones observed on the training data.", "We consider three techniques to quantify surprise adequacy: LSA [15], where surprise is calculated based on a kernel-density estimator fitted on the training activations of the predicted class, MDSA [57], where surprise is calculated based on the Mahalanobis distance between the tested input's activations and the training activations of the predicted class, and DSA [15] which is calculated as the ratio between two Euclidean distances: the distance between the tested input and the closest training set activation in the predicted class, and the distance between the latter activation and the closest training set activation from another class.", "As DSA is computationally intensive, growing linearly in the number of training samples, we follow a recent proposal to consider only 30% of the training data [58].", "Our comparison includes most of the popular supervisors used in recent software engineering literature.", "Some of the excluded techniques do not provide a single, continuous uncertainty score and no AUC-ROC can thus be calculated for them [59], [18], [60], or they are not applicable to the image classification domain [8].", "With its 16 tested supervisors, two case studies and four different data-centric root causes of DNN faults, our study is – to the best of our knowledge – by far the most extensive of its kind." ], [ "Results", "Results are presented in tab:mpbenchmarkresults." 
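To make the plain-softmax supervisors described above concrete, the sketch below computes their scores from a batch of softmax outputs. Returning uncertainty rather than confidence is our own sign convention, chosen so that all scores can be fed to the same AUC-ROC computation; the formulas follow the descriptions given above.

```python
import numpy as np

def max_softmax(probs):
    """Uncertainty as the complement of the highest softmax value [50]."""
    return 1.0 - np.max(probs, axis=1)

def pcs(probs):
    """Prediction-Confidence Score [37]: complement of the gap between the two
    highest softmax values; a small gap means high uncertainty."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return 1.0 - (top2[:, 1] - top2[:, 0])

def deep_gini(probs):
    """DeepGini [51]: complement of the squared norm of the softmax vector."""
    return 1.0 - np.sum(probs ** 2, axis=1)

def softmax_entropy(probs, eps=1e-12):
    """Entropy of the predicted softmax probabilities [52]."""
    return -np.sum(probs * np.log(probs + eps), axis=1)
```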
], [ "Ambiguous Data", "We can observe that the predicted softmax likelihoods capture aleatoric uncertainty pretty well.", "Thus, not only do Max.", "Softmax, DeepGini, PCS, Softmax Entropy perform well at discriminating ambiguous from nominal data, but also supervisors that rely on the softmax predictions indirectly, such as Dissector, or the MS, MI and PE quantifiers on samples collected using MC-Dropout or DeepEnsembles.", "DSA, LSA, MDSA and Autoencoders are not capable of detecting ambiguity, and barely any of their AUC-ROCs exceeds the 0.5 value expected from a random classifier on a balanced dataset.", "MDSA, LSA and DSA show particularly low values, which confirms that they do only one job – detecting OOD, not ambiguous data – but they do it well (in our experimental design, ambiguous data is in-distribution by construction, while adversarial, corrupted and invalid data is OOD).", "The surprise adequacy based supervisors and the autoencoder reliably detected the unknown patterns in the input, discriminating adversarial from nominal data.", "Softmax-based supervisors showed good results on mnist, but less so on fmnist.", "Cleary, the adversarial sample detection capabilities of Softmax-based supervisors depend critically on the choice of adversarial data: With minimal perturbations, just strong enough to trigger a misclassification, softmax-based metrics can easily detect them, as the maximum of the predicted softmax likelihood is artificially reduced by the adversarial technique being used.", "However, one could apply stronger attacks, increasing the predicted likelihood of the wrong class close to 100%, which would make Softmax-based supervisors ineffective.", "Specific attacks against the other supervisors, i.e., the OOD detection based approaches (surprise adequacies and autoencoders) and Dissector, might also be possible in theory, but they are clearly much harder.", "Most approaches perform comparably well, with the exception of DSA on fmnist, which shows superior performance, with an average AUC-ROC value more than $.1$ higher than most other supervisors.", "DNNs are known to sometimes map OOD data points close to feature representations of in-distribution points (known as feature collapse) [61], thus leading to softmax output distributions similar to the ones of in-distribution images.", "This impacts negatively the OOD detection capability of Softmax-based supervisors (such as Max.", "Softmax, MC-Dropout, Ensembles or Dissector), especially in cases with a feature-rich training set, such as fmnist.", "The discussion of the supervisors' detection of invalid data is most interesting when comparing the two case studies.", "For mnist (a low-feature dataset), fmnist (a high-feature dataset) was used as invalid dataset, and vice-versa.", "It appears that the first case is much easier to detect for most supervisors: With the exception of LSA, all approaches reliably discriminated invalid from nominal data with mean AUC-ROCs close to $1.0$ .", "For fmnist, however, where invalid inputs are characterized by lower feature complexity than the training set, the problem becomes apparently much harder.", "Similarly to corrupted data, most likely due to feature collapse, the performance of supervisors relying on softmax likelihoods suffers dramatically.", "Instead, surprise adequacies (DSA, LSA and MDSA) showed exceptional performance, almost perfectly discriminating the mnist-outliers from the nominal fmnist samples.", "For what concerns autoencoders, reconstructions of images with 
higher feature complexity than the training set (i.e., fmnist reconstructions by an mnist-AE) consistently lead to high reconstruction errors and thus provide very reliable outlier detection, with a mean AUC-ROC of $1.0$.", "However, an autoencoder trained on a high-feature training set would also learn to reconstruct low-feature inputs accurately.", "Hence, discrimination between mnist and fmnist using an AE trained on fmnist led to performance similar to that of a random classifier.", "Related literature suggests that no single supervisor performs well under all conditions [14], [5], [59], and some works even suggest that certain supervisors are not capable of detecting anything but aleatoric uncertainty (e.g., Softmax Entropy [18] or MC-Dropout [62]).", "Our evaluation, to the best of our knowledge, is the first one that compares supervisors on four different uncertainty-inducing test sets.", "We found that softmax-based approaches (including MC-Dropout, Ensembles and Dissector) are effective on all four types of test sets, i.e., their detection capabilities reliably exceed the performance expected from a random classifier.", "They have their primary strength in the detection of ambiguous data, where the other, OOD-focused techniques are naturally ineffective, but they are actually an inferior choice when targeting epistemic uncertainty.", "To detect corrupted inputs, DSA exhibited the best performance, but due to its high computational complexity it may not be suitable for all domains.", "The much faster MDSA may offer a good trade-off between detection capability and runtime complexity.", "Regarding invalid inputs, on low-feature problems, where invalid samples are expected to be more complex than nominal inputs, AEs provide a fast approach with the additional advantage that they do not rely on the supervised model directly, but only on its training set, which may facilitate maintenance and continuous development.", "For problems where the nominal inputs are rich in diverse features, an AE is not a valid option.", "However, our results again suggest MDSA as a reliable and fast alternative supervisor.", "As regards inputs created by adversarial attacks, softmax-based approaches are easily deceived and are hence of limited practical utility.", "On the other hand, OOD detectors, such as surprise adequacy metrics and AEs, as well as Dissector, can provide more reliable detection performance against standard adversarial attacks.", "Of course, these supervisors are not immune to particularly malicious attackers that target them specifically.", "Here, the reader can refer to the wide range of research discussing defenses against adversarial attacks (see the survey by Akhtar et al.
[63]).", "We found that our results are barely sensitive to random influences due to training: Out of the 488 reported mean AUC-ROCs (4 architectures, 8 test sets, 16 MPs, averaged over 5 runs), most showed a negligible standard deviation: The average observed standard deviation was 0.015, the highest one was 0.124, only 114 were larger than 0.02, and only 29 were larger than 0.05, all of which correspond to results with a low mean AUC-ROC (<0.9).", "The latter differences do not influence the overall observed tendencies.", "Summary (Comparison of Misclassification Detectors): We assessed 16 supervisors on their capability to discriminate between nominal inputs and inputs which are ambiguous, adversarial, corrupted or invalid.", "For every category, we identified supervisors which perform particularly well, but we also found that, to target all types of high-uncertainty inputs, developers of DLS will have to rely on multiple, diverse supervisors." ], [ "Threats to validity", "External validity: We conducted our study on misclassification prediction considering two standard case studies, MNIST and Fashion-MNIST.", "While our observations may not generalize to more challenging, high-uncertainty datasets, the choice of two simple datasets with easily understandable features allowed us to achieve a clear and sharp separation of the reasons for failures, which may not be the case when dealing with more complex datasets.", "On the other hand, we recognize the importance of replicating and extending this study considering additional datasets.", "To support such replications, we provide all our experimental material as open source/data.", "Internal validity: The supervisors being compared include hyper-parameters that require some tuning.", "Whenever possible, we reused the original values and followed the guidelines proposed by the authors of the considered approaches.", "We also conducted a few preliminary experiments to validate and fine-tune such hyper-parameters.", "However, the configurations used in our experiment could be suboptimal for some supervisors.", "Conclusion validity: We repeated our experiments 5 times to mitigate the non-determinism associated with the DNN training process.", "While this might look like a low number of repetitions, we checked the standard deviation across such repetitions and found that it was negligible or small in all cases.", "To account for the influence of the DNN architecture, we performed our experiments on 4 completely different DNN architectures, obtaining overall consistent findings."
], [ "Conclusion", "This paper brings two major advances to the field of DNN supervision testing: First, we proposed AmbiGuess, a novel technique to create labeled ambiguous images in a way that is independent of the tested model and of its supervisor, and we generated pre-compiled ambiguous datasets for two of the most popular case studies in DNN testing research, MNIST and Fashion-MNIST.", "Using four different metrics, we were able to verify the validity and ambiguity of our datasets, and we further investigated how AmbiGuess achieves ambiguity based on a qualitative analysis.", "On the four considered quantitative indicators, AmbiGuess clearly outperformed AmbiguousMNIST, the only similar-purposed dataset in the literature.", "We assessed the capabilities of 16 DNN supervisors at discriminating nominal from ambiguous, adversarial, corrupted and invalid inputs.", "To the best of our knowledge, this is not only the largest empirical case study comparing DNN supervisors in the literature, it is also the first one to do so by specifically targeting four distinct and clearly separable data-centric root causes of DNN faults.", "Our results show that softmax-based approaches (including MC-Dropout and Ensembles) work very well at detecting ambiguity, but have clear disadvantages when it comes to adversarial, corrupted, and invalid inputs.", "OOD detection techniques, such as surprise adequacy or autoencoder-based supervisors, often provide a better detection performance with the targeted types of high-uncertainty inputs.", "However, these approaches are incapable of detecting in-distribution ambiguous inputs.", "DNN developers can use the ambiguous datasets created by AmbiGuess to assess novel DNN supervisors on their capability to detect aleatoric uncertainty.", "They can also use our tool to evaluate test prioritization approaches on their capability to prioritize ambiguous inputs (depending on the developers' objectives, high priority is desired to identify inputs that are likely to be misclassified during testing; low priority is desired to exclude inputs with unclear ground truth from the training set).", "As future work, we plan to investigate the concept of true ambiguity for regression problems, i.e., to look at inputs to regression problems with high variability in their ground truth.", "This is relevant in domains, such as self-driving cars and robotics, where the DNN output is a continuous signal for an actuator.", "This problem is particularly appealing as all the approaches in our study that worked well at detecting ambiguity are based on softmax and thus are not applicable to regression problems." ], [ "Data Availability", "Upon acceptance of this paper, we will release the generated datasets using a range of common formats and through different platforms (such as Huggingface-datasets [64], Github and Zenodo).", "We will license our artifacts as permissively as the underlying datasets allow: Our code will be MIT, our ambigous-mnist dataset will be CC-BY-SA and our ambiguous-fashion-mnist dataset will be CC-BY licensed.", "While this paper is under review, the data will be made available upon request." ] ]
2207.10495
[ [ "Kinetic Field Theory: Higher-Order Perturbation Theory" ], [ "Abstract We give a detailed exposition of the formalism of Kinetic Field Theory (KFT) with emphasis on the perturbative determination of observables.", "KFT is a statistical non-equilibrium classical field theory based on the path integral formulation of classical mechanics, employing the powerful techniques developed in the context of quantum field theory to describe classical systems.", "Unlike previous work on KFT, we perform the integration over the probability distribution of initial conditions in the very last step.", "This significantly improves the clarity of the perturbative treatment and allows for physical interpretation of intermediate results.", "We give an introduction to the general framework, but focus on the application to interacting $N$-body systems.", "Specializing the results to cosmic structure formation, we reproduce the linear growth of the cosmic density fluctuation power spectrum on all scales from microscopic, Newtonian particle dynamics alone." ], [ "arrows.meta,matrix decorations.pathmorphing shapes prop/.style=draw,ultra thick, vertex/.style=circle,thick,draw=black,fill=black!20,minimum size=18pt,inner sep=3pt, empty/.style=circle,thick,draw=black,minimum size=18pt,inner sep=3pt, outtraj/.style=rectangle,thick,minimum size=18pt,inner sep=3pt, outdens/.style=rectangle,thick,draw=black,fill=black!20,minimum size=18pt,inner sep=3pt, outdensempty/.style=rectangle,thick,draw=black,minimum size=18pt,inner sep=3pt, subdiagr/.style=star,star points=7,star point ratio=0.8,thick,draw=black,fill=black!20,minimum size=18pt,inner sep=3pt, subdiagrempty/.style=star,star points=7,star point ratio=0.8,thick,draw=black,minimum size=18pt,inner sep=3pt width=7cm,compat=1.8 myformat .", "[] myformatshort th [], myThm .7em .7em .", ".5em myThm2 .7em .7em .", ".5em" ] ]
2207.10504
[ [ "Human Trajectory Prediction via Neural Social Physics" ], [ "Abstract Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.", "The former include rule-based, geometric or optimization-based models, and the latter are mainly comprised of deep learning approaches.", "In this paper, we propose a new method combining both methodologies based on a new Neural Differential Equation model.", "Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.", "The explicit physics model serves as a strong inductive bias in modeling pedestrian behaviors, while the rest of the network provides a strong data-fitting capability in terms of system parameter estimation and dynamics stochasticity modeling.", "We compare NSP with 15 recent deep learning methods on 6 datasets and improve the state-of-the-art performance by 5.56%-70%.", "Besides, we show that NSP has better generalizability in predicting plausible trajectories in drastically different scenarios where the density is 2-5 times as high as the testing data.", "Finally, we show that the physics model in NSP can provide plausible explanations for pedestrian behaviors, as opposed to black-box deep learning.", "Code is available: https://github.com/realcrane/Human-Trajectory-Prediction-via-Neural-Social-Physics." ], [ "Introduction", "Understanding human trajectories is key to many research areas such as physics, computer science and social sciences.", "Being able to learn behaviors with non-invasive sensors is important to analyzing the natural behaviors of humans.", "This problem has been widely studied in computer graphics, computer vision and machine learning [5].", "Existing approaches generally fall into model-based and model-free methods.", "Early model-based methods tended to be empirical or rule-based methods derived via the first-principles approach: summarizing observations into rules and deterministic systems based on fundamental assumptions on human motion.", "In such a perspective, social interactions can be modelled as forces in a particle system [20] or an optimization problem [8], and individuals can be influenced by affective states [36].", "Later, data-driven model-based methods were introduced, in which the model behavior is still dominated by the assumptions on the dynamics, e.g.", "a linear dynamical system [19], but retains sufficient flexibility so that the model can be adjusted to fit observations.", "More recently, model-free methods based on deep learning have also been explored, and these demonstrate surprising trajectory prediction capability [1], [18], [49], [9], [29], [33], [14], [31], [38], [50], [32], [37], [56], [77], [16], [71].", "Empirical or rule-based methods possess good explainability because they are formed as explicit geometric optimization or ordinary/partial differentiable equations where specific terms correspond to certain behaviors.", "Therefore, they have been used for not only prediction but also analysis and simulation [59].", "However, they are less effective in data fitting with respect to noise and are therefore unable to predict accurately, even when the model is calibrated on data [70].", "Data-driven model-based methods (e.g., statistical machine learning) improve the ability of data fitting but are restricted by the specific statistical models employed which have limited capacities to learn from large amounts of data [19].", "Finally, deep 
learning approaches excel at data fitting.", "They can learn from large datasets, but lack explainability and have therefore been mainly used for prediction rather than analysis and simulation [1], [38], [77].", "We explore a model that can explain pedestrian behaviors and retain good data-fitting capabilities by combining model-based and model-free approaches.", "Inspired by recent research in neural differential equations [13], [44], [75], [78], [25], we propose a new crowd neural differential equation model consisting of two parts.", "The first is a deterministic model formulated as a differential equation.", "Although this equation can be arbitrary, we use a dynamical system inspired by the social force model [20].", "In contrast to the social force model and its variants, the key parameters of our deterministic model are learnable from data instead of being hand-picked and fixed.", "The second part of our model captures complex uncertainty in the motion dynamics and observations via a Variational Autoencoder.", "Overall, the whole model is a deep neural network with an embedded explicit model; we call this model Neural Social Physics (NSP).", "We demonstrate that our NSP model outperforms the state-of-the-art methods [18], [49], [9], [29], [33], [14], [31], [38], [50], [32], [37], [56], [77], [16], [71] in standard trajectory prediction tasks across various benchmark datasets [46], [43], [28] and metrics.", "In addition, we show that NSP can generalize to unseen scenarios with higher densities and still predict plausible motions with fewer collisions between people, as opposed to pure black-box deep learning approaches.", "Finally, from the explicit model in NSP, we demonstrate that our method can provide plausible explanations for motions.", "Formally, (1) we propose a new neural differential equation model for trajectory prediction and analysis; (2) we propose a new mechanism to combine explicit and deterministic models with deep neural networks for crowd modeling; (3) we demonstrate the advantages of the NSP model in several aspects: prediction accuracy, generalization and explaining behaviors.", "Statistical machine learning has been used for trajectory analysis in computer vision [42], [15], [67], [26], [65], [11].", "These works aim to learn individual motion dynamics [76], structured latent patterns in data [65], [64], anomalies [12], [11], etc.", "These methods provide a certain level of explainability, but are limited in model capacity for learning from large amounts of data.", "Compared with these methods, our model leverages the ability of deep neural networks to handle high-dimensional and large data.", "More recently, deep learning has been exploited for trajectory prediction [54].", "Recurrent neural networks (RNNs) [1], [4], [60] were explored first due to their ability to learn from temporal data.", "Subsequently, other deep learning techniques and neural network architectures were introduced into trajectory prediction, such as Generative Adversarial Networks (GANs) [18], conditional variational autoencoders (CVAEs) [22], [38], [77] and Convolutional Neural Networks (CNNs) [39].", "In order to capture the spatial features of trajectories and the interactions between pedestrians accurately, graph neural networks (GNNs) have also been used to reason about and predict future trajectories [39], [53].", "Compared with existing deep learning methods, our method achieves better prediction accuracy.", "Further, our method has an explicit model which can explain pedestrian
motions and lead to better generalizability.", "Very recently, attempts have been made at combining physics with deep learning for trajectory prediction [3], [27], [21].", "However, their methods are tied to specific physics models and are deterministic, while NSP is a general framework that aims to accommodate arbitrary physics models and is designed to be intrinsically stochastic to capture motion randomness." ], [ "Pedestrian and Crowd Simulation", "Crowd simulation aims to generate trajectories given the initial position and destination of each agent [59], which essentially amounts to predicting individual motions.", "Empirical modelling and data-driven methods have been the two foundations of simulation [40], [35].", "Early research was dominated by empirical modelling or rule-based methods, where crowd motions are abstracted into mathematical equations and deterministic systems, such as flows [40], particle systems [20], and velocity and geometric optimization [8], [52].", "Meanwhile, data-driven methods using statistical machine learning have also been employed, e.g., using first-person vision to guide steering behaviors [35] or using trajectories to extract features to describe motions [23], [68].", "While the key parameters in these approaches are either fixed or learned from small datasets, our NSP model is more general.", "It can take existing deterministic systems as a component and provides better data-fitting capacity via deep neural networks.", "Compared with the afore-mentioned model-based methods, our NSP can be regarded as using deep learning for model calibration.", "Our model possesses the ability to learn from large amounts of data, which is difficult for traditional parameter estimation methods based on optimization or sampling [62].", "Meanwhile, the formulation of our NSP is more general, flexible and data-driven than traditional model-based methods." ], [ "Deep Learning and Differential Equations", "Solving differential equations (DEs) with the assistance of deep learning has recently sparked strong interest [13], [75], [78], [24].", "Based on the depth of involvement of deep learning, the research can be categorized into deep-learning-assisted DE solving, differentiable physics, neural differential equations and physics-informed neural networks (PINNs).", "Deep-learning-assisted DE solving involves accelerating various steps during the DE solve, such as Finite Element mesh generation [74], [73].", "The deeper involvement of neural networks is shown in differentiable physics and neural differential equations, where the former aims to make the whole simulation process differentiable [17], [30], [69], and the latter focuses on parameterizing parts of the equations with neural networks [51].", "PINNs aim to bypass the DE solve and use NNs for prediction [45], [10].", "Highly inspired by the research above, we propose a new neural differential equation model in a new application domain for human trajectory prediction."
], [ "Neural Social Physics (NSP)", "At any time $t$ , the position $p^t_i$ of the $i$ th pedestrian can be observed in a crowd.", "Then a trajectory can be represented as a function of time $q(t)$ , where we have discrete observations in time up to $T$ , $\\lbrace q^0, q^1, \\cdots , q^T\\rbrace $ .", "An observation or state of a person at time $t$ is represented by $q^t = [p^t, \\dot{p}^t]^\\mathbf {T}$ where $p$ , $\\dot{p} \\in \\mathbb {R}^2$ are the position and velocity.", "For most datasets, $p$ is given and $\\dot{p}$ can be estimated via finite difference.", "Given an observation $q_n^t$ of the $n$ th person, we consider her neighborhood set $\\Omega _n^t$ containing other nearby pedestrians $\\lbrace q_j^t: j \\in \\Omega _n^t\\rbrace $ .", "The neighborhood is also a function of time $\\Omega (t)$ .", "Then, in NSP the dynamics of a person (agent) in a crowd can be formulated as: $\\frac{dq}{dt}(t) = f_{\\theta , \\phi }(t, q(t), \\Omega (t), q^T, E) + \\alpha _{\\phi }(t, q^{t:t-M})$ where $\\theta $ and $\\phi $ are learnable parameters, $E$ represents the environment.", "$\\theta $ contains interpretable parameters explained later and $\\phi $ contains uninterpretable parameters (e.g.", "neural network weights).", "The agent dynamics are governed by $f$ which depends on time $t$ , its current state $q(t)$ , its time-varying neighborhood $\\Omega (t)$ and the environment $E$ .", "Similar to existing work, we assume there is dynamics stochasticity in NSP.", "But unlike them which assume simple forms (e.g.", "white noise) [19], we model time-varying stochasticity in a more general form: as a function of time, the current state and the brief history of the agent, $\\alpha _{\\phi }(t, q^{t:t-M})$ .", "Then we have the following equation in NSP: $q^T = q^0 + \\int _{t=0}^T {f_{\\theta , \\phi }(t, q(t), \\Omega (t), q^T, E) + \\alpha _{\\phi }(t, q^{t:t-M})} dt$ given the initial and final condition $q(0) = q^0 \\text{ and } q(T) = q^T$ .", "Physics models have been widely used to model crowd dynamics [40], [20].", "To leverage their interpretability, we model the dynamics as a physical system in NSP.", "Assuming the second-order differentiability of $p(t)$ , NSP expands $q(t)$ via Taylor's series for a first-order approximation: $q(t+\\triangle t) \\approx q(t) + \\dot{q}(t) \\triangle t =\\begin{pmatrix}p(t)\\\\\\dot{p}(t)\\end{pmatrix}+ \\triangle t\\begin{pmatrix}\\dot{p}(t) + \\alpha (t, q^{t:t-M})\\\\\\ddot{p}(t)\\end{pmatrix}$ where $\\triangle t$ is the time step.", "The stochasticity $\\alpha (t, q^{t:t-M})$ is assumed to only influence $\\dot{p}$ .", "Equation REF is general and any dynamical system with second-order differentiability can be employed here.", "Below, we realize NSP by combining a type of physics models-social force models (SFM) [20] and neural networks.", "We refer to our model NSP-SFM." 
], [ "NSP-SFM", "We design the NSP-SFM by assuming each person acts as a particle in a particle system and each particle is governed by Newton's second law of motion.", "$\\ddot{p}(t)$ is designed to be dependent on three forces: goal attraction $F_{goal}$ , inter-agent repulsion $F_{col}$ and environment repulsion $F_{env}$ .", "$\\ddot{p}(t) = F_{goal}(t, q^T, q^t) + F_{col} (t, q^t, \\Omega ^t) + F_{env}(t, q^t, E)$ where $E$ is the environment and explained later.", "However, unlike [20], the three forces are partially realized by neural networks, turning Equation REF into a neural differential equation.", "The overall model is shown in Figure REF .", "Note that, in Equation REF , we assume $p^T$ is given, although it is not available during prediction.", "Therefore, we employ a Goal Sampling Network (GSN) to sample $p^T$ .", "During testing, we either first sample a $p^T$ for prediction or require the user to input $p^T$ .", "The GSN is similar to a part of Y-net [37] and pre-trained, and detailed in the supplementary materials.", "Given the current state and the goal, we compute $F_{goal}$ using the Goal-Network $NN_{\\phi _1}$ in Eq.", "REF (Fig.", "REF Left), $F_{col}$ using the Collision-Network $NN_{\\phi _2}$ in Eq.", "REF (Fig.", "REF Right) and $F_{env}$ using Eq.", "REF directly.", "The Goal-Network encodes $q^t$ then feeds it into a Long Short Term Memory (LSTM) network to capture dynamics.", "After a linear transformation, the LSTM output is concatenated with the embedded $p^T$ .", "Finally, $\\tau $ is computed by an MLP (multi-layer perceptron).", "In Collision-Network, the architecture is similar.", "Every agent $q_j^t$ in the neighborhood $\\Omega _n^t$ is encoded and concatenated with the encoded agent $q_n^t$ .", "Then $k_{nj}$ is computed.", "$\\tau $ and $k_{nj}$ are interpretable key parameters of $F_{goal}$ and $F_{col}$ .", "The corresponding parameter in $F_{env}$ is $k_{env}$ .", "Finally, we show our network for $\\alpha $ for stochasticity modeling in Figure REF .", "Figure: Left: Goal-Network and Right: Collision-Network.", "The numbers in square brackets show both the number and dimension of the layers in each component.Figure: The architecture of the CVAE, where p ¯ t+1 \\bar{p}^{t+1} is the intermediate prediction out of our force model and α t+1 =p t+1 -p ¯ t+1 \\alpha ^{t+1} = p^{t+1} - \\bar{p}^{t+1}.", "Encoder E bias E_{bias}, E past E_{past}, E latent E_{latent} and decoder D latent D_{latent} are all MLP networks with dimensions indicated in the square brackets.", "More Details of the network can be found in the supplementary materialGoal attraction.", "Pedestrians are always drawn to destinations, which can be abstracted into a goal attraction force.", "At time $t$ , a pedestrian has a desired walking direction $e^t$ determined by the goal $p^T$ and the current position $p^t$ : $e^t = \\frac{p^T - p^t}{\\Vert p^T - p^t \\Vert }$ .", "If there are no other forces, she will change her current velocity to the desired velocity $v_{des}^t = v_0^t e^t$ where $v_0^t$ and $e^t$ are the magnitude and direction respectively.", "Instead of using a fixed $v_0$ as in [20], we update $v_0^t$ at every $t$ to mimic the change of the desired speed as the pedestrian approaches the destination: $v_0^t = \\frac{\\Vert p^T - p^t \\Vert }{(T-t)\\triangle t}$ .", "Therefore, the desired velocity is defined as $v_{des}^t = v_0^t e^t = \\frac{p^T - p^t}{(T-t)\\triangle t}$ .", "The goal attraction force $F_{goal}$ represents the tendency of a pedestrian changing her 
current velocity $\\dot{p}^t$ to the desired velocity $v_{des}^t$ within time $\\tau $ : $F_{goal} = \\frac{1}{\\tau }(v_{des}^t - \\dot{p}^t) \\text{ where } \\tau = NN_{\\phi _1}(q^t, p^T)$ where $\\tau $ is learned through a neural network (NN) parameterized by $\\phi _1$ .", "Figure: (a) The neighborhood Ω(t)\\Omega (t) of a person is a sector within a circle (centered at this person with radius r col r_{col}) spanned by an angle ω\\omega from the current velocity vector (green arrow).", "(b) Each person has a view field (orange box) within which the environment repels a pedestrian.", "The view field is a square with dimension r env r_{env} based on the current velocity vector (green arrow).", "The current velocity is along the diagonal of the orange box.", "(c) The environment is segmented into walkable (red) and unwalkable (blue) areas.", "Within the view field of the pedestrian in (b), the yellow pixels are the environment pixels that repel the pedestrian.", "ω\\omega , r col r_{col} and r env r_{env} are hyperparameters.Inter-agent Repulsion.", "Pedestrians often steer to avoid potential collisions and maintain personal space when other people are in the immediate neighborhood (Fig.", "REF a).", "Given an agent $j$ in $\\Omega _n^t$ of agent $n$ and her state $q_j^t$ , agent $j$ repels agent $n$ based on $r_{nj} = p_n^t - p_j^t$ : $F_{col}^{nj} = -\\nabla _{r_{nj}}\\mathcal {U}_{nj}\\left(\\Vert r_{nj} \\Vert \\right), \\text{ where }\\mathcal {U}_{nj}\\left(\\Vert r_{nj} \\Vert \\right) = r_{col} k_{nj}e^{-\\Vert r_{nj}\\Vert /r_{col}}$ where we employ a repulsive potential field $\\mathcal {U}_{nj}\\left(\\Vert r_{nj}\\Vert \\right)$ modeled by a monotonic decreasing function of $\\Vert r_{nj}\\Vert $ .", "Then the repulsive force caused by agent $j \\in \\Omega _n^t$ to agent $n$ is the gradient of $\\mathcal {U}_{nj}$ .", "Previously, simple functions such as symmetric elliptic fields were employed for $\\mathcal {U}_{nj}$  [20].", "Here, we model $\\mathcal {U}_{nj}$ as a time-varying field parameterized by $k_{nj}$ which is learned via a neural network.", "Instead of directly learning $k_{nj}$ , we set $k_{nj} = a*sigmoid(NN_{\\phi _2}(q_n^t, q^t_{j, j\\in \\Omega _n^t})) + b$ .", "$a$ and $b$ are hyperparameters to ensure that the learned $k_{nj}$ value is valid.", "If we have $m$ agents at time t in $\\Omega _n^t$ , the net repulsive force on agent $n$ is: $F_{col}^n = \\sum _{j=0}^m F_{col}^{nj}$ .", "Environment Repulsion.", "Besides collisions with others, people also avoid nearby obstacles.", "We model the repulsion from the environment as: $F_{env} = \\frac{k_{env}}{\\Vert p_n^t - p_{obs}\\Vert } (\\frac{p_n^t - p_{obs}}{\\Vert p_n^t - p_{obs}\\Vert })$ where $p_{obs}$ is the position of the obstacle and $k_{env}$ is a learnable parameter.", "NSP-SFM learns $k_{env}$ directly via back-propagation and stochastic gradient descent.", "Since the environment is big, we assume the agent mainly focuses on her view field (Fig.", "REF b) within which the environment (Fig.", "REF c) repels the pedestrian.", "We calculate $p_{obs}$ as the center of the pixels that are classified as obstacles in the view field of an agent.", "$k_{env}$ is shared among all obstacles.", "So far, we have introduced all the interpretable parameters $\\theta = \\lbrace \\tau , k_{nj}, k_{env}\\rbrace $ in Equation REF .", "Dynamics Stochasticity $\\alpha (t, q^{t:t-M})$.", "Trajectory prediction needs to explicitly model the motion randomness caused by intrinsic motion stochasticity and observational 
noises [63], [64].", "We employ a more general setting by assuming the noise distribution can have arbitrary shapes and is also time varying, unlike previous formulations such as white noise [19] which is too restrictive.", "Generally, learning such functions requires large amounts of data, as it is unconstrained.", "To constrain the learning, we further assume the noise is Normally distributed in a latent space, rather than in the data space.", "Given a prediction $\\bar{p}^{t+1}$ without dynamics stochasticity and its corresponding observation $p^{t+1}$ , there is an error $\\alpha ^{t+1} = \\bar{p}^{t+1} - p^{t+1}$ .", "To model the arbitrary and time-varying shape of the distribution of $\\alpha ^{t+1}$ , we assume it depends on the brief history $p^{t:t-M}$ which implicitly considers the environment and other people.", "Then the conditional likelihood of $\\alpha ^{t+1}$ is: $P(\\alpha ^{t+1}|p^{t:t-M}) = \\int P(\\alpha ^{t+1}|p^{t:t-M}, z)P(z)dz$ , where $z$ is a latent variable.", "Assuming a mapping $Q(z | \\alpha ^{t+1}, p^{t:t-M})$ and $z$ being Normally distributed, minimizing the KL divergence between $Q$ , i.e., the variational posterior, and $P(z|\\alpha ^{t+1}, p^{t:t-M})$ leads to a conditional Variational Autoencoder (CVAE) [55].", "Our overall loss function is defined as $L = l_{traj} + l_{cvae}$ where: $l_{traj} =& \\frac{1}{N(T-M)}\\sum _{n=1}^N\\sum _{t=M+1}^T\\Vert p_n^t - \\bar{p}_n^t\\Vert _2^2 \\nonumber \\\\l_{cvae} =& \\frac{1}{N(T-M)}\\sum _{n=1}^N\\sum _{t=M+1}^T\\lbrace \\Vert \\alpha _n^t - \\tilde{\\alpha }_n^t\\Vert _2^2 \\nonumber \\\\&+ \\lambda D_{KL}(Q(z | \\alpha _n^t, p^{t:t-M})||P(z| \\alpha _n^t, p^{t:t-M}))\\rbrace $ $N$ is the total number of samples, $M$ is the length of the history, and $T$ is the total length of the trajectory.", "$l_{traj}$ minimizes the difference between the predicted position and the ground-truth, while $l_{cvae}$ learns the distribution of randomness $\\alpha $ .", "During training, in each iteration, we assume the first $M+1$ frames of the trajectory are given and run the forward pass iteratively to predict the rest of the trajectory, then back-propagate to compute the gradient to update all parameters.", "During the forward pass, we use a semi-implicit scheme for stability: $\\dot{p}^{t+1} = \\dot{p}^t + \\triangle t \\ddot{p}^t$ and $p^{t+1} = p^t + \\triangle t \\dot{p}^{t+1}$ .", "We employ a progressive training scheme for the sub-nets.", "We first train Goal-Network with $l_{traj}$ only, then fix Goal-Network and add Collision-Network and $F_{env}$ for training using $l_{traj}$ .", "Finally, we fix Goal-Network, Collision-Network and $F_{env}$ , add $\\alpha $ for training under $l_{cvae}$ .", "We find this progressive training significantly improves the convergence speed.", "This is because we first train the deterministic part with the main forces added gradually, which converges quickly.", "Then the stochasticity part is trained separately to capture complex randomness.", "Please see the supplementary material for implementation details." ], [ "NSP vs. 
Deep Neural Networks", "One big difference between NSP and existing deep learning is the deterministic system embedded in NSP.", "Instead of learning any function mapping the input to the output (as black box deep learning does), the deterministic system acts as a strong inductive bias and constrains the functional space within which the target mapping should lie.", "This is because a PDE family can be seen as a flow connecting the input and the output space [2], and the learning is essentially a process of finding the most fitting PDE within this flow.", "In addition to better data-fitting capability, this strong inductive bias also comes with two other advantages.", "First, the learned model can help explain motions because the PDE we employ is a physics system where the learnable parameters have physical meanings.", "Second, after learning, the PDE can be used to predict motions in drastically different scenes (e.g., with higher densities) and generate more plausible trajectories (e.g., fewer collisions).", "This is difficult for existing deep learning as it requires to extrapolate significantly to unseen interactions between pedestrians." ], [ "Datasets", "We employ six widely used datasets in human trajectory prediction tasks: the Stanford Drone Dataset [46], ETH Hotel, ETH University [43], UCY University, Zara1, and Zara2 datasets [28].", "Stanford Drone Dataset (SDD): SDD contains videos of a university campus with six classes of agents with rich interactions.", "SDD includes about 185,000 interactions between different agents and approximately 40,000 interactions between the agent and the environment.", "ETH/UCY Datasets: The datasets consist of human trajectories across five scenes recording the world coordinates of pedestrians.", "Following previous research [37], [38], we adopt the standard leave-one-out evaluation protocol, where the model is trained on four sub-datasets and evaluated one.", "Since our goal sampling network and $F_{env}$ need to work in the pixel space, we project the world coordinates in ETH/UCY into the pixel space using the homography matrices provided in Y-net [37].", "When computing the prediction error, we project the predictions in the pixel space back into the world space.", "Finally, for SDD and ETH/UCY, we follow previous work [37], [48] to segment trajectories into 20-frame samples and split the dataset for training/testing.", "Given the first 8 ($M=7$ ) frames, we train NSP to predict the remaining 12 frames for each trajectory." 
], [ "Trajectory Prediction", "Average Displacement Error (ADE) and Final Displacement Error (FDE) are employed as previous research [1], [18], [38], [37].", "ADE is calculated as the $l_2$ error between a predicted trajectory and the ground truth, averaged over the entire trajectory.", "FDE is calculated as the $l_2$ error between the predicted final point and the ground truth.", "Following prior works, in the presence of multiple possible future predictions, the minimal error is reported.", "We compare our NSP-SFM with an extensive list of baselines, including published papers and unpublished technical reports: Social GAN (S-GAN) [18], Sophie [49], Conditional Flow VAE (CF-VAE) [9], Conditional Generative Neural System (CGNS) [29], NEXT [33], P2TIRL [14], SimAug [31], PECNet [38], Traj++ [50], Multiverse [32], Y-Net [37], SIT [56], S-CSR [77], Social-DualCVAE [16] and CSCNet [71].", "We divide the baselines into two groups due to their setting differences.", "All baseline methods except S-CSR report the minimal error out of 20 sampled trajectories.", "S-CSR achieved better results by predicting 20 possible states in each step, and it is the only method adopting such sampling to our best knowledge.", "We refer to the former as standard-sampling and the latter as ultra-sampling.", "We compare NSP-SFM with S-CSR and other baseline methods under their respective settings.", "Standard-sampling results are shown in Table REF .", "On SDD, NSP-SFM outperforms the best baseline Y-Net by 16.94% and 10.46% in ADE and FDE, respectively.", "In ETH/UCY, the improvement on average is 5.56% and 11.11% in ADE and FDE, with the maximal ADE improvement 12.5% in UNIV and the maximal FDE improvement 27.27% in ETH.", "We also compare NSP-SFM with S-CSR in Table REF .", "NSP-SFM outperforms S-CSR on ETH/UCY by 70% and 62.5% on average in ADE and FDE.", "In SDD, the improvement is 35.74% and 0.3% (Table REF ).", "S-CSR is stochastic and learns per-step distributions, which enables it to draw 20 samples for every step during prediction.", "Therefore, the min error of S-CSR is much smaller than the other baselines.", "Similarly, NSP-SFM also learns a per-step distribution (the $\\alpha $ function) despite its main behavior being dictated by a deterministic system.", "Under the same ultra-sampling setting, NSP-SFM outperforms S-CSR.", "Table: Results on ETH/UCY and SDD based standard-sampling.", "NSP-SFM outperforms all baseline methods in both ADE and FDE.", "20 samples are used in prediction and the minimal error is reported.", "M=7M=7 in all experiments.", "The unit is meters on ETH/UCY and pixels on SDD.Table: Results on ETH/UCY (left) and SDD (right) based on ultra-sampling.", "20 samples per step are used for prediction and the overall minimal error is reported.", "NSP-SFM outperforms S-CSR on both datasets in ADE and FDE." 
], [ "Generalization to Unseen Scenarios", "We evaluate NSP-SFM on significantly different scenarios after training.", "We increase the scene density as it is a major factor in pedestrian dynamics [41].", "This is through randomly sampling initial and goal positions and let NSP-SFM predict the trajectories.", "Since there is no ground truth, to evaluate the prediction plausibility, we employ collision rate because it is widely adopted [34] and parsimonious: regardless of the specific behaviors of agents, they do not penetrate each other in the real world.", "The collision rate is computed based on the percentage of trajectories colliding with one another.", "We treat each agent as a disc with radius $r=0.2 \\; m$ in ECY/UCY and $r=15$ pixels in SDD.", "Once the distance between two agents falls below $2r$ , we count the two trajectories as in collision.", "Due to the tracking error and the distorted images, the ground truth $r$ is hard to obtain.", "We need to estimate $r$ .", "If it is too large, the collision rate will be high in all cases; otherwise the collision rate will be too low, e.g., $r=0$ will give $0\\%$ collision rate all the time.", "Therefore, we did a search and found that the above values are reasonable as they keep the collision rate of the ground-truth data approximately zero.", "We show two experiments.", "The first is the collision rate on the testing data, and the second is scenarios with higher densities.", "While the first is mainly to compare the plausibility of the prediction, the second is to test the model generalizability.", "For comparison, we choose two state-of-the-art baseline methods: Y-net and S-CSR.", "Y-net is published which achieves the best performance, while S-CSR is unpublished but claims to achieve better performance.", "Table: Collision rate on testing data in ETH/UCY and SDD.", "NSP-SFM universally outperforms all baseline methods.Table: Collision rates of the generalization experiments on ZARA2 (Z) and coupa0 (C).", "NSP-SFM shows strong generalizability in unseen high density scenarios.Table REF shows the comparison of the collision rate.", "NSP-SFM outperforms the baseline methods in generating trajectories with fewer collisions.", "Y-net and S-CSR also perform well on the testing data because their predictions are close to the ground-truth.", "Nevertheless, they are still worse than NSP-SFM.", "Next, we test drastically different scenarios.", "We use ZARA2 and coupa0 (a sub-dataset from SDD) as the environment and randomly sample the initial positions and goals for 32 and 50 agents respectively.", "Because the highest number of people that simultaneously appear in the scene is 14 in ZARA2 and 11 in coupa0, we effectively increase the density by 2-5 times.", "For NSP-SFM, the initial and goal positions are sufficient.", "For Y-net and S-CSR which require 8 frames (3.2 Seconds) as input, we use NSP-SFM to simulate the first 8 frames of each agent, then feed them into both baselines.", "Table REF shows the results of three experiments.", "Since the density is significantly higher than the data, both Y-net and S-CSR cause much higher collision rate.", "While NSP-SFM's collision rate also occasionally increases (i.e.", "SDD) compared with Table REF , it is far more plausible." 
], [ "Interpretability of Prediction", "Unlike black-box deep learning methods, NSP-SFM has an embedded explainable model.", "While predicting a trajectory, NSP can also provide plausible explanations of the motion, by estimating the `forces' exerted on a specific person.", "This potentially enables NSP-SFM to be used in applications beyond prediction, e.g.", "behavior analysis [72].", "Figure REF Left shows that a person, instead of directly walking towards the goal, steered upwards (the green trajectory in the orange area).", "This could be explained by the strong repulsive force (the light blue arrow) which is generated by the potential collisions with the agents in front of this person, in line with existing studies [41].", "Similar explanations can be made in Figure REF Middle, where all three forces are present.", "$F_{env}$ (the black arrow) is the most prominent, as expected, as the person is very close to the car.", "The repulsive force (light blue arrow) also plays a role due to the person in front of the agent (the blue dot in the orange area).", "Figure REF Right shows an example where motion randomness is captured by NSP.", "In this example, there was no other pedestrian and the person was not close to any obstacle.", "However, the trajectory still significantly deviates from a straight line, which cannot be fully explained by e.g.", "the principle of minimal energy expenditure [61].", "The deviation could be caused by unobserved factors, e.g.", "the agent changing her goal or being distracted by something on the side.", "These factors do not only affect the trajectory but also the dynamics, e.g.", "sudden changes of velocity.", "These unobserved random factors are implicitly captured by the CVAE in NSP-SFM.", "More results are in the supplementary material.", "We emphasize that NSP-SFM merely provides plausible explanations and by no means the only possible explanations.", "Although explaining behaviors based on physics models has been widely used, there can be alternative explanations [66].", "Visualizing the forces is merely one possible way.", "Theoretically, it is also possible to visualize deep neural networks, e.g.", "layer activation.", "However, it is unclear how or which layer to visualize to explain the motion.", "Overall, NSP-SFM is more explainable than black-box deep learning." 
], [ "Ablation Study", "To further investigate the roles of different components, we conduct an ablation study on SDD with three settings: $F_{goal}$ (w/o) with goal attraction only, i.e.", "omitting other components such as $F_{col}$ , $F_{env}$ and dynamics stochasticity; NSP-SFM (w/o) without dynamics stochasticity; and NSP-SFM (w) the full model.", "The results are shown in Table REF .", "Interestingly, $F_{goal}$ (w/o) can already achieve good results.", "This is understandable as it is trained first in our progressive training scheme and catches most of the dynamics.", "NSP-SFM (w/o) further improves the performance.", "The improvement seems to be small but we find the other repulsive forces are crucial for trajectories with irregular geometries such as avoiding obstacles.", "Further NSP-SFM (w) significantly improves the results because it enables NSP to learn the dynamics stochasticity via a per-step distribution.", "We show one example in Figure REF in all settings.", "More ablation experiments can be found in the supplementary material.", "In this paper, we have proposed a new Neural Differential Equation model for trajectory prediction.", "Through exhaustive evaluation and comparison, our model, Neural Social Physics, has proven to be more accurate in trajectory prediction, generalize well in significantly different scenarios and can provide possible explanations for motions.", "The major limitation of NSP lies in the physics model, which overly simplifies people into 2D particles.", "In real-world scenarios, people are much more complex, and their motions can be influenced by other factors such as their affective states or interact with dense scenarios [6], [7].", "It would be useful to extend our NSP framework by incorporating these ideas and handle complex systems such as fluids/fields/agent-based modeling can be adopted to replace the components in Equation REF .", "In the future, we would like to extend the current framework to model high-density crowds, where continuum models or reciprocal velocity obstacles need to be used.", "We would also like to incorporate learning-based collision detection techniques into this framework [57], [58]." 
], [ "Acknowledgements", "This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 899739 CrowdDNA.", "A Additional Experiments A.1 Generalization to Unseen Scenarios We use the collision rate to evaluate prediction plausibility.", "We first elaborate on the definition of the collision rate and then show more experimental results.", "Provided there are N agents in a scene, we consider their collision rates during a period of time such as 4.8 seconds which is widely used to evaluate trajectory predictions [38], [37], [77].", "We count one collision if the minimum distance between two agents is smaller than 2r at any time, where r is the radius of a disc representing an agent.", "The maximum possible number of collisions is $N(N-1)/2$ .", "The final collision rate is defined as: $R_{col} = \\frac{M}{N(N-1)/2}$ where $M$ is the number of collisions.", "We show more results on the scene, coupa0, with different numbers of agents.", "We chose coupa0 because it is a large space and can accommodate many people.", "The highest number of people simultaneously in the environment in the original data is merely 11.", "Therefore, this is a good scene to show how different methods can generalize to higher densities when learning from low density data.", "In each experiment, the agents are randomly initialized with different initial positions, initial velocities and goals near the boundary of the scene, which is sufficient for our method to simulate.", "Therefore, we use NSP to predict trajectories of 30 seconds (t = 0 to 29) at FPS = 10 for all agents.", "We sample three intervals out of every trajectory, from t = 0 to 8, t = 4 to 12 and t = 8 to 16, where the density in the central area reaches the highest during t=8 to 16.", "For each interval (8 seconds long), we subsample at FPS = 2.5 to get 20 frames, where the first 8 frames are used as input for Y-net [37] and S-CSR [77].", "The remaining 12 frames and the predictions (12 frames) of Y-net and S-CSR are used to calculate the collision rate.", "Before prediction, all methods are trained on the training dataset of SDD under the same setting explained in the main paper.", "The results are shown in Table REF .", "We tested 50, 74, 100, 150 and 200 agents on the aforementioned three methods including ours.", "We can see that our method is always the best in the collision rate under different settings.", "Although its collision rate increases with the growth of the number of agents, our method is still the best compared with the baselines and our predictions are more plausible.", "In addition, we also plot the relation between the collision rate (and the number of collisions) and the agent number ranging from 50 to 200 in Figure REF .", "Y-net is worse than S-CSR and NSP.", "In addition, although the trend of NSP and S-CSR are similar, the number of collisions of S-CSR increases faster than NSP.", "Finally, some visualization results can be found in Figure REF .", "Here, every green disc has a radius of 7.5 pixels.", "When two green discs intersect, they collide with each other.", "Figure REF demonstrates that our method (NSP) has better performance in avoiding collisions than Y-net and S-CSR.", "Figure: Number of CollisionsFigure: 200 AgentsA.2 Interpretability of Prediction More examples of interpretability are shown in Figure REF .", "In Figure REF (1)-(2), we show the influence of different three forces, $F_{goal}$ , $F_{col}$ and $F_{env}$ , on the whole trajectory of an 
agent.", "In Figure REF (3)-(4), we choose two consecutive moments of one agent for analysis.", "In Figure REF (1), instead of directly aiming for the goal, the agent suddenly turns (at the intersection between red and green dots) due to the incoming agents (the three blue dots under the green dots).", "The result is a result of major influence from $F_{goal}$ and $F_{col}$ .", "Similarly, the agent in Figure REF (2) did not need to avoid other agents but still did not directly walk towards the goal, because of $F_{env}$ from the grass.", "In Figure REF (3)-(4), we show the detailed analysis of forces at two consecutive time steps of the same agent, where $F_{env}$ is from the lawn which is a 'weakly repulsive area'.", "More examples where randomness is captured by our model are shown in Figure REF .", "Figure: Examples of interpretability.", "Red dots are observed, green dots are our prediction.", "Bule dots in (1), (3) and (4) are other pedestrians at time step 7, 16 and 17 respectively.", "We show the influence of all forces, F goal F_{goal}, F col F_{col} and F env F_{env}, on the whole trajecroty in (1) and (2).", "We display detailed analysis of three forces at two consecutive time steps of the same agent, where F goal F_{goal}, F col F_{col} and F env F_{env} are shown as yellow, light blue and black arrows respectively.Figure: Motion randomness is captured by our model.", "Red dots are observed, green dots are our prediction and black dots are theground-truth.A.3 Ablation Experiments We conduct more ablation experiments to further validate our design decisions and explore the effect of components of our model.", "The ablation studies on the network architectures focuses on the Goal-Network and the Collision-Network.", "The main variants are with/without LSTM to show the importance of the temporal modeling for learning $\\tau $ and $k_{nj}$ , and replacing the MLPs with simple two-layer MLPs.", "Table REF shows the results on SDD.", "We can see that the temporal modeling and the original MLPs make our model achieve the best performance.", "To understand the role of each component in our model, we take social force model (SFM) as the baseline and incrementally add components from our model.", "The results are shown in Table REF .", "We tried our best to manually find good parameter values: $\\tau =0.5$ , $k_{nj}$ =25/50 and $k_{env}$ =65.", "We adopted the same way with our model to sample destinations for SFM.", "Then we only learn $\\tau $ and $k_{nj}$ .", "At last, the result of the full model without CVAE is given.", "The performance is better when more components are added.", "Table: Ablation experiments on network architecture.", "Goal-Network and the Collision-Network possess the same architecture under each experimental setup.Table: Ablation experiments on SDD.", "Different components from our model are added incrementallyB Details of the Neural Social Physics Model In this section, we elaborate the details of the Goal Sampling Network (GSN) and the conditional Variational Autoencoder (CVAE) in our model.", "B.1 Goal Sampling Network The main components of the GSN are two U-nets [47] as illustrated in Figure REF .", "We first feed the scene image $\\textit {\\textbf {I}}$ to a U-net, $U_{seg}$ , to get its corresponding environment pixel-wise segmentation with dimension of $H*W*K_c$ .", "$H$ and $W$ are the height and width of $\\textit {\\textbf {I}}$ , and $K_c$ is the number of classes for segmentation.", "The segmantation maps are byproducts of the GSN from [37].", "NSP 
can use manually annotated or automatically segmented environment maps to calculate $F_{env}$ , but using segmentation maps from the GSN is more efficient.", "Then the past trajectories $\\lbrace p^t\\rbrace _{t=0}^M$ are converted into M+1 trajectory heatmaps by: $Hm(t,i,j) = 2\\frac{\\Vert (i,j) - p^t\\Vert }{ \\max \\limits _{(x,y) \\in \\textit {\\textbf {I}}} \\Vert (x,y) - p^t\\Vert }$ where $(i,j)$ is the pixel coordinate on the heatmap and $(x,y)$ is the pixel coordinate on the scene image $\\textit {\\textbf {I}}$ .", "Then, we concatenate these trajectory heatmaps and the segmentation map to get the input with dimension of $H*W*(K_c+M+1)$ for the network $U_{goal}$ .", "$U_{goal}$ will output a non-parametric probability distribution map, $\\tilde{D}_{goal}$ , with dimensions $H*W$ .", "Every pixel in $\\tilde{D}_{goal}$ has a corresponding probability value between 0 and 1, and their sum is equal to 1.", "Details of these two U-nets can be found in [37].", "We train the GSN by minimizing the Kullback–Leibler divergence between predicted $\\tilde{D}_{goal}$ and its ground truth $D_{goal}$ .", "We assume that $D_{goal}$ is a discrete gaussian distribution with a mean at the position of the ground-truth goal and a hyper-parameter variance $\\sigma _{goal}$ .", "During testing, instead of picking the position with highest probability, we adopt the test-time sampling trick introduced by [37] to sample goals for better performance.", "Figure: Model Architecture of Goal Sampling Network.", "The detailed network architecture of two U-nets, U seg U_{seg} and U goal U_{goal}, can be found in .B.2 Conditional Variational Autoencoder We model the dynamics stochasticity for each agent individually by using a CVAE as illustrated in Figure REF .", "Red connections are only used in the training phase.", "Given an agent $p^t$ and his/her destination, a deterministic prediction $\\bar{p}^{t+1}$ without dynamics stochasticity is first calculated from $F_{goal}$ , $F_{col}$ and $F_{env}$ and a semi-implicit scheme.", "During training time, we use the corresponding ground truth $p^{t+1}$ to calculate the error $\\alpha ^{t+1} = p^{t+1} - \\bar{p}^{t+1}$ , and feed $\\alpha ^{t+1}$ into an encoder $E_{bias}$ to get the feature $f_{bias}$ .", "The brief history $(p^{t-7}, \\dots , p^{t-1}, p^t)$ is encoded as $f_{past}$ by using an encoder $E_{past}$ .", "We concatenate $f_{bias}$ with $f_{past}$ and encode it using a latent encoder to yield the parameters $(\\mu , \\sigma )$ of the gaussian distribution of the latent variable Z.", "We sample Z, concatenate it with $f_{past}$ for history information, and decode using the decoder $D_{latent}$ to acquire our guess for stochasticity $\\tilde{\\alpha }^{t+1}$ .", "Finally, the estimated stochasticity will be added to the deterministic prediction $\\bar{p}^{t+1}$ to get our final prediction $\\tilde{p}^{t+1}$ .", "During testing time, the ground truth $p^{t+1}$ is unavailable.", "Therefore, we sample the latent variable Z from a gaussian distribution $N(0, \\sigma _{latent}I)$ where $\\sigma _{latent}$ is a hyper-parameter.", "We concatenate the sampled Z and $f_{past}$ to decode directly using the learned decoder $D_{latent}$ to get the estimate of stochasticity $\\tilde{\\alpha }^{t+1}$ .", "We can produce final prediction $\\tilde{p}^{t+1}$ using the same way as the training phase.", "Encoders $E_{bias}$ , $E_{past}$ , $E_{latent}$ and the decoder $D_{latent}$ are all multi-layer perceptrons (MLP) with dimensions indicated in the square brackets in 
Figure REF .", "Figure: The architecture of the CVAE, where p ¯ t+1 \\bar{p}^{t+1} is the intermediate prediction out of our force model and α t+1 =p t+1 -p ¯ t+1 \\alpha ^{t+1} = p^{t+1} - \\bar{p}^{t+1}.", "Encoder E bias E_{bias}, E past E_{past}, E latent E_{latent} and decoder D latent D_{latent} are all MLP networks with dimensions indicated in the square brackets.", "Red connections are only used in the training phase.Table: Hyper-parameters for all six datasets.C Implementation Details We use ADAM as the optimizer to train the Goal-Network, Collision-Network and $F_{env}$ with a learning rate between $3 \\times 10^{-5}$ and $3 \\times 10^{-4}$ , and to train the CVAE with a learning rate between $3 \\times 10^{-6}$ and $3 \\times 10^{-5}$ .", "When we train the CVAE of our model, the training data is scaled by 0.005 to balance reconstruction error and KL-divergence in $l_{cvae}$ .", "The hyper-parameter $\\lambda $ in $l_{cvae}$ is set to 1.", "Concrete structures of all sub-network are shown in Figure REF .", "For the Goal-Network, instead of learning parameter $\\tau $ directly, we set $\\tau = a*sigmoind(NN_{\\phi _1}(q^t, p^T))+b$ where $a$ and $b$ are hyper-parameters.", "We list all hyper-parameters of our model in Table REF .", "We segment scene images into two classes and three classes on ETH/UCY and SDD, respectively.", "The two classes on ETH/UCY are `walkable area' and `unwalkable area'.", "Three classes on SDD include `walkable area', `unwalkable area' and `weakly repulsive area' that some people tend to avoid such as lawns.", "The calculation of $F_{env}$ on ETH/UCY has been introduced in our main paper.", "On SDD, we calculate the position of the obstacle $p_{obs}$ and the position of the weak obstacle $p_{w-obs}$ (i.e.", "in the weakly repulsive area) by averaging pixels that are classified as `unwalkable area' and `weak repulsive area' respectively.", "Then, the $F_{env}$ consists of two repulsive forces from $p_{obs}$ and $p_{w-obs}$ as shown in Equation REF , where the parameter $k_{env}$ is shared and an additional hyper-parameter $\\lambda _{weak}$ is introduced for $p_{w-obs}$ : $F_{env} = \\frac{k_{env}}{\\Vert p_n^t - p_{obs}\\Vert } (\\frac{p_n^t - p_{obs}}{\\Vert p_n^t - p_{obs}\\Vert }) + \\frac{\\lambda _{weak}k_{env}}{\\Vert p_n^t - p_{w-obs}\\Vert } (\\frac{p_n^t - p_{w-obs}}{\\Vert p_n^t - p_{w-obs}\\Vert })$" ] ]
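As a small illustration of the two-term environment repulsion in the equation above, the force on one agent could be evaluated as in the following sketch; the function name and argument layout are ours.

```python
import numpy as np

def f_env_sdd(p, p_obs, p_w_obs, k_env, lam_weak):
    """Environment repulsion on SDD with an additional weakly repulsive area.

    p, p_obs, p_w_obs : (2,) agent position, obstacle centroid, weak-obstacle centroid
    k_env             : learned repulsion strength (shared by both terms)
    lam_weak          : hyper-parameter scaling the weakly repulsive term
    """
    def repulse(center, strength):
        d = p - center
        r = np.linalg.norm(d)
        return strength / r * (d / r)   # magnitude ~ strength / r, directed away from center

    return repulse(p_obs, k_env) + repulse(p_w_obs, lam_weak * k_env)
```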
2207.10435
[ [ "Hall viscosity and hydrodynamic inverse Nernst effect in graphene" ], [ "Abstract Motivated by Hall viscosity measurements in graphene sheets, we study hydrodynamic transport of electrons in a channel of finite width in external electric and magnetic fields.", "We consider electric charge densities varying from close to the Dirac point up to the Fermi liquid regime.", "We find two competing contributions to the hydrodynamic Hall and inverse Nernst signals that originate from the Hall viscous and Lorentz forces.", "This competition leads to a non-linear dependence of the full signals on the magnetic field and even a cancellation at different critical field values for both signals.", "In particular, the hydrodynamic inverse Nernst signal in the Fermi liquid regime is dominated by the Hall viscous contribution.", "We further show that a finite channel width leads to a suppression of the Lorenz ratio, while the magnetic field enhances this ratio.", "All of these effects are predicted in parameter regimes accessible in experiments." ], [ "1. Equation of state for graphene", "Electrons in graphene have a relativistic dispersion relation $\\varepsilon _{\\mathbf {k}}= \\pm \\hbar v_F |\\mathbf {k}|$ around the two Dirac points at which conduction and valence bands cross in the Brillouin zone [70], [71], where $v_F\\approx 10^6~{\\rm m/s}$ is the Fermi velocity.", "The density of states is $\\nu (\\varepsilon )= N\\varepsilon / (2\\pi \\hbar ^2 v_F^2)$ , where $N=4$ counts the two spin and two valley degrees of freedom.", "The charge density, imbalance density, and energy density read [72] $n &= \\frac{N k_B^2 T^2}{2\\pi \\hbar ^2 v_F^2R_\\Lambda ^2}\\tilde{n},\\qquad \\tilde{n}=\\text{Li}_2(-e^{-\\xi })-\\text{Li}_2(-e^{\\xi }),\\\\n_\\text{imb} &= \\frac{N k_B^2 T^2}{2\\pi \\hbar ^2 v_F^2R_\\Lambda ^2}\\tilde{n}_\\text{imb},\\qquad \\tilde{n}_\\text{imb}=-\\text{Li}_2(-e^{-\\xi })-\\text{Li}_2(-e^{\\xi }) ,\\\\n_E &= \\frac{N k_B^3 T^3}{\\pi \\hbar ^2 v_F^2 R_\\Lambda ^2} \\tilde{n}_E,\\qquad \\tilde{n}_E=-\\text{Li}_3(-e^{-\\xi })-\\text{Li}_3(-e^{\\xi }) ,$ respectively, where $\\xi =\\mu /k_BT$ .", "At one-loop, $v_F$ acquires a logarithmic running due to the marginally irrelevant nature of the Coulomb interaction, and the resummation of one-loop diagrams is summarized in the renormalization factor [52], [73] $R_\\Lambda =1+\\frac{\\alpha }{4}\\ln \\left( \\frac{T_\\Lambda }{T\\max (1,\\xi )} \\right).$ The ultraviolet cutoff scale is set to the temperature $T_\\Lambda \\approx 8.34 \\times 10^4~{\\rm K}$ of the band cutoff.", "$\\alpha $ is the bare dimensionless Coulomb interaction coupling constant, $\\alpha =\\frac{e^2}{4 \\pi \\hbar \\epsilon v_F }\\approx 0.5$ with the dielectric constant $\\epsilon \\approx 4\\epsilon _0$ in graphene on SiO$_2$ substrate taken from Refs.", "[71], [74], [75]." ], [ "2. 
Two-dimensional charged relativistic hydrodynamics", "We describe the electronic fluid in graphene by two-dimensional relativistic charged hydrodynamics, with the speed of light replaced by the Fermi velocity $v_F$ .", "The hydrodynamic equations are the conservation equations of energy, momentum, and charge, [12], [49], [57] $\\nabla _\\nu T^{\\nu \\mu } &= F^{\\mu \\nu } J_\\nu + \\Gamma ^\\mu , \\qquad \\nabla _\\mu J^\\mu = 0.$ Here, the metric tensor is $\\eta _{\\mu \\nu } = \\text{diag}\\left(-,+,+\\right)$ .", "$T^{\\mu \\nu }$ and $J^\\mu $ are the energy-momentum (EM) tensor and the charge current of the fluid.", "$F^{\\mu \\nu }$ is the Maxwell tensor of external electromagnetic fields that also includes the self-consistently determined Vlasov field.", "We allow for a slight non-conservation of energy and momentum, summarized in the relaxation terms $\\Gamma ^\\mu $ .", "The constitutive relations in Landau frame, defined by $u_\\mu T^{\\mu \\nu }=-n_E u^\\nu ,\\quad u_\\mu J^\\mu =-en\\,,$ express the EM tensor and charge current in terms of the relativistic three-velocity $u^\\mu =(v_F,\\mathbf {v})/{\\sqrt{1-|\\mathbf {v}|/v_F^2}}$ , the chemical potential $\\mu $ , and the temperature $T$ to first order in derivatives as $J^{\\mu } &= -en u^{\\mu } + \\sigma _Q P^{\\mu \\nu }\\left(\\frac{T}{e}\\partial _\\nu \\frac{\\mu }{T}+F_{\\nu \\rho } u^{\\rho }\\right),\\\\T^{\\mu \\nu } &= w\\frac{ u^{\\mu } u^{\\nu }}{v_F^2}+p \\eta ^{\\mu \\nu }-\\eta P^{\\mu \\rho } P^{\\nu \\sigma } V_{\\rho \\sigma } -\\eta _H n_E ^{(\\mu |\\rho \\sigma }\\frac{u_{\\rho }}{v_F}V_{\\sigma }{}^{|\\nu )}-\\zeta P^{\\mu \\nu } \\partial _\\alpha u^{\\alpha }.$ Here, $P^{\\mu \\nu } = \\eta ^{\\mu \\nu }+u^{\\mu } u^{\\nu }/v_F^2$ , $V_{\\mu \\nu }=2\\partial _{(\\mu }u_{\\nu )}-\\eta _{\\mu \\nu } \\partial _\\rho u^{\\rho }$ , $\\sigma _Q$ is the quantum critical conductance, $\\zeta $ is the bulk viscosity, $\\eta $ is the shear viscosity, and $\\eta _H$ is the Hall viscosity.", "The term proportional to $\\eta _H$ breaks both parity and time-reversal symmetry." ], [ "3. Dependence of the transport coefficients on temperature and chemical potential", "The quantum critical conductivity $\\sigma _Q$ calculated in Ref.", "[28] stems from momentum-preserving Coulomb interactions between electrons and holes.", "Its functional dependence on $T$ and $\\mu $ is given by $\\sigma _Q &= \\frac{e^2}{\\hbar }\\frac{1}{\\alpha ^2}\\frac{N}{2\\hat{g}_1(\\mu )}\\left(\\tau _{ee}\\frac{\\alpha ^2 k_B T}{\\hbar }\\right)^2,\\\\\\tau _{ee}^{-1} &= \\alpha ^2\\frac{N}{2\\hat{g}_1(\\mu )}\\frac{k_BT}{\\hbar }\\left[\\frac{N}{2\\pi }\\ln \\left(2\\cosh \\left(\\frac{\\mu }{2k_B T}\\right)\\right)-\\frac{(en\\hbar v_F)^2}{wT}\\right]^{-1},$ with the function $\\hat{g}_1(\\mu )$ evaluated by the matrix formalism in Ref. 
[28].", "As shown in Fig.", "REF , $\\sigma _Q$ is highly suppressed when approaching the Fermi liquid regime $\\mu \\gg k_B T$ .", "Figure: Ratio σ Q (μ)/σ Q (μ=0)\\sigma _Q(\\mu )/\\sigma _Q(\\mu =0) as a function of μ/k B T\\mu /k_BT.", "Figure adapted from .In our simulations, we use the viscosities calculated from kinetic theory [51], [52], [47] [76] $\\zeta \\approx 0,\\qquad \\eta = \\frac{k_B^2T^2}{\\hbar v_F^2\\alpha ^2}\\tilde{\\eta },\\qquad \\eta _H =\\frac{k_B^2T^2}{\\hbar v_F^2\\alpha ^2}\\tilde{\\eta }_H,$ where $\\tilde{\\eta }&=\\frac{\\mathcal {T}}{4k_B T}\\begin{pmatrix}0 & 0 & 1\\end{pmatrix}\\hat{\\mathfrak {M}}_h\\left(1+\\pi ^2\\gamma _B^2\\hat{\\mathfrak {T}}_\\eta ^{-1}\\hat{\\mathfrak {M}}_K\\hat{\\mathfrak {T}}_\\eta ^{-1}\\hat{\\mathfrak {M}}_K\\right)^{-1}\\hat{\\mathfrak {T}}_\\eta ^{-1}\\begin{pmatrix}\\tilde{n}\\\\(\\xi ^2+\\pi ^2/3)/2\\\\3\\tilde{n}_E\\end{pmatrix},\\\\\\tilde{\\eta }_H&= -\\pi \\gamma _B \\frac{\\mathcal {T}}{4k_B T}\\begin{pmatrix}0 & 0 & 1\\end{pmatrix}\\hat{\\mathfrak {M}}_h\\left(1+\\pi ^2\\gamma _B^2\\hat{\\mathfrak {T}}_\\eta ^{-1}\\hat{\\mathfrak {M}}_K\\hat{\\mathfrak {T}}_\\eta ^{-1}\\hat{\\mathfrak {M}}_K\\right)^{-1}\\hat{\\mathfrak {T}}_\\eta ^{-1}\\hat{\\mathfrak {M}}_K\\hat{\\mathfrak {T}}_\\eta ^{-1}\\begin{pmatrix}\\tilde{n}\\\\(\\xi ^2+\\pi ^2/3)/2\\\\3\\tilde{n}_E\\end{pmatrix},$ and $\\mathcal {T}=2k_BT\\ln \\left(2\\cosh \\frac{\\xi }{2}\\right),\\qquad \\gamma _B = \\frac{\\hbar |e|v_F^2B_z}{\\alpha ^2k_B^2T^2}.$ The matrices $\\hat{\\mathfrak {M}}_K$ and $\\hat{\\mathfrak {M}}_h$ depend on $\\xi $ only, while the matrix $\\hat{\\mathfrak {T}}_\\eta $ depends on $\\xi $ and $\\alpha $ [52].", "The components of $\\hat{\\mathfrak {T}}_\\eta $ are $(\\hat{\\mathfrak {T}}_\\eta )_{ij}=\\frac{2\\pi }{\\alpha ^2}\\frac{\\hbar \\mathcal {T}}{N k_B^2 T^2} \\tau ^{-1}_{ij}=(2\\pi )^4\\int _0^\\infty dQ\\int _{-\\infty }^{+\\infty }dW\\frac{2\\pi Q}{(2\\pi )^2}\\frac{1}{2\\pi }\\frac{|\\tilde{U}|^2}{\\sinh ^2(W)}\\left[Y_{00}\\tilde{Y}_{ij}-\\tilde{Y}_{0j}\\tilde{Y}_{0i}\\right], $ where $q = |{\\mathbf {q}}|$ , $Q = v_FR_\\Lambda \\hbar q/2k_BT$ and $W=\\hbar \\omega /2k_BT$ are the dimensionless momentum and frequency, respectively.", "$Y_{00}$ and $\\tilde{Y}_{ij}$ are functions of $Q$ , $W$ , and $\\xi $ only and their integral expressions are given in Appendix C of Ref. [47].", "The dynamically screened Coulomb potential is $U(\\omega ,\\mathbf {q}) = U_0 \\tilde{U},\\quad U_0 = \\frac{2\\pi \\hbar \\alpha v_F}{ q},\\quad \\tilde{U} = \\left(1+U_0\\Pi ^R\\right)^{-1}\\,,$ with the polarization operator $\\Pi ^R=\\frac{q}{4\\pi ^2\\hbar v_FR_\\Lambda ^2}\\iint _0^1 \\frac{dz_1dz_2}{z_1\\sqrt{(1-z_1^2)(1-z_2^2)}}& \\left[(z_1^{-2}-1)\\left(\\frac{Q}{z_2Q+W+i\\eta }+\\frac{Q}{z_2Q-W-i\\eta }\\right)J_1(z_1^{-1},z_2,\\xi )\\right.", "\\nonumber \\\\&~+\\left.", "(1-z_2^{2})\\left(\\frac{Q}{z_1^{-1}Q+W+i\\eta }+\\frac{Q}{z_1^{-1}Q-W-i\\eta }\\right)J_2(z_1^{-1},z_2,\\xi )\\right],$ with two functions $J_{1,2}(z_1^{-1},z_2,\\xi )$ as given in Ref. 
[72].", "We show the viscosity $\\eta $ and Hall viscosity $\\eta _H$ as a function of the magnetic field $B_z$ in Fig.", "REF .", "In the Fermi liquid region, the polarization operator at static screening is well approximated by the thermodynamic density of states [72], [52], namely, $\\Pi ^R\\approx \\frac{\\partial n}{\\partial \\mu }=\\frac{N\\mathcal {T}}{2\\pi \\hbar ^2 v_F^2R_\\Lambda ^2}.$ Figure: Viscosity η\\eta and Hall viscosity η H \\eta _H as functions of the magnetic field B z B_z for different μ/k B T\\mu /k_BT at T=120KT=120{\\rm K}." ], [ "4. Momentum relaxation", "Momentum is relaxed in graphene due to electron-impurity and electron-phonon scattering.", "In the limit of weak momentum relaxation, the relaxation time approximation can be employed, simplifying the relaxation term to $\\Gamma ^i=-\\tau _\\text{MR}^{-1}T^{ti}, \\qquad i=x,y.$ The momentum relaxation time $\\tau _\\text{MR}$ is related to the electron-impurity scattering time $\\tau _\\text{imp}^{-1}$ [28], [8] and electron-phonon scattering time $\\tau _\\text{ph}^{-1}$ [37], [38], [39] by Matthiessen’s rule $\\tau _\\text{MR}^{-1}=\\tau _\\text{imp}^{-1}+\\tau _\\text{ph}^{-1}.$ Here, $\\tau _\\text{imp}^{-1} = \\frac{n_\\text{imp}n_\\text{imb}}{\\hbar w}\\left(\\frac{e^2}{4\\epsilon }\\right)^2,\\qquad \\tau _\\text{ph}^{-1}=\\frac{ g^2 \\mu k_B T}{2\\hbar ^3v_F^2},$ where $n_\\text{imp}\\approx 2.1 \\times 10^9~{\\rm cm}^{-2}$ is the density of charged impurities [8], $g = D/\\sqrt{2\\rho _m v_s^2}$ is the electron-phonon coupling, $D\\approx 20~{\\rm eV}$ is the deformation potential constant, $\\rho _m\\approx 0.77~{\\rm mg/m^2}$ is the mass density of the graphene sheet, and $v_s\\approx 2 \\times 10^4~{\\rm m/s}$ is the longitudinal acoustic (LA) sound velocity.", "We further consider $T>T_\\text{BG}$ for $\\tau _\\text{ph}^{-1}$ , where $T_\\text{BG} = 2 v_s k_F/k_B$ is the Bloch-Grüneisen temperature that is below $100~{\\rm K}$ throughout this letter.", "Hence, only the acoustic phonons with momentum $k_\\text{ph}\\le 2k_F$ can scatter from electrons.", "For typical values for the scattering times, $\\tau _\\text{imp}\\sim 1~{\\rm ps}$ and $\\tau _\\text{ph}\\sim 100~{\\rm ps}$ , so that $\\tau _\\text{imp}\\ll \\tau _\\text{ph}$ and $\\tau _\\text{MR}\\approx \\tau _\\text{imp}$ ." ], [ "5. 
Energy relaxation", "The origin of energy relaxation of the electron fluid are electron-phonon collisions involving impurities.", "In the electronic cooling experiments in graphene [45], [77], [44], the electron and holes heated up by photo excitation or Joule heating relax their energy to phonons.", "With the lattice temperature $T_\\text{ph}$ , the power density of the energy relaxation in the charged fluid at temperature $T$ is found to be [43], [40], [41] $\\Gamma ^0= -A_1(T-T_\\text{ph})-A_2(T^3-T_\\text{ph}^3)\\,,$ where $A_1=\\pi N g^2\\nu (\\mu )^2\\hbar k_F^2 v_s^2 k_B,\\qquad A_2=9.62 \\times \\frac{g^2\\nu ^2(\\mu )k_B^3}{\\hbar k_F l_\\text{imp}}.$ Here, $l_\\text{imp}=v_F\\tau _\\text{imp}$ is the disorder mean free path.", "The standard cooling pathways mediated by optical and acoustic phonons resulting in the first term is inefficient due to the large value of the optical phonon energy and the strong constraint of the Fermi surface and momentum conservation on the phase space for acoustic phonon scattering [40], [41].", "On the other hand, disorder could assists the electron-phonon collisions (supercollisions) by exploring the available phonon phase space [43], [44], [45].", "It results in the second term and becomes dominated for electron temperatures $T \\gtrsim 3T_\\text{BG}$ .", "We will consider that the electron fluid reaches global equilibrium at the lattice temperature, namely $T=T_\\text{ph}$ .", "For small deviations $\\delta T = T - T_\\text{ph}$ , we can approximate the energy relaxation term on the right hand side of Eq.", "(REF ) at the linear level as $\\Gamma ^0 = -\\tau _\\text{ER}^{-1}C \\, \\delta T\\,,$ with the energy relaxation rate $\\tau _\\text{ER}^{-1}=\\frac{A_1}{\\gamma T}+\\frac{3 A_2 T}{\\gamma }.$ Here, we have used $\\delta n_E = C\\delta T$ , the specific heat $C=\\gamma T$ , and $\\gamma =\\frac{1}{3}\\pi ^2 N \\nu (\\mu )k_B^2$ .", "For the typical values of parameters, we have $\\tau _\\text{ER}\\sim 1000{\\rm ps}$ , which is negligible compared to the other scales in the system." ], [ "6. Analytic solutions for $\\delta \\mu $ and {{formula:8352f092-4a61-4a85-bb47-710ef2c30b8a}}", "The analytic solution for $\\delta \\mu $ and $\\delta T$ in the linearized approximation is $\\delta \\mu &= \\frac{1}{\\eta w}\\left[e E_1 n l_G^\\text{eff} \\left(e w B_z l_G^2-\\mu \\eta _H\\right) \\operatorname{csch}\\left(\\frac{W}{2 l_G}\\right) \\sinh \\left( \\frac{y}{l_G}\\right) -y B_z \\left(e^2 E_1 n w l_G^2+E_2 \\eta \\mu \\sigma _Q\\right)\\right] ,\\\\\\delta T &= -\\frac{T}{\\eta w} \\left[E_2 \\eta y B_z \\sigma _Q+e E_1 n \\eta _H l_G^\\text{eff} \\operatorname{csch}\\left(\\frac{W}{2 l_G}\\right) \\sinh \\left(\\frac{y}{l_G}\\right) \\right],$ with $l_G^\\text{eff}$ and $E_{1,2}$ as defined in the main text.", "Their profiles, together with $v_x$ , are shown in Fig.", "REF , from which we see that their boundary values change sign when the magnetic field is increased.", "Figure: The profiles of v x v_x, δμ\\delta \\mu , and δT\\delta T across the channel.", "We fix μ=k B T\\mu = k_B T, T=120KT=120~{\\rm K}, E x =-1000V/mE_x=-1000~{\\rm V/m}, ∇ x T=0\\nabla _x T = 0, W=2μmW=2~{\\rm \\mu m}, l s =0.02μml_s = 0.02~{\\rm \\mu m}, and l MR =v F τ MR =1.4μml_\\text{MR} = v_F\\tau _\\text{MR} = 1.4~{\\rm \\mu m}." ], [ "7. 
Thermoelectric transport in infinite geometries", "We apply a homogeneous electric field $\\mathbf {E}=E_x\\mathbf {e}_x$ and temperature gradient $\\nabla _x T$ to an infinite sample of graphene, namely, $\\mu =\\mu +xeE_x +\\delta \\mu (y),\\qquad T=T+x\\nabla _x T+\\delta T(y).$ The velocity profile is homogeneous and the viscosity terms can be neglected.", "The momentum conservation in Eq.", "(REF ) is simplified as $0 &= \\partial _x\\delta p-BJ_y+\\frac{v_x w}{\\tau _\\text{MR} v_F^2},\\\\0 &= \\partial _y\\delta p + BJ_x+\\frac{v_y w}{\\tau _\\text{MR} v_F^2} - e n \\partial _y\\phi _V ,$ with currents in Eq.", "(REF ).", "For a system that has infinite width in $x$ and $y$ direction, the general Onsager relation in the DC frequency limit and momentum space is given by [49], [57] $\\begin{pmatrix}\\mathbf {J} \\\\ \\mathbf {Q}\\end{pmatrix}=\\begin{pmatrix}\\mathbf {\\sigma }& \\mathbf {\\alpha }\\\\{\\mathbf {\\alpha }} T & \\bar{\\mathbf {\\kappa }} \\\\\\end{pmatrix}\\begin{pmatrix}\\mathbf {E} \\\\-\\mathbf {\\nabla }T\\end{pmatrix} .$ where $\\mathbf {J}=(J_x,J_y)$ , $\\mathbf {Q}=(Q_x,Q_y)$ , $\\mathbf {E}=(E_x,E_y)$ , $\\mathbf {\\nabla }T=(\\nabla _x T,\\nabla _y T)$ , and $\\mathbf {\\sigma },\\mathbf {\\alpha },\\bar{\\mathbf {\\kappa }}$ are $2\\times 2$ antisymmetric matrices.", "Their matrix components are $\\sigma _{xx} &= \\sigma _{yy}=\\frac{\\Gamma \\sigma _Q \\left( \\gamma +\\Gamma +\\omega _c^2/\\gamma \\right)}{(\\gamma +\\Gamma )^2+\\omega _c^2}, &\\sigma _{xy}&=-\\sigma _{yx}=-\\frac{\\omega _c \\sigma _Q \\left( \\gamma +2 \\Gamma +\\omega _c^2/\\gamma \\right)}{(\\gamma +\\Gamma )^2+\\omega _c^2},\\nonumber \\\\\\alpha _{xx} &= \\alpha _{yy}= \\Gamma \\frac{(\\gamma +\\Gamma ) \\mu \\sigma _Q/eT- s\\omega _c/B_z}{(\\gamma +\\Gamma )^2+\\omega _c^2}, &\\alpha _{xy}&=-\\alpha _{yx}=\\frac{s}{B_z}\\frac{ \\gamma ^2 +\\omega _c^2+ \\gamma \\Gamma (1 -\\mu n/sT)}{(\\gamma +\\Gamma )^2+\\omega _c^2},\\\\\\bar{\\kappa }_{xx} &= \\bar{\\kappa }_{yy}=\\frac{\\Gamma \\mu ^2 (\\gamma +\\Gamma ) \\sigma _Q/e^2+v_F^2 \\left(\\gamma w+\\Gamma s^2 T^2/w\\right)}{T \\left((\\gamma +\\Gamma )^2+\\omega _c^2\\right)}, &\\bar{\\kappa }_{xy}&=-\\bar{\\kappa }_{yx}=\\frac{\\omega _c s^2 T^2 - \\left(\\gamma +(\\gamma +2 \\Gamma )s T/w\\right)\\mu B_z \\sigma _Q/e}{T w \\left((\\gamma +\\Gamma )^2+\\omega _c^2\\right)} ,\\nonumber $ where $\\Gamma =1/\\tau _\\text{MR}$ , $\\omega _c=e n B_z v_F^2/w$ and $\\gamma =B_z^2 v_F^2 \\sigma _Q/w$ .", "However, the boundary conditions in $y$ direction require that $J_y = Q_y = 0$ locally, from which we solve for the two generated sources $E_y$ and $-\\nabla _y T$ in terms of the sources $E_x$ and $-\\nabla _x T$ .", "They read $\\begin{pmatrix}E_y \\\\ -\\nabla _y T\\end{pmatrix}=\\frac{B_z}{w}\\begin{pmatrix}-\\frac{\\mu \\sigma _Q}{e}-e n v_F^2 \\tau _\\text{MR} & -\\frac{\\mu ^2 \\sigma _Q}{e^2 T}+s v_F^2 \\tau _\\text{MR} \\\\T \\sigma _Q & \\frac{\\mu \\sigma _Q}{e}\\end{pmatrix}\\begin{pmatrix}E_x \\\\ -\\nabla _x T\\end{pmatrix} .$ Using these, we obtain the reduced Onsager relation along the $x$ direction $\\begin{pmatrix}J_x \\\\Q_x\\end{pmatrix}=\\begin{pmatrix}\\sigma _Q+ \\frac{e^2 n^2 v_F^2 }{w}\\tau _\\text{MR} & \\frac{\\mu \\sigma _Q}{e T}-\\frac{e n s v_F^2 }{w}\\tau _\\text{MR} \\\\\\frac{\\mu \\sigma _Q}{e}-\\frac{e n s T v_F^2 }{w}\\tau _\\text{MR} & \\frac{\\mu ^2 \\sigma _Q}{e^2 T}+\\frac{s^2 T v_F^2 }{w}\\tau _\\text{MR}\\end{pmatrix}\\begin{pmatrix}E_x \\\\ - \\nabla _x T\\end{pmatrix}$ that takes into account the 
effects of a finite width in $y$ direction.", "Equation (REF ) in the main text is in the same form but with $\\tau _\\text{MR}$ replaced by $\\tau _\\text{MR}^\\text{avg}$ ." ], [ "8. More on the Hall voltage and inverse Nernst effect", "We show the total Hall voltage $\\Delta V$ and temperature gradient $\\Delta T$ as functions of $\\mu /k_BT$ and magnetic field $B_z$ .", "Furthermore, we compare the contributions from Lorentz and Hall viscous effects in Figs.", "REF and REF .", "In general, the critical magnetic field for $\\Delta V=0$ is small compared to the one for $\\Delta T = 0$ .", "It could be explained by comparing the coherent and incoherent contributions in the Lorentz contributions $\\Delta V_B$ and $\\Delta T_B$ , where the coherent and incoherent contributions refer to the parts related to the momentum density $P_x$ sourced by $E_1$ and the quantum critical current density $J_Q$ sourced by $E_2$ .", "While the former becomes dominant in the Fermi liquid regime, the latter is dominant in the Dirac regime.", "In Eq.", "(REF ), $\\Delta V_B$ contains both coherent and incoherent contributions that stay finite for general values of $\\mu /k_BT$ .", "In contrast, $\\Delta T_B$ in Eq.", "(REF ) contains the incoherent contribution only, which is suppressed in the Fermi liquid regime.", "Hence, we find $|\\Delta T_{\\eta _H}|\\gg |\\Delta T_B|$ in the Fermi liquid regime, as shown in Fig.", "REF , which allows to extract the pure Hall viscous effect from the total temperature gradient $\\Delta T$ .", "Figure: ΔT\\Delta T and log 10 |ΔT η H /ΔT B |\\log _{10}|\\Delta T_{\\eta _H}/\\Delta T_B| as functions of μ/k B T\\mu /k_BT and B z B_z at T=120KT=120{\\rm K}.", "The contours of 0 are labeled.", "The deep blue regime is clipped as ΔT<-10K\\Delta T<-10{\\rm K}.", "The competition between ΔT B \\Delta T_B and ΔT η H \\Delta T_{\\eta _H} mainly depends on the ratio μ/k B T\\mu /k_BT and ΔT η H \\Delta T_{\\eta _H} becomes dominant when μ≫k B T\\mu \\gg k_BT." ] ]
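As a closing numerical illustration of Sec. 7, the induced transverse fields follow from imposing $J_y=Q_y=0$ on the $y$-components of the Onsager relation; the sketch below assumes the $2\times 2$ response matrices are supplied as NumPy arrays indexed by $(x,y)$, and the function name is ours.

```python
import numpy as np

def transverse_response(sigma, alpha, kappa_bar, T, E_x, grad_x_T):
    """Solve the open-circuit conditions J_y = Q_y = 0 for E_y and -grad_y T.

    sigma, alpha, kappa_bar : (2, 2) response matrices of the Onsager relation
    T                       : temperature
    E_x, grad_x_T           : applied longitudinal electric field and temperature gradient
    """
    s = np.array([E_x, -grad_x_T])                       # applied sources (E_x, -grad_x T)
    # y-rows of the Onsager relation, split into induced (unknown) and applied parts
    A = np.array([[sigma[1, 1],     alpha[1, 1]],
                  [T * alpha[1, 1], kappa_bar[1, 1]]])
    b = np.array([sigma[1, 0] * s[0] + alpha[1, 0] * s[1],
                  T * alpha[1, 0] * s[0] + kappa_bar[1, 0] * s[1]])
    E_y, neg_grad_y_T = np.linalg.solve(A, -b)
    return E_y, neg_grad_y_T
```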
2207.10528
[ [ "Revisiting big bang nucleosynthesis with a new particle species : effect\n of co-annihilation with neutrons" ], [ "Abstract In big bang nucleosynthesis (BBN), the light matter abundance is dictated by the neutron-to-proton ($n/p$) ratio which is controlled by the standard weak processes in the early universe.", "Here, we study the effect of an extra particle species ($\\chi$) which co-annihilates with neutron, thereby potentially changing the ($n/p$) ratio in addition to the former processes.", "We find a novel interplay between the co-annihilation and the weak interaction in deciding the ($n/p$) ratio and the yield of $\\chi$.", "At the initial stage of BBN for the large co-annihilation strength ($G_D$) in comparison to the weak coupling ($G_F$), more neutrons are removed from the thermal bath modifying the ($n/p$) ratio from its standard evolution.", "We find that the standard BBN prediction is restored for $G_D/G_F \\lesssim 10^{-1}$, while the mass of $\\chi$ being much smaller than the neutron mass.", "When the mass of $\\chi$ is comparable to the neutron mass, we can allow large $G_D/G_F$ values, as the thermal abundance of $\\chi$ becomes Boltzmann-suppressed.", "Therefore, the ($n/p$) ratio is restored to its standard value via dominant weak processes in later epochs.", "We also discuss the viability of this new particle to be a dark matter candidate." ], [ "Introduction and Summary", "Big bang nucleosynthesis (BBN) is one of the great achievements of both the standard model (SM) of particle physics and hot big bang cosmology.", "The observed primordial abundances of light matter in the universe agree with the theoretical prediction of BBN to a good approximation.", "One of the important quantities observed in this context is the primordial abundance of helium, i.e.", "the mass fraction of helium, $Y_P = 0.2449 \\pm 0.0040$ [1], which is sensitive to the neutron-to-proton ($n/p$ ) ratio at the initial epoch of BBN.", "Neutrons and protons are in thermal equilibrium with the cosmic plasma after the QCD phase transition via SM weak processes.", "Weak processes become inefficient compared to the expansion of the universe at around temperature, $T=1\\,\\rm MeV$ .", "Subsequently the ($n/p$ ) ratio is frozen at a value around ($1/6-1/7$ )[2].", "This provides the initial condition for generating the light nuclei in correct abundance at later epochs.", "It is apparent that any alteration to this ratio due to some new physics phenomena would change the prediction of BBN to a great extent.", "The addition of a new particle species affects the ($n/p$ ) ratio broadly in two ways.", "Additional particle species contributing significantly to the energy density of the universe during BBN, changes the expansion rate of the universe which in turn, delays or hastens the freeze-out of neutrons.", "Chemical processes involving new particle species can alter the ($n/p$ ) ratio by removing or adding extra nucleons to the thermal bath.", "In scenario I, the inclusion of an extra relativistic species during BBN is accounted as the effective number of extra neutrino species, defined as the following.", "Neff =87 Where, $\\rho _\\chi $ is the energy density of new particle species, $\\chi $ and $\\rho _\\gamma $ is the photon counterpart.", "This parametrization works well for the species thermally decoupled from the SM plasma before BBN.", "In ref.", "[3], [4] the authors have shown that an additional particle species, strongly coupled either to photons or to neutrinos via elastic scatterings during 
BBN alters neutrino-to-photon temperature ratio ($T_\\nu /T_\\gamma $ ), thereby changing the standard weak interaction rates.", "As a result the freeze-out time of neutrons gets affected, in turn the final abundances of the light nuclei are altered.", "In scenario II, the standard BBN (SBBN) prediction can potentially be altered due to the infusion of extra nucleons either from the decay [5] or from the annihilation [6], [7] of new particles.", "In this article we have introduced a new particle species $\\chi $ , which co-annihilates with neutrons.", "In the co-annihilation process [8] detailed in Sec., a neutron and a $\\chi $ particle are removed from the thermal bath.", "Therefore, the ($n/p$ ) ratio and the yield of $\\chi $ can simultaneously be affected by the co-annihilation at the relevant epoch.", "The freeze-out value of the ($n/p$ ) ratio and the abundance of $\\chi $ are decided by the relative strength of the standard weak processes and the newly added co-annihilation, which we have discussed in great details in Sec.. Additionally, we have considered the possibility of $\\chi $ to be a viable dark matter (DM) candidate if it is stable over the cosmological time scale.", "We now summarize our findings regarding the modification of the SBBN scenario due to newly added co-annihilation process.", "The effect of co-annihilation depends on the initial relative abundances of the neutron and $\\chi $ in the cosmic soup.", "The number density of neutron is decided by the observed baryon asymmetry, whereas the number density of $\\chi $ is assumed to be thermal.", "Hence, the number density is decided by the mass of $\\chi $ ($m_\\chi $ ) and the temperature of the cosmic soup.", "For $m_\\chi < \\mathcal {O}(m_n)$ , $m_n$ being the neutron mass, $\\chi $ freezes out relativistically, therefore the ambient number density becomes too large compared to the number density of neutrons.", "Consequently occasional co-annihilations are enough to eliminate neutrons even after the freeze-out, altering the ($n/p$ ) ratio substantially from its SBBN value.", "This puts a constraint on the co-annihilation strength, i.e.", "$G_D/G_F \\lesssim 10^{-1}$ , $G_F$ being the weak interaction.", "The constraint on the $G_D/G_F$ is significantly relaxed for $m_\\chi \\sim \\mathcal {O}(m_n)$ , unlike the previous case.", "This is due to an interesting interplay between the weak interaction and the co-annihilation in deciding the evolution of the ($n/p$ ) ratio and the yield of $\\chi $ over different epochs.", "Initially ($T>>1\\,\\rm MeV$ ) there is a large number of $\\chi $ and for large $G_D/G_F$ ($\\gtrsim 10^2$ ) values the co-annihilation dominates over weak processes.", "Therefore, more neutrons are removed from the bath, thereby altering the ($n/p$ ) ratio from its SBBN value.", "However for such a large co-annihilation strength $\\chi $ remains longer in the thermal bath via the co-annihilation, consequently its number density becomes Boltzmann-suppressed.", "The co-annihilation rate per neutron becomes sub-dominant and the SBBN prediction of the ($n/p$ ) ratio is eventually satisfied via weak processes in the later epoch.", "In this scenario, $\\chi $ undergoes a non-relativistic freeze-out for which we find that the observed DM relic density is satisfied for $m_\\chi \\simeq 0.92 \\,\\rm GeV$ keeping SBBN predictions in tact.", "For $m_\\chi \\gtrsim 0.92\\,\\rm GeV$ , $\\chi $ contributes to the small fraction of the DM relic density.", "The universe becomes over-abundant with $\\chi $ for 
$m_\\chi \\lesssim 0.92\\,\\rm GeV$ .", "In that case, there must be some additional number-changing processes of $\\chi $ to alleviate the situation." ], [ "Coannihilation with neutron", "In the SBBN, the initial number densities of neutron and proton are controlled by the following electro-weak processes in which scattering processes freeze out at $ T\\sim 1 \\,\\rm MeV$ and the $(n/p)$ ratio becomes fixed.", "The frozen-out value of the $(n/p)$ ratio changes slightly due to occasional decays of neutron until the helium atom is produced, in which most of the neutrons are trapped.", "A.", "$n+\\nu _e \\rightleftharpoons p+e^-$ ,     B.", "$n+e^+\\rightleftharpoons p+\\bar{\\nu _e} $ ,     C. $n \\rightleftharpoons p+e^-+\\bar{\\nu _e} $ We alter the standard scenario by incorporating a co-annihilation process, i.e.", "$n+\\chi \\rightleftharpoons p+e^-$ , where $\\chi $ is considered as a charge-neutral Dirac fermion.", "Such a process can be motivated by the lepto-quark mediator models [9] in solving B-physics anomalies as well as solving the neutron decay anomaly [10].", "Moreover, the process considered here can be instrumental for detecting sub-GeV particles (DM or exotic neutrinos) on the beta-decaying nuclear targets [11], [12], [13].", "Apart from being a fundamental particle, $\\chi $ can well be a composite particle like neutron, which have been studied in Refs.", "[14], [15] and references therein.", "Here, we want to investigate the cosmological implication of the process being agnostic about the exact ultra-violet completion of the low energy effective theory, taken as LGD2(p an) (e a) where $\\Gamma _{a}$ in general, represents all possible independent combinations of Dirac matrices.As a proto-type example of our scenario, We assume the standard $V-A$ structure for $\\Gamma ^a$ , which is reminiscent of SM weak interaction.", "$\\psi _i$ is the fermion field for the $i^{th}$ particle.", "We want to study only the effect of the co-annihilation on the SBBN case, therefore we take $m_\\chi < m_n+m_p+m_e \\sim 1.88 \\,\\rm GeV$ to prevent the decay, i.e.", "$\\chi \\rightarrow \\bar{n}+ p + e^{-}$ on the kinematic ground.", "In addition, to exclude the scenario I discussed in Sec., $\\chi $ should not contribute significantly to the energy density of the universe during BBN.", "If $\\chi $ is a decoupled but internally thermalized relativistic species, the measurement of $ N_{eff}$ [16] sets an upper bound on the temperature ($T_\\chi $ ) of $\\chi $ as, (T/T) 0.6         at 68% C.L.", "If $\\chi $ is thermalised with the photon bath during BBN, the lower bound on $m_\\chi $ comes to be approximately $1.68 \\, \\rm MeV$ .", "For more detailed analysis on this frontier see Ref.", "[17], and references therein.", "In the subsequent analysis, we take the number density of $\\chi $ to be thermal to start with.", "Thus we get a very general mass window for a stable thermalised extra fermion species as, 1.68  MeV m1.88  GeV Now, the co-annihilation process is active once neutrons and protons are available in the bath after the QCD phase transition, $T \\sim 150\\,\\rm MeV$ .", "To check the thermalization condition for $\\chi $ via the co-annihilation with neutrons in the expanding universe, we need $\\Gamma (T) > H (T)$ .", "Here, $\\Gamma (T)$ is the co-annihilation rate per $\\chi $ particle and $H (T)$ is the Hubble constant at temperature $T$ .", "The interaction rate depends on the co-annihilation cross-section and the number density of neutrons at a certain epoch.", "The number 
density of neutrons is estimated from the observed baryon asymmetry, i.e.", "$\Delta_B = \frac{n_B - n_{\bar{B}}}{s} = \frac{n_n + n_p}{s} = Y_n + Y_p \simeq 10^{-10}$, where $Y_i=n_i/s$, $n_i$ is the number density of the $i^{th}$ particle and $s$ is the entropy density of the universe.", "For $T \gg 1\, \rm MeV$ the co-moving number density of neutrons can well be taken as $Y_n=Y_p=\Delta _B/2$.", "The s-wave contribution to the thermally averaged co-annihilation cross-section is given by $\langle \sigma v \rangle \simeq \frac{G_D^2}{2\pi}\left(m_\chi^2+2m_\chi m_n\right)$, in which we have assumed the electron to be massless and the neutron and proton to have identical masses.", "Assuming the usual radiation-dominated universe for the relevant epoch, we arrive at an approximate condition for thermalization of $\chi$ in the non-relativistic regime, $G_D \gtrsim 4\times 10^{-4}\,{\rm GeV}^{-2}\left(\frac{1\,{\rm GeV}^2}{m_\chi^2+2m_\chi m_n}\right)^{1/2}\left(\frac{0.1\,{\rm GeV}}{T}\right)^{1/2}\left(\frac{10}{g_*}\right)^{1/2}$.", "To study the neutron freeze-out in this modified scenario we need to write Boltzmann equations for neutrons, protons and $\chi$.", "Due to the high entropy per baryon, the light nuclei are not synthesized immediately after the neutron freeze-out, e.g.", "the synthesis of $He^4$ takes place around $T=0.1\, \rm MeV$.", "Hence, for $T\gtrsim 0.1 \,\rm MeV$ the relevant processes are only the inter-conversions between neutrons and protons via the three weak processes (A,B,C) and the newly added co-annihilation.", "Hence, the total co-moving number density of baryons is conserved, i.e.", "$\frac{dY_p}{dx}+\frac{dY_n}{dx}=0$ for $T \gtrsim 0.1 \,\rm MeV$, where $x=m_n/T$.", "Now, defining the ($n/p$) ratio as $R=Y_n/Y_p$ and $Y_\chi$ as the co-moving number density of $\chi$, we write the Boltzmann equations using Eq.", "as follows.", "$\frac{dR}{dx} = \frac{1+R}{Hx}\left[\lambda(p\rightarrow n) - \lambda(n\rightarrow p)\,R\right] - \frac{(1+R)\,s\,\langle\sigma v\rangle}{Hx}\,R\left[Y_\chi - \frac{R_0}{R}\,Y^0_\chi\right]$, $\quad \frac{dY_\chi}{dx} = -\frac{s\,\langle\sigma v\rangle}{Hx}\,\frac{\Delta_B\,R}{1+R}\left[Y_\chi - \frac{R_0}{R}\,Y^0_\chi\right]$, where $Y^0_\chi$ is the equilibrium co-moving number density for $\chi$, $R_0=Y^0_n/Y^0_p \simeq e^{-Q/T}$ and $Q=m_n-m_p=1.293\, \rm MeV$.", "$\lambda (n\rightarrow p)$ is the neutron to proton conversion rate via SM weak processes, defined assuming zero chemical potential for neutrinos as [18], $\lambda(n\rightarrow p) = \frac{1}{\tau}\int_1^\infty d\epsilon\left[\frac{\epsilon\sqrt{\epsilon^2-1}\,(q+\epsilon)^2}{\left(e^{\epsilon m_e/T}+1\right)\left(e^{-(q+\epsilon)m_e/T_\nu}+1\right)} + \frac{\epsilon\sqrt{\epsilon^2-1}\,(\epsilon-q)^2}{\left(e^{-\epsilon m_e/T}+1\right)\left(e^{m_e(\epsilon-q)/T_\nu}+1\right)}\right]$, where $\tau$ is the neutron lifetime, $T_\nu$ is the neutrino temperature, and $q=(m_n-m_p)/m_e=2.53$.", "The proton to neutron conversion rate, $\lambda (p\rightarrow n)$, is obtained by replacing $q$ with $-q$ in the expression for $\lambda (n\rightarrow p)$.", "In Eq.", "$T_\nu$ can be written as a function of the photon temperature $T_\gamma = T$, and $T$ is replaced by the scaled temperature variable $x$.", "It is apparent from the above equations that the ($n/p$) ratio and the number density of $\chi$ are inter-related due to the added co-annihilation.", "In particular, $Y_\chi$ follows its equilibrium form ($Y^0_\chi$) as long as $R = R_0$, i.e.", "the ($n/p$) ratio maintains its equilibrium value."
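As an illustration of how the coupled equations above can be handled in practice, the following is a minimal numerical sketch (assuming SciPy; the weak rates, the equilibrium yield of $\chi$, and the benchmark values of $m_\chi$ and $G_D$ are simplified placeholders rather than the exact inputs used in the paper):

```python
# Schematic integration of the coupled (n/p)-ratio and Y_chi Boltzmann equations.
# All rates are simplified stand-ins; the paper uses the full weak rates and the
# thermally averaged cross-section quoted in the text.  A sketch, not production code.
import numpy as np
from scipy.integrate import solve_ivp

m_n, Q, G_F = 0.9396, 1.293e-3, 1.166e-5      # GeV, GeV, GeV^-2
Delta_B     = 1e-10                           # baryon asymmetry, Y_n + Y_p
m_chi, G_D  = 1.0, 1e2 * G_F                  # assumed benchmark point

def hubble(T, gstar=10.0):                    # radiation-dominated H(T)
    return 1.66 * np.sqrt(gstar) * T**2 / 1.22e19

def entropy(T, gstar=10.0):                   # s = (2 pi^2 / 45) g_* T^3
    return 2.0 * np.pi**2 / 45.0 * gstar * T**3

def sigma_v():                                # s-wave <sigma v> as in the text
    return G_D**2 / (2.0 * np.pi) * (m_chi**2 + 2.0 * m_chi * m_n)

def Ychi_eq(T):                               # equilibrium yield of a Dirac fermion (g = 4)
    x = m_chi / T
    n_over_T3 = 4.0 * (x / (2.0 * np.pi))**1.5 * np.exp(-x) if x > 3 else 0.365
    return n_over_T3 * T**3 / entropy(T)

def lam_np(T):                                # toy n -> p weak rate, ~ G_F^2 T^5
    return 1.2 * G_F**2 * T**5
def lam_pn(T):
    return lam_np(T) * np.exp(-Q / T)

def rhs(x, y):
    R, Ychi = y
    T = m_n / x
    H, s, R0 = hubble(T), entropy(T), np.exp(-Q / T)
    coll = Ychi - (R0 / R) * Ychi_eq(T)       # vanishes in chemical equilibrium
    dR = (1 + R) / (H * x) * (lam_pn(T) - lam_np(T) * R) \
         - (1 + R) * s * sigma_v() / (H * x) * R * coll
    dY = -s * sigma_v() / (H * x) * Delta_B * R / (1 + R) * coll
    return [dR, dY]

x0 = m_n / 0.15                               # start around T ~ 150 MeV
sol = solve_ivp(rhs, (x0, 5000.0), [np.exp(-Q / 0.15), Ychi_eq(0.15)],
                method="LSODA", rtol=1e-8, atol=1e-15)
print("R(x=5000) =", sol.y[0, -1], "  Y_chi(x=5000) =", sol.y[1, -1])
```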
], [ "Study of the evolution equations", "We shall now study the evolution of $R$ and $Y_\chi$ by solving Eq.", "numerically, keeping $G_D$ and $m_\chi$ as free parameters.", "The evolution equations for our scenario are solved up to $x=5000$, which corresponds to $T\simeq 0.2 \,\rm MeV$.", "As discussed earlier, around $T=0.1 \,\rm MeV$ other nuclei start building up significantly, which would require including all nuclear reactions in our Boltzmann equations.", "The temperature range of the current study is adequate to capture the general features of the co-annihilation in the SBBN scenario.", "It is apparent that in a co-annihilation-type process, the reaction rate per particle is different for each of the colliding particles.", "Therefore, the co-annihilation process is very sensitive to the initial relative abundances of the two particle species, which are decided by $m_\chi$.", "This is because the number density of $\chi$ is thermal, whereas the neutron number density is set by the baryon asymmetry.", "To demonstrate the effect of the new physics on BBN, we take two benchmark points (BPs) for $m_\chi$, i.e.", "$m_\chi =50 \,\rm MeV$ and $m_\chi = 1.0\,\rm GeV$.", "Figure: Evolution of the ($n/p$) ratio ($R$) (left panel) and the co-moving number density ($Y_\chi$) of the additional species (right panel), shown for $m_\chi = 50 \,\rm MeV$, varying the scaled co-annihilation strength $G_D/G_F$.", "BP - I: Relativistic freeze-out of $\chi$.", "In Fig.REF we show the evolution of $R$ (left panel) and $Y_\chi$ (right panel) for $m_\chi = 50 \,\rm MeV$ for different values of $G_D$, scaled by $G_F=1.166\times 10^{-5} \,\rm GeV^{-2}$.", "In the right panel, the black dashed line denotes the equilibrium number density ($Y^0_\chi$) of $\chi$ and the other lines correspond to the yields for different $G_D/G_F$ values.", "We find that the additional species $\chi$ freezes out approximately at its equilibrium value, almost independent of the variation of $G_D/G_F$.", "For instance, for $G_D/G_F$ varying from $10^{-2}$ to $10^{3}$, the freeze-out temperature is approximately the same, $T_F \simeq 23\,\rm MeV$, which in terms of the scaled temperature becomes $m_\chi /T_F \simeq 2.2$.", "This indicates the relativistic freeze-out of $\chi$, as $m_\chi /T_F \lesssim 3$ [2].", "In the left panel, the black dashed line denotes the equilibrium value of the ($n/p$) ratio ($R_0$) and the cyan dashed line corresponds to that of the SBBN scenario, i.e.", "$G_D/G_F=0$.", "As long as the co-annihilation keeps $\chi$ in chemical equilibrium, the $(n/p)$ ratio follows the equilibrium value ($R_0$) because there is no extra loss of neutrons via the co-annihilation.", "After the freeze-out of $\chi$, the ($n/p$) ratio departs from its standard evolution for $G_D/G_F= \lbrace 10, 10^2, 10^3\rbrace$, whereas for $G_D/G_F= 10^{-2}$ (red solid line in the left panel) it is unaltered from the SBBN case.", "This feature can be understood from a comparison between the weak interaction rate ($\Gamma _w$) and the co-annihilation rate ($\Gamma _c$) per neutron, $\frac{\Gamma_c}{\Gamma_w} \sim \left(\frac{G_D}{G_F}\right)^2$.", "From Eq.REF we note that for $G_D/G_F= 10^{-2}$ the weak interaction dominates over the co-annihilation throughout the evolution history of neutrons and $\chi$.", "For $G_D/G_F = \lbrace 10, 10^2, 10^3\rbrace$ neutrons disappear due to the dominant co-annihilation, which eliminates more neutrons than in the SBBN scenario.", "The disappearance of neutrons takes place even
after the freeze-out of $\chi$.", "This is due to the large number of $\chi$ particles present in the cosmic soup compared to neutrons.", "In fact, in this case $Y_\chi \simeq 10^{-2}$ and $Y_n \simeq 10^{-10}$, which implies that there are $10^8$ $\chi$ particles available per neutron in the bath.", "Therefore, occasional co-annihilations are sufficient to modify the number density of neutrons drastically in later epochs.", "To sum up, for the relativistic freeze-out case, a co-annihilation strength larger than the weak interaction strength reduces the ($n/p$) ratio by two orders of magnitude from the SBBN value at a temperature $T\simeq 10\, \rm MeV$ (see the left panel of Fig.REF ).", "This eventually jeopardizes the predictions for the light nuclei abundances, even though the weak interaction is still active.", "Figure: Evolution of the neutron-to-proton ratio ($R$) (left panel) and the co-moving number density ($Y_\chi$) of the additional species (right panel), shown for $m_\chi =1.0 \,\rm GeV$, varying the scaled co-annihilation strength $G_D/G_F$.", "BP - II: Non-relativistic freeze-out of $\chi$.", "Similar to the previous case, we now show the evolution of $R$ (left panel) and $Y_\chi$ (right panel) in Fig.REF for $m_\chi = 1.0\, \rm GeV$ for different values of $G_D/G_F$.", "Unlike the previous case, the freeze-out of $\chi$ depends on the co-annihilation strength.", "For example, the freeze-out temperature of $\chi$ for $G_D/G_F = 10^2$ (red solid line) is $T_F \simeq 50 \,\rm MeV$, whereas for $G_D/G_F= 200$ (magenta dashed line for the corresponding yield) the freeze-out happens a bit later, i.e.", "at $T_F \simeq 20 \,\rm MeV$.", "Note that for $m_\chi = 1\,\rm GeV$ the freeze-out of $\chi$ happens in the non-relativistic regime, as $m_\chi /T_F \gg 3$ in our example scenarios.", "In particular, there is a non-trivial relation between the freeze-out temperature and the co-annihilation strength, i.e.", "$T_F \propto G^{-2}_D$, evident from Eq..", "This should be contrasted with the freeze-out temperature in the standard WIMP scenario, for which the dependence on the coupling strength is rather weak, i.e.", "a logarithmic dependence [19].", "The difference stems from the fact that the freeze-out condition is determined by the baryon asymmetry in our case, whereas in the WIMP scenario it is decided by the thermal densities with vanishing chemical potentials.", "For $G_D/G_F=\lbrace 500, 10^3\rbrace$ (blue dotted and orange dot-dashed line respectively) the co-annihilation keeps $Y_\chi$ in its equilibrium form ($Y^0_\chi$, black dashed line) for the relevant epoch.", "In the left panel, for $G_D/G_F = 10^2$ (red solid line), $R$ freezes out at a slightly smaller value than in the SBBN scenario, i.e.", "$G_D/G_F =0$ (cyan dashed line).", "For $G_D/G_F \,(>10^2)$ (see magenta dashed, blue dotted and orange dot-dashed lines) the freeze-out value of $R$ agrees with the SBBN prediction.", "This is in sharp contrast to the relativistic case, where large values of $G_D/G_F$ destroy the SBBN prediction by changing the ($n/p$) ratio significantly.", "In addition, for each of the parameter values there is a dip seen in the ($n/p$) ratio at the initial epoch.", "This is a non-trivial feature emerging from a novel interplay between the weak interaction and the co-annihilation throughout the evolution history of neutrons and $\chi$.", "To understand the situation, let's look at the relative strengths of the
co-annihilation and the weak interaction rate per neutron at the initial epoch, which is given by the following expression, noticeably different from Eq.REF .", "$\frac{\Gamma_c}{\Gamma_w} \sim \left(\frac{G_D}{G_F}\right)^2\left(\frac{m_\chi^2+2m_\chi m_n}{T^2}\right)\left(\frac{m_\chi}{T}\right)^{3/2}e^{-m_\chi/T}$.", "At around $T \simeq 90\,\rm MeV$, for $G_D/G_F=10^2$, $\Gamma _c/\Gamma _w \simeq 10^3$, i.e.", "the co-annihilation rate per neutron is larger by several orders of magnitude than the standard weak rate.", "Therefore, at this epoch more neutrons are removed from the thermal bath than in the SBBN scenario via the dominant co-annihilation.", "As a result, the ($n/p$) ratio deviates from its standard evolution.", "However, the removal of neutrons does not continue indefinitely, because the weak processes soon become dominant.", "For large $G_D/G_F$ ($> 10^2$) values, $Y_\chi$ follows its equilibrium form for a longer period of time, and thereby the number density of $\chi$ becomes Boltzmann-suppressed.", "In other words, the available $\chi$ particles per neutron become too scarce to facilitate co-annihilation any further in the later epoch, e.g.", "at $T \sim 40\,\rm MeV$ for $G_D/G_F=10^2$, $\Gamma _c/\Gamma _w \simeq 10^{-2}$.", "Now, the small ($n/p$) ratio indicates that there are more protons than neutrons in the thermal bath.", "Therefore, the proton-to-neutron conversion rate is enhanced compared to the neutron-to-proton conversion rate.", "Subsequently, the once-`deceased' neutrons reincarnate via the fast weak processes, restoring the standard evolution of the ($n/p$) ratio.", "Note that the occasional co-annihilation of neutrons after the freeze-out of $\chi$ cannot change the ($n/p$) ratio substantially, unlike in the relativistic case, because of the negligible relative abundance of $\chi$ to neutrons ($Y_\chi /Y_n$) in the bath.", "However, for $G_D/G_F = 10^2$, at the freeze-out temperature $Y_\chi /Y_n \sim \mathcal {O}(1)$, which enables a few occasional collisions, thereby changing the freeze-out value of $R$ by a small fraction.", "In passing, we also note that during the removal and reincarnation of neutrons, $Y_\chi$ also deviates slightly from its equilibrium value due to the correlated chemical phases of $\chi$ and neutrons via the co-annihilation, as is apparent from Eq..
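As a rough numerical cross-check of this crossover (a sketch in which order-one prefactors are dropped, so only the scaling and the orders of magnitude are meaningful; the masses correspond to BP-II):

```python
import numpy as np

m_chi, m_n, GD_over_GF = 1.0, 0.9396, 1e2     # GeV, GeV, benchmark coupling ratio

def gamma_ratio(T):
    """Schematic Gamma_c / Gamma_w per neutron with order-one prefactors dropped."""
    return (GD_over_GF**2 * (m_chi**2 + 2 * m_chi * m_n) / T**2
            * (m_chi / T)**1.5 * np.exp(-m_chi / T))

for T in (0.09, 0.04):                        # GeV
    print(f"T = {1e3 * T:.0f} MeV : Gamma_c/Gamma_w ~ {gamma_ratio(T):.0e}")
# The ratio falls from ~10^3 at T ~ 90 MeV to well below 1 at T ~ 40 MeV,
# consistent with the orders of magnitude quoted in the text above.
```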
We can now summarize our discussion regarding the interplay between the weak interaction and the co-annihilation for both cases, i.e.", "the relativistic and the non-relativistic freeze-out of $\chi$, using the instructive diagram shown in Fig.REF .", "The freeze-out value of the ($n/p$) ratio is denoted by $R_F$, which is calculated at $m_n/T_F = 5000$, varying $G_D/G_F$ continuously over several orders of magnitude for different $m_\chi$ values.", "In the SBBN scenario, $R_F(\rm BBN)\approx 1/7$ [20], [21], denoted by the black dashed line in Fig.REF, including the effect of neutron decay.", "As suggested by the previous two example scenarios, the mass of $\chi$ controls two different features in determining $R_F$.", "Figure: The freeze-out value of the ($n/p$) ratio, $R_F$, shown as a function of the scaled co-annihilation strength $G_D/G_F$ for different values of $m_\chi$.", "For $m_\chi < \mathcal {O}(m_n)$, there is a large number of $\chi$ compared to neutrons, which makes the weak processes comparatively inefficient most of the time.", "Thus $R_F$ deviates from its SBBN value for $G_D/G_F \gtrsim 10^{-1}$.", "This is illustrated for two masses, i.e.", "$m_\chi =100\, \rm MeV$ (magenta dot-dashed line) and $m_\chi =50\, \rm MeV$ (orange solid line) in Fig.REF .", "We note that for $G_D/G_F \approx 10^2$, $R_F \approx 10^{-11}$, which completely ruins the SBBN predictions.", "For $m_\chi \sim \mathcal {O}(m_n)$, $R_F$ remains at its SBBN value in two regions, i.e.", "for $G_D/G_F \lesssim 10^{-1}$ and for $G_D/G_F \gtrsim 10^{2}$.", "The SBBN prediction is altered only in the intermediate region, i.e.", "$10^{-1}\lesssim G_D/G_F \lesssim 10^{2}$.", "The situation is depicted through two illustrative masses, i.e.", "$m_\chi =1\,\rm GeV$ (red dashed line) and $m_\chi =1.5 \,\rm GeV$ (blue dotted line).", "For $G_D/G_F \lesssim 10^{-1}$ the weak interaction dominates over the co-annihilation throughout the evolution history, and therefore the SBBN scenario is trivially satisfied.", "For $G_D/G_F \gtrsim 10^{2}$ the relative abundance of $\chi$ to neutrons ($Y_\chi /Y_n$) becomes too small to modify $R_F(\rm BBN)$ via the interplay between the weak and the co-annihilation processes discussed earlier.", "In the intermediate region, the co-annihilation cannot keep $Y_\chi$ in equilibrium, as indicated in Eq..", "Therefore, $Y_\chi /Y_n$ becomes large enough (but not as large as in the case of $m_\chi < \mathcal {O}(m_n)$) to ruin the BBN predictions.", "To sum up, when $m_\chi < \mathcal {O}(m_n)$, the BBN prediction allows only small values of $G_D/G_F$.", "The parameter space for $G_D/G_F$ is enlarged for $m_\chi \sim \mathcal {O}(m_n)$, as both small and large values of $G_D/G_F$ can reproduce the BBN predictions." ], [ "Relic density constraint", "Now, we shall study the possibility of $\chi$ being a DM candidate while remaining consistent with the BBN predictions.", "In Fig.REF we show contours of constant relic density of $\chi$ in the $G_D/G_F - m_\chi$ plane, where the red solid line represents the central value of the observed DM relic density, i.e.", "$\Omega _\chi h^2 =0.12$ [22].", "The grey shaded region in Fig.REF is allowed by the BBN prediction, i.e.", "$R_F=R_F(BBN)$.", "The kinematic bound on $m_\chi$, indicated by the black dashed line in the left panel, ensures the stability of $\chi$, as already shown in Eq..
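For orientation, the relic-density contours discussed next follow from the frozen-out yield through the standard relation $\Omega_\chi h^2 = m_\chi Y_\chi^\infty s_0/(\rho_c/h^2) \simeq 2.7\times 10^{8}\,(m_\chi/{\rm GeV})\,Y_\chi^\infty$, assuming the present-day entropy density $s_0 \simeq 2891\,{\rm cm}^{-3}$ and $\rho_c/h^2 \simeq 1.05\times 10^{-5}\,{\rm GeV\,cm}^{-3}$; reproducing $\Omega_\chi h^2 \simeq 0.12$ at $m_\chi \simeq 0.92\,{\rm GeV}$ then corresponds to a frozen-out yield of roughly $Y_\chi^\infty \approx 5\times 10^{-10}$ (an illustrative number implied by this relation, not a value quoted in the paper).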
We note that for $m_\\chi \\simeq 0.92 \\,\\rm GeV$ , the new particle species contributes to the whole of the observed DM relic density, simultaneously satisfying the BBN constraint.", "For $m_\\chi \\gtrsim 0.92 \\,\\rm GeV$ , the energy density of $\\chi $ contributes to the small fraction of the total DM density, e.g.", "see the magenta dashed and blue dot-dashed lines representing $\\Omega _\\chi h^2=10^{-3}$ and $\\Omega _\\chi h^2=10^{-6}$ respectively.", "For $m_\\chi \\lesssim 0.92\\,\\rm GeV$ , the universe would be over-abundant with $\\chi $ , as shown by the black and orange dotted lines (in the right panel) denoting $\\Omega _\\chi h^2 =10$ and $\\Omega _\\chi h^2=10^2$ respectively.", "Figure: Contours in the G D /G F -m χ G_D/G_F - m_\\chi plane represent the relic densities of χ\\chi particle, where the central value of the observed DM relic density, Ω χ h 2 =0.12\\Omega _\\chi h^2 =0.12 is achieved for the parameter points on the red solid line.", "The right panel is the zoomed in version of the left panel focusing on the parameter region, G D /G F ∈[100,250]G_D/G_F \\in [100,250] and m χ ∈[0.91,0.93] GeV m_\\chi \\in [0.91,0.93]\\,\\rm GeV.Therefore, the additional particle species, co-annihilating with neutron having $m_\\chi \\lesssim 0.92\\,\\rm GeV$ is problematic from the DM relic density constraint, which calls for some extra number-changing processes to further reduce its number density.", "In passing, we notice that the additional species becomes a DM candidate only when its mass is near the neutron mass.", "This is a kind of generic feature in determining the relic via co-annihilations in which the mass difference between two particle species should be comparable to the temperature at the relevant epoch [8].", "In particular, the freeze-out temperature of $\\chi $ in the non-relativistic scenario turns out to be $T_F \\sim (m_n-m_\\chi ) \\sim \\mathcal {O}(10)\\,\\rm MeV$ for $m_\\chi = 0.92 \\,\\rm GeV$ .", "It is clear that this condition is not satisfied for the relativistic freeze-out scenario when the mass of $\\chi $ is away from the neutron mass." ], [ "Acknowledgment", "We thank Satyanarayan Mukhopadhyay for various helpful conversations since the outset of this work.", "We also thank Rohan Pramanick and Utpal Chattopadhyay for computational help and Sougata Ganguly, Avirup Ghosh for interesting discussions.", "This work is supported by the Institute Fellowship provided by the Indian Association for the Cultivation of Science (IACS), Kolkata." ] ]
2207.10499
[ [ "SME Gravity: Structure and Progress" ], [ "Abstract This proceedings contribution outlines the current structure of the gravity sector of the Standard-Model Extension and summaries recent progress in gravitational wave analysis." ], [ "Lorentz violation in gravity", "The gravitational Standard-Model Extension (SME) [1], [2], [3] provides a field theoretic test framework for Lorentz symmetry.", "Originally motivated by the search for new physics at the Planck Scale,[4] the search for Lorentz violation using the SME continues to be an active and growing area of research several decades later, as illustrated by the scope of both these proceedings and the Data Tables for Lorentz and CPT Violation.", "[5] In terms of structure, the SME can be thought of as a series expansion about known physics as the level of the action.", "The additional terms are constructed from conventional fields coupled to coefficients for Lorentz violation, which can be thought of as providing directionalities to empty spacetime.", "The mass dimension of the additional operators labels the order in the expansion.", "[6] The leading terms, which are of mass dimension $d=3,4$ , form a limit known as the minimal SME.", "In the gravity sector, a variety of complementary limits have been explored in the context of theory, phenomenology, and experiment.", "The goal of my contribution to the CPT'19 proceedings was to summarize the relations among, and the status of, these efforts, [7] in part through the creation of Fig.", "REF , which remains a useful description of much of the gravity-sector structure, though additional work has been done in many of its areas.", "For additional discussion of Fig.", "REF , see Ref.", "jtcpt19.", "The evolution of the field since CPT'19 has led to an understanding of the content of Fig.", "REF as being but one facet of an expanded array of areas to be explored via the framework of Ref.", "backgrounds.", "Figure REF highlights the addition of this prequel relative to the 2019 structure.", "Figure: Structure of the gravity sector as of CPT'19.Light gray boxes show the various limits of this sectorthat had been explored to this point.Work that builds out the search in the respective limitsappears in dark gray boxes.Theoretical contributions are shown in dashed boxes.While the structure shown is the same as seen in CPT'19,references have been updated to reflect the additional work doneon a number of nodes.Figure: Additional progress in SME gravity as of CPT'22.Reference backgrounds can be thought of as a prequel to the work outlined in Fig.", "1as well as opening new avenues of investigation.While an effort has been made to point the reader to key works, those that are recent, and those discussed elsewhere in these proceedings, it is not possible to address all of the work done in this area in this short summary.", "We refer the reader to other contributions to these proceedings along with Refs.", "SME3,datatables for additional discussion and references." 
], [ "Reach, Separation, and Gravitational Waves", "As can be seen from the data tables,[5] experiments have achieved a high level of sensitivity to coefficients for Lorentz violation and have explored a large breadth of coefficient space.", "In performing such analysis, the question of which and how many coefficients to extract measurements for using a given data set naturally arrises.", "Practical progress dictates that experimental data be used to extract likelihood bounds on the coefficients for Lorentz violation in the context of a model involving a subset of the full (and in general infinite) coefficient space of the SME.", "This highlights the nature of the SME as a test framework rather than a model.", "One popular approach is to consider each SME coefficient one-at-a-time, perhaps re-using a data set to attain likelihood bounds on multiple coefficients.", "This approach is sometimes referred to as a maximum reach approach[25] because it characterizes the maximum reach that the experiment can attain for the coefficient.", "When these measurements are consistent with zero, they provide a good order-of-magnitude sense of how big the particular Lorentz-violating effect could be in nature in the absence of a model involving a fine-tuned cancelation of the effects of multiple coefficients in the observable under consideration.", "When data permits, it is also common to obtain simultaneous measurements of all or multiple coefficients of the same observer tensor object, or even several tensor coefficients from a given sector at a given mass dimension.", "This is sometimes referred to as a coefficient separation procedure[25] and it can more definitively exclude a larger set of models.", "In my CPT'19 proceedings contribution,[7] I highlighted two key expansions in experimental reach that had recently emerged at that time: the MICROSCOPE mission in the context of matter-gravity couplings[26] and multimessenger astronomy in the form of gravitational wave (GW) event GW170817 and gamma ray burst GRB 170817A in the context of the minimal gravity sector.", "[27] While little coefficient separation had been done at that time in the context of GW studies, the now extensive catalog of GW events[28], [29], [30] has led to a blossoming of these studies.", "By taking advantage of the arrival time at the different GW detectors situated around the Earth, simultaneous measurements of 4 of the 9 minimal gravity-sector coefficients have been achieved[31] using data from the first GW catalog.", "[28] A simultaneous extraction of all 9 minimal gravity sector coefficients using all suitable GW events released to date[28], [29], [30] is in preparation.", "[32] The search for birefringence and dispersion of gravitational waves based on the dimension 5 and 6 coefficients has also now been the focus of numerous studies.", "Dimension 5 effects have been incorporated into into a version of the Laser Interferometer GW Observatory (LIGO) Algorithm Library suite LALSuite, and a sensitivity study has been performed using this implementation.", "[17] Results from the body of recent gravitational wave events based on this implementation are in preparation.", "[33] An implementation of dimension 5 and 6 effects in the Bilby analysis code has also generated results[19] based on recent GW events.", "Similar work has previously been done based on the duration of LIGO/Virgo chirps.", "[18] Additional studies of GW Birefringence that are yet to incorporate direction dependence have also been done,[34] and isotropic studies of 
dispersion are ongoing.", "[35]" ], [ "Acknowledgments", "J.T.", "is supported by NSF grant PHY1806990 to Carleton College and thanks M. Seifert for useful conversations." ] ]
2207.10515
[ [ "A cost effective eye movement tracker based wheel chair control\n algorithm for people with paraplegia" ], [ "Abstract Spinal cord injuries can often lead to quadriplegia in patients, limiting their mobility.", "Wheelchairs could be a good proposition for patients, but most of them operate either manually or with the help of electric motors operated with a joystick.", "This, however, requires the use of hands, making it unsuitable for quadriplegic patients.", "Control of eye movement, on the other hand, is retained even by people who undergo brain injury.", "Monitoring the movements of the eye can be a helpful tool in generating control signals for the wheelchair.", "This paper presents an approach to converting signals obtained from the eye into meaningful control signals by controlling a bot that imitates a wheelchair.", "The overall system is cost-effective and uses simple image processing and pattern recognition to control the bot.", "An Android application is developed, which could be used by the patient's aide for more refined control of the wheelchair in the actual scenario." ], [ "Introduction", "According to a study conducted by the American Spinal Injury Association in 2016, a total of 12500 people are affected by spinal cord injuries of varying degrees [1].", "According to their estimates, the global yearly average is anywhere between 1.3 and 2.6 hundred thousand cases.", "Many of these injuries lead to quadriplegia, which severely affects the mobility of a person.", "Apart from trauma, quadriplegia can be inherited [2].", "A motorised wheelchair appropriately controlled by signals generated by the patient's body could be an effective solution for this problem.", "Control signals such as the electromyogram (EMG) [3], electroencephalogram (EEG) [4], or electrooculogram (EOG) [5], generated from the quadriplegic patient's eyes, face or head, could be used to control this device.", "However, capturing and processing such minute signals is often difficult, and the sensing and signal conditioning circuits are highly sophisticated and expensive, making affordability a big concern.", "In a country like India, where the total number of physically disabled people was found to be close to 5.36 million in the year 2016, such eye tracking devices can give them a sense of self-reliance [6].", "An alternative to using physiological signals for wheelchair control is to track the eye movement of the patient.", "The history of eye tracking dates back to the mid-19th century.", "Some of the earlier applications of eye tracking were text detection, text reading, measuring the effectiveness of advertisements, and controlling devices using the eye [7].", "Nowadays, eye tracking is used in scientific research, augmented reality, the medical field and smart devices [8], [9], [10].", "The system requires simple cameras clubbed with image processing and pattern recognition algorithms and can be implemented on commonly available processors.", "This also brings down the cost of the overall control system in comparison to those which use physiological signals [11], [12].", "This project aims to build a reliable eye tracking model and implement the hardware.", "This model should be able to accurately and efficiently track the eyeball movement and assign commands to each unique movement, to move a bot/wheelchair [13], [14], [15].", "This model can be helpful in different fields.", "The flowchart shown in Fig.", "REF outlines our approach to the implementation of the model.", "Figure: Flowchart" ], [ "Eye Tracking", "This section outlines the eye tracking pipeline summarized in the flowchart above: the camera captures images of the eye, the frames are preprocessed, and a trained model predicts the eye orientation, which is then converted into control commands for the bot." ], [ "Hardware Setup", "Raspberry Pi Camera: It is a 5 megapixel camera module which was set to record video at 60 frames per second (fps). It has been highly optimized to work with the Raspberry Pi processor.", "It has extremely fast transmission speeds due to its Parallel-In-Parallel-Out (PIPO) set of ribbon cables.", "The RPi camera is mounted at a distance of 9 cm to 12 cm in front of the eye for the best focus.", "This RPi camera then captures the movements of the pupil of the eye and sends them to the RaspberryPi for processing.", "RaspberryPi: It is a very powerful and low-cost micro-processor.", "It has a 1GB RAM capacity and supports many of the common Python libraries.", "The RaspberryPi is used to process the incoming video frames from the RPi camera.", "The incoming frame from the RPi camera is a three-channel colour image.", "This coloured image is first converted to greyscale and then resized to a 128px by 128px image.", "This is done to reduce the overall computational time.", "The resized greyscale image is then passed on to the image processing model sitting on the RPi." ], [ "Image Processing and control command generation", "The image processing model we used is a convolutional neural network trained on a self-created dataset.", "It receives the image from the RPi local host, processes it and predicts the orientation of the eye.", "The predicted orientation is passed on to Firebase.", "The dataset was created in-house using pictures of around 10 people.", "The dataset distribution is as follows: Down: 2045 images, Up: 2475 images, Left: 2858 images, Right: 2797 images, Straight: 3058 images.", "The images are of various sizes in the RGB (colour) format.", "Figure: CNN Flowchart" ], [ "Data Preprocessing", "In the data preprocessing phase, all the images are cropped to focus only on the eye and are converted to greyscale.", "The images are further resized to 128 by 128 pixels to have a uniform size throughout the dataset."
], [ "Neural Network architecture", "As the flowchart depicts, the neural network consists of four blocks of a convolutional layer, a dropout layer and a max-pooling layer, followed by a flattening layer and a fully connected dense layer.", "The convolutional layers have 32, 64, 128 and 128 filters in successive layers.", "The kernel size of these filters is fixed to 3 by 3.", "Dropout layers with a dropout rate of 0.4 are added to reduce over-fitting.", "The pooling window in the max-pooling layers is fixed to 2 by 2.", "A flatten layer flattens all the inputs it receives, which are then fed to a fully connected dense layer with 256 neurons.", "The output of this layer is fed to the final output layer with 5 neurons, which predicts the orientation.", "The convolutional layers and the dense layer use rectified linear units as the activation function.", "The output layer uses softmax to predict the probabilities that the input belongs to a certain class.", "Figure: Rectified Linear Unit", "Figure: Softmax Function" ], [ "Training Phase", "In the training phase, the processed images are first loaded.", "The entire dataset is then divided randomly into training and validation sets.", "The validation set is approximately 30% of the total dataset.", "The model is trained with the Adam optimizer and categorical cross-entropy as the loss function for 25 epochs.", "The trained model is saved.", "An Android application is built to enable manual override of the bot in certain situations.", "It has options to update the direction and even the speed of the bot on Firebase.", "It also allows the user to give voice commands.", "The Android application acts as a safety precaution in case the RaspberryPi is unable to send controls and communicate with Firebase." ], [ "iDroid", "iDroid is the hardware prototype of a wheelchair.", "The prototype of the wheelchair is composed mainly of an Arduino UNO, a NodeMCU, an Ultrasonic Sensor (HC-SR04) and an L293D Motor Driver Shield.", "The Arduino and the Motor Driver Shield are interfaced and made into a single unit.", "The motors and the Ultrasonic Sensor are interfaced to the Motor Driver Shield.", "A dual-port power bank is used to power the Arduino and the Motor Driver Shield for better performance.", "Firebase has a tag named Signals which is updated according to the eyeball movement of the user.", "The following table indicates the relation between the movement of the eye and the movement of the bot.", "A signal is considered valid and updated on Firebase only if the same signal is given by the user for more than 30 frames.", "This information is then received from Firebase using the NodeMCU.", "The received information is then communicated to the Arduino using the SoftwareSerial library.", "The Arduino receives the information and signals the Motor Driver Shield to move the bot appropriately.", "All this happens with a delay of about 500 ms.", "When the bot is in motion, the Ultrasonic Sensor continuously checks for any obstacles.", "If any obstacle is detected within the threshold distance, the bot stops immediately." ], [ "NodeMCU", "The NodeMCU was chosen for this project because of its ability to connect easily to a nearby Wi-Fi router or a mobile hotspot.", "It is small in size, inexpensive and has low power requirements.", "It also has plenty of online learning resources and help from the online community.", "Moreover, it can be programmed using the Arduino IDE by installing the necessary board packages, and thereby reduces the
effort of getting used to a new programming environment.", "The GPIO pins are used for Serial Communication with the help of the SoftwareSerial Library.", "The NodeMCU connects to a nearby Wi-Fi Router or a Mobile Hotspot and fetches the information from the Firebase.", "After this it communicates the information to the Arduino using the SoftwareSerial library.", "Table: Control Signal" ], [ "Arduino UNO", "Arduino was chosen for this project because it can easily connect with different sensors and has got many shields which provide it with additional capabilities.", "It also has got a large support from the Arduino community which is very active.", "Arduino UNO was chosen because it is inexpensive and small in size.", "It also is the most popular and the most used board among the other microcontrollers of the Arduino Family and a result has very good documentation too.", "The Arduino is programmed using the Arduino IDE.", "The TX and RX pins on the board are the pins meant for Serial Communication.", "The other pins can be used for Serial Communication by using the SoftwareSerial Library.", "The information that the NodeMCU receives from the Firebase is communicated to the Arduino using the SoftwareSerial library.", "The Arduino receives the information and signals the Motor Driver Shield to appropriately the move to the bot according to the command received." ], [ "L293D Motor Driver Shield", "L293D Motor Driver Shield was chosen because of its ability to power 4 bi-directional DC Motors.", "It also allows easy interface of additional sensors thereby eliminating the need of a breadboard.", "The Motor Driver Shield offers an 8 bit speed control (0-255) for varying the speed of the bot.", "The speed of the bot is gradually increased to the required value to prevent sudden jerks to the user and also sudden loading of the power source.", "The use of the AFMotor Library simplifies the coding part as it has direct inbuilt functions for controlling the motors." ], [ "Ultrasonic Sensor (HC-SR04)", "Ultrasonic Sensor was chosen for this project because it is inexpensive, easy to use, small in size, available readily, highly accurate and detects most type of obstacles.", "When the bot is in motion the Ultrasonic Sensor continuously checks for any obstacles.", "If any obstacle is detected within the threshold distance, the bot stops immediately.", "Figure: Ultrasonic Timing DiagramThe Ultrasonic Sonic Sensor consists of 2 Ultrasonic Transducers which act as Transmitter and Receiver.", "The Trigger Pin of the Ultrasonic Sensor is given a pulse of a minimum duration of 10 µS, resulting in the transmission a sonic burst of 8 pulses with a frequency of 40 KHz.", "Once this is transmitted the Echo pin goes high and then goes low only if the pulses reflect from the surface of an obstacle and reach the sensor.", "If there are no obstacles encountered then the echo signal goes low by itself after 38 mS.", "Hence once we know the time and speed, we estimate the distance of the obstacle from the bot and its accuracy is further increased by using the NewPing library." 
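Before turning to the results, the following is a minimal sketch of the CNN and training setup described in the neural-network and training sections above (assuming TensorFlow/Keras; unspecified details such as padding, the one-hot encoding of the labels and the exact input pipeline are assumptions):

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1), n_classes=5):
    m = models.Sequential()
    m.add(layers.Input(shape=input_shape))
    for filters in (32, 64, 128, 128):            # four conv blocks with 3x3 kernels
        m.add(layers.Conv2D(filters, (3, 3), activation="relu"))
        m.add(layers.Dropout(0.4))                # dropout rate 0.4, as in the text
        m.add(layers.MaxPooling2D((2, 2)))        # 2x2 max pooling
    m.add(layers.Flatten())
    m.add(layers.Dense(256, activation="relu"))
    m.add(layers.Dense(n_classes, activation="softmax"))
    m.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
    return m

model = build_model()
# Assumed usage: labels one-hot encoded, 30% validation split, 25 epochs.
# model.fit(X_train, y_train_onehot, validation_split=0.3, epochs=25)
```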
], [ "Performance of the eye tracking algorithm", "The confusion matrix of the trained model is shown in Table: Confusion Matrix.", "The classification report is given in Table: Classification Report." ], [ "Conclusions", "The RPi camera needs sufficient illumination to capture the images correctly.", "Prediction on the RPi is slow, as processing the convolutional neural network requires high computational power.", "Also, the RPi faced compatibility issues with TensorFlow versions.", "So the RPi just resizes the captured images, grayscales them and feeds them to a computer to do the predictions.", "The resizing of images and the conversion from colour to grayscale are done to reduce latency issues.", "It should be ensured that the RPi camera focuses only on the eye and doesn't move much.", "The predictions are fast enough to detect even quick blinks.", "To avoid the detection of unintentional blinks, Firebase is not updated until the predictions are the same for a minimum of 20 frames.", "This also reduces latency issues in Firebase.", "The four-wheeler iDroid being used requires a lot of power.", "So the iDroid must be powered by a well-charged power bank.", "The training accuracy is found to be 99.99848%.", "This indicates that there is a chance that the model is overfitting.", "Attempts have been made to make the model more robust and error-free by adding dropout layers and making the dataset skew-free.", "The image processing model (CNN) is able to process and predict at a rate of 15-16 FPS." ] ]
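Since the confusion matrix and classification report tables did not survive extraction, the following hedged sketch shows how such a report is typically produced for the five orientation classes (assuming scikit-learn; the file paths are placeholders for the saved validation labels and model predictions, not artefacts described in the paper):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

labels = ["down", "up", "left", "right", "straight"]
y_val = np.load("y_val.npy")                 # placeholder: true validation labels (integers)
probs = np.load("val_predictions.npy")       # placeholder: model.predict(X_val) outputs
y_pred = probs.argmax(axis=1)                # predicted class = highest softmax probability

print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred, target_names=labels))
```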
2207.10511
[ [ "Surface defects, flavored modular differential equations and modularity" ], [ "Abstract Every 4d $\\mathcal{N} = 2$ SCFT $\\mathcal{T}$ corresponds to an associated VOA $\\mathbb{V}(\\mathcal{T})$, which is in general non-rational with a more involved representation theory.", "Null states in $\\mathbb{V}(\\mathcal{T})$ can give rise to non-trivial flavored modular differential equations, which must be satisfied by the refined/flavored character of all the $\\mathbb{V}(\\mathcal{T})$-modules.", "Taking some $A_1$ theories $\\mathcal{T}_{g,n}$ of class-$\\mathcal{S}$ as examples, we construct the flavored modular differential equations satisfied by the Schur index.", "We show that three types of surface defect indices give rise to common solutions to these differential equations, and therefore are sources of $\\mathbb{V}(\\mathcal{T})$-module characters.", "These equations transform almost covariantly under modular transformations, ensuring the presence of logarithmic solutions which may correspond to characters of logarithmic modules." ], [ "Introduction", "Four dimensional superconformal field theories (SCFTs) with $\\mathcal {N} = 2$ supersymmetry are fascinating objects to study, as they are constrained enough to allow various exact computations, and also rich enough to generate numerous physical and mathematical interesting structures.", "One remarkable example is the SCFT/VOA correspondence between the 4d $\\mathcal {N} = 2$ SCFTs and 2d vertex operator algebras (VOAs) [1], which maps the OPE algebra of the Schur operators in any 4d SCFT $\\mathcal {T}$ to that of an associated VOA $\\mathbb {V}(\\mathcal {T})$ .", "According to the correspondence, the Schur limit $\\mathcal {I}$ of the 4d $\\mathcal {N} = 2$ superconformal index equals the vacuum character of the associated VOA, the $c$ central charge and the flavor central charges of $\\mathcal {T}$ are related to the 2d central charge and the levels of affine subalgebras of $\\mathbb {V}(\\mathcal {T})$ by simple proportionality $c_\\text{2d} = - 12 c_\\text{4d}, \\qquad k_\\text{2d} = - \\frac{1}{2} k_\\text{4d} \\ .$ The minus signs imply that whenever the 4d theory is unitary, the associated VOA will be non-unitary.", "Like the Lie algebras, VOAs are interesting objects to study from a representation-theoretic point of view, as they admit many or infinite interesting modules.", "When a VOA is rationalA rational VOAs is special case of a lisse/$C_2$ -cofinite VOA, which is a VOA with zero dimensional associated variety [2], [3], [4], [5].", "In the SCFT/VOA correspondence, the associated variety of an associated VOA equals the Higgs branch of the 4d theory [6].", "Therefore, rationality of the associated VOA implies the absence of Higgs branch in 4d, and in particular, absence of flavor symmetry., namely, when it admits only finitely many irreducible modules whose characters form a vector-valued modular function, it could be considered as the chiral (symmetry) algebra of a rational conformal field theory (RCFT), with its modules corresponding to the primaries of the RCFT.", "Outside of the realm of RCFT, the representation theory of a VOA could be much more complicated.", "For instance, logarithmic modules may be present on which $L_0$ does not act diagonally and the corresponding character is logarithmic.", "In general, the associated VOAs of class-$\\mathcal {S}$ theories are not rationalTheories of class-$\\mathcal {S}$ in general have non-trivial Higgs branches.", "For genus-zero theories, the associated VOAs are shown to 
be quasi-lisse [7], [8] (i.e., the associated variety has finitely many symplectic leaves).. Fortunately, there are tools that may help explore the structure of the modules of the associated VOAs.", "Crucially, sources of modules can be found in the 4d physics.", "In a 4d $\\mathcal {N} = 2$ SCFT $\\mathcal {T}$ , one can introduce surface operators that perpendicularly penetrate the VOA plane at the origin while simultaneously preserve a 2d $\\mathcal {N} = (2,2)$ superconformal subalgebra of the 4d superconformal algebra [9], [10], [11], [12], [13].", "It is conjectured that such a defect corresponds to a non-vacuum (twisted) module of the associated VOA $\\mathbb {V}(\\mathcal {T})$ [14], [12], [15], [16], [13], [17], [18].", "In particular, the character of such module should coincide with the Schur index in the presence of the defect.", "In cases where $\\mathbb {V}(\\mathcal {T})$ have been explicitly known, e.g., when $\\mathcal {T}$ is an Argyres-Douglas theory, surface defects that arise from the Higgsing prescription have been identified with the modules of the associated VOAs.", "However, in general it remains challenging to verify the conjecture.", "Another tool that comes in handy is the the modular differential equations.", "In an RCFT, the chiral symmetry algebra has finitely many irreducible modules, whose characters $\\chi _i$ form a vector-valued modular form of weight-zero with respect to $SL(2, \\mathbb {Z})$ or a suitable subgroup.", "As a result, any module character (of the chiral algebra) must satisfy a universal unflavored modular differential equation whose coefficients are ratios of Wronskian matrices made out of characters $\\chi _i$ [19], [20].", "These coefficients are strongly constrained by their modularity and this fact has been extensively exploited to classify RCFTs with a fixed number of chiral primaries (with or without fermions) [21], [22], [23], [24], [25], [26], [27], [28].", "Modular differential equations also arise in the context of SCFT/VOA correspondence.", "As shown in [6], the stress tensor of the associated VOA must be nilpotent up to an element $\\varphi $ in the subspace $C_2(\\mathbb {V}(\\mathcal {T}))$ and a null state $\\mathcal {N}$ , $(L_{-2})^n |0\\rangle = \\mathcal {N} + \\varphi $ for some $n \\in \\mathbb {N}_{>0}$ .", "Combining with Zhu's recursion formula [2], [29], [30] that computes torus one-point functions, the nilpotency may lead to a non-trivial unflavored modular differential equation satisfied by the unflavored Schur index.", "Such equation has been exploited to classify rank-two 4d $\\mathcal {N} = 2$ SCFTs [31].", "The same reasoning naturally generalizes to characters of other (twisted) modules of the associated VOA, and one expects the untwisted characters to satisfy the same equation, while the twisted characters to satisfy a twisted version of the equation.", "When the 4d theory has flavor symmetry, flavor fugacities can be introduced into the Schur index and defect indices.", "With the help of the flavored Zhu's recursion formula [32], [33], [17], some null states lead to flavored modular differential equations.", "Note that there are usually additional null states giving rise to a few more equations besides the one corresponding to the nilpotency of the stress tensor.", "The character of any (twisted) $\\mathbb {V}(\\mathcal {T})$ -module are expected to satisfy all of these (twisted version of) differential equations simultaneously, and therefore are heavily constrained.", "This paper aims to further explore 
the relation between surface defects in a class of 4d $\\mathcal {N} = 2$ SCFTs and the module characters of their associated VOAs.", "For simplicity we will focus on the class-$\\mathcal {S}$ theories of type-$A_1$ : these theories are the simplest in the sense that their Schur indices and vortex defect indices (from Higgsing) are known in closed-form in terms of some well-known analytic functions [34].", "We construct the (flavored) modular differential equations that their Schur indices satisfy, and study the common solutions to such equations.", "It turns out that there are several physical sources of common solutions: the Schur index $\\mathcal {I}_{g,n}$ itself obviously, and vortex defect indices [11] $\\mathcal {I}_{g,n}^\\text{defect}(k = \\text{even})$ (namely, with even vorticity $k$ ), the defect indices of Gukov-Witten type surface defects [9], and some surface defects related to modular transformations.", "Based on these computational results, we conjecture that these surface defects indeed correspond to non-vacuum modules of the associated VOA.", "Furthermore, vortex defects with odd vorticities are solutions to some twisted version of the differential equations, and therefore it is natural to associate them with the twisted modules.", "Although the presence of additional flavored modular differential equations makes the special equation (temporarily called $\\text{eq}_\\text{Nil}$ ) from the nilpotency of the stress tensor seem less prominent, in several examples we find that $\\text{eq}_\\text{Nil}$ actually contains all the information on the allowed flavored characters.", "The key is modularity.", "When flavored, the coefficients of the flavor modular differential equations are no longer modular forms, but rather quasi-Jacobi forms.", "Under suitable modular transformation, $\\text{eq}_\\text{Nil}$ does transform and it actually generates all the necessary modular differential equations of lower weights.", "Schematically, $S(\\text{eq}_\\text{Nil}) = \\sum _{m,n} \\tau ^m \\mathfrak {b}^n \\text{(FMDEs of lower weights)}_{m,n} \\ .$ They together determine all the allowed non-logarithmic and logarithmic characters.", "When unflavored, the presence of logarithmic solutions is expected whenever the indicial roots are integral-spaced, e.g., by the Frobenius method.", "See also [35], [36], [7].", "In the cases we have studied where the Schur and vortex defect indices have closed-form expressions, these logarithmic solutions are just modular transformations of the non-logarithmic solutions, thanks to the modularity of the coefficients which makes the differential equations covariant (or invariant, up to an overall factor of $\\tau ^n$ ) under suitable modular transformations.", "However, the quasi-Jacobi-ness upon flavoring would naively breaks this logic.", "Luckily, the covariance can be almost restored by introducing some additional fugacities $y_i$ associated with the flavor central charges, and this leads to the generation of flavored modular differential equations of lower weights we just mentioned.", "The organization of this paper is as follows.", "In section , we recall some basics of the SCFT/VOA correspondence and surface defects in 4d $\\mathcal {N} = 2$ SCFTs.", "In particular we review the closed-form expression for the Schur index of all $A_1$ class-$\\mathcal {S}$ theories and the defect indices from Higgsing.", "We also recall how modular differential equation arise in the context of 2d RCFT and 4d SCFT.", "In section , we analyze in detail the $\\beta 
\\gamma $ system of conformal weighs $\\frac{1}{2}$ (also known as the symplectic bosons).", "In both the untwisted and twisted sector, we construct flavored modular differential equations from trivial null states in the vacuum module, and study their common solutions and modularity.", "In section we focus on the $A_1$ class-$\\mathcal {S}$ theories $\\mathcal {T}_{g,n}$ , and in simple examples study their associated (flavored) modular differential equaitons and the solutions given in terms of different defect indices." ], [ "The Schur index", "4d $\\mathcal {N} = 2$ SCFT has been one of the most interesting subject as it bridges different branches in mathematical physics.", "These theories are substantially constrained by the symmetry whith allows exact computation of many quantities, yet they retain extremly rich internal mathematical and physical structures.", "We recall that the 4d $\\mathcal {N} = 2$ superconformal algebra $\\mathfrak {su}(2,2|2)$ In Euclidea signature, the superconformal algebra is $\\mathfrak {su}^*(4|2)$ instead.", "contains the generators $P_{\\alpha \\dot{\\alpha }}, \\quad K^{\\dot{\\alpha }\\alpha }, \\quad D, \\quad M_\\alpha {^\\beta }, \\quad M^{\\dot{\\alpha }}{_{\\dot{\\beta }}}, \\quad R^I{_J}, \\quad Q^I_\\alpha , \\quad S_I^\\alpha , \\quad \\tilde{Q}_{I \\dot{\\alpha }}, \\quad \\tilde{S}^{I \\dot{\\alpha }} \\ .$ The (anti-)commutation relations can be found in [1].", "Two quantities of a 4d $\\mathcal {N} = 2$ SCFT attract considerable attention, the $S^4$ partition function [37] and the superconformal index $\\mathcal {I}(p,q,t)$ (which is also an $S^3 \\times S^1$ -partition funciton) [38], [39], [40], $\\mathcal {I}(p,q,t)\\operatorname{tr} (-1)^Fp^{\\frac{1}{2}(\\Delta - 2j_1 - 2 \\mathcal {R} - r)}q^{\\frac{1}{2}(\\Delta + 2j_1 - 2 \\mathcal {R} - r)}t^{\\mathcal {R} + r}e^{- \\beta \\lbrace \\tilde{Q}_{2 \\dot{-}}, \\tilde{S}^{2 \\dot{-}}\\rbrace } a^f\\ .$ Both quantities participate in certain AGT-type 4d/2d correspondence when the theory under consideration is of class-$\\mathcal {S}$ [41], [42], [43], [44], [45], [46], [47], [48], [49], [50], [51], [52], [53], [54], [55]The AGT correspondence is also extensively studied in the presence of surface defects.", "See for example [56], [57].", "In this context, modular property (with respect to the complexified gauge coupling $\\tau _\\text{gauge} \\frac{\\theta }{2\\pi } + \\frac{4\\pi i}{g_\\text{YM}^2}$ ) of the effective superpotential $\\mathcal {W}$ is also studied [58], where the modular anomaly equation [59], [60] determines $\\mathcal {W}$ .", "See also [61] for an application of modular anomaly equation to Schur index..", "In comparison, the index is much simpler as it is independent of exactly marginal deformations of the theory.", "In the context of the 4d/2d correspondence, the superconformal index of a theory $\\mathcal {T}[\\Sigma ]$ equals a topological correlator on the associated Riemann surface $\\Sigma $ [48].", "In different limits, the superconformal index often enjoys supersymmetry enhancement, receiving contributions only from states annihilated by more than two supercharges.", "The Schur index of a 4d $\\mathcal {N}$ = 2 SCFT is the Schur limit $t \\rightarrow q$ of the full superconformal index $\\mathcal {I}(p,q,t)$ [39], [1], and can be written simply as a (super)trace over the Hilbert space of states in the radial quantization, $\\mathcal {I} = \\operatorname{tr}(-1)^Fe^{- \\beta _1\\lbrace Q^1_-, S_1^-\\rbrace }e^{- \\beta _2 \\lbrace \\tilde{Q}_{2 \\dot{-} 
},\\tilde{S}^{2 \\dot{-}}\\rbrace }q^{E - R} \\mathbf {b}^\\mathbf {f} \\ .$ Here $E$ is the conformal dimension, $R$ the $SU(2)_\\mathcal {R}$ -charge generator defined by $2 R = R^1{_1} - R^2{_2}$ , and $F$ the fermion number.", "Bolded letters $\\mathbf {b}$ and $\\mathbf {f}$ denote collectively any flavor fugacities and the associated Cartan generators of the flavor group that one may include in the trace.", "Thanks to the anti-commutivity and the neutrality under $E - R$ (and $\\mathbf {f}$ ) of the two pairs of supercharges $\\tilde{Q}_{2 \\dot{-} },\\tilde{S}^{2 \\dot{-}}$ and $ Q^1_-, S_1^-$ , the index is actually independent of $\\beta _1$ , $\\beta _2$ , and the $(-1)^F$ insertion leads to vast cancellations between bosonic and fermionic states.", "The only contributions to the Schur index are from the states satisfying the Schur conditions, $\\lbrace Q^1_-, S_1^-\\rbrace = \\lbrace \\tilde{Q}_{2 \\dot{-} },\\tilde{S}^{2 \\dot{-}}\\rbrace = 0, \\qquad \\Leftrightarrow \\qquad E - 2R - M^\\chi = r + M^\\text{def} = 0\\ .$ Here, $M^\\chi $ and $M^\\text{def}$ denote the spin under the rotations within $\\mathbb {R}^2_{x_3, x_4}$ and $\\mathbb {R}^2_{x_1, x_2}$ , or in other words, the eigenvalues of $M_+{^+} + M^{\\dot{+}}{_{\\dot{+}}}$ and $M_+{^+} - M^{\\dot{+}}{_{\\dot{+}}}$ respectively.", "The $U(1)_r$ generator $r \\equiv \\frac{1}{2}(R^1{_1} + R^2{_2})$ .", "These states correspond to the so-called Schur operators in the 4d theory which are typically restricted to the $\\mathbb {R}^2_{x_3, x_4}$ plane.", "As a superconformal index, the Schur index is invariant under exactly marginal deformation of the 4d theory.", "Exploiting such independence, the Schur index of Lagrangian 4d $\\mathcal {N} = 2$ SCFTs can be easily computed in the free limit.", "The result is organized into a contour integral $\\mathcal {I}= & \\ \\frac{(-i)^{\\operatorname{rank}\\mathfrak {g} - \\dim \\mathfrak {g}}}{|W|}\\oint \\prod _{A = 1}^{\\operatorname{rank}\\mathfrak {g}} \\frac{d a_A}{2\\pi i a_A}\\eta (\\tau )^{- \\dim \\mathfrak {g} + 3 \\operatorname{rank}\\mathfrak {g}} \\prod _{\\alpha \\ne 0}\\vartheta _1(\\alpha (\\mathfrak {a}))\\prod _{w \\in \\mathcal {R}} \\frac{\\eta (\\tau )}{\\vartheta _4(w (\\mathfrak {a} + \\mathfrak {b}))} \\nonumber \\\\& \\ \\oint \\frac{da}{2\\pi i a} \\mathcal {Z}(\\mathfrak {a}) \\ .$ Here $\\mathfrak {g}$ denotes the gauge algebra with the Weyl group $W$ , $\\mathcal {R}$ denotes the joint representation of the gauge and flavor group in which the hypermultiplets transform.", "The $\\vartheta _i$ are the Jacobi theta functions and $\\eta $ the Dedekind $\\eta $ -function.", "Their definitions and properties are collected in appendix .", "Through out this paper, the letters $\\mathfrak {a}, \\mathfrak {b}$ , $\\ldots $ in fraktur font are related to the letters $a, b$ , $\\ldots $ by $a_A = e^{2\\pi i \\mathfrak {a}_A}, \\qquad b_j = e^{2\\pi i \\mathfrak {b}_j}, \\qquad \\ldots , \\qquad q = e^{2\\pi i \\tau } \\ .$ The integration contour of each integration over $a_A$ is taken to be the unit circle $|a_A| = 1$ , and $|b_j| = 1$ .", "Note that there is no pole along the integration contour, since the zeroes of $\\vartheta _4(\\mathfrak {z})$ are given by $\\mathfrak {z} = \\frac{\\tau }{2} + m + n \\tau $ .", "The contour integral can be reproduced from a supersymmetric localization computation on $S^3 \\times S^1$ [62], [63].", "In radial quantization, the Euclidean spacetime $\\mathbb {R}^4$ is viewed as $S^3 \\times \\mathbb {R}$ where $\\mathbb {R}$ 
denotes the radial direction.", "The Schur index as a trace over the Hilbert space, can be equivalently computed by first compactifying the radial $\\mathbb {R} \\rightarrow S^1$ and placing some appropriate background metric (that depends on a complex modulus $\\tau $ controlling the relative size and angle between $S^3$ and $S^1$ ) and $\\mathcal {R}$ -symmetry gauge fields.", "Let us parametrize $S^3$ by a coordinate system $\\varphi , \\chi , \\theta $ adapted to the $T^2$ -fibration structure of $S^3$ , with ranges $\\varphi , \\chi \\in [0, 2\\pi ]$ and $\\theta \\in [0, \\frac{\\pi }{2}]$ .", "The space $S^3_{\\varphi , \\chi , \\theta } \\times S^1_t$ has a $T^2_{\\varphi , t}$ subspace at $\\theta = 0$ , and another $T^2_{\\chi , t}$ subspace at $\\theta = \\frac{\\pi }{2}$ .", "The path integral of a 4d $\\mathcal {N} = 2$ Lagrangian SCFT localizes to an ordinary integral of a 2d path integral on $T^2_{\\varphi , t}$ over flat dynamical gauge fields of the form $A \\sim \\mathfrak {a} dt$ , $\\mathcal {I} = \\oint \\frac{da}{2 \\pi i a} \\int D\\Phi e^{-S^{T^2_{\\varphi , t}}(a)} \\ , \\qquad \\mathfrak {a} \\in \\text{Cartan of the gauge group, } a \\sim e^{2\\pi i \\mathfrak {a}} \\ .$ The integrand $\\mathcal {Z}$ of (REF ) enjoys a crucial property: ellipticity [64].", "By that we mean that the integrand, as a meromorphic function of the $\\operatorname{rank}\\mathfrak {g}$ variables $\\mathfrak {a}_A$ , is separately doubly periodic under shifts $\\mathfrak {a}_A \\rightarrow \\mathfrak {a}_A + \\tau $ , and $\\mathfrak {a}_A \\rightarrow \\mathfrak {a}_A + 1$ of any one variable $\\mathfrak {a}_A$ , $\\mathcal {Z}(\\mathfrak {a}_A + 1) = \\mathcal {Z}(\\mathfrak {a}_A + \\tau ) = \\mathcal {Z}(\\mathfrak {a}_A) \\ .$ This fact enables an elementary method to evaluate the integral exactly, and organize the result in terms of finitely many twisted Eisenstein series and Jacobi theta functions [34], [65] (see also [66]).", "In this paper, we will mainly focus on the $A_1$ theories $\\mathcal {T}_{g, n}$ of class-$\\mathcal {S}$ associated to a genus-$g$ Riemann surface with $n$ punctures.", "Their fully flavored Schur index $\\mathcal {I}_{g,n}$ has an elegant compact form given by $\\mathcal {I}_{g, n \\ge 1} = & \\ \\frac{i^n}{2} \\frac{\\eta (\\tau )^{n + 2g - 2}}{\\prod _{j = 1}^{n} \\vartheta _1(2 \\mathfrak {b}_j)}\\sum _{\\alpha _j = \\pm }\\left(\\prod _{j = 1}^{n}\\alpha _j\\right)\\sum _{k = 1}^{n + 2g - 2} \\lambda _k^{(n + 2g - 2)} E_k\\left[\\begin{matrix}(-1)^n \\\\ \\prod _{j = 1}^{n}b_j^{\\alpha _j}\\end{matrix}\\right] \\ , \\\\\\mathcal {I}_{g \\ge 1, 0} = & \\ \\frac{1}{2}\\eta (\\tau )^{2g - 2}\\sum _{k = 1}^{g - 1}\\lambda _{2k}^{2g - 2}\\left(E_{2k} + \\frac{B_{2k}}{(2k)!", "}\\right) \\ .$ Here $\\mathcal {I}_{g,n}$ is fully flavored with respect to the class-$\\mathcal {S}$ description, and $b_{i = 1, \\ldots , n}$ denotes the $SU(2)$ flavor fugacities of the $n$ punctures.", "coefficients $\\lambda $ 's are rational numbers determined by the following recursion relations, $\\lambda _\\text{even}^{(\\text{odd})} = & \\ \\lambda _\\text{odd}^{(\\text{even})} = 0, \\qquad \\lambda _1^{(1)} = \\lambda _2^{(2)} = 0 \\\\\\lambda _0^{(\\text{even})} = & \\ 0, \\qquad \\lambda _1^{(2k + 1)} = \\sum _{\\ell = 1}^{k}\\lambda _{2\\ell }^{2k}(\\mathcal {S}_{2\\ell } - \\frac{B_{2\\ell }}{(2\\ell )!", "})\\ ,\\\\\\lambda _{2m + 1}^{(2k + 1)} = & \\ \\sum _{\\ell = m}^{k}\\lambda _{2\\ell }^{(2k)} \\mathcal {S}_{2(\\ell - m)}, \\qquad \\lambda _{2m + 1}^{(2k + 1)} = 
\\sum _{\\ell = m}^{k}\\lambda _{2\\ell }^{(2k)} \\mathcal {S}_{2(\\ell - m)}\\ .$ Here $B_n$ denotes the $n^\\text{th}$ Bernoulli number, $\\mathcal {S}$ are rational numbers that are given by the $2n^\\text{th}$ coefficient of a $y$ -series expansion, $\\mathcal {S}_{2n} \\left[\\frac{y}{2} \\frac{1}{\\sinh \\frac{y}{2}}\\right]_{2n} \\ .$" ], [ "SCFT/VOA correspondence", "The Schur states contributing to the Schur index are harmonic with respect to the two pairs of supercharges, $(Q^1_-, S_-^1)$ and $(\\tilde{Q}_{2 \\dot{-}}, \\tilde{S}^{2 \\dot{-}})$ .", "By the state/operator correspondence, any Schur state can be created by a Schur operator $\\mathcal {O}(0)$ acting on the unique vacuum.", "This Schur operator at the origin (anti-)commute with all four supercharges.", "Translating the operator away from the origin typically breaks this BPS condition.", "However, one may consider moving the operator along the $\\mathbb {R}^2_{34} = \\mathbb {C}_{z, \\bar{z}}$ plane by the twisted translation [1] $\\mathcal {O}(z, \\bar{z}) e^{- z L_{-1} - \\bar{z} \\widehat{L}_{-1}} \\mathcal {O}(0)e^{+ z L_{-1} + \\bar{z} \\widehat{L}_{-1}} \\ ,$ where $L_{-1} = P_{+ \\dot{+}}, \\qquad \\widehat{L}_{-1} = P_{- \\dot{-}} + R^2{_1}\\ .$ The translated Schur operator $\\mathcal {O}(z, \\bar{z})$ remains in the kernel of two supercharges $\\mathbb {Q}_1 Q^1_- + \\tilde{S}^{2 \\dot{-}}$ , $\\mathbb {Q}_2 \\tilde{Q}_{2 \\dot{-}} - S^-_1$ , and the $\\bar{z}$ -dependence is $\\mathbb {Q}_{1,2}$ -exact.", "Hence, at the level of cohomology, $\\mathcal {O}(z) [\\mathcal {O}(z, \\bar{z})]$ is holomorphic in $z$ .", "Moreover, their OPE coefficients are also holomorphic, forming a 2d vertex operator algebra (VOA)/chiral algebra on the plane $\\mathbb {C}_{z, \\bar{z}}$ [1].", "For any local unitary 4d $\\mathcal {N} = 2$ SCFT $\\mathcal {T}$ , the associated VOA $\\mathbb {V}(\\mathcal {T})$ must be non-trivial and non-unitary, since a component of the $SU(2)_\\mathcal {R}$ Noether current must be a non-trivial Schur operator, which gives rise to the stress tensor in the VOA with a negative central charge $c_\\text{2d} = -12 c_\\text{4d}$ .", "Furthermore, any flavor symmetry $G$ in $\\mathcal {T}$ will be associated to an affine subalgebra $\\widehat{\\mathfrak {g}}_{k_\\text{2d}} \\subset \\mathbb {V}(\\mathcal {T})$ , whose generators descend from the moment map operator of the symmetry $G$ , and they transform in the adjoint representation of $G$ .", "For the $A_1$ theories $\\mathcal {T} = \\mathcal {T}_{g,n}$ , the exact form (REF ) of Schur index $\\mathcal {I}_{g,n}$ highlights several flavor representations in which the VOA generators transform.", "In particular, the denominators $\\vartheta _1(2 \\mathfrak {b}_i)$ are tied to the $SU(2)$ -adjoint moment map operators/affine currents of the $n$ puncture, while the $E_k$ 's seem to come from the multi-fundamentals.", "The associated VOA is an important invariant of 4d $\\mathcal {N} = 2$ SCFT, constituting a VOA-valued TQFT for theories of class-$\\mathcal {S}$ thanks to the nontrivial associativity properties descending from the class-$\\mathcal {S}$ duality [48], [67].", "Under the correspondence, the Schur index of $\\mathcal {T}$ is identified with the character of the vacuum module of $\\mathbb {V}(\\mathcal {T})$ and it plays a central role in the SCFT/VOA correspondence.", "(See, for example, [68], [67], [69], [70], [71], [53], [72], [73], [74], [75], [76].)", "2d VOA is an interesing subject in its own right, say, from the representation 
theoretic perspective.", "A VOA typically admits many modules besides the vacuum module.", "For those constituting an RCFT, there may be finitely many irreducible modules generated from the primaries.", "Unfortunately, the VOAs that arise in the SCFT/VOA correspondence often don't have nice properties such as rationality, making their modules less straightforward to study.", "However, one may still hope to access their modules through four-dimensional physics.", "In a 4d $\\mathcal {N} = 2$ SCFT $\\mathcal {T}$ one can insert surface defects that perpendicularly intersect at the origin with the $\\mathbb {C}$ -plane where the VOA resides.", "In particular, one may consider those preserving $\\mathcal {N} = (2,2)$ superconformal symmetry on their support.", "It is generally believed that such defects correspond to non-vacuum modules of the associated VOA $\\mathbb {V}(\\mathcal {T})$ , since the Schur operators in the 4d theory may act on the defect operators via a bulk-defect OPE.", "A quantity that captures information about such defect systems is the defect index, which we now review." ], [ "Defect indices ", "Let us focus on the $\\mathcal {N} = (2,2)$ superconformal surface defects supported on the $x_1, x_2$ -plane which preserve the supercharges $\\tilde{Q}_{2 \\dot{-}}, \\tilde{S}^{2 \\dot{-}}$ [13].", "One can compute the full 4d $\\mathcal {N} = 2$ superconformal index in the presence of such a defect [10], which admits the usual Schur limit $t \\rightarrow q$ [10], [13].", "In this section we will briefly review the indices of three types of surface defects.", "We will see in later sections that they all give rise to solutions to the modular differential equations, and therefore potentially correspond to VOA modules." ], [ "Vortex defects", "A vortex defect that we are interested in is labeled by a natural number $k \\in \\mathbb {N}$ .", "One begins with a 4d $\\mathcal {N} = 2$ SCFT which is referred to as $\\mathcal {T}_\\text{IR}$ , and we will take it to be $\\mathcal {T}_{g,n}$ .", "First we assume $n \\ge 1$ , in which case the theory contains at least one $SU(2)_\\text{f}$ flavor symmetry.", "This theory is then coupled to the theory of four hypermultiplets $\\mathcal {T}_{0,3}$ by gauging the diagonal of the $SU(2)_\\text{f}$ associated to a puncture and one $SU(2)$ flavor symmetry of $\\mathcal {T}_{0,3}$ .", "The resulting theory is denoted as $\\mathcal {T}_\\text{UV}$ , which has an additional $SU(2)$ flavor symmetry associated to, say, the $(n + 1)^\\text{th}$ puncture.", "One then turns on a position-dependent VEV for the corresponding moment map operator with a profile $\\sim (x_1 + i x_2)^k$ , triggering an RG flow to the IR.", "When $k = 0$ , the IR fixed point reproduces the original IR theory $\\mathcal {T}_\\text{IR}$ , while for $k \\ge 1$ , the IR fixed point is $\\mathcal {T}_\\text{IR}$ coupled to a vortex defect.", "The Schur index of the resulting IR theory can be computed by [11], [51], [16] $\\mathcal {I}_{g, n}^{\\text{defect}} (k) = 2 (-1)^k \\mathop {\\operatorname{Res}}_{b_{n + 1} \\rightarrow q^{\\frac{1}{2} + \\frac{k}{2}}} q^{- \\frac{(k + 1)}{2}} \\frac{\\eta (\\tau )^2}{b} \\mathcal {I}_{g, n + 1} \\ .$ Inserting the exact formula for $\\mathcal {I}_{g, n + 1}$ , one arrives at the closed form of the defect index [34]Note that $k = 0$ reproduces the original Schur index $\\mathcal {I}_{g,n}$ , $\\mathcal {I}^\\text{defect}_{g,n}(k = 0) = \\mathcal {I}_{g,n} \\ .$ $\\mathcal {I}^{\\text{defect}}_{g, n}(k) = (-1)^k & \\ 
\\frac{i^n}{2}\\frac{\\eta (\\tau )^{n + 2g - 2}}{\\prod _{i = 1}^n\\vartheta _1(2 \\mathfrak {b}_i)}\\\\& \\ \\times \\sum _{\\alpha _i = \\pm }\\left(\\prod _{i = 1}^{n}\\alpha _i\\right)\\sum _{\\ell = 1}^{n + 1 + 2g - 2}\\tilde{\\lambda }^{n + 1 + 2g - 2}_\\ell (k + 1) E_\\ell \\left[\\begin{matrix}(-1)^{n + k}\\\\ \\prod _{i = 1}^{n}b_i^{\\alpha _i}\\end{matrix}\\right]\\ ,$ where $\\tilde{\\lambda }$ are rational numbers defined by $\\tilde{\\lambda }_\\ell ^{(n + 1 + 2g -2)}(k) \\sum _{\\ell ^{\\prime } = \\ell }^{n + 1 + 2g -2} \\left(\\frac{k}{2}\\right)^{\\ell ^{\\prime } - \\ell } \\frac{1}{(\\ell ^{\\prime } - \\ell )!}", "\\lambda _{\\ell ^{\\prime }}^{(n + 1 + 2g - 2)}\\ .$ Here we list a few values of $\\tilde{\\lambda }$ for readers' convenience.", "The exact formula of $\\mathcal {I}^\\text{defect}_{g,n}(k)$ suggests that the defect indices are (combinations of) spectral-flowed vacuum character $\\mathcal {I}_{g,n}$ [34].", "In particular, when $k = $ odd, the corresponding flowed modules are twisted modules where the multi-fundamental generators in the VOA have their conformal weights shifted by half-integers, while the affine currents associated to the $n$ punctures keep their weights (mod 1).", "We will observe such pattern again when we discuss the flavored modular differential equations.", "Alghough the above construction of a vortex defect generalizes to $g \\ge 1, n = 0$ by cutting a handle and inserting a $\\mathcal {T}_{0,3}$ , the formula (REF ) is only valid for $n \\ge 1$ .", "Indeed, with $n = 1$ the poles $b \\rightarrow q^{\\frac{1}{2} + \\frac{k = \\text{even}}{2}}$ are actually double poles due to the twisted Eisenstein seriesSee also [77] for a discussion on higher order poles in the context of the Hall-Littlewood index..", "In more details, $\\mathcal {I}_{g, 1} = \\frac{i}{2} \\frac{\\eta (\\tau )^{2g - 1}}{ \\vartheta _1(2 \\mathfrak {b}_1)}\\sum _{\\alpha _1 = \\pm }\\alpha _1\\sum _{\\ell = 1}^{2g - 1} \\lambda _\\ell ^{(2g - 1)} E_\\ell \\left[\\begin{matrix}-1 \\\\ b_1^{\\alpha _1}\\end{matrix}\\right] \\ .$ The twisted Eisenstein series involved have simple poles at $b_1 = q^{\\frac{1}{2} + \\frac{k = \\text{even}}{2}}$ (see REF ), which collide with the simple poles of the $\\vartheta _1(2 \\mathfrak {b}_1)$ .", "Still, the residue in (REF ) can be computed using $\\mathop {\\operatorname{Res}}_{b \\rightarrow q^{\\frac{1}{2}}}\\frac{1}{b}\\frac{1}{\\vartheta _1(2 \\mathfrak {b})} E_{2n + 1} \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}= e_n + \\sum ^{n - 1}_{\\ell = 0} \\frac{1}{4^\\ell (2\\ell +1)!}", "E_{2n - 2\\ell }\\ ,$ where $e_n$ are some rational number.", "Combining with (REF ), we have the following defect indices, $\\mathcal {I}_{g, 0}^\\text{defect}(k = \\text{even}) = & \\ \\eta (\\tau )^{2g - 2} \\sum _{\\ell = 0}^{g - 1} c_\\ell (k) E_{2\\ell } \\ ,\\\\\\mathcal {I}_{g, 0}^\\text{defect}(k = \\text{odd}) = & \\ \\eta (\\tau )^{2g - 2} \\sum _{\\ell = 0}^{g - 1} c^{\\prime }_\\ell (k) E_{2\\ell }\\begin{bmatrix}-1 \\\\ 1\\end{bmatrix} \\ .$ Here $c_\\ell (k), c^{\\prime }_\\ell (k)$ are some rational numbers that can be worked out explicitly.", "Given $g $ and $ n \\ge 1$ , there are infinitely many vortex defects corresponding to $k \\in \\mathbb {N}$ .", "However, we see from the exact form of $\\mathcal {I}^\\text{defect}_{g,n}(k)$ that they are all linear combinations of the fixed structures with different $\\ell $ , $\\sum _{\\alpha } \\prod _{i = 1}^{n}\\alpha _i E_\\ell \\begin{bmatrix}\\pm 1 \\\\ \\prod _{i = 1}^{n}b_i^{\\alpha 
_i}\\end{bmatrix} \\ .$ From the symmetry property (REF ), such structure is non-zero only when $\\ell = n$ mod 2 and $\\ell > 0$ Here we have assumed $n \\ge 1$ , and we also define $E_0\\big [ \\begin{array}{c}\\theta \\\\ \\phi \\end{array} \\big ] = -1$ ; see appendix ..", "In particular, when $n = $ even, only the following ${\\frac{n + 2g - 2}{2}}$ Eisenstein series contribute, $E_2 \\begin{bmatrix}\\pm 1 \\\\ \\prod _{i} b_i^{\\alpha _i}\\end{bmatrix}, \\qquad E_4 \\begin{bmatrix}\\pm 1 \\\\ \\prod _{i} b_i^{\\alpha _i}\\end{bmatrix}, \\qquad \\ldots \\qquad E_{n + 2g - 2} \\begin{bmatrix}\\pm 1 \\\\ \\prod _{i} b_i^{\\alpha _i}\\end{bmatrix}\\ ,$ while when $n = $ odd, only $E_1 \\begin{bmatrix}\\pm 1 \\\\ \\prod _{i} b_i^{\\alpha _i}\\end{bmatrix}, \\qquad E_3 \\begin{bmatrix}\\pm 1 \\\\ \\prod _{i} b_i^{\\alpha _i}\\end{bmatrix}, \\qquad \\ldots \\qquad E_{n + 2g - 2} \\begin{bmatrix}\\pm 1 \\\\ \\prod _{i} b_i^{\\alpha _i}\\end{bmatrix}\\ $ contribute, and again there are ${\\frac{n + 2g - 2}{2}}$ of them.", "The $\\pm 1$ in the Eisenstein series is given by $(-1)^{n + k}$ .", "These Eisenstein structures are linear independent functions of $b_1, \\ldots , b_n$ , hence for even $k$ or odd $k$ (so that $(-1)^{n + k}$ is fixed), there are ${\\frac{n + 2g - 2}{2}}$ linear independent vortex defect indices (including the original Schur index corresponding to $k = 0$ ).", "A similar analysis can be applied to defect indices with $n = 0$ .", "There, the defect indices $\\mathcal {I}^\\text{defect}_{g, 0}(k)$ are all linear combinations of $g$ Eisenstein series (multiplied by $\\eta (\\tau )^{2g - 2}$ which is omitted here), $& E_0 = -1, \\quad E_2, \\quad E_4, \\quad \\ldots \\quad E_{2g - 2} \\, & \\text{when }k \\text{ is even} \\ , \\\\& E_0 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix} = -1, \\quad E_2 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix} , \\quad E_4 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}, \\quad E_{2g - 2}\\begin{bmatrix}-1 \\\\ 1\\end{bmatrix} \\ , & \\text{when }k \\text{ is odd} \\ .$ Hence, for $n = 0$ , there will be $g$ independent defect indices for either parities of $k$ ." 
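Since the coefficients $\\tilde{\\lambda }$ are obtained from the $\\lambda $ 's by a purely combinatorial shift, a short sketch may help make the relation concrete. The Python snippet below only illustrates the shift formula for $\\tilde{\\lambda }_\\ell ^{(N)}(k)$ quoted above; the input values of $\\lambda $ are placeholders, not the actual rational coefficients produced by the recursion relations.

```python
from fractions import Fraction
from math import factorial

def tilde_lambda(lam, k):
    """tilde-lambda_l(k) = sum_{l' >= l} (k/2)^(l'-l) / (l'-l)! * lambda_{l'}.
    lam[l] holds lambda_l^{(N)} for l = 0, ..., N (placeholder values below)."""
    N = len(lam) - 1
    half_k = Fraction(k, 2)
    return [sum((half_k ** (lp - l)) / factorial(lp - l) * lam[lp]
                for lp in range(l, N + 1))
            for l in range(N + 1)]

# Placeholder input -- NOT the actual lambda^{(N)} generated by the recursion relations.
lam = [Fraction(0), Fraction(1, 2), Fraction(0), Fraction(-1, 24)]

# k = 0 keeps only the l' = l term, so the shift reduces to the identity,
# consistent with the k = 0 vortex defect reproducing the original Schur index.
assert tilde_lambda(lam, 0) == lam

print(tilde_lambda(lam, 3))   # the shifted coefficients at k = 3 (illustrative)
```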
], [ "Gukov-Witten defects", "Another type of superconformal surface defects the will be relevant are the Gukov-Witten surface defects [9], where the dynamical gauge fields are prescribed with some singular background profile at a defect plane orthogonal to the VOA plane.", "Figure: The dynamic gauge field with a prescribed singular behavior near the defect plane drawn vertically.", "ϕ\\varphi is the angular coordinate in the x 3 ,x 4 x_3, x_4 plane where the associated VOA lives.", "The wedge denotes the x 1 ,x 2 x_1, x_2 plane on which the surface defect is supported.Upon mapping to $S^3_{\\varphi , \\chi , \\theta } \\times S^1_t$ , the defect plane is mapped to the torus $T^2_{\\chi , t} \\subset S^3 \\times S^1$ at $\\theta = \\frac{\\pi }{2}$ , linking (not intersecting) the VOA torus $T^2_{\\varphi , t}$ at $\\theta = 0$ .", "The singular profile in flat space then translates to a background gauge field $A \\sim \\mathfrak {a}^{\\prime } d\\varphi $ which is singular at the locus $T^2_{\\chi , t} $ , since the $\\varphi $ -circle is contractible in $S^3$ .", "Once the supersymmetric localization is performed, the final integral over flat gauge fields $\\mathfrak {a}$ will be shifted to an integral over $\\mathfrak {a} + \\mathfrak {a}^{\\prime } \\tau $ , and the Schur index reads $\\mathcal {I} = \\oint \\frac{da}{2\\pi i a} \\mathcal {Z}(\\mathfrak {a} + \\mathfrak {a}^{\\prime } \\tau ) \\ .$ This effectively shift (expand or shrink) the integration contour away from the unit circles.", "For small $\\mathfrak {a}^{\\prime }$ , the integral does not change, until the integration contour hits the poles of the integrand $\\mathcal {Z}$ and residues $\\operatorname{Res}_i$ are picked up.", "Schematically, in the presence of different Gukov-Witten type surface defects [17], $\\mathcal {I}^\\text{defect} \\sim \\mathcal {I} + \\sum _{i} c_i \\operatorname{Res}_i \\ ,$ where $c_i$ are numbers that depend on the precise configuration of the singular background value.", "Therefore, the residues of the integrand $\\mathcal {Z}$ can be identified with the Schur index in the presence of Gukov-Witten type surface defects.", "These residues can be also interpreted as free field characters of some $bc \\beta \\gamma $ systems since the residues are just ratios of $\\eta (\\tau )$ and $\\vartheta _i$ functions." 
], [ "Defect indices from $S$ -transformation", "There is yet another type of surface defects that we will encounter.", "We have explained that the Schur index $\\mathcal {I}_{g,n}(b)$ can be computed as a $S^3_{\\varphi , \\chi , \\theta } \\times S^1_t$ partition function [62].", "In such an interpretation, the flavor fugacities $b$ correspond to flat background gauge fields $B$ of the flavor symmetry, roughly of the form $B \\sim \\mathfrak {b} dt$ (where $b \\sim e^{2\\pi i \\mathfrak {b}}$ ), leading to a smooth and vanishing background field strength.", "Suppose one performs an $S$ -transformation on the index $\\mathcal {I}_{g,n}$ , mapping $\\mathfrak {b} \\rightarrow \\mathfrak {b}^{\\prime } = \\frac{\\mathfrak {b}}{\\tau }, \\qquad \\tau \\rightarrow \\tau ^{\\prime } = - \\frac{1}{\\tau }\\ .$ The new index $S\\mathcal {I}(\\mathfrak {b}, \\tau ) \\mathcal {I}_{g,n}(\\frac{\\mathfrak {b}}{\\tau }, - \\frac{1}{\\tau })$ can also be reinterpreted as an $S^3 \\times S^1$ partition function with a new background flavor gauge field.", "Now that the background gauge field $\\mathfrak {b}^{\\prime } = - \\mathfrak {b}( - \\frac{1}{\\tau })$ is proportional to the new complex modulus $\\tau ^{\\prime } = - \\frac{1}{\\tau }$ , the background flavor gauge field will be of the form $B^{\\prime } \\sim \\mathfrak {b} d\\varphi $ .", "Although the gauge field $B^{\\prime }$ is flat almost everywhere, it is singular along the torus $T^2_{\\chi , t}$ at $\\theta = \\frac{\\pi }{2}$ , where the flavor background field strength has a $\\delta $ -function profile.", "Therefore, an $S$ -transformed Schur index $S\\mathcal {I}(\\mathfrak {b}, \\tau )$ can be interpreted as a flavor defect partition function on the geometry with complex modulus $\\tau ^{\\prime }$ ." 
], [ "Modular differential equations ", "Modular differential equations will play an important role in our subsequent analysis, and it have already been a useful tool to study both 2d CFT and 4d $\\mathcal {N} = 2$ SCFTs.", "There are two major ways where such an object comes into play.", "Rational CFTs are CFTs with finitely many primaries.", "Each primary generates an irreducible module of the chiral symmetry algebra (namely, VOA) with character $\\operatorname{ch}_{i = 1, \\ldots , N}$ .", "The full partition function $Z \\operatorname{tr} q^{L_0 - \\frac{c}{24}} \\bar{q}^{\\bar{L}_0 - \\frac{c}{24}}$ can be expanded in these module characters, $Z = \\sum _{i, j} M_{ij} \\operatorname{ch}_i(\\tau ) \\operatorname{ch}_j(\\bar{\\tau }) \\ , \\qquad q e^{2\\pi i \\tau } \\ ,$ where $M_{ij}$ is the paring matrix independent of $q, \\bar{q}$ .", "The full partition function $Z$ is also a $T^2$ -partition function where the torus has complex structure labeled by $\\tau $ , and therefore $Z$ is expected to be invariant under the modular group $SL(2, \\mathbb {Z})$ .", "Consequently, the characters $\\operatorname{ch}_i$ are required to form vector-valued modular form of weight-zero under $SL(2, \\mathbb {Z})$ (or its subgroups, if fermions are present [26], [27]).", "For example, for bosonic theory, $\\operatorname{ch}_i\\left(- \\frac{1}{\\tau }\\right) = \\sum _{j} S_{ij}\\operatorname{ch}_j(\\tau )\\ .$ The $S_{ij}$ form the well-known modular $S$ -matrix, from which one can compute fusion coefficients between the said primaries by the Verlind formula [78].", "Using the $N$ characters $\\operatorname{ch}_i$ , one can write down a “trivial” ordinary linear differential equation[79] $D_q^{(N)} \\operatorname{ch}_i + \\sum _{r = 0}^{N - 1} \\phi _r D^{(r)}_q \\operatorname{ch}_i = 0\\ ,$ using the Wronskian matrices $W_r$ $W_r \\begin{pmatrix}\\operatorname{ch}_1 & \\operatorname{ch}_2 & \\cdots & \\operatorname{ch}_N \\\\D_q^{(1)}\\operatorname{ch}_1 & D_q^{(1)}\\operatorname{ch}_2 & \\cdots & D_q^{(1)}\\operatorname{ch}_N \\\\\\vdots & \\vdots & \\vdots & \\vdots \\\\D_q^{(r - 1)} \\operatorname{ch}_1 & D_q^{(r - 1)} \\operatorname{ch}_2 & \\cdots & D_q^{(r - 1)} \\operatorname{ch}_N\\\\D_q^{(r + 1)} \\operatorname{ch}_1 & D_q^{(r + 1)} \\operatorname{ch}_2 & \\cdots & D_q^{(r + 1)} \\operatorname{ch}_N\\\\\\vdots & \\vdots & \\vdots & \\vdots \\\\D_q^{(N)} \\operatorname{ch}_1 & D_q^{(N)} \\operatorname{ch}_2 & \\cdots & D_q^{(N)} \\operatorname{ch}_N\\end{pmatrix} \\ , \\qquad \\phi _r (-1)^{N - r} \\frac{W_r}{W_N} \\ .$ What is non-trivial about this equation, however, is the fact that the coefficients $\\phi _r$ must be weight-$(2N - 2r)$ modular forms, and therefore are severly constrained by modularity.", "Note also that the differential equation is homogeneous in modular weight, and therefore transforms covariantly under suitable modular transformations.", "Any reducible module of the given rational VOA is a direct sum of the above finitely many irreducible modules, and the corresponding module character must also be a solution to the above modular differential equation.", "This fact has been exploited to classify (bosonic/fermionic) RCFTs [21], [22], [23], [24], [25], [26], [27], [28].", "A modular differential equation can be labeled by its order $N$ , and the “total order $\\ell $ of zeros” of the Wronskian $W_N$ .", "The parameter $\\tau $ takes value in the fundamental region of the $SL(2, \\mathbb {Z})$ (or suitable subgroups when fermions are involved), and the zeros can sit at 
the orbifold points, internal points or the cusp $i\\infty $ .", "The total order of the zeros equals $\\ell /6$ , with $\\ell \\in \\mathbb {N} \\setminus \\lbrace 1\\rbrace $ .", "For low values of $(N, \\ell )$ , there are only a small number of free coefficients in the modular differential equation due to modularity.", "In these situations, it is possible to scan a large range of values for these coefficients and look for “admissible character solutions” with non-negative integral coefficients when expanded in $q$ -series.", "These solutions are then tested against more stringent conditions, e.g., by demanding non-negative and integral fusion coefficients.", "Another way in which the modular differential equations arise is through some null states in the vacuum module of a VOA $\\mathbb {V}$ .", "One can insert the zero mode $\\mathcal {N}_{[0]}$ (if non-zero; see appendix for a brief review of notations) of a null state $\\mathcal {N}$ into the (super-)trace that computes the (super-)character of a module $M$ of $\\mathbb {V}$ , $0 = \\operatorname{str}_{M} \\mathcal {N}_{[0]} q^{L_0 - \\frac{c}{24}} \\mathbf {b}^\\mathbf {f} \\ .$ Here we have included flavor fugacities $\\mathbf {b}$ associated to the Cartan of the flavor symmetries $\\mathbf {f}$ .", "When $\\mathcal {N}$ takes a certain form, the zero mode $\\mathcal {N}_{[0]}$ can be “pulled out” of the trace using Zhu's recursion formula, and the equation turns into a (flavored) modular differential equation [2], [29], [32], [6].", "For any 4d $\\mathcal {N} = 2$ SCFT $\\mathcal {T}$ , the associated VOA $\\mathbb {V}(\\mathcal {T})$ descends from the Schur operators in 4d.", "These operators may originate from different superconformal multiplets, and some are outside of the Higgs branch chiral ring $\\mathcal {R}_\\text{H}$ .", "In particular, the 2d stress tensor $T$ of $\\mathbb {V}(\\mathcal {T})$ descends from a component of the $SU(2)_\\mathcal {R}$ Noether current in 4d, which does not belong to $\\mathcal {R}_\\text{H}$ .", "The chiral ring $\\mathcal {R}_\\text{H}$ is identical to the associated variety of $\\mathbb {V}(\\mathcal {T})$ , and as a result, $T$ must be nilpotent up to $C_2(\\mathbb {V}(\\mathcal {T}))$ and a null state $\\mathcal {N}$ of the VOA [6], $(L_{-2})^n |0\\rangle = \\mathcal {N} + \\varphi , \\qquad \\varphi \\in C_2(\\mathbb {V}(\\mathcal {T})) \\ , \\qquad n \\in \\mathbb {N}_{\\ge 1} \\ .$ Inserting this relation into the supertrace is believed to yield an unflavored modular differential equation for the unflavored Schur index of $\\mathcal {T}$ /vacuum character of $\\mathbb {V}(\\mathcal {T})$ .", "Such an equation plays an important role in a recent classification of rank-two 4d $\\mathcal {N} = 2$ SCFTs [31]."
], [ "$\\beta \\gamma $ system", "Although our focus will be on the $\\mathfrak {a}_1$ -type class-$\\mathcal {S}$ theories, it proves helpful to begin with a detailed analysis of the simple theory of $\\beta \\gamma $ system with conformal weights $\\frac{1}{2}$ .", "The theory is also the associated VOA of a free hypermultiplet in four dimensions.", "The $\\beta , \\gamma $ OPE reads $\\beta (z)\\gamma (w) \\sim \\frac{1}{z - w} \\ \\Rightarrow \\ [\\beta _m, \\gamma _n] = \\delta _{m + n, 0} \\ ,$ where the two fields are expanded in the traditional manner, $\\beta (z) = \\sum _{n \\in \\mathbb {Z} - h[\\beta ]} \\beta _n z^{-n - h[\\beta ]}, \\qquad \\gamma (z) = \\sum _{n \\in \\mathbb {Z} - h[\\gamma ]} \\beta _n z^{-n - h[\\gamma ]} \\ .$ The theory possesses a stress tensor $T$ and a $U(1)$ current $J$ , given by $T \\frac{1}{2} (\\beta \\partial \\gamma ) - \\frac{1}{2} (\\gamma \\partial \\beta ), \\qquad J (\\gamma \\beta )\\ .$ Note that the stress tensor $T$ is defined as a composite, and also as an element in the subspace $C_2(\\beta \\gamma )$ , since $L_{-2} |0\\rangle \\propto (\\beta _{-\\frac{3}{2}}\\gamma _{- \\frac{1}{2}} - \\gamma _{-\\frac{3}{2}}\\beta _{- \\frac{1}{2}}) |0\\rangle \\ ,$ where $|0\\rangle $ is the vacuum state of the VOA.", "The $\\beta $ and $\\gamma $ fields carry charges under $L_0$ and $J_0$ , Table: NO_CAPTIONThe vacuum module of the $\\beta \\gamma $ system is simply the Fock module from acting $\\beta _{- n -\\frac{1}{2}}$ , $\\gamma _{-n - \\frac{1}{2}}$ on the vacuum $|0\\rangle $ , $n \\in \\mathbb {N}$ .", "We recall that $\\beta _{n - \\frac{1}{2}}$ , $\\gamma _{n - \\frac{1}{2}}$ annihilate $|0\\rangle $ , $\\forall n > 0$ .", "The vacuum character, or the Schur index of a free hypermultiplet in 4d, is thus $\\operatorname{ch} = \\operatorname{tr} q^{L_0 - \\frac{c}{24}} b^{J_0}= q^{\\frac{1}{24}} \\operatorname{PE}\\left[\\frac{q^{\\frac{1}{2}} b^{-1} + q^{\\frac{1}{2}} b}{1-q}\\right]= \\frac{\\eta (\\tau )}{\\vartheta _4(\\mathfrak {b})} \\ .$" ], [ "Untwisted sector", "Following [29], we consider inserting the zero modes of the null states $J - (\\gamma \\beta )$ and $T - \\frac{1}{2} ((\\beta \\partial \\gamma ) - (\\gamma \\partial \\beta ))$ into the trace.", "For example,All modes in the trace are taken to be the “square-modes”; see [2], [6] for more detail.", "$0 = q^{-\\frac{c}{24}} \\operatorname{tr} o(J - (\\gamma \\beta )) q^{L_0} b^{J_0}= q^{-\\frac{c}{24}}\\operatorname{tr}\\left[J_0 - o(\\gamma _{-\\frac{1}{2}}\\beta _{- \\frac{1}{2}}|0\\rangle )\\right]q^{L_0} b^{J_0} \\ .$ The first trace involving $J_0$ is nothing but $D_b \\operatorname{ch}$ , where $D_b b \\partial _b$ .", "The second term can be easily computed by Zhu's recursion relations [2], [30], $q^{-\\frac{c}{24}}\\operatorname{tr}o(\\gamma _{-\\frac{1}{2}}\\beta _{- \\frac{1}{2}}|0\\rangle ) & \\ q^{L_0} b^{J_0} \\nonumber \\\\= & \\ \\sum _{n = 1}^{+\\infty }E_n\\left[\\begin{matrix}-1 \\\\ b\\end{matrix}\\right]\\operatorname{tr}o(\\gamma _{n - \\frac{1}{2}}\\beta _{-\\frac{1}{2}} |0\\rangle ) q^{L_0 - \\frac{c}{24}} b^{J_0}= - E_1\\left[\\begin{matrix}-1 \\\\ b\\end{matrix}\\right] \\operatorname{ch} \\ .$ Altogether, the vacuum character satisfy a weight-one flavored modular differential equation, $D_b \\operatorname{ch} + E_1\\left[\\begin{matrix}-1 \\\\ b\\end{matrix}\\right] \\operatorname{ch} = 0 \\ .$ A similar computation can be performed for the $T - \\frac{1}{2} ((\\beta \\partial \\gamma ) - (\\gamma \\partial \\beta ))$ insertion, $0 = 
\\operatorname{tr} o\\left(T - \\frac{1}{2} ((\\beta \\partial \\gamma ) - (\\gamma \\partial \\beta ))\\right)q^{L_0 - \\frac{c}{24}} b^{J_0} \\ .$ Zhu's recursion relations leads to a weight-two equation, $D_q^{(1)} \\operatorname{ch} - E_2\\left[\\begin{matrix}-1 \\\\ b\\end{matrix}\\right] \\operatorname{ch} = 0 \\ .$ Here we used the symmetry property $E_\\text{even}\\big [\\begin{array}{c}\\pm 1 \\\\ b^{-1}\\end{array}\\big ] = E_\\text{even}\\big [\\begin{array}{c}\\pm 1 \\\\ b\\end{array}\\big ]$ , and $D_q^{(n)}$ denotes the $n$ -th modular differential operator given in (REF ).", "In particular, $D_q^{(1)} = q \\partial _q$ .", "Note that this null state expresses the stress tensor in terms of an element $(\\beta \\partial \\gamma ) - (\\gamma \\partial \\beta ) \\in C_2 (\\beta \\gamma )$ , which precisely corresponds to the nilpotency of $T$ [7], [6].", "The above computation generalized to all kinds of normal ordered products of $T$ and $J$ .", "One simply subtracts from it its explicit expression in terms of the free fields $\\beta \\gamma $ .", "For example, $(JJ) - (\\beta (\\beta (\\gamma \\gamma ))) - (\\beta \\partial \\gamma ) + (\\partial \\beta \\gamma ) = 0$ gives rise to $\\left(D_b^2 - E_2 - 2 E_1\\bigg [\\begin{matrix}-1 \\\\ b\\end{matrix}\\bigg ]^2 - 2 E_2\\bigg [\\begin{matrix}-1 \\\\ b\\end{matrix}\\bigg ]\\right)\\operatorname{ch} = 0 \\ .$ Note that the square of $E_1$ can be eliminated by combining the weight-two null descending from $(\\beta (J - (\\gamma \\beta )))$ , leaving $\\left(D_b^2+ E_1\\bigg [\\begin{matrix}-1 \\\\ b\\end{matrix}\\bigg ] D_b- E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}^2- 2 E_2 \\bigg [ \\begin{matrix}-1 \\\\ b\\end{matrix}\\bigg ]- E_2\\right)\\operatorname{ch} = 0 \\ .$ However, these higher-weight equations are not independent equations, since they can be derived from the (REF ) and (REF ).", "For example, (REF ) can be obtained from (REF ) by taking a $D_b$ derivative $D_b^2 \\operatorname{ch} + D_b E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix} \\operatorname{ch} + E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}D_b\\operatorname{ch} = 0 \\ .$ Applying the identity $D_b E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}= -2 E_2 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}- E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}^2 - E_2 \\ ,$ one recovers (REF ).", "The vacuum character admits a smooth unflavoring limit, $\\operatorname{ch} \\xrightarrow{} \\frac{\\eta (\\tau )}{\\vartheta _4(0)} \\ .$ Consequently, some of the flavored modular differential equations reduce to the more familiar unflavored modular differential equations when unflavoring.", "For example, the weight-two equation (REF ) reduces directly to $\\left(D_q^{(1)} - E_2 \\begin{bmatrix}-1 \\\\ + 1\\end{bmatrix}\\right) \\operatorname{ch}(b = 1) = 0 \\ .$ However, the weight-one equation (REF ) does not reduce to a non-trivial unflavored equation.", "Instead, $D_b \\operatorname{ch} \\xrightarrow{} 0 \\ , \\qquad E_1 \\begin{bmatrix}- 1 \\\\ b\\end{bmatrix} \\operatorname{ch} \\xrightarrow{} 0 \\ ,$ hence its unflavoring limit is trivial." 
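Before moving on, note that the closed form $\\operatorname{ch} = \\eta (\\tau )/\\vartheta _4(\\mathfrak {b})$ used above is nothing but the plethystic product rewritten via the Jacobi triple product. A quick numerical sanity check (mpmath conventions, products truncated at finite order) is:

```python
import mpmath as mp

mp.mp.dps = 30
tau  = mp.mpc('0.07', '0.92')
beta = mp.mpc('0.23', '0.05')
q = mp.exp(2j * mp.pi * tau)
b = mp.exp(2j * mp.pi * beta)
NMAX = 200                                   # truncation of the infinite products

# ch = q^{1/24} PE[q^{1/2}(b + 1/b)/(1-q)]
#    = q^{1/24} prod_{n>=1} ((1 - b q^{n-1/2}) (1 - q^{n-1/2}/b))^{-1}
ch = q ** (mp.mpf(1) / 24)
for n in range(1, NMAX):
    ch /= (1 - b * q ** (n - mp.mpf(1) / 2)) * (1 - q ** (n - mp.mpf(1) / 2) / b)

eta = q ** (mp.mpf(1) / 24)                  # Dedekind eta via its product form
for n in range(1, NMAX):
    eta *= 1 - q ** n

theta4 = mp.jtheta(4, mp.pi * beta, mp.exp(1j * mp.pi * tau))
print(abs(ch - eta / theta4))                # ~ 0: the free-field character equals eta/theta_4
```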
], [ "Twisted sector", "Besides the vacuum module in the untwisted sector, one may also consider twisted modules of the $\\beta \\gamma $ VOA.", "For our purpose, we consider the $\\frac{1}{2}$ -twisted sector based on a twisted vacuum $|0\\rangle _{\\frac{1}{2}}$ .", "The two fields expand in the following form, $\\beta (z) = \\sum _{n \\in \\mathbb {Z}} \\beta _n z^{- n - \\frac{1}{2}}, \\qquad \\gamma (z) = \\sum _{n \\in \\mathbb {Z}} \\gamma _n z^{-n - \\frac{1}{2}} \\ .$ such that the twisted vacuum $|0\\rangle _{\\frac{1}{2}}$ is annihilated by $\\gamma _{n \\in \\mathbb {Z}_{\\ge 0}}$ , $\\beta _{n \\in \\mathbb {Z}_{>0}}$ .", "In this sector, $T$ and $J$ still have integer moding, however, their precise relations with modes of $\\beta $ , $\\gamma $ are shifted.", "In particular [80], $J_0 = \\sum _{k \\in \\mathbb {Z}_{< 0}} \\gamma _k \\beta _{-k} + \\sum _{k \\in \\mathbb {Z}_{\\ge 0}} \\beta _{-k}\\gamma _k - \\frac{1}{2} \\ .$ and $L_0 = & \\ - \\frac{1}{2} \\left(\\sum _{k < 0}(k - 1) \\beta _k \\gamma _{- k} + \\sum _{k \\ge 0} (k - 1) \\gamma _{- k}\\beta _k\\right)\\\\& \\ + 1\\left(\\sum _{k < 0}(k - 1) \\gamma _k \\beta _{- k} + \\sum _{k \\ge 0} (k - 1) \\beta _{- k}\\gamma _k\\right) + \\frac{3}{8} \\ .", "\\nonumber $ The relevant charges are Table: NO_CAPTIONWith these charges, the character of the twisted Fock module built from $|0\\rangle _{\\frac{1}{2}}$ reads $\\operatorname{ch}_{\\frac{1}{2}} = q^{- \\frac{1}{8}} b^{- \\frac{1}{2}} \\operatorname{tr} q^{L_0 - \\frac{c}{24}} b^{J_0}= q^{- \\frac{1}{8}} b^{- \\frac{1}{2}} q^{\\frac{1}{24}} PE\\left[\\frac{q^0 b^{-1} + q^1 b}{1 - q}\\right]= -i\\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})} \\ .$ As before, one can insert the same null states discussed above into the trace to produce flavored modular differential equations satisfied by the twisted character $\\operatorname{ch}_{\\frac{1}{2}}$ .", "The only difference from the untwisted case is that now the conformal weights of $\\beta , \\gamma $ are integers, and therefore all $E_n\\left[\\begin{array}{c}-1\\\\b\\end{array}\\right]$ should be replaced by $E_n\\left[\\begin{array}{c}+1 \\\\ b\\end{array}\\right]$ .", "For example, $0 = & \\ \\left(D_b + E_1\\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}\\right)\\operatorname{ch}_{\\frac{1}{2}} \\ , \\\\0 = & \\ \\left(D_q^{(1)} - E_2\\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}\\right)\\operatorname{ch}_{\\frac{1}{2}} \\ , \\\\0 = & \\ \\left(D_b^2 - E_2 + E_1\\left[\\begin{matrix}+ 1 \\\\ b\\end{matrix}\\right] D_b - 2 E_2\\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}\\right)\\operatorname{ch}_{\\frac{1}{2}} \\ .$" ], [ "Unique character(s)", "Now that the (twisted) characters are constrained by an infinitely many (with only two independent) partial differential equations, it is natural to ask if there are additional solutions.", "It turns out that the equations in the untwisted and twisted sector uniquely determine (up to a numerical coefficient) the corresponding characters.", "For instance, the weight-two equation (REF ) in the untwisted sector is an ordinary differential equation in $q$ .", "Recall $E_2 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}= \\frac{1}{8\\pi ^2} \\frac{\\vartheta _4^{\\prime \\prime }(\\mathfrak {b})}{\\vartheta _4(\\mathfrak {b})} - \\frac{1}{2}E_2= D_q^{(1)}\\left[\\frac{1}{3} \\ln \\frac{\\vartheta _1^{\\prime }(0)}{\\vartheta _4(\\mathfrak {b})^3}\\right] \\ ,$ and therefore the equation (REF ) can be solved by (as an analytic function) $\\operatorname{ch} = C(b) \\frac{\\eta (\\tau )}{\\vartheta 
_4(\\mathfrak {b})} \\ .$ Finally, the weight-one equation (REF ) further fixes $C(b)$ to be independent of $b$ .", "Similar arguments show that the twisted character is also uniquely fixed by the weight-one and two equations.", "At the end of this section we will see that the weight-one equation (REF ) is actually redundant, in the sense that it can be generated from the weight-two equation (REF ) through a modular transformation, and (REF ) (or its twisted version) alone actually encodes all the character information." ], [ "Modular properties of the equations", "The coefficients of the unflavored modular differential equations in [6], [21], [22], [23], [24], [25], [26], [27], [28] are modular forms with respect to suitable modular groups.", "Consequently, the equations transform covariantly under $SL(2, \\mathbb {Z})$ (or a subgroup).", "In contrast, the coefficients of the flavored modular differential equations (REF ), (REF ) are quasi-Jacobi forms, and their modular properties are less straightforward.", "For simplicity, we first look at the twisted sector.", "The simpler equation is the weight-one equation (REF ), $\\left(D_b + E_1 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}\\right) \\operatorname{ch} = 0 \\ .$ Consider the naive $S$ -transformation that acts on the $\\tau $ and $\\mathfrak {b}$ parameter, $\\tau \\rightarrow - \\frac{1}{\\tau }, \\qquad \\mathfrak {b} \\rightarrow \\frac{\\mathfrak {b}}{\\tau } \\ .$ The modular differential equation transforms non-trivially and non-covariantly under $S$ , $D_b \\rightarrow \\tau D_b, \\ E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix} \\rightarrow \\mathfrak {b} +\\tau E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix} \\quad \\Rightarrow \\quad D_b + E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}\\rightarrow D_b + E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix} + \\mathfrak {b} \\ .$ Even so, the $S$ -transformed solution remains a solution: in the case at hand, the $S$ -transformed twisted character differs from the original by a simple exponential factor, $\\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})} \\rightarrow i e^{- \\frac{i \\pi \\mathfrak {b}^2}{\\tau }} \\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})} \\ .$ This transformed twisted character is annihilated by the transformed equation, $& \\ \\left(\\tau D_b + \\tau E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix} + \\mathfrak {b}\\right) \\left(e^{- \\frac{i \\pi \\mathfrak {b}^2}{\\tau }} \\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})}\\right)\\\\= & \\ \\tau \\left(D_b + \\tau E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}\\right)\\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})}+ \\frac{1}{2\\pi i}\\tau \\frac{-2 i \\pi \\mathfrak {b}}{\\tau }\\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})}+ \\mathfrak {b}\\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})}= \\tau \\left(D_b + \\tau E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}\\right)\\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b})} = 0 \\ .", "\\nonumber $ Phrased differently, after stripping off the exponential factor in the $S$ -transformed character, the remaining object is the solution to the original modular differential equation.", "At the moment, this statement holds true trivially in the current case (since the $S$ -transformation merely introduces a simple factor).", "A more systematic approach to deal with the modular properties of flavored modular differential equations is to introduce an additional fugacity that couples to the affine level of the current $J$ , in this case, $k = - 1/2$ .", "Define the $y$ 
-extended character (using the same symbol) $\\operatorname{ch} \\operatorname{tr} q^{L_0 - \\frac{c}{24}} y^{k} b^{J_0} \\ .$ This is essentially the original index, since $y^k$ is merely a constant that can be pulled out of the trace.", "We further define the $SL(2, \\mathbb {Z})$ -transformation of the fugacities $(\\mathfrak {y}, \\mathfrak {b}, \\tau )$ in the following way [81], $(\\mathfrak {y}, \\mathfrak {b}, \\tau ) \\xrightarrow{} (\\mathfrak {y} - \\frac{\\mathfrak {b}^2}{\\tau }, \\frac{\\mathfrak {b}}{\\tau }, - \\frac{1}{\\tau })\\ , \\qquad (\\mathfrak {y}, \\mathfrak {b}, \\tau ) \\xrightarrow{} (\\mathfrak {y}, \\mathfrak {b}, \\tau + 1)\\ .$ In particular, under the $S$ -transformation, the derivatives transform as $D_q^{(1)} \\rightarrow & \\ D_{q^{\\prime }}^{(1)} = \\tau ^2 D_q^{(1)} + \\mathfrak {b}\\tau D_b + \\mathfrak {b}^2 D_y, \\qquad D_b \\rightarrow D_{b^{\\prime }} = \\tau D_b + 2 \\mathfrak {b} D_y \\ ,\\\\\\text{where} \\qquad D_y & \\ y \\partial _y \\ , \\qquad D_y \\operatorname{ch} = k \\operatorname{ch} \\ .$ Therefore, the weight-one modular differential equation transforms under $S$ as $D_b + E_1 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix} \\xrightarrow{} \\tau D_b + 2 \\mathfrak {b} D_y + \\tau E_1 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix} + \\mathfrak {b} = \\tau \\left(D_b + E_1 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}\\right) \\ ,$ showing that the weight-one modular differential equation transforms covariantly once we incorporate the additional $y$ -fugacity.", "Here we have used the fact that $D_y \\operatorname{ch} = k \\operatorname{ch} = -\\frac{1}{2} \\operatorname{ch}$ .", "The transformation of the weight-two equation can be similarly analyzed, $D_{q}^{(1)} - E_2 \\begin{bmatrix}+ 1 \\\\b\\end{bmatrix}\\xrightarrow{} & \\ \\tau ^2 D_q^{(1)} + \\mathfrak {b}\\tau D_b + \\mathfrak {b}^2 D_y - \\tau ^2 E_2 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}+ \\mathfrak {b}\\tau E_1 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix} + \\frac{\\mathfrak {b}^2}{2} \\nonumber \\\\= & \\ \\tau ^2\\left( D_q^{(1)} - E_2 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}\\right)+ \\mathfrak {b}\\tau \\left(D_b+ E_1 \\begin{bmatrix}+ 1 \\\\ b\\end{bmatrix}\\right) \\ .$ In going to the second line we have applied $D_y = k = - \\frac{1}{2}$ .", "Clearly, the weight-two equation transforms almost covariantly under $S$ , up to a term proportional to the weight-one equation.", "The analysis of the equations in the untwisted sector is similar but slightly more involved, where the relevant modular group is $\\Gamma ^0(2)$ .", "Note that under $STS \\in \\Gamma ^0(2)$ , $E_1 \\begin{bmatrix}- 1 \\\\ b\\end{bmatrix} \\xrightarrow{} \\tau E_1 \\begin{bmatrix}- 1 \\\\ b\\end{bmatrix}+ \\mathfrak {b}- E_1 \\begin{bmatrix}- 1 \\\\ b\\end{bmatrix} \\ .$ Collecting everything, we find covariance for the weight-one equation, $D_b + E_1 \\begin{bmatrix}- 1 \\\\ b\\end{bmatrix} \\rightarrow (\\tau - 1)\\left(D_b + E_1 \\begin{bmatrix}- 1 \\\\ b\\end{bmatrix}\\right) \\ .$ and almost covariance for the weight-two equation, $& \\ D_q^{(1)} - E_2 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}\\\\\\xrightarrow{} & \\ (\\tau - 1)\\mathfrak {b} \\left(D_b + E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}\\right)+ \\frac{(\\tau - 1)^2}{\\tau } \\left(\\tau \\left(D_q^{(1)} - E_2 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}\\right)+ \\mathfrak {b}\\left(D_b + E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}\\right)\\right) \\ .", "\\nonumber $ Again, the weight-one equation appears on the right hand side of the transformed 
weight-two equation.", "From the earlier discussions, the weight-one and weight-two equations are enough to determine the unique character of the $\\beta \\gamma $ system in either the twisted or untwisted sector.", "Now we also learn that the weight-two equation alone generates the weight-one equation through the modular transformation $S$ or $STS$ .", "Therefore it appears that the weight-two flavored equation, which reflects the nilpotency of $T$ up to $C_2(\\beta \\gamma )$ , holds all the information of the characters of the $\\beta \\gamma $ system." ], [ "Class $\\mathcal {S}$ theories of type {{formula:ba29bcb2-8645-4230-9a15-3db89b2f7903}}", "Each $A_1$ theory $\\mathcal {T} = \\mathcal {T}_{g,n}$ is associated to a vertex operator algebra $\\mathbb {V}(\\mathcal {T})$ that consists of the Schur operators on the VOA-plane $\\mathbb {R}^2_{x^3 x^4}$ .", "The fact that a vortex defect in section (REF ) preserves the supercharges $\\tilde{Q}_{2 \\dot{-}}, \\tilde{S}^{2 \\dot{-}}, Q^1_-, S_1^-$ implies that the Schur index in their presence equals the character of a non-trivial $\\mathbb {V}(\\mathcal {T})$ -module [13].", "As discussed in section REF and the example in section , null states in $\\mathbb {V}(\\mathcal {T})$ may lead to (flavored) modular differential equations that the Schur index must satisfy.", "By the same logic, the character of any module $M$ of $\\mathbb {V}(\\mathcal {T})$ must also satisfy the same differential equations (or twisted version of them in case of a twisted module).", "It is therefore natural to expect that the defect indices mentioned in section REF also share such feature, since they are supposed to be module characters of $\\mathbb {V}(\\mathcal {T})$ .", "We will show that this is indeed the case by examining several simple examples by studying the (flavored) modular differential equations the Schur indices satisfy and look for their common solutions." 
], [ "$\\mathcal {T}_{0,3}$", "The trinion theory $\\mathcal {T}_{0,3}$ is the simplest $A_1$ theory of class-$\\mathcal {S}$ , which consists of four free hypermultiplets.", "The associated vertex operator algebra is just the product of four $\\beta \\gamma $ systems [1].", "Since we have discussed in detail the $\\beta \\gamma $ system in section , here we will be brief.", "The Schur index of the $A_1$ trinion theory is given by $\\mathcal {I}_{0,3} = \\prod _{\\pm \\pm } \\frac{\\eta (\\tau )}{\\vartheta _4(\\mathfrak {b}_1 \\pm \\mathfrak {b}_2 \\pm \\mathfrak {b}_3)}= \\frac{1}{2i} \\frac{\\eta (\\tau )}{\\prod _{i = 1}^3 \\vartheta _1(2 \\mathfrak {b}_i)}\\sum _{\\alpha _i = \\pm } \\left(\\prod _{i = 1}^{3}\\alpha _i\\right)E_1\\left[\\begin{matrix}-1 \\\\ \\prod _{i = 1}^{3} b_i^{\\alpha _i}\\end{matrix}\\right]\\ .$ It is straightforward to find the following modular differential equations for the Schur index $\\mathcal {I}_{0,3}$ , $\\left(D_{b_i} + \\sum _{\\alpha , \\beta = \\pm } E_1 \\begin{bmatrix}-1 \\\\ b_i b_j^\\alpha b_k^\\beta \\end{bmatrix}\\right) \\mathcal {I}_{0,3} = & \\ 0 \\ , \\qquad i \\ne j \\ne k \\ , \\\\\\left(D_q^{(1)} - \\frac{1}{2} \\sum _{\\alpha _i = \\pm } E_2 \\begin{bmatrix}-1 \\\\ \\prod _{i=1}^{3}b_i^{\\alpha _i}\\end{bmatrix} \\right) \\mathcal {I}_{0,3} = & \\ 0 \\ .$ Equations of higher weights can be similarly deduced.", "Note that the second equation (reflecting the nilpotency of the stress tensor) has an unflavoring limit $b_i \\rightarrow 1$ which reproduces the first order equation in [6], while the weight-one equation does not have a non-trivial limit.", "The vortex defects labeled by $k$ have indices given by a simple formula (REF ), $\\mathcal {I}_{0,3}^{\\text{defect}}(k) = & \\ \\frac{(k + 1)\\eta (\\tau )}{2\\prod _{i = 1}^{3} \\vartheta _i(2 \\mathfrak {b}_i)} \\sum _{\\alpha _i = \\pm } \\left(\\prod _{i = 1}^{3}\\alpha _i\\right)E_1 \\begin{bmatrix}- 1 \\\\ \\prod _{i = 1}^{3}b_i^{\\alpha _i}\\end{bmatrix} = (k + 1)\\mathcal {I}_{0,3} \\ , \\quad k = \\text{even}\\ ,\\\\\\mathcal {I}_{0,3}^{\\text{defect}}(k) = & \\ \\frac{-i (k + 1) \\eta (\\tau )}{2\\prod _{i = 1}^{3} \\vartheta _i(2 \\mathfrak {b}_i)} \\sum _{\\alpha _i = \\pm } \\left(\\prod _{i = 1}^{3}\\alpha _i\\right)E_1 \\begin{bmatrix}+ 1 \\\\ \\prod _{i = 1}^{3}b_i^{\\alpha _i}\\end{bmatrix} \\ , \\quad k = \\text{odd}, \\ldots \\ .$ The vortex defects labeled by even $k \\in \\mathbb {N}$ have indices identical to $\\mathcal {I}_{0,3}$ up to some numerical factors, and therefore they all satisfy exactly the same modular equations.", "For odd $k$ , the above defect indices can be equivalently rewritten as $\\mathcal {I}^\\text{defect}(k = \\text{odd}) \\sim \\prod _{\\pm \\pm }\\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b}_1 \\pm \\mathfrak {b}_2 \\pm \\mathfrak {b}_3)} \\ .$ Obviously, this corresponds to nothing but the (product of) $\\frac{1}{2}$ -twisted module of the four $\\beta \\gamma $ systems.", "Immediately, one derives the modular differential equations that they satisfy, for instance, $\\left(D_{b_i} + \\sum _{\\alpha , \\beta = \\pm } E_1 \\begin{bmatrix}+1 \\\\ b_i b_j^\\alpha b_k^\\beta \\end{bmatrix}\\right) \\mathcal {I}_{0,3} = & \\ 0 \\ , \\qquad i \\ne j \\ne k \\ , \\\\\\left(D_q^{(1)} - \\frac{1}{2} \\sum _{\\alpha _i = \\pm } E_2 \\begin{bmatrix}+1 \\\\ \\prod _{i=1}^{3}b_i^{\\alpha _i}\\end{bmatrix} \\right) \\mathcal {I}_{0,3} = & \\ 0 \\ .$ Similar to the discussion in section , the weight-two modular differential equation () uniquely 
determines the relevant characters up to numerical factors.", "Recall again that $E_2 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}= \\frac{1}{8\\pi ^2} \\frac{\\vartheta _4^{\\prime \\prime }(\\mathfrak {b})}{\\vartheta _4(\\mathfrak {b})} - \\frac{1}{2}E_2= q \\partial _q\\left[\\frac{1}{3} \\ln \\frac{\\vartheta _1^{\\prime }(0)}{\\vartheta _4(\\mathfrak {b})^3}\\right] \\ ,$ and therefore by the weight-two equation () $\\mathcal {I}_{0,3} = C(b_1, b_2, b_3) \\prod _{\\pm \\pm } \\frac{\\eta (\\tau )}{\\vartheta _4(\\mathfrak {b}_1 \\pm \\mathfrak {b}_2 \\pm \\mathfrak {b}_3)} \\ ,$ The weight-two equation (REF ) further fixes $C$ to be constant in $b_i$ .", "Similarly, the solution in the twisted sector is also uniquely fixed to be $\\mathcal {I}_{0,3}^{\\text{defect}(k = \\text{odd})} = \\prod _{\\pm \\pm } \\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {b}_1 \\pm \\mathfrak {b}_2 \\pm \\mathfrak {b}_3)} \\ .$" ], [ "$\\mathcal {T}_{0,4}$", "The theory $\\mathcal {T}_{0,4}$ describes an $SU(2)$ theory coupled to $N_\\text{F} = 4$ fundamental hypermultiplets [42].", "The flavor symmetry of the theory is $SO(8) = SO(2N_\\text{F})$ that rotates the 8 half-hypermultiplets $Q^i_a$ transforming in the (pseudoreal) fundamental representation of the $SU(2)_\\text{g}$ gauge group, where $i = 1, \\ldots , 8$ is the $SO(8)$ vector indices, $a = 1,2$ is the $SU(2)_\\text{g}$ -fundamental indices.", "The moment map $M$ associated to the $SO(8)$ flavor symmetry transforms under the adjoint $\\mathbf {adj}$ of $SO(8)$ , and is a gauge-invariant composite of the scalars in the 8 half-hypermultiplets, $M^{ij} = Q_a^iQ^{aj} \\ .$ This simple structure of $M^{ij}$ implies that within the Higgs branch chiral ring [82], $(M \\otimes M)_{\\mathbf {35}_s} = (M\\otimes M)_{\\mathbf {35}_c} = 0 \\ .$ Moreover the $\\mathcal {N} = 1$ superpotential of the $\\mathcal {N} = 2$ theory imposes $(M \\otimes M)_{\\mathbf {35}_v} = (M\\otimes M)_{\\mathbf {1}} = 0\\ .$ Note that $\\operatorname{symm}^2\\mathbf {adj} = \\mathbf {35}_{v}\\oplus \\mathbf {35}_s\\oplus \\mathbf {35}_c \\oplus \\mathbf {1} \\oplus 2 \\mathbf {adj}$ , and the above four relations give the Joseph ideal.", "The associated VOA of $\\mathcal {T}_{0,4}$ is given by the affine algebra $\\widehat{\\mathfrak {so}}(8)_{-2}$ with central charge $c = -14$ , whose generators descend from the moment map operator in the 4d theory.", "The algebra $\\widehat{\\mathfrak {so}}(8)_{-2}$ is a member of the affine Lie algebras associated to the Deligne-Cvitanovic exceptional series $\\mathfrak {a}_1 \\subset \\mathfrak {a}_2 \\subset \\mathfrak {g}_2 \\subset \\mathfrak {d}_4 \\subset \\mathfrak {f}_4 \\subset \\mathfrak {e}_6 \\subset \\mathfrak {e}_7 \\subset \\mathfrak {e}_8 , \\qquad k = - \\frac{h^\\vee }{6} - 1\\ .$ This set of current algebras are quasi-lisse [7], and their (unflavored) characters satisfy a (unflavored) modular differential equation of the form $(D_q^{(2)} - 5 (h^\\vee + 1)(h^\\vee - 1)E_4) \\mathcal {I} = 0 \\ ,$ where $h^\\vee $ denotes the dual Coxeter number.", "For $\\widehat{\\mathfrak {so}}(8)_{-2}$ , this equation reads $(D_q^{(2)} - 175 E_4) \\mathcal {I}_{0,4} = 0 \\ .$ It is known that $\\widehat{\\mathfrak {so}}(8)_{-2}$ with the central charge $c = -14 = -4 - 2 \\times 5$ is not rational [36].", "Therefore one would expect its representation theory to be more involved than rational VOAs.", "The stress-tensor $T$ of $\\widehat{\\mathfrak {so}}(8)_{-2}$ is a composite given by the Sugawara construction [1], $T = 
\\frac{1}{2(k_\\text{2d} + h^\\vee )} \\sum _{A,B} K_{AB} (J^A J^B) \\ ,$ where $K_{AB}$ is the inverse of the Killing form $K^{AB} K(J^A, J^B)$ .", "This equation corresponds to the Joseph ideal relation in the trivial representation in (REF ), since $T$ is not in the Higgs branch chiral ring." ], [ "The equations in the untwisted sector", "The Joseph ideal relations (REF ), (REF ) descend to nontrivial null states $\\mathcal {N}_a$ in the associated VOA $\\widehat{\\mathfrak {so}}(8)_{-2}$ , and can be inserted into the supertrace $\\operatorname{str} o(\\mathcal {N}) q^{L_0} \\mathbf {b}^\\mathbf {f}$ .", "Null states charged under the Cartan of $SO(8)$ do not have interesting outcomes, however, those uncharged can lead to non-trivial modular differential equations.", "As a warm-up, let us first consider a simpler partially unflavored limit where all $b_i \\rightarrow b$ .", "In this limit, the index corresponds to the supertrace over the vacuum module $\\mathcal {I}_{0,4} = \\operatorname{str} b^{h_1 + h_2 + h_3 + h_4}q^{L_0 - \\frac{c}{24}} = \\frac{\\eta (\\tau )^2}{\\vartheta _1(2 \\mathfrak {b})^4}\\left(3 E_2 - 4E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ E_2 \\begin{bmatrix}1 \\\\ b^4\\end{bmatrix}\\right)\\ ,$ where $h_I$ are the Cartan generators of the four $SU(2)$ flavor groups associated to the four punctures.", "The simplest null state associated with the Joseph ideal is the Sugawara construction $T - \\sum _{a,b} K_{ab} (J^a J^b) = 0$ .", "Upon inserting into the supertrace, the equation translates to a weight-two modular differential equation $0 = \\left[D_q^{(1)} - \\frac{1}{16}D_b^2 - \\frac{1}{2}\\left(E_1\\left[\\begin{matrix}1 \\\\ b^2\\end{matrix}\\right]+ E_1\\left[\\begin{matrix}1 \\\\ b^4\\end{matrix}\\right]\\right)D_b+ \\left(E_2 + 4 E_2\\left[\\begin{matrix}1 \\\\ b^2\\end{matrix}\\right]+ 2 E_2\\left[\\begin{matrix}1 \\\\ b^4\\end{matrix}\\right]\\right)\\right] \\mathcal {I}_{0,4} \\ .", "\\nonumber $ The remaining three relations corresponding to $\\mathbf {35}_{v,s,c}$ each lead to three uncharged null states, however, in the partial unflavoring limit they do not give rise to any non-trivial modular differential equation.", "At weight-three, there are new null states besides the descendants of the above Joseph relations.", "In the partial unflavoring limit, they give rise to three modular differential equations, for example, $\\left(D_q^{(1)}D_b + \\left(E_2 - 4 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}- 2 E_2 \\begin{bmatrix}1 \\\\ b^4\\end{bmatrix}\\right)D_b+ 16\\left(4 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ E_3 \\begin{bmatrix}1 \\\\ b^4\\end{bmatrix}\\right)\\right)\\mathcal {I}_{0,4} = 0 \\ .$ Finally, at weight-four there is $\\bigg [D^{(2)}_q + \\left(4 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ E_3 \\begin{bmatrix}1 \\\\ b^4\\end{bmatrix}\\right) D_b- \\left(96E_4 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 12 E_4 \\begin{bmatrix}1 \\\\ b^4\\end{bmatrix}+ 67 E_4\\right) \\bigg ] \\mathcal {I}_{0,4} = 0 \\ .$ All the above partially-unflavored modular differential equations can be fully refined to depend on four generic flavor $SU(2)$ fugacities $b_i$ [83].", "Now there are additional equations which were unavailable in the partially unflavoring limit.", "For example, the Sugawara condition gives rise to a weight-two fully flavored modular differential equation, $0 = \\Bigg [D^{(1)}_q & \\ - \\frac{1}{4}\\left(D_{b_3}D_{b_2} + D_{b_4}D_{b_2} + D_{b_4}D_{b_3} + D_{b_4}^2\\right) \\nonumber \\\\& \\ - \\frac{1}{2}E_1 \\begin{bmatrix}1 
\\\\ \\frac{b_1}{b_2 b_3 b_4}\\end{bmatrix}\\left(D_{b_1} - D_{b_2} - D_{b_3} - D_{b_4}\\right) \\nonumber \\\\& \\ - \\frac{1}{2}E_1 \\begin{bmatrix}1 \\\\ b_1 b_2 b_3 b_4\\end{bmatrix}\\left(D_{b_1} + D_{b_2} + D_{b_3} + D_{b_4}\\right)- E_1\\begin{bmatrix}1 \\\\ b_4^2\\end{bmatrix} D_{b_4} \\nonumber \\\\& \\ + \\left(E_2 + 2 E_2 \\begin{bmatrix}1 \\\\ \\frac{b_1}{b_2 b_3 b_4}\\end{bmatrix}+ 2E_2\\begin{bmatrix}1 \\\\ b_1 b_2 b_3 b_4\\end{bmatrix}+ 2 E_2\\begin{bmatrix}1 \\\\ b_4^2\\end{bmatrix}\\right) \\Bigg ] \\mathcal {I}_{0,4} \\ .$ For later convenience, we denote the differential operator acting on the index as $\\mathcal {D}^\\text{Sug}$ .", "Additional Joseph ideal relations corresponding to $\\mathbf {35}_{v,s,c}$ lead to in total nine equations [83].", "Three null states at weight-two are associated to the $\\mathbf {35}_v$ relations.", "They are $&J^{[1j]}J^{[1j]}+J^{[5j]}J^{[5j]}-\\frac{1}{4}J^{[mn]}J^{[mn]} \\ ,\\\\&J^{[2j]}J^{[2j]}+J^{[6j]}J^{[6j]}-\\frac{1}{4}J^{[mn]}J^{[mn]} \\ ,\\\\&J^{[3j]}J^{[3j]}+J^{[7j]}J^{[7j]}-\\frac{1}{4}J^{[mn]}J^{[mn]}\\ .$ The three states lead to the following flavored modular differential equations, $\\sum _{i = 1}^{2} \\left(\\frac{1}{4}D_{b_i}^2 + E_1 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}D_{b_i} - 2 E_2 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}\\right) \\mathcal {I}_{0,4}= \\sum _{i = 3}^{4} \\left(\\frac{1}{4}D_{b_i}^2 + E_1 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}D_{b_i} - 2 E_2 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}\\right) \\mathcal {I}_{0,4} \\ ,\\nonumber $ and $& \\ \\frac{1}{2}D_{b_1}D_{b_2}+ \\sum _{j = 1}^{2} \\left(+ E_2 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3 b_4\\end{bmatrix}- E_1 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3 b_4\\end{bmatrix}\\sum _{i = 1}^4 (-1)^{\\delta _{ij}}D_{b_i}\\right) \\\\= & \\ \\frac{1}{2}D_{b_3}D_{b_4}+ \\sum _{j = 3}^{4} \\left(+ E_2 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3 b_4\\end{bmatrix}- E_1 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3 b_4\\end{bmatrix}\\sum _{i = 1}^4 (-1)^{\\delta _{ij}}D_{b_i}\\right) \\ ,$ as well as $& \\ \\frac{1}{2}D_{b_1}D_{b_2}+ \\sum _{j = 1}^{2} \\left(+ E_2 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3^{-1} b_4\\end{bmatrix}- E_1 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3^{-1} b_4\\end{bmatrix}\\sum _{i = 1}^4 (-1)^{\\delta _{ij}+\\delta _{i3}}D_{b_i}\\right) \\\\= & \\ - \\frac{1}{2}D_{b_3}D_{b_4}+ \\sum _{j = 3}^{4}\\left(+ E_2 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3^{-1} b_4\\end{bmatrix}- E_1 \\begin{bmatrix}1 \\\\ b_j^{-2} b_1 b_2 b_3^{-1} b_4\\end{bmatrix}\\sum _{i = 1}^4 (-1)^{\\delta _{ij}+\\delta _{i3}}D_{b_i}\\right) \\ .$ From the $\\mathbf {35}_s$ and $\\mathbf {35}_c$ relations there are also six other similar equations, For example, from the null state $J^{[12]}J^{[56]}-J^{[15]}J^{[26]}+J^{[25]}J^{[16]}$ it follows that $\\left(D_{b_1}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix} D_{b_1}- 8 E_2 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix}\\right) \\mathcal {I}_{0,4}= \\left(D_{b_2}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_2^2\\end{bmatrix} D_{b_2}- 8 E_2 \\begin{bmatrix}1 \\\\ b_2^2\\end{bmatrix}\\right) \\mathcal {I}_{0,4} \\ .$ It will be convenient to reorganize the nine equations from the three $\\mathbf {35}$ 's in the following more compact form, $0 = & \\ \\sum _{i < j}a_{ij}D_{b_i}D_{b_j} \\mathcal {I}_{0,4} + \\sum _{i = 1}^{4} a_i \\left(D_{b_i}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}D_{b_i}- 8 E_2 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}\\right)\\mathcal {I}_{0,4} \\nonumber \\\\& \\ \\qquad + \\sum 
_{\\alpha _i = \\pm } \\Big (\\sum _{i<j}\\alpha _i \\alpha _j a_{ij}\\Big )\\Big (- E_2 \\begin{bmatrix}1 \\\\ \\prod _{k = 1}^{4}b_k^{\\alpha _k}\\end{bmatrix} + \\frac{1}{4}E_1 \\begin{bmatrix}1 \\\\ \\prod _{k = 1}^{4}b_k^{\\alpha _k}\\end{bmatrix}\\sum _{i = 1}^{4}\\alpha _i D_{b_i}\\Big ) \\mathcal {I}_{0,4} \\ .$ Here $a_i$ and $a_{i, j}$ are 9 arbitrary constants with constraint $a_1 + a_2 + a_3 + a_4 = 0$ .", "Let us denote the differential operator acting on $\\mathcal {I}_{0,4}$ as $\\mathcal {D}^\\mathbf {35}$ .", "At weight-three there are four independent modular differential equations, where the index is annihilated by the differential operator $& \\ \\sum _{i = 1}^4 c_i\\left(D_q^{(1)}D_{b_i} + E_2D_{b_i} - 2 E_2 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}D_{b_i} + 8 E_3 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}\\right)\\\\& \\ \\qquad - \\frac{1}{4}\\sum _{\\alpha _i = \\pm }E_2 \\begin{bmatrix}1 \\\\ \\prod _{k = 1}^{4}b_k^{\\alpha _k}\\end{bmatrix}\\sum _{i, j = 1}^4\\alpha _i \\alpha _j c_iD_{b_j}+ 2 \\sum _{\\alpha _i = \\pm } \\sum _{i = 1}^{4}\\alpha _i c_i E_3 \\begin{bmatrix}1 \\\\ \\prod _{k = 1}^{4}b_k^{\\alpha _k}\\end{bmatrix} \\ .$ here $c_i$ are four arbitrary constants.", "Finally, at weight-four there is one equation $0 = \\bigg ( D_q^{(2)} - 31 E_4+ \\frac{1}{2}\\sum _{\\alpha _i = \\pm } E_3 \\begin{bmatrix}+ 1 \\\\ \\prod _{i = 1}^{4} b_i^{\\alpha _i}\\end{bmatrix}& \\ \\Big ( \\sum _{i = 1}^{4}\\alpha _iD_{b_i} \\Big )+ 2 \\sum _{i = 1}^{4} E_3 \\begin{bmatrix}+ 1 \\\\ b_i^2\\end{bmatrix} D_{b_i} \\\\& \\ - 12 \\sum _{i = 1}^{4}E_4 \\begin{bmatrix}+ 1 \\\\ b_i^2\\end{bmatrix}-6 \\sum _{\\alpha _i = \\pm } E_4 \\begin{bmatrix}+ 1 \\\\ \\prod _i b_i^{\\alpha _i}\\end{bmatrix} \\bigg ) \\mathcal {I}_{0,4} \\ .", "\\nonumber $ In the $b_i \\rightarrow 1$ limit, equation (REF ) reduces to the unflavored equation (REF ) where one sends $E_\\text{odd}\\big [ \\begin{array}{c}\\pm 1 \\\\ 1\\end{array} \\big ] \\rightarrow 0$ , corresponding to the nilpotency of the stress tensor $T$ in $\\widehat{\\mathfrak {so}}(8)_{-2}$ [7], [6]." 
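As a small numerical aside (our own check, not part of the original derivation; the Lie-algebra data used are standard), the Sugawara construction quoted above is non-degenerate for $\\widehat{\\mathfrak {so}}(8)_{-2}$ : with $\\dim \\mathfrak {so}(8) = 28$ and $h^\\vee = 6$ one has $2(k_\\text{2d} + h^\\vee ) = 8$ and a Sugawara central charge $c = k_\\text{2d}\\dim \\mathfrak {g}/(k_\\text{2d} + h^\\vee ) = -14$ , so that $-c/24 = 7/12$ , consistent with the leading exponent of the unflavored index appearing in the next subsection.

```python
# Sketch (ours): standard affine-Sugawara data for so(8) at level k = -2.
from fractions import Fraction

k, dim_g, h_vee = -2, 28, 6                     # level, dim so(8), dual Coxeter number
prefactor = Fraction(1, 2 * (k + h_vee))        # 1/(2(k + h^vee)) in the Sugawara stress tensor
c_sugawara = Fraction(k * dim_g, k + h_vee)     # Sugawara central charge
print(prefactor, c_sugawara, -c_sugawara / 24)  # 1/8 -14 7/12
```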
], [ "The solutions and modular property", "It is natural to ask if there are additional solutions to all these modular differential equations besides the flavored Schur index.", "We begin by noting that the second order unflavored equation (REF ) has an additional solution which is logarithmic [36], [35], [7], [6].", "This can be seen from the integral spacing of the indicial roots for the anzatz $\\mathcal {I} = q^\\alpha (1 + \\ldots )$ , $\\alpha ^2 - \\frac{\\alpha }{6} - \\frac{35}{144} = 0 \\qquad \\Rightarrow \\qquad \\alpha = - \\frac{5}{12}, \\frac{7}{12} \\ .$ Indeed, the unflavored index can be written as [7], [34] $\\mathcal {I}_{0,4}(b = 1) = 3 \\frac{q \\partial _q E_4}{\\eta (\\tau )^{10}} \\ .$ Under $S$ -transformation, $S\\mathcal {I}_{0,4} = \\frac{1}{960\\pi ^7 \\eta (\\tau )^{13}}\\bigg (& \\ \\frac{5 \\vartheta _1^{(3)}(0)^2}{\\eta (\\tau )^3} - 6\\pi \\vartheta _1^{(5)(0)} \\nonumber \\\\& \\ + \\frac{5i \\tau \\vartheta _1^{(3)}(0)^3}{16\\pi ^2 \\eta (\\tau )^6}- \\frac{13i \\tau \\vartheta _1^{(3)}(0)\\vartheta _1^{(5)(0)}}{16\\pi \\eta (\\tau )^3}+ \\frac{3}{8}i \\tau \\vartheta _1^{(7)}(0)\\bigg )\\ ,$ which is precisely the additional logarithmic solution of the form if expanded in $q$ series, $q^{-\\frac{5}{12}} (a + \\ldots ) + q^{\\frac{7}{12}} (\\log q)(a^{\\prime } + \\ldots )\\ .$ The fact that the modular transformation of a solution leads to another solution is guaranteed by the covariance of the unflavored modular differential equation under $SL(2, \\mathbb {Z})$ (or a suitable subgroup).", "Next we turn to the fully flavored case and consider additional solutions to all the equations of weight-two, three and four discussed above.", "Similar to the unflavored case, we will see that there is also a logarithmic solutions that arises from the $S$ transformations of the $y$ -extended Schur index given by $S \\mathcal {I}_{0,4}$ .", "Here, following the discussions in section , we assume an $\\mathbf {y}$ -extension for $\\mathcal {I}_{0,4}$ by a factor $\\prod _{i = 1}^{4}y_i^{k_i}$ , where $k_i = -2$ being the critical affine level of each $\\widehat{\\mathfrak {su}}(2)$ affine subalgebra.", "The presence of a logarithmic solution can be seen by studying the modular properties of the modular differential equations.", "First of all, all ten of the weight-two equations are covariant separately under the $S$ -transformation, $S(\\text{weight-}2) = \\tau ^2 (\\text{weight-}2) \\ .$ Here we apply the transformation of the derivatives $D_q^{(n)} \\rightarrow \\Big (\\tau ^2 \\partial _{(2n - 2)} & \\ + \\tau \\sum _{i}\\mathfrak {b}_i D_{b_i} + \\mathfrak {b}_i^2 k_i - (2n - 2) \\frac{\\tau }{2\\pi i}\\Big ) \\circ \\ldots \\\\& \\ \\circ \\Big (\\tau ^2 \\partial _{(2)} + \\tau \\sum _{i}\\mathfrak {b}_i D_{b_i} + \\mathfrak {b}_i^2 k_i - 2 \\frac{\\tau }{2\\pi i}\\Big )\\circ \\Big (\\tau ^2 \\partial _{(0)} + \\tau \\sum _{i}\\mathfrak {b}_i D_{b_i} + \\mathfrak {b}_i^2 k_i\\Big ) \\ , \\nonumber $ and $D_{b_i} \\rightarrow \\tau D_{b_i} + 2 \\mathfrak {b}_i k_i$ .", "The Eisenstein series transform under $S$ following (REF ).", "The weight-three equations (REF ) are almost covariant under $S$ -transformation up to combinations of the weight-two equations.", "For example, the weight-three equation with $c_1 = 1$ , $c_2 = c_3 = c_4 = 0$ transforms under $S$ as $S(\\text{weight-}3) = & \\ \\tau ^3 (\\text{weight-}3) \\nonumber \\\\& \\ - \\tau ^2 \\mathfrak {b}_1(4 \\mathcal {D}^\\text{Sug} + \\mathcal {D}^\\mathbf {35}(a_{1i} = 0, a_{23} = a_{24} = a_{34} = 
1, a_1 = -1, a_2 = a_3 = 0)) \\nonumber \\\\& \\ - \\tau ^2 \\mathfrak {b}_2(\\mathcal {D}^\\mathbf {35}(a_{12} = -1, \\text{all other } a^{\\prime }s = 0)) \\nonumber \\\\& \\ - \\tau ^2 \\mathfrak {b}_3(\\mathcal {D}^\\mathbf {35}(a_{13} = -1, \\text{all other } a^{\\prime }s = 0)) \\nonumber \\\\& \\ - \\tau ^2 \\mathfrak {b}_4(\\mathcal {D}^\\mathbf {35}(a_{14} = -1, \\text{all other } a^{\\prime }s = 0))\\ .$ Finally, the weight-four equation is almost covariant under $S$ -transformation, up to combinations of weight-two and weight-three equations, $S(\\text{weight-}4) = & \\ \\tau ^4 (\\text{weight-4}) \\\\& + \\tau ^3 \\sum _{i = 1}^{4} (\\text{weight-3})(c_i = 2, c_{j \\ne i} = 0) \\\\& + \\tau ^2 \\sum _{i = 1}^{3} \\mathfrak {b}_i^2 (-4 \\mathcal {D}^\\text{Sug} + \\mathcal {D}^\\mathbf {35}(a_{23} = a_{24} = a_{34} = -1, a_i = 1, \\text{other }a = 0)) \\nonumber \\\\& + \\tau ^2 \\mathfrak {b}_4^2 (-4 \\mathcal {D}^\\text{Sug} + \\mathcal {D}^\\mathbf {35}(a_{23} = a_{24} = a_{34} = -1, \\text{other }a = 0)) \\nonumber \\\\& + \\tau ^2 \\sum _{i < j}^4 \\mathfrak {b}_i\\mathfrak {b}_j (\\mathcal {D}^\\mathbf {35}(a_{ij} = 2, a_i = 1, \\text{other }a = 0)) \\nonumber \\ .$ The (almost) covariance implies that the $S$ -transformation of a solution must also be a solution which is logarithmic in this case, and therefore it is potentially a logarithmic module character of $\\widehat{\\mathfrak {so}}(8)_{-2}$ .", "As was discussed in section REF , $S\\mathcal {I}_{0,4}$ can be interpreted as a type of surface defect index.", "There are additional four non-logarithmic solutions to all of the above equations [83].", "Recall that the fully flavored Schur index can be computed by the contour integral $\\mathcal {I}_{0, 4} = - \\frac{1}{2}\\oint _{|a| = 1} \\frac{da}{2\\pi i a} \\vartheta _1(2 \\mathfrak {a})\\vartheta _1(- 2 \\mathfrak {a}) \\prod _{j = 1}^4\\prod _{\\pm }\\frac{\\eta (\\tau )}{\\vartheta _4(\\pm \\mathfrak {a} + \\mathfrak {m}_j)} \\oint \\frac{da}{2\\pi i a} \\mathcal {Z}(a) \\ ,$ where $m_j = e^{2\\pi i \\mathfrak {m}_j}$ are related to the standard flavor fugacities in the class-$\\mathcal {S}$ description, $m_1 = b_1 b_2 , \\qquad m_2 = \\frac{b_1}{b_2}, \\quad m_3 = b_3 b_4, \\qquad m_4 = \\frac{b_3}{b_4}\\ .$ The integrand $\\mathcal {Z}(a)$ has four residues, $R_{j = 1,\\ldots , 4} \\mathop {\\operatorname{Res}}_{a \\rightarrow m_jq^{\\frac{1}{2}}} \\mathcal {Z}(a) = \\frac{i}{2}\\frac{\\vartheta _1(2 \\mathfrak {m}_j)}{\\eta (\\tau )}\\prod _{\\ell \\ne j} \\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {m}_j + \\mathfrak {m}_\\ell )} \\frac{\\eta (\\tau )}{\\vartheta _1(\\mathfrak {m}_j - \\mathfrak {m}_\\ell )} \\ .$ These residues also appear in the modular transformation of $\\mathcal {I}_{0,4}$ , $S \\mathcal {I}_{0,4} = i \\tau \\mathcal {I}_{0,4} + 2i \\sum _{j = 1}^{4} \\mathfrak {m}_j R_j \\ , \\qquad (\\mathfrak {y}_i, \\mathfrak {b}_i, \\tau ) \\xrightarrow{} (\\mathfrak {y}_i - \\frac{\\mathfrak {b}_i^2}{\\tau }, \\frac{\\mathfrak {b}_i}{\\tau }, - \\frac{1}{\\tau })\\ .$ From the discussions in section REF , $R_j$ can be interpreted as indices of Gukov-Witten defects with specific singular boundary behavior at the defect plane.", "These four residues $R_j$ are actually additional linear independent non-logarithmic solutions to the modular differential equations.", "They are conjectured to be the characters of the four highest weight modules of $\\widehat{\\mathfrak {so}}(8)_{-2}$ , whose (finite) highest weights are given by $\\lambda = w(\\omega _1 
+ \\omega _3 + \\omega _4) - \\rho $ , with Weyl reflections $w = 1, s_{1,3,4}$ [4], [34].", "Given that $R_j$ are just simple ratios of $\\vartheta _i$ and $\\eta (\\tau )$ , they can also be viewed as free field characters of four $bc \\beta \\gamma $ systems, and provide a new free field realization of $\\widehat{\\mathfrak {so}}(8)_{-2}$ [84].", "Unlike the index $\\mathcal {I}_{0,4}$ and $S \\mathcal {I}_{0,4}$ , the residues $R_j$ do not have a smooth unflavoring limit, and therefore they are invisible if one only studies unflavored modular differential equations.", "It can be shown that the $R_j$ and $\\mathcal {I}_{0,4}$ are the only non-logarithmic solutions to all the weight-two, three and four equations that we have discussed, by solving them through an ansatz $\\mathcal {I} = q^h \\sum _{n} a_n(b_1, \\ldots , b_4)q^{n}$ [83].", "It is therefore natural to conjecture, with the logarithmic solution $S\\mathcal {I}_{0,4}$ , that we have found all the (untwisted) module characters of $\\widehat{\\mathfrak {so}}(8)_{-2}$ , and the tools required were simply the flavored modular differential equations (REF ), (REF ), (REF ) and (REF ).", "In fact, the modular differential equation (REF ) alone, which corresponds to the nilpotency of the stress tensor $T$ , actually generates all the equations of lower weights by modular transformations.", "Subsequently, they together determine all the allowed characters of $\\widehat{\\mathfrak {so}}(8)_{-2}$ .", "So it appears that all the information about the untwisted characters of $\\widehat{\\mathfrak {so}}(8)_{-2}$ is encoded in one single equation (REF ).", "The above solutions transform nicely under modular transformations.", "First of all, the residues $R_j$ transform in a one-dimensional representation of $SL(2, \\mathbb {Z})$ , $S R_j = i R_j, \\qquad T R_j = e^{\\frac{7\\pi i}{6}} R_j \\ . $", "They satisfy $S^2 = (ST)^3 = - \\operatorname{id}$ .", "On the other hand, $\\mathcal {I}_{0,4}$ and $S\\mathcal {I}_{0,4}$ transform in a two-dimensional representation of $SL(2, \\mathbb {Z})$ .", "Denote $\\text{ch}_0 \\equiv \\mathcal {I}_{0,4}$ , $\\text{ch}_1 \\equiv S\\mathcal {I}_{0,4}$ .", "Then we have in this basis $S = \\begin{pmatrix}0 & 1 \\\\1 & 0\\end{pmatrix}, \\qquad T = \\begin{pmatrix}e^{\\frac{7 \\pi i}{6}} & 0\\\\e^{- \\frac{\\pi i}{3}} & e^{- \\frac{5\\pi i}{6}}\\end{pmatrix} \\ .$ Here we use the convention that $g \\operatorname{ch}_i = \\sum _{j = 0,1} g_{ij}\\operatorname{ch}_j$ for $g \\in SL(2, \\mathbb {Z})$ .", "The two matrices satisfy $(S T)^3 = S^2 = 1$ , as they should.", "The two characters $\\operatorname{ch}_{0,1}$ form an $SL(2, \\mathbb {Z})$ invariant partition function $Z = n (\\operatorname{ch}_0 \\overline{\\operatorname{ch}_1} + \\operatorname{ch}_1 \\overline{\\operatorname{ch}_0}) \\equiv \\sum _{i, j = 0,1} M_{ij} \\operatorname{ch}_i \\overline{\\operatorname{ch}_j}\\ ,$ where $n$ denotes a possible multiplicity.", "The $S$ -matrix is symmetric and unitary, however, it does not give rise to sensible fusion coefficients since $S_{00} = 0$ .", "We attempt to fix this by considering a different basis $\\operatorname{ch}^{\\prime }_0 = \\operatorname{ch}_0, \\qquad \\operatorname{ch}^{\\prime }_1 = a\\operatorname{ch}_0 + b\\operatorname{ch}_1 \\ , \\qquad \\operatorname{ch}^{\\prime }_i = \\sum _{j = 0, 1} U_{ij} \\operatorname{ch}_j\\ .$ Doing so, the $SL(2, \\mathbb {Z})$ invariant partition function reads $Z = \\sum _{i,j}M^{\\prime }_{ij} \\operatorname{ch}^{\\prime }_i \\overline{\\operatorname{ch}^{\\prime }_j}= \\sum _{k,\\ell } \\Big 
(\\sum _{i,j} U^{-1}_{ik}\\overline{U^{-1}_{j\\ell }} M_{jk}\\Big ) \\operatorname{ch}^{\\prime }_{k} \\overline{\\operatorname{ch}^{\\prime }_\\ell } \\ ,$ and $g\\operatorname{ch}^{\\prime }_i = \\sum _{\\ell } \\Big (\\sum _{j, k} U_{ij}g_{jk}U^{-1}_{k\\ell }\\Big ) \\operatorname{ch}^{\\prime }_\\ell \\ , \\qquad \\forall g \\in SL(2, \\mathbb {Z})\\ .$ With the $S^{\\prime }$ -matrix in the new basis, we tentatively define $N_{ij}^k = \\sum _{\\ell } \\frac{S^{\\prime }_{i\\ell } S^{\\prime }_{j\\ell } \\overline{S^{\\prime }_{\\ell k}}}{S_{0\\ell }} \\ ,$ and require $N_{ij}^k$ to be non-negative integers and $M^{\\prime }$ to be integral.", "The minimal solution to this problem is given by $U_{11} = 1, \\qquad U_{12} = 0, \\qquad U_{21} = 1, \\qquad U_{22} = \\pm 1 \\ ,$ such that $M^{\\prime } = \\mp \\begin{bmatrix}2 & -1\\\\-1 & 0\\end{bmatrix}\\ ,\\qquad [\\operatorname{ch}^{\\prime }_0] \\times [\\operatorname{ch}^{\\prime }_i] = [\\operatorname{ch}^{\\prime }_i], \\qquad [\\operatorname{ch}^{\\prime }_1] \\times [\\operatorname{ch}^{\\prime }_1] = [\\operatorname{ch}^{\\prime }_1] \\ .$ However, it is unclear whether such a fusion algebra bears any sensible mathematical or physical meaning." ], [ "The twisted sector", "In the case at hand, $\\lceil \\frac{n + 2g - 2}{2} \\rceil = 1$ , and therefore among all defect indices, there is only one independent index with even $k$ , and one with odd $k$ .", "For those with odd vorticity, we focus on $\\mathcal {I}^\\text{defect}_{0,4}(k = 1)$ , $\\mathcal {I}^\\text{defect}_{0,4}(k = 1) = \\frac{\\eta (\\tau )^2}{\\prod _{i = 1}^{4}\\vartheta _1(2 \\mathfrak {b}_i)} \\sum _{\\vec{\\alpha }= \\pm } \\left(\\prod _{i = 1}^4 \\alpha _i\\right)E_2\\left[\\begin{matrix}-1 \\\\ \\prod _{i = 1}^{4} b_i^{\\alpha _i}\\end{matrix}\\right] \\ .$ The defect index $\\mathcal {I}^\\text{defect}_{0,4}(k = 1)$ does not satisfy the equations discussed in the previous section; however, it does satisfy equations that belong to the twisted sector.", "To begin, the defect index has a smooth unflavoring limit, $\\mathcal {I}^\\text{defect}_{0,4}(k = 1, b_i = 1) = \\frac{\\eta (\\tau )^2}{8\\pi ^2 \\vartheta _4(0)^3 \\vartheta ^{\\prime }_1(0)^4}\\left[6 \\vartheta ^{\\prime \\prime }_4(0)^3 - 7 \\vartheta _4(0)\\vartheta _4^{\\prime \\prime }(0)\\vartheta _4^{(4)}(0)+ \\vartheta _4(0)^2 \\vartheta _4^{(6)}(0)\\right] \\ .", "\\nonumber $ It is easy to check that it satisfies a weight-four equation $0 = \\left[D_q^{(2)} - 79 E_4 - 96 E_4 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}\\right] \\mathcal {I}^\\text{defect}_{0,4}(k = 1, b = 1) \\ .$ Apparently, this is the twisted version of the unflavored equation (REF ).", "Next we consider the fully flavored defect index.", "As discussed in section REF , the exact form (REF ) suggests that the multi-fundamentals in the VOA have half-integer conformal weight.", "One can insert all the weight-two, three and four null states that we mentioned above into the supertrace that computes the twisted module character $\\mathcal {I}_{0,4}^\\text{defect}(k = 1)$ , and turn them into flavored modular differential equations.", "The equations will be almost identical to the ones in the untwisted sector, except that all the Eisenstein series $E_n \\big [ \\begin{array}{c}+1\\\\\\ldots \\end{array} \\big ]$ associated to the multi-fundamentals should be modified to $E_n \\big [ \\begin{array}{c}-1\\\\\\ldots \\end{array} \\big ]$ .", "Therefore, the Sugawara condition leads to an equation $0 = \\Bigg [D^{(1)}_q & \\ - 
\\frac{1}{4}\\left(D_{b_3}D_{b_2} + D_{b_4}D_{b_2} + D_{b_4}D_{b_3} + D_{b_4}^2\\right) \\nonumber \\\\& \\ - \\frac{1}{2}E_1 \\begin{bmatrix}- 1 \\\\ \\frac{b_1}{b_2 b_3 b_4}\\end{bmatrix}\\left(D_{b_1} - D_{b_2} - D_{b_3} - D_{b_4}\\right)\\nonumber \\\\& \\ - \\frac{1}{2}E_1 \\begin{bmatrix}- 1 \\\\ b_1 b_2 b_3 b_4\\end{bmatrix}\\left(D_{b_1} + D_{b_2} + D_{b_3} + D_{b_4}\\right)- E_1\\begin{bmatrix}1 \\\\ b_4^2\\end{bmatrix} D_{b_4}\\nonumber \\\\& \\ + \\left(E_2 + 2 E_2 \\begin{bmatrix}- 1 \\\\ \\frac{b_1}{b_2 b_3 b_4}\\end{bmatrix}+ 2E_2\\begin{bmatrix}- 1 \\\\ b_1 b_2 b_3 b_4\\end{bmatrix}+ 2 E_2\\begin{bmatrix}1 \\\\ b_4^2\\end{bmatrix}\\right) \\Bigg ] \\mathcal {I}_{0,4}^\\text{defect}(1) \\ ,$ while all the nulls from the three $\\mathbf {35}$ -relations give rise to $0 = & \\ \\sum _{i < j}a_{ij}D_{b_i}D_{b_j} \\mathcal {I}_{0,4} + \\sum _{i = 1}^{4} a_i \\left(D_{b_i}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}D_{b_i}- 8 E_2 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}\\right)\\mathcal {I}_{0,4} \\\\& \\ \\qquad + \\sum _{\\alpha _i = \\pm } \\Big (\\sum _{i<j}\\alpha _i \\alpha _j a_{ij}\\Big )\\Big (- E_2 \\begin{bmatrix}- 1 \\\\ \\prod _{k = 1}^{4}b_k^{\\alpha _k}\\end{bmatrix} + \\frac{1}{4}E_1 \\begin{bmatrix}- 1 \\\\ \\prod _{k = 1}^{4}b_k^{\\alpha _k}\\end{bmatrix}\\sum _{i = 1}^{4}\\alpha _i D_{b_i}\\Big ) \\mathcal {I}_{0,4}^\\text{defect}(1) \\ .", "\\nonumber $ Higher weight equations can be similarly obtained.", "These equations are almost covariant under $STS$ transformations, and therefore $STS$ -transformation of the defect index $\\mathcal {I}^\\text{defect}_{0,4}(k = 1)$ provides a logarithmic solution to this set of equations.", "There may be additional non-logarithmic solutions that resemble the residues/free field characters $R_j$ , however, we leave their existence to future study." 
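The $SL(2, \\mathbb {Z})$ representation data quoted in the previous subsection are easy to confirm numerically; the following sketch (ours, using standard numpy, with the matrices exactly as quoted above) verifies the claimed relations for both the one-dimensional representation carried by the residues $R_j$ and the two-dimensional representation carried by $(\\operatorname{ch}_0, \\operatorname{ch}_1)$ .

```python
import numpy as np

# Two-dimensional representation on ch_0 = I_{0,4}, ch_1 = S I_{0,4} (matrices as quoted above).
S = np.array([[0, 1],
              [1, 0]], dtype=complex)
T = np.array([[np.exp(7j * np.pi / 6), 0],
              [np.exp(-1j * np.pi / 3), np.exp(-5j * np.pi / 6)]])

eye = np.eye(2)
assert np.allclose(S @ S, eye)                              # S^2 = 1
assert np.allclose(np.linalg.matrix_power(S @ T, 3), eye)   # (ST)^3 = 1

# One-dimensional representation on the residues R_j: S -> i, T -> exp(7*pi*i/6).
s, t = 1j, np.exp(7j * np.pi / 6)
assert np.isclose(s ** 2, -1) and np.isclose((s * t) ** 3, -1)   # S^2 = (ST)^3 = -id
print("modular relations verified")
```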
], [ "$\\mathcal {N} = 4$ {{formula:a5e0541c-9590-4190-82f4-557f8f2ec6ff}} theory", "The $\\mathcal {N} = 4$ theory with an $SU(2)$ gauge group has an $SU(2)_\\text{f}$ flavor symmetry.", "The corresponding moment map operator $M$ transforming in the adjoint of $SU(2)_\\text{f}$ satisfies the Joseph relation in the Higgs branch chiral ring, $(M \\otimes M)_\\mathbf {1} = 0 \\ .$ Additional relations with the Hall-Littlewood chiral ring operators $\\omega , \\tilde{\\omega }$ also exist, $(M \\otimes \\omega )_\\mathbf {2} = (M \\otimes \\tilde{\\omega })_\\mathbf {2} = \\omega \\otimes \\omega = \\tilde{\\omega }\\otimes \\tilde{\\omega }= 0 \\ .$ The $\\mathcal {N} = 4$ theory with $SU(2)$ gauge group has the small $\\mathcal {N} = 4$ superconformal algebra as its associated VOA, with the central charge $c = - 9$ .", "The $SU(2)$ flavor symmetry leads to an $\\widehat{\\mathfrak {su}}(2)_{k_\\text{2d} = - 3/2}$ subalgebra with generators $J^A$ .", "The Sugawara stress tensor $T_\\text{Sug} = \\frac{1}{2(k_\\text{2d} + h^\\vee )} \\sum _{A,B} K_{AB}(J^A J^B)$ equals the full stress tensor $\\mathcal {T}$ of $\\mathbb {V}_{\\mathcal {N} = 4}$ .", "Since the stress tensor descends from 4d outside the Higgs branch chiral ring, the Sugawara construction reflects the aforementioned Joseph relation.", "The above Hall-Littlewood chiral ring relations are also reflected by four null states in the VOA at conformal weight $5/3$ and 3 [6], which are charged under the Cartan of $SU(2)_\\text{f}$ .", "There is an additional neutral null state (with $A = 3$ ) at conformal weight-3, $\\mathcal {N}_3^A = (\\sigma ^A_{\\alpha \\dot{\\beta }}G^\\alpha _{- 3/2}\\tilde{G}^{\\dot{\\beta }}_{-3/2} + 2 f^A{_{BC}}J^B_{-2} J^C_{-1} + 2J^A_{-3} - 2L_{-2}J^A_{-1})|0\\rangle \\ .$ The stress tensor $T$ is outside of the chiral ring.", "As analyzed in [6], it must be nilpotent up to terms in the subalgebra $C_2(\\mathbb {V}_{\\mathcal {N} = 4})$ .", "This is concretely realized by a null at conformal weight-4, $\\mathcal {N}_4 = \\left((L_{-2})^2 + \\epsilon _{\\alpha \\beta }(\\tilde{G}^\\alpha _{-5/2}G^\\beta _{-3/2} - G^\\alpha _{- 5/2}\\tilde{G}^\\beta _{-3/2}) - K_{AB}J^A_{-2}J^B_{-2} - \\frac{1}{2}L_{-4}\\right)|0\\rangle \\ ,$ corresponding to a relation in the Higgs branch, $(TT) \\sim (K_{AB}J^A J^B)^2 = 0 \\ .$ Now we turn to the modular differential equations that follow from the null states.", "The Schur index of the $\\mathcal {N} = 4$ theory is given by the simple expression $\\mathcal {I}_{\\mathcal {N} = 4} = \\operatorname{tr}(-1)^F q^{L_0 - \\frac{c}{24}}b^{J_0}= \\frac{1}{2\\pi } \\frac{\\vartheta ^{\\prime }_4(\\mathfrak {b})}{\\vartheta _1(2 \\mathfrak {b})}= \\frac{i\\vartheta _4(\\mathfrak {b})}{\\vartheta _1(2\\mathfrak {b})}E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix} \\ .$ The factor in front of the Eisenstein series can be interpreted as a character of a $bc\\beta \\gamma $ system [85], [17], $\\mathcal {I}_{bc\\beta \\gamma } \\frac{i \\vartheta _4(\\mathfrak {b})}{\\vartheta _1(2 \\mathfrak {b})} \\ .$ On the other hand, the Schur index of the $\\mathcal {N} = 4$ theory is a simple contour integral $\\mathcal {I}_{\\mathcal {N} = 4}= \\frac{1}{2} \\oint \\frac{da}{2\\pi i a} \\frac{\\eta (\\tau )^3}{\\vartheta _4(\\mathfrak {b})} \\prod _{\\pm } \\frac{\\vartheta _1(\\pm \\mathfrak {a})}{\\vartheta _4(\\pm \\mathfrak {a} + \\mathfrak {b})}\\oint \\frac{da}{2\\pi i a}\\mathcal {Z}(a)\\ ,$ Character $\\mathcal {I}_{bc \\beta \\gamma }$ coincides with the residue of the integrand, 
$\\operatorname{Res}_{\\mathfrak {a} \\rightarrow \\mathfrak {b} + \\frac{\\tau }{2}} \\mathcal {Z}(a) \\sim \\mathcal {I}_{bc \\beta \\gamma } \\ ,$ and can be viewed physically as related to the Schur index in the presence of a Gukov-Witten surface defect in the $\\mathcal {N} = 4$ theory.", "Here $J$ denotes the Cartan generator of the flavor group.", "Various null states above can be inserted into the trace, leading to non-trivial modular differential equations.", "The Sugawara construction $0 = T - T_\\text{Sug}$ is the simplest example, giving a modular weight-two equation [83], [17], $\\left[D_q^{(1)} - \\frac{1}{2(k + h^\\vee )}\\left(\\frac{1}{2}D_b^2 + kE_2 + 2k E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 2 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}D_b\\right)\\right] \\mathcal {I}_{\\mathcal {N} = 4} = 0 \\ .$ Also, the weight-three and weight-four null states $\\mathcal {N}^{A = 3}_3$ , $\\mathcal {N}_4$ lead to $0 = \\bigg [ D_q^{(1)} D_b + E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}D_q^{(1)}& \\ - 3 E_3 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}+ 6 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\\\& \\ + \\left(E_2 + E_2 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}- 2 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)D_b \\bigg ]\\mathcal {I}_{\\mathcal {N} = 4} = 0 \\ ,$ and $0 = & \\ (D_q^{(2)} + \\frac{c_\\text{2d}}{2} E_4 )\\mathcal {I}_{\\mathcal {N} = 4} + \\left(-2 E_2\\left[\\begin{matrix}-1\\\\b\\end{matrix}\\right]D_q^{(1)}- 4E_3\\left[\\begin{matrix}-1 \\\\b\\end{matrix}\\right]D_b+ 18 E_4\\left[\\begin{matrix}-1 \\\\ b\\end{matrix}\\right]\\right) \\mathcal {I}_{\\mathcal {N} = 4} \\nonumber \\\\& + \\left(3k_\\text{2d}E_4 + 2 E_3\\left[\\begin{matrix}1 \\\\ b^2\\end{matrix}\\right] D_b- 9 E_4\\left[\\begin{matrix}1 \\\\ b^2\\end{matrix}\\right]\\right) \\mathcal {I}_{\\mathcal {N} = 4} \\ .$ There are additional solutions to the above three modular differential equations.", "First we recall that in the unflavoring limit, the Schur index is given by $\\mathcal {I}_{\\mathcal {N} = 4}(b = 1) = \\frac{y^k}{4\\pi } \\frac{\\vartheta _4^{\\prime \\prime }(0)}{\\vartheta _1^{\\prime }(0)} \\ ,$ and it satisfies a $\\Gamma ^0(2)$ -modular equation following from the null state (REF ), $\\left(D^{(2)}_q - 18E_4- 2 E_2 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}D_q^{(1)}+ 18E_4 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}\\right)\\mathcal {I}_{\\mathcal {N} = 4} = 0 \\ ,$ This equation reflects the nilpotency of the stress tensor, and is also the unflavoring limit of (REF ), where $E_3 \\begin{bmatrix}\\pm 1 \\\\ b^2\\end{bmatrix} \\xrightarrow{} 0, \\qquad D_b \\operatorname{\\mathcal {I}} \\xrightarrow{} 0 \\ .$ The corresponding indicial equation predicts $\\alpha = - 1/8$ and $\\alpha = 3/8$ for the standard anzatz $\\mathcal {I} = q^\\alpha \\sum _n a_n q^{n/2}$ .", "The half-integer spacing between $\\alpha $ 's implies that among the two linear independent solutions, one is logarithmic of the form $q^{-1/8}\\sum _{n} a_n q^{n/2} + q^{3/8} \\log q \\sum _{n}a^{\\prime }_n q^{n/2} \\ .$ Such logarithmic solution is given by the $STS$ -transformation of the unflavored Schur index, $\\mathcal {I}_{\\log } = - \\frac{i}{2}y^k \\frac{\\vartheta _4(0)}{\\vartheta _1^{\\prime }(0)} + (1 - \\tau ) \\mathcal {I}_{\\mathcal {N} = 4}(b = 1) \\ .$ When flavored, there are three equations (REF ), (REF ) and (REF ) to be concerned with.", "One can assume an anzatz for non-logarithmic solutions of the form $\\mathcal {I} = q^\\alpha \\sum _{n \\ge 0} a_n(b) q^{\\frac{n}{2}} \\ .$ The 
coefficients $a_n(b)$ can be solved order by order, and one finds that the Schur index $\\mathcal {I}_{\\mathcal {N} = 4}$ and the Gukov-Witten defect index $\\mathcal {I}_{bc\\beta \\gamma }$ are the only two solutions (corresponding to $\\alpha = 3/8$ and $\\alpha = - 1/8$ respectively) to the three flavored modular equations [83].", "The flavored Schur index $\\mathcal {I}_{\\mathcal {N} = 4}$ in the $b \\rightarrow 1$ limit reproduces the non-logarithmic unflavored solution of unflavored equation (REF ), while the $\\alpha = - \\frac{1}{8}$ solution $\\mathcal {I}_{\\beta \\gamma bc}$ does not have a smooth $b \\rightarrow 1$ limit and is invisible from (REF ).", "As discussed in [86], irreducible modules from the category-$\\mathcal {O}$ of the small $\\mathcal {N} = 4$ superconformal algebra $\\mathbb {V}_{\\mathcal {N} = 4}$ were classified: there are only two, one being the vacuum module, and the other will be called $M$ .", "From [86], [85], the associated VOA $\\mathbb {V}_{\\mathcal {N} = 4}$ is a sub-VOA of the $bc\\beta \\gamma $ system, and the quotient gives precisely the irreducible module $M = \\mathbb {V}_{bc\\beta \\gamma }/\\mathbb {V}_{\\mathcal {N} = 4}$ .", "The two non-logarithmic solutions that we have found precisely correspond to the vacuum module and the reducible but indecomposable module $\\mathbb {V}_{bc\\beta \\gamma }$ of $\\mathbb {V}_{\\mathcal {N} = 4}$ .", "There is also a logarithmic solution to the flavored modular differential equations, given by the $STS$ -transformation of $\\mathcal {I}_{\\mathcal {N} = 4}$ .", "To see the presence of such a solution, let us first analyze the modular property of the equation (REF ).", "As discussed in section , we consider the $y$ -extended character $\\mathcal {I}_{\\mathcal {N} = 4}(\\mathfrak {y}, \\mathfrak {b}, \\tau ) = y^k \\mathcal {I}_{\\mathcal {N} = 4}(\\mathfrak {b}, \\tau )\\ , \\qquad k = - \\frac{3}{2}\\ .$ Recall that the auxiliary variable $y$ is associated to the flavor $SU(2)$ fugacity $b$ , whose affine level is $k = \\frac{-3}{2}$ .", "We consider again the $S$ -transformation $S: (\\mathfrak {y}, \\mathfrak {b}, \\tau ) \\rightarrow (\\mathfrak {y} - \\frac{\\mathfrak {b}^2}{\\tau }, \\frac{\\mathfrak {b}}{\\tau }, - \\frac{1}{\\tau }) \\ .$ The weight-two equation (REF ) is actually covariant under $S$ , $& \\ D_q^{(1)} - \\frac{1}{2(k + h^\\vee )}\\left(\\frac{1}{2}D_b^2 + kE_2 + 2k E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 2 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}D_b\\right)\\\\\\rightarrow & \\ \\tau ^2\\left(D_q^{(1)} - \\frac{1}{2(k + h^\\vee )}\\left(\\frac{1}{2}D_b^2 + kE_2 + 2k E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 2 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}D_b\\right)\\right) \\ .$ This implies that $S\\mathcal {I}_{\\mathcal {N} = 4}$ must be a solution.", "It is also invariant under $T$ .", "A similar analysis extends to (REF ), (REF ) which can be shown to be almost covariant under $STS \\in \\Gamma ^0(2)$ , up to equations in lower modular weights.", "Explicitly, $STS (\\text{weight-3}) = & \\ (\\tau - 1)^3 (\\text{weight-3}) - 2 \\mathfrak {b} (\\tau - 1)^2 (\\text{weight-2}) \\ , \\\\STS (\\text{weight-4}) = & \\ (\\tau - 1)^4 (\\text{weight-4}) + 2 \\mathfrak {b} (\\tau - 1)^3 (\\text{weight-3}) - 2 \\mathfrak {b}^2 (\\tau - 1)^2 (\\text{weight-2}) \\ .", "\\nonumber $ To conclude, $STS \\mathcal {I}_{\\mathcal {N} = 4}$ furnishes a logarithmic solution to all three modular equations.", "We conjecture that $STS \\mathcal {I}_{\\mathcal {N} = 4}$ together 
with the two non-logarithmic solutions $\\mathcal {I}_{\\mathcal {N} = 4}$ and $\\mathcal {I}_{bc\\beta \\gamma }$ form the complete set of solutions.", "The logarithmic solution $STS\\mathcal {I}_{\\mathcal {N} = 4}$ may correspond to a logarithmic module of $\\mathbb {V}_{\\mathcal {N} = 4}$ where the Virasoro zero mode $L_0$ does not have a diagonalizable action [86], though the precise relation will be left for future study.", "Like in the case of $\\widehat{\\mathfrak {so}}(8)_{-2}$ , by a modular transformation $STS$ , the weight-four equation (REF ) generates all the lower weight equations that are necessary for determining the allowed characters.", "Hence, all the character information is encoded in one equation (REF ), which reflects the nilpotency of the stress tensor.", "Finally, it is straightforward to show that $\\Gamma ^0(2)$ acts linearly on the two-dimensional space spanned by $\\text{ch}_0 = \\mathcal {I}_{\\mathcal {N} = 4}$ and $\\text{ch}_1 = STS \\mathcal {I}_{\\mathcal {N} = 4}$ : $S^2 = 1, \\qquad T^2 = \\begin{pmatrix}-i & 0 \\\\2i & -i\\end{pmatrix}, \\qquad STS = \\begin{pmatrix}0 & 1 \\\\-1 & 2\\end{pmatrix} \\ .$ Here the matrix $g_{ij}$ of an element $g \\in \\Gamma ^0(2)$ is defined through the action $g \\text{ch}_i = \\sum _{j = 0,1} g_{ij} \\text{ch}_j$ ." ], [ "Untwisted sector", "The $\\mathcal {T}_{1,1}$ theory is the product of the $\\mathcal {N} = 4$ $SU(2)$ theory and a free hypermultiplet.", "In general the two sectors each have their own $SU(2)$ flavor symmetry with separate flavor fugacities $b_{1,2}$ , on which the Schur index depends.", "Let us first look at the naive class-$\\mathcal {S}$ limit $b_{1} = b_2 = b$ .", "The Schur index is given by the formula (REF ), $\\mathcal {I}_{1,1} = i\\frac{\\eta (\\tau )}{\\vartheta _1(2 \\mathfrak {b})} E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix} \\ .$ It satisfies one weight-two, two weight-three and three weight-four equations, which are collected in the appendix .", "For example, there are equations mirroring those in the $\\mathcal {N} = 4$ theory at weight-two $0 = \\left(D_q^{(1)} - \\frac{1}{2}D_b^2 - E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix} D_b-2 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} D_b+ 3 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} + 2 E_2- 2 E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right) \\mathcal {I}_{1,1} \\ ,$ weight-three $0 = \\left(D_q^{(1)}D_b + E_2 D_b + E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix} D_q^{(1)}- 2 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} D_b- 2 E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 6 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right) \\mathcal {I}_{1,1} \\ ,$ and at weight-four, $0 = \\bigg (D_q^{(2)} - & 4 E_2 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}D_q^{(1)}- 4 E_3 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}D_b+ 2 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}D_b\\\\& \\ + \\frac{8}{3} E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}E_3 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}+ \\frac{2}{3}E_1 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} + 16 E_4 \\begin{bmatrix}-1 \\\\ b\\end{bmatrix}- 11 E_4 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\bigg ) \\mathcal {I}_{1,1} \\ .", "\\nonumber $ Note that this weight-four equation reduces in the $b \\rightarrow 1$ limit to the 2$^\\text{nd}$ order unflavored modular differential equation that reflects the nilpotency of the stress tensor [6], $0 = \\left(D_q^{(2)} - 4 E_2 \\begin{bmatrix}-1 \\\\ 
1\\end{bmatrix}D_q^{(1)} - 11 E_4 \\begin{bmatrix}+ 1 \\\\ 1\\end{bmatrix}+ 16 E_4 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}\\right) \\mathcal {I}_{1,1}(b = 1) \\ .$ There are additional solutions to this set of equations.", "First of all, the factor $\\mathcal {I}_{\\beta \\gamma } \\equiv \\frac{\\eta (\\tau )}{\\vartheta _1(2 \\mathfrak {b})}$ in front of $\\mathcal {I}_{1,1}$ is a non-logarithmic solution, and also coincides with the residue of the integrand that computes $\\mathcal {I}_{1,1}$ in a contour integral.", "Like in the $\\mathcal {N} = 4$ case, the three equations (REF ), (REF ) and (REF ) can be solved order by order through an ansatz $\\mathcal {I} = q^h\\sum _{n = 0}a_n(b)q^{\\frac{n}{2}}$ .", "It turns out that $\\mathcal {I}_{1,1}$ and $\\mathcal {I}_{\\beta \\gamma }$ are the only two non-logarithmic solutions.", "There are also logarithmic solutions.", "This can be seen by working out the modular transformations of (REF ), (REF ), (REF ).", "For example, (REF ) is covariant under $STS$ , while (REF ) is almost covariant, $STS (\\text{weight-4}) = (\\tau -1)^4 (\\text{weight-4}) - 2(\\tau - 1)^3 \\mathfrak {b} (\\text{weight-3}) + 2 (\\tau - 1)^2 \\mathfrak {b}^2 (\\text{weight-2}) \\ .", "\\nonumber $ Therefore, $STS \\mathcal {I}_{1,1}$ is also a solution, and we believe we have exhausted all the solutions to the flavored modular differential equations.", "Once again, the flavored modular differential equation corresponding to the nilpotency of $T$ encodes all the information about the flavored characters, as it generates all the necessary equations of lower weight.", "Next we turn on separate flavor fugacities for the two $SU(2)$ flavor symmetries.", "The fully flavored Schur index is just the product of the Schur indices of the two theories, $\\mathcal {I}_{1,1}(\\mathfrak {b}_1, \\mathfrak {b}_2) = \\frac{1}{2\\pi } \\frac{\\vartheta _4^{\\prime }(\\mathfrak {b}_1)}{\\vartheta _1(2 \\mathfrak {b}_1)} \\frac{\\eta (\\tau )}{\\vartheta _4(\\mathfrak {b}_2)} = \\frac{i \\vartheta _4(\\mathfrak {b}_1)}{\\vartheta _1(2 \\mathfrak {b}_1)}\\frac{\\eta (\\tau )}{\\vartheta _4(\\mathfrak {b}_2)} E_1 \\begin{bmatrix}- 1 \\\\ b_1\\end{bmatrix}\\ .$ It is straightforward to derive the fully-flavored modular differential equations for this theory by combining the results in section REF , REF .", "For instance, at weight-two one obtains an equation by combining the Sugawara condition (REF ) and (REF ), $0 = \\left[D_q^{(1)} - \\frac{1}{2(k + h^\\vee )}\\left(\\frac{1}{2}D_{b_1}^2 + kE_2 + 2k E_2 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix}+ 2 E_1 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix}D_{b_1}\\right)- E_2 \\begin{bmatrix}-1 \\\\ b_2\\end{bmatrix}\\right] \\mathcal {I}_{1,1} \\ .$ Equations of weight-three and four in section (REF ) can be easily carried over from the previous section.", "Equations in section (REF ) without any $D_q^{(n)}$ also appear here naturally.", "Again there are additional solutions to these equations.", "The coefficient of the fully flavored index is given by $\\mathcal {I}_{bc(\\beta \\gamma )^2} = \\frac{i \\vartheta _4(\\mathfrak {b}_1)}{\\vartheta _1(2 \\mathfrak {b}_1)}\\frac{\\eta (\\tau )}{\\vartheta _4(\\mathfrak {b}_2)}\\ ,$ which coincides with the residue of the integrand that computes (REF ).", "It satisfies all the above-mentioned modular differential equations.", "The equations are also (almost) covariant under $STS$ transformation, and hence the transformed index $STS\\mathcal {I}_{1,1}(\\mathfrak {b}_1, \\mathfrak {b}_2)$ gives a logarithmic 
solution.", "We believe that $\\mathcal {I}_{1,1}$ , $STS \\mathcal {I}_{1,1}$ and $\\mathcal {I}_{bc(\\beta \\gamma )^2}$ form the complete set of solutions." ], [ "Untwisted sector", "Moving onto the twisted sector where we consider again the $b_i = b$ limit.", "Since ${\\frac{n + 2g - 2}{2}} = 1$ in this case, there is only one linear independent vortex defect index with odd vorticity, which we choose to be $\\mathcal {I}^\\text{defect}_{1,1}(k = 1)$ .", "In this sector, we should change all the flavor fundamentals to have integer conformal weights.", "Indeed, one can check that the defect index $\\mathcal {I}^\\text{defect}_{1,1}(k = 1)$ satisfies all the above equations with $E_n \\Big [\\begin{array}{c}- 1 \\\\ b\\end{array}\\Big ] \\rightarrow E_n \\Big [\\begin{array}{c}+ 1 \\\\ b\\end{array}\\Big ]$ .", "For example, at weight-two, there is $0 = \\left(D_q^{(1)} - \\frac{1}{2}D_b^2 - E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix} D_b-2 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} D_b+ 3 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} + 2 E_2- 2 E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right) \\mathcal {I}^\\text{defect}_{1,1}(1) \\ .$ At weight-four, there is also $0 = \\bigg (D_q^{(2)} - & 4 E_2 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}D_q^{(1)}- 4 E_3 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}D_b+ 2 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}D_b\\\\& \\ + \\frac{8}{3} E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}E_3 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}+ \\frac{2}{3}E_1 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} + 16 E_4 \\begin{bmatrix}1 \\\\ b\\end{bmatrix}- 11 E_4 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\bigg ) \\mathcal {I}^\\text{defect}_{1,1}(1) \\ .", "\\nonumber $ What's surprising to observe is that the free field character $\\mathcal {I}_{\\beta \\gamma }$ is actually an additional non-logarithmic solution to all the equations in both the untwisted and the twisted sectorAmong all the examples we have examined, this is the only instance where the free field character walks between both worlds.", "We leave its physical or mathematical implication to future study..", "In fact, there are precisely two linear independent non-logarithmic solutions to all the equations in the twisted sector, the free field character $\\mathcal {I}_{\\beta \\gamma }$ and the defect index $\\mathcal {I}^\\text{defect}_{1,1}(k = 1)$ However, unlike that in the untwisted sector, neither of these solutions has smooth unflavoring limit.", "In particular, $\\mathcal {I}_{1,1}^\\text{defect}(k = 1) = - \\frac{\\eta (\\tau )\\vartheta _1^{\\prime }(\\mathfrak {b})}{\\pi \\vartheta _1(\\mathfrak {b},q) \\vartheta _1(2\\mathfrak {b})}$ has a double pole at $\\mathfrak {b} = 0$ .", "Therefore there is no unflavored modular differential equation in the twisted sector.", ".", "This can be similarly shown by solving them order by order.", "Finally, the equations in the twisted sector are all $SL(2, \\mathbb {Z})$ (almost) covariant, and therefore logarithmic solutions are present given by the modular transformations of the vortex defect index." 
], [ "$\\mathcal {T}_{2,0}$", "The genus-two theory $\\mathcal {T}_{2,0}$ admits two weak-coupling limits.", "One limit is given by gauging two $\\mathcal {T}_{1,1}$ by an $SU(2)$ vector multiplet, where one reads off a $U(1)$ flavor symmetry invisible in the class-$\\mathcal {S}$ description [1].", "The chiral algebra of $\\mathcal {I}_{2,0}$ has been constructed in [87] and later a free field realization was proposed in [88].", "The associated flavored Schur index is given by the exact formula [34], $\\mathcal {I}_{2,0} = \\frac{i \\vartheta _1(\\mathfrak {b})^2}{\\eta (\\tau ) \\vartheta _1(2 \\mathfrak {b})} \\left(E_3 \\begin{bmatrix}+1\\\\b\\end{bmatrix}+ E_1 \\begin{bmatrix}+1\\\\b\\end{bmatrix}E_2 \\begin{bmatrix}+1\\\\b\\end{bmatrix}+ \\frac{1}{12}E_1\\begin{bmatrix}+1\\\\b\\end{bmatrix}\\right) \\ .$ Here $b$ denotes the $U(1)$ flavor fugacity.", "Note that the factor in front can be viewed as a free field character of a $(bc)^2\\beta \\gamma $ system, $\\mathcal {I}_{(bc)^2 \\beta \\gamma } \\frac{i\\vartheta _1(\\mathfrak {b})^2}{\\eta (\\tau )\\vartheta _1(2 \\mathfrak {b})} \\ .$ The Schur index satisfies several modular differential equations.", "At weight-four, we have $\\Bigg [ D_q^{(2)}& \\ +\\frac{1}{4}D_{b}^{4}-2D_q^{(1)}D_b^2+2E_1\\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} D_{b}^{3}-2\\left( E_1\\begin{bmatrix}1\\\\b\\\\\\end{bmatrix} +2E_1\\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} \\right) D_q^{(1)}D_b \\nonumber \\\\& \\ +4\\left( E_2+E_2\\begin{bmatrix}1\\\\b\\\\\\end{bmatrix} +E_2\\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} \\right) D_q^{(1)}+\\left( -7E_2-2E_2\\begin{bmatrix}1\\\\b\\\\\\end{bmatrix} -8E_2\\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} \\right) D_b^2\\\\& \\ \\left( -2E_2\\left( E_1 \\begin{bmatrix}1\\\\b\\\\\\end{bmatrix} +8E_1 \\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} \\right) +12E_3 \\begin{bmatrix}1\\\\b\\\\\\end{bmatrix} +18E_3 \\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} \\right) D_b\\nonumber \\\\& \\ +\\left( 7E_{2}^{2}+4E_2\\left( E_2 \\begin{bmatrix}1\\\\b\\\\\\end{bmatrix} +4E_2 \\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} \\right) -2\\left( 14E_4+12E_4 \\begin{bmatrix}1\\\\b\\\\\\end{bmatrix} +9E_4 \\begin{bmatrix}1\\\\b^2\\\\\\end{bmatrix} \\right) \\right) \\Bigg ] \\mathcal {I}_{2,0} = 0 \\ .\\nonumber $ This modular differential equation comes from the null [87] $(B^+ B^- - J^4) + 2 (D^{+I}\\bar{D}_I^-) = 4T^2 -6 \\partial ^2 T - 8J^2 T & \\ + 12\\partial (JT) - 4\\partial J^3 \\nonumber \\\\& \\ + 9 (\\partial J)^2 + 14 J \\partial ^2 J - 5 \\partial ^3 J \\ .$ There is no weight-five equation, even though there are several null states at conformal weight-five.", "At weight-six, the Schur index satisfies two modular differential equaitons.", "The first one is $\\Bigg [ & \\ D_{q}^{(3)}+\\frac{3}{2}D_q^{(2)}D_b^2-3E_1\\begin{bmatrix}1\\\\b\\end{bmatrix} D_q^{(2)}D_b+6\\left( E_2-2E_2\\begin{bmatrix}1\\\\b\\end{bmatrix} \\right) D_q^{(1)}D_b^2 \\\\& \\ +3\\left( E_2+2E_2\\begin{bmatrix}1\\\\b\\end{bmatrix} \\right) D_q^{(2)}-12\\left( E_2E_1\\begin{bmatrix}1\\\\b\\end{bmatrix} -5E_3\\begin{bmatrix}1\\\\b\\end{bmatrix} +E_3 \\begin{bmatrix}1\\\\b^2\\end{bmatrix} \\right) D_q^{(1)}D_b\\\\& \\ +6\\left( -3E_3\\begin{bmatrix}1\\\\b\\end{bmatrix} +E_3 \\begin{bmatrix}1\\\\b^2\\end{bmatrix} \\right) D_b^3\\\\& \\ +\\left( 9{E_2}^2-\\frac{111}{2}E_4-24E_2E_2\\begin{bmatrix}1\\\\b\\end{bmatrix} +180E_4\\begin{bmatrix}1\\\\b\\end{bmatrix} -72E_4 \\begin{bmatrix}1\\\\b^2\\end{bmatrix} \\right) D_b^2\\\\& \\ +D_q^{(1)}\\left( 
\\begin{array}{c}-6{E_2}^2-49E_4+24E_2E_2\\begin{bmatrix}1\\\\b\\end{bmatrix} -72E_4\\begin{bmatrix}1\\\\b\\end{bmatrix} +36E_4 \\begin{bmatrix}1\\\\b^2\\end{bmatrix}\\\\\\end{array} \\right) \\nonumber \\\\& \\ -3\\left( 4{E_2}^2E_1\\begin{bmatrix}1\\\\b\\end{bmatrix} +5E_4E_1\\begin{bmatrix}1\\\\b\\end{bmatrix} +E_2\\left( -44E_3\\begin{bmatrix}1\\\\b\\end{bmatrix} +16E_3 \\begin{bmatrix}1\\\\b^2\\end{bmatrix} \\right) +216E_5\\begin{bmatrix}1\\\\b\\end{bmatrix} -108E_5 \\begin{bmatrix}1\\\\b^2\\end{bmatrix} \\right) \\nonumber \\\\& \\ -\\left( 6{E_2}^3-24{E_2}^2E_2\\begin{bmatrix}1\\\\b\\end{bmatrix} +3E_2\\left( E_4+72E_4\\begin{bmatrix}1\\\\b\\end{bmatrix} -48E_4 \\begin{bmatrix}1\\\\b^2\\end{bmatrix} \\right) \\right) \\\\& \\ -\\left( 10\\left( 11E_6-3E_4E_2\\begin{bmatrix}1\\\\b\\end{bmatrix} -72E_6\\begin{bmatrix}1\\\\b\\end{bmatrix} +54E_6 \\begin{bmatrix}1\\\\b^2\\end{bmatrix} \\right) \\right) \\Bigg ]\\mathcal {I}_{2,0} = 0$ This equation arises from the weight-six null in [87].", "Another modular differential equation takes the schematic form $(D^6_b - 14 D^{(1)}_qD_b^4 + \\ldots ) \\mathcal {I}_{2,0} = 0\\ .$ Note however that this equation does not follow from any null states in [87] and we shall discard.", "Let us turn to additional solutions to the modular differential equations.", "It is easy to verify that the the free field character $\\mathcal {I}_{(bc)^2 \\beta \\gamma }$ in front of the Schur index is an additional solution to the above equations with associated null states.", "Like in the previous examples, it also coincides with the residue of the integrand that computes the original Schur index (REF ).", "Recall that the theory has a duality frame of gluing two copies of $\\mathcal {T}_{0,3}$ , where the Schur index is written as $\\mathcal {I}_{2, 0} = & \\ - \\frac{1}{8}\\oint \\prod _{i = 1}^3 \\left[\\frac{da_i}{2\\pi i a_i}\\vartheta _1(2\\mathfrak {a}_i)\\vartheta _1(- 2\\mathfrak {a}_i)\\right]\\prod _{\\pm , \\pm , \\pm } \\frac{\\eta (\\tau )}{\\vartheta _4(\\pm \\mathfrak {a}_1 \\pm \\mathfrak {a}_2 \\pm \\mathfrak {a}_3 + \\frac{\\mathfrak {b}}{2})} \\\\& \\ \\oint \\prod _{i = 1}^{3}\\frac{da_i}{2\\pi i a_i} \\mathcal {Z}(a) \\ .$ The free field character appears as the following residue, $\\mathcal {I}_{(bc)^2\\beta \\gamma } = \\mathop {\\operatorname{Res}}_{\\mathfrak {a}_1 \\rightarrow \\mathfrak {a}_2 + \\mathfrak {a}_3 + \\frac{\\mathfrak {b}}{2} + \\frac{\\tau }{2}}\\mathop {\\operatorname{Res}}_{\\mathfrak {a}_2 \\rightarrow - \\mathfrak {a}_3}\\mathop {\\operatorname{Res}}_{\\mathfrak {a}_3 \\rightarrow - \\frac{1}{2}\\mathfrak {b}} \\mathcal {Z}(a, b) = \\frac{i}{32} \\frac{\\vartheta _1(\\mathfrak {b})^2}{\\eta (\\tau )\\vartheta _1(2 \\mathfrak {b})} \\ .$ Note that $\\mathcal {I}_{(bc)^2 \\beta \\gamma }$ is not annihilated by the second weight-six modular differential equation which has no associated null state, which is somewhat expected.", "Unfortunately, although one would expect that the $SL(2, \\mathbb {Z})$ transformation of the index $\\mathcal {I}_{2,0}$ gives an additional logarithmic solution, the weight-six equation does not transform properly under $SL(2, \\mathbb {Z})$ .", "As a result, the $S$ -transformed $\\mathcal {I}_{2,0}$ only satisfies the weight-four equation.", "The physical meaning of the lack of a logarithmic solution remains unclear." 
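Before moving on, we note that the counting $\\lceil \\frac{n + 2g - 2}{2} \\rceil $ of independent vortex defect indices of a given parity, used above for $\\mathcal {T}_{0,4}$ and $\\mathcal {T}_{1,1}$ and again below for $\\mathcal {T}_{1,2}$ and $\\mathcal {T}_{0,5}$ , is trivial to tabulate; a two-line illustration (ours) for the $(g,n)$ values quoted in the text:

```python
import math

# ceil((n + 2g - 2)/2): number of independent vortex defect indices of fixed parity, as used in the text.
for g, n in [(0, 4), (1, 1), (1, 2), (0, 5)]:
    print((g, n), math.ceil((n + 2 * g - 2) / 2))   # -> 1, 1, 1, 2
```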
], [ "$\\mathcal {T}_{1,2}$", "The Schur index of $\\mathcal {T}_{1,2}$ in exact form is given by $\\mathcal {I}_{1,2} = - \\frac{\\eta (\\tau )^2}{\\prod _{i = 1}^{2}\\vartheta _1(2\\mathfrak {b}_i)}\\left(E_2 \\begin{bmatrix}1 \\\\ b_1 b_2\\end{bmatrix}- E_2 \\begin{bmatrix}1 \\\\ b_1 / b_2\\end{bmatrix}\\right) \\ .$ Note that the factor in front can be interpreted as a free field character of a pair of $\\beta \\gamma $ system, $\\mathcal {I}_{\\beta \\gamma \\beta \\gamma } = \\frac{\\eta (\\tau )}{\\vartheta _1(2\\mathfrak {b}_1)}\\frac{\\eta (\\tau )}{\\vartheta _1(2\\mathfrak {b}_2)} \\ .$ Noting that $\\lceil \\frac{n + 2g - 2}{2} \\rceil = 1$ , there is only one defect index $\\mathcal {I}_{1,2}^\\text{defect}(k = 0) = \\mathcal {I}_{1,2}$ with even $k$ and one $\\mathcal {I}^\\text{defect}_{1,2}(1)$ with odd $k$ ." ], [ "The untwisted sector", "At weight-two there are two modular differential equations satisfied by the index $\\mathcal {I}_{1,2}$ , $0 = \\Bigg [D_q^{(1)}- \\frac{1}{4} \\sum _{i = 1,2} D_{b_i}^2-\\frac{1}{4}& \\ \\sum _{\\alpha _i = \\pm } E_1 \\begin{bmatrix}1 \\\\ b_1^{\\alpha _1}b_2^{\\alpha _2}\\end{bmatrix}\\sum _{i = 1,2}\\alpha _i D_{b_i}- \\sum _{i = 1,2} E_1 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}D_{b_i} \\\\& \\ + 2 \\bigg (E_2 + \\frac{1}{2} \\sum _{\\alpha _i = \\pm }E_2 \\begin{bmatrix}1 \\\\ b_1^{\\alpha _1}b_2^{\\alpha _2}\\end{bmatrix}+ \\sum _{i = 1,2} E_2 \\begin{bmatrix}1 \\\\ b_i^2\\end{bmatrix}\\bigg ) \\Bigg ] \\mathcal {I}_{1,2} \\ .$ and $\\left(D_{b_1}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix}- 8 E_2 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix}\\right) \\mathcal {I}_{1,2}= \\left(D_{b_2}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_2^2\\end{bmatrix}- 8 E_2 \\begin{bmatrix}1 \\\\ b_2^2\\end{bmatrix}\\right)\\mathcal {I}_{1,2} \\ .$ At weight-three there are three new equations, $\\left[D_{b_1}^3 + 6 E_1 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix} D_{b_1}^2 + \\ldots \\right] \\mathcal {I}_{1,2} = 0\\ .$ There are two more weight-four equations $\\Bigg (D_{b_1}^{4} + 8E_1 \\begin{bmatrix}1\\\\b_{1}^{2}\\end{bmatrix} D_{b_1}^{3}-48E_2 \\begin{bmatrix}1\\\\b_{1}^{2}\\end{bmatrix} D_{b_1}^{2}+192E_3 \\begin{bmatrix}1\\\\b_{1}^{2}\\end{bmatrix} D_{b_1}-384E_4 \\begin{bmatrix}1\\\\b_{1}^{2}\\end{bmatrix} & \\ \\Bigg ) \\mathcal {I}_{1,2} \\nonumber \\\\= & \\ (b_1 \\leftrightarrow b_2) \\ ,$ and a third one which has a tedious form and schematically looks like $\\left(D^{(2)}_q + \\ldots \\right) \\mathcal {I}_{1,2} = 0\\ .$ As will be noted shortly, the unflavored index $\\mathcal {I}_{1,2}(b_i \\rightarrow 1)$ satisfies a 3$^\\text{rd}$ order modular differential equation, which should also admit a fully flavored refinement.", "Unfortunately we have not obtained its explicit form.", "Let us turn to additional solutions to the modular differential equations.", "We first point out that the free field character $\\mathcal {I}_{(\\beta \\gamma )^2}$ in front of the Schur index is also a solution to the fully flavored modular differential equations at weight-two and equation (REF ) at weight-four, but not the weight-three and the second one of weight-four.", "Like in the case of the $SU(2)$ $\\mathcal {N} = 4$ theory or $\\mathcal {T}_{0,4}$ , the free field character $\\mathcal {I}_{(\\beta \\gamma )^2}$ also arises as the residue of the integrand in the contour integral that computes the index $\\mathcal {I}_{1,2}$ , $\\mathcal {I}_{1,2} = \\oint \\left[\\prod _{i = 1}^{2} \\frac{da_i}{2\\pi i a_i}\\frac{1}{2}\\vartheta _1(2\\mathfrak 
{a}_i)^2\\right] \\prod _{i = 1}^{2} \\prod _{\\pm \\pm }\\frac{\\eta (\\tau )}{\\vartheta _4(\\mathfrak {a}_1 \\pm \\mathfrak {a}_2 \\pm \\mathfrak {b}_i)}\\oint \\prod _{i = 1}^{2} \\frac{da_i}{2\\pi i a_i} \\mathcal {Z}(a, b) \\ .$ Explicitly, $\\mathcal {I}_{(\\beta \\gamma )^2}= \\mathop {\\operatorname{Res}}_{\\mathfrak {a}_1 \\rightarrow \\mathfrak {a}_2 + \\mathfrak {b}_1 + \\frac{\\tau }{2}}\\mathop {\\operatorname{Res}}_{\\mathfrak {a}_2 \\rightarrow - \\frac{1}{2}\\mathfrak {b}_1 - \\frac{1}{2} \\mathfrak {b}_2} \\mathcal {Z}(a,b) \\ .$ Drawing an analogy with the results in REF , REF , we conjecture that the free theory of $bc \\beta \\gamma $ system is actually a module of $\\mathbb {V}(\\mathcal {T}_{1,2})$ .", "Consequently, we expect that the weight-two and the weight-four equations above arise from actual null states in $\\mathbb {V}(\\mathcal {T}_{1,2})$ , while the weight-three equations and the weight-four equation $(D^{(2)}_q + \\ldots )\\mathcal {I} = 0$ do not and we shall ignore them from now on.", "Next we look for logarithmic solutions.", "It is easy to check that the unflavored index satisfies an unflavored equation at weight-sixIn [6], a weight-eight equation was listed instead, $0 = & \\ (D^{(4)}_q - 220 E_4D^{(2)}_q - 2380 E_6 D^{(1)}_q + 6000E_4^2)\\mathcal {I}_{1,2}(b \\rightarrow 1) \\ .$ , $0 = & \\ (D^{(3)}_q - 220 E_4D^{(1)}_q + 700 E_6)\\mathcal {I}_{1,2}\\ .$ The indicial equation based on $\\mathcal {I} = q^\\alpha ( 1 + \\ldots )$ gives $0 = & \\ (6\\alpha - 5)(6\\alpha + 1)^2 & \\Rightarrow & & \\ \\alpha = & \\ - \\frac{1}{6}, - \\frac{1}{6}, \\frac{5}{6} \\ .$ Clearly, the $\\alpha = 5/6$ solution is the unflavored Schur index.", "The two linear independent solutions corresponding to $\\alpha = - \\frac{1}{6}$ are logarithmic ones of the form $\\mathcal {I}_{\\log } = q^{- \\frac{1}{6}} \\sum _{n} a_nq^{n/2} + q^{- \\frac{1}{6}} \\log q \\sum _{n} a^{\\prime }_n q^{n/2} + q^{\\frac{5}{6}} (\\log q)^2 \\sum _{n} a^{\\prime \\prime }_n q^{n/2} \\ .$ Now we focus on the three flavored modular differential equations of weight-two and weight-four with conjectural associated null states.", "Under the $S$ -transformation (with the critical affine levels $k_i = - 2$ and the $y$ -extension as introduced in section ), both the weight-two equations are covariant, $S(\\text{weight-2}) = \\tau ^2 (\\text{weight-2}) \\ .$ The weight-four equation (REF ) is almost covariant, $S(\\text{weight-4}) = \\tau ^4 (\\text{weight-4}) + \\tau ^3 \\frac{12 i}{\\pi } (\\text{weight-2}^{\\prime })$ where the weight-2$^{\\prime }$ equation is defined to be a combination of the two weight-two equations, $\\left[D_{b_1}^2 - D_{b_2}^2 + 4 \\left(E_1 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix} D_{b_1}-E_1 \\begin{bmatrix}1 \\\\ b_2^2\\end{bmatrix} D_{b_1}\\right)- 8 \\left(E_2 \\begin{bmatrix}1 \\\\ b_1^2\\end{bmatrix} D_{b_1}-E_2 \\begin{bmatrix}1 \\\\ b_2^2\\end{bmatrix} D_{b_1}\\right)\\right] \\mathcal {I}_{1,2} = 0 \\ .", "\\nonumber $ Therefore, the (almost) covariance suggests additional logarithmic solutions given by modular transformation of the ($y$ -extended) Schur index.", "More explicitly, under $S$ -transformation we have $S\\mathcal {I}_{1,2}= & \\ \\frac{\\eta (\\tau )^2}{8\\pi ^2\\vartheta _1(2\\mathfrak {b}_1)\\vartheta _1(2\\mathfrak {b}_2)\\vartheta _1(\\mathfrak {b}_1 - \\mathfrak {b}_2)\\vartheta _1(\\mathfrak {b}_1 + \\mathfrak {b}_2)} \\nonumber \\\\& \\ \\times \\bigg [- \\tau ^2 \\vartheta ^{\\prime \\prime }_1(\\mathfrak {b}_1-\\mathfrak {b}_2)\\vartheta 
_1(\\mathfrak {b}_1+\\mathfrak {b}_2)- 4 \\pi i \\tau (\\mathfrak {b}_1-\\mathfrak {b}_2) \\vartheta ^{\\prime }_1(\\mathfrak {b}_1-\\mathfrak {b}_2)\\vartheta _1(\\mathfrak {b}_1+\\mathfrak {b}_2) \\nonumber \\\\& \\ \\qquad + \\tau ^2 \\vartheta ^{\\prime \\prime }_1(\\mathfrak {b}_1+\\mathfrak {b}_2)\\vartheta _1(\\mathfrak {b}_1-\\mathfrak {b}_2)+ 4 i \\pi \\tau (\\mathfrak {b}_1+\\mathfrak {b}_2) \\vartheta ^{\\prime }_1(\\mathfrak {b}_1+\\mathfrak {b}_2)\\vartheta _1(\\mathfrak {b}_1-\\mathfrak {b}_2) \\nonumber \\\\& \\ \\qquad -16 \\pi ^2 \\vartheta _1(\\mathfrak {b}_1+\\mathfrak {b}_2)\\vartheta _1(\\mathfrak {b}_1-\\mathfrak {b}_2) \\bigg ]\\\\= & \\ - \\tau ^2 \\mathcal {I}_{2,0} -2\\mathfrak {b}_1\\mathfrak {b}_2\\frac{\\eta (\\tau )^2}{\\prod _{i = 1}^2\\vartheta _1(2 \\mathfrak {b}_i)}- \\tau \\frac{ \\eta (\\tau )^2}{\\prod _{i}\\vartheta _1(2 \\mathfrak {b}_i)}\\sum _{\\alpha = \\pm } \\alpha (\\mathfrak {b}_1 +\\alpha \\mathfrak {b}_2)E_1 \\begin{bmatrix}1 \\\\ b_1 b_2^\\alpha \\end{bmatrix} \\ .", "\\nonumber $ Similarly, after $TS$ transformation, $TS\\mathcal {I}_{1,2}= & \\ (-1)^{2/3} \\mathcal {I}_{1,2} - (-1)^{2/3}S\\mathcal {I}_{1,2}\\\\& \\ + (-1)^{2/3} \\left[2 \\tau \\mathcal {I}_{2,0} + \\frac{ \\eta (\\tau )^2}{\\prod _{i}\\vartheta _1(2 \\mathfrak {b}_i)}\\sum _{\\alpha = \\pm } \\alpha (\\mathfrak {b}_1 +\\alpha \\mathfrak {b}_2)E_1 \\begin{bmatrix}1 \\\\ b_1 b_2^\\alpha \\end{bmatrix} \\right]\\ .$ Before moving onto the modular properties of these solutions, we briefly remark on the completeness of the above solutions so far.", "Although the weight-six fully flavored modular differential equations are currently unavailable, we can look at the partially unflavoring limit $b_i \\rightarrow b$ .", "In this limit, there is a unique weight-sixSome equations of lower weights are collected in the appendix flavored modular differential equation that annihilates $\\mathcal {I}_{1,2}$ , $\\mathcal {I}_{(\\beta \\gamma )^2}$ and the two logarithmic solutions $S\\mathcal {I}_{1,2}$ and $TS\\mathcal {I}_{1,2}$ , $0 = & \\ \\Bigg [ D_q^{(3)} + \\frac{1}{24}D_b^4 D_q^{(1)} - \\frac{1}{2} D_{q}^{(2)}D_b^2 + \\frac{1}{2}E_2 D_q^{(1)}D_b^2 - 14 E_2 D_q^{(2)}+ \\frac{1}{2} E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} D_q^{(1)}D_b^3 \\nonumber \\\\& \\ + 4 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix} (6E_2 - E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}) D_q^{(1)}D_b+ \\left(\\frac{1}{6}E_2 - \\frac{1}{4}E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right) D_b^4 \\nonumber \\\\& \\ + \\frac{1}{2}\\left(3 E_2 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}-5 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 5 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)D_b^3 \\nonumber \\\\& \\ + \\frac{1}{3} \\left(- 18E_2 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 35 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}^2+ 38 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}- 40 E_4 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)D_b ^2 \\nonumber \\\\& \\ - \\frac{4}{3} \\left(15 E_2^2 + 54 E_2 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 31 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}^2- 98 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}-20 E_4 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)D_q^{(1)} \\nonumber \\\\& \\ + \\left(12 E_2 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}- 92 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 
40 E_5 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)D_b \\nonumber \\\\& \\ + E_2 \\left(-52 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 48 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)D_b- \\frac{4}{3}\\left(9E_2^3 - 72 E_2 ^2 E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}- 87 E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}^2\\right) \\nonumber \\\\& \\ - \\frac{4}{3}E_2 \\left(- 137E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}- 158 E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_3 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 18 E_4 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)\\nonumber \\\\& \\ + 40\\left(5E_2 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_4 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ E_1 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}E_5 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}+ 7 E_6 \\begin{bmatrix}1 \\\\ b^2\\end{bmatrix}\\right)\\Bigg ]\\mathcal {I}_{1,2}(b)\\ .$ The $S$ -transformation of this equation produces a set of lower weight flavored modular differential equations whose only non-logarithmic solutions are $\\mathcal {I}_{1,2}(b)$ and $\\mathcal {I}_{(\\beta \\gamma )^2}(b)$ .", "Now let us look at the modular properties of the solutions.", "One can find a simple basis for the $SL(2, \\mathbb {Z})$ -orbit of the Schur index, $\\text{ch}_0 = & \\ \\mathcal {I}_{1,2},& \\text{ch}_{\\log , 1} = & \\ 2\\tau \\mathcal {I}_{1,2} + \\sum _{\\alpha = \\pm } \\alpha (\\mathfrak {b}_1 + \\alpha \\mathfrak {b}_2)\\frac{\\eta (\\tau )^2}{\\prod _i\\vartheta _1(2\\mathfrak {b}_i)} E_1 \\begin{bmatrix}1 \\\\ b_1 b_2^\\alpha \\end{bmatrix}\\ ,\\\\\\text{ch}_{\\log , 2} = & \\ S\\mathcal {I}_{1,2} \\ .$ In this basis, $T \\text{ch}_0 = & \\ - (-1)^{2/3} \\operatorname{ch}_0 \\ , \\\\T \\text{ch}_{\\log , 1} = & \\ (-1)^{2/3}(-2\\text{ch}_0 - \\text{ch}_{\\log , 1} )\\ ,\\\\T \\text{ch}_{\\log , 2} = & \\ (-1)^{2/3}(\\text{ch}_0+ \\text{ch}_{\\log , 1}- \\text{ch}_{\\log , 2}) \\ ,$ and by construction $S \\text{ch}_0 = \\operatorname{ch}_{\\log , 2}, \\qquad S \\text{ch}_{\\log , 2} = \\operatorname{ch}_0, \\qquad S \\text{ch}_{\\log , 1} = \\operatorname{ch}_{\\log , 1} \\ .$ In the form of matrix $g\\operatorname{ch}_i = \\sum _{j} g_{ij}\\operatorname{ch}_j$ , we have $T = e^{\\frac{2\\pi i}{3}}\\begin{pmatrix}-1 & 0 & 0 \\\\-2 & - 1 & 0 \\\\1 & 1 & -1\\end{pmatrix}\\ ,\\qquad S = \\begin{pmatrix}0 & 0 & 1\\\\0 & 1 & 0 \\\\1 & 0 & 0\\end{pmatrix}\\ ,$ which furnishes a three-dimensional representation of $SL(2, \\mathbb {Z})$ .", "The three characters can form an $SL(2, \\mathbb {Z})$ invariant partition function $Z = 2\\operatorname{ch}_0 \\overline{\\operatorname{ch}_2} + \\operatorname{ch}_1 \\overline{\\operatorname{ch}_1} + 2 \\operatorname{ch}_2 \\overline{\\operatorname{ch}_0} = \\sum _{i,j} M_{ij}\\operatorname{ch}_i \\overline{\\operatorname{ch}_j} \\ .$ Unfortuantely the $S$ -matrix does not lead to reasonable fusion coefficients.", "One can consider new basis $\\operatorname{ch}^{\\prime }_i$ such that the pairing matrix $M^{\\prime }_{ij}$ is integral and the fusion coefficients $N_{ij}^k$ are non-negative integers.", "Such new basis is not unique, and it leads to two possible fusion algebras, $[\\operatorname{ch}^{\\prime }_0] \\times [\\operatorname{ch}^{\\prime }_i] = [\\operatorname{ch}^{\\prime }_i] \\ , \\qquad [\\operatorname{ch}^{\\prime }_1] \\times [\\operatorname{ch}^{\\prime }_1] = [\\operatorname{ch}^{\\prime }_1], \\qquad [\\operatorname{ch}^{\\prime }_2] \\times [\\operatorname{ch}^{\\prime }_2] = 
[\\operatorname{ch}^{\\prime }_2] \\ ,$ or, $[\\operatorname{ch}^{\\prime }_0] \\times [\\operatorname{ch}^{\\prime }_i] = & \\ [\\operatorname{ch}^{\\prime }_i],\\qquad [\\operatorname{ch}^{\\prime }_1] \\times [\\operatorname{ch}^{\\prime }_1] = [\\operatorname{ch}^{\\prime }_1] , \\qquad [\\operatorname{ch}^{\\prime }_1] \\times [\\operatorname{ch}^{\\prime }_2] = 2 [\\operatorname{ch}^{\\prime }_1], \\\\[\\operatorname{ch}^{\\prime }_2] \\times [\\operatorname{ch}^{\\prime }_2] = & \\ 2 [\\operatorname{ch}^{\\prime }_1] + [\\operatorname{ch}^{\\prime }_2] \\ .$ it is unclear if such algebras have physical or mathematical meaning.", "Besides the Schur index and its modular companions, the residue $\\mathcal {I}_{(\\beta \\gamma )^2}$ transforms in a one-dimensional representation under $SL(2, \\mathbb {Z})$ , $S\\mathcal {I}_{(\\beta \\gamma )^2} = - \\mathcal {I}_{(\\beta \\gamma )^2}, \\qquad T\\mathcal {I}_{(\\beta \\gamma )^2} = - (-1)^{2/3} \\mathcal {I}_{(\\beta \\gamma )^2} \\ ,$ satisfying $S^2 = (ST)^3 = \\operatorname{id}$ ." ], [ "The twisted sector", "The defect index $\\mathcal {I}_{1,2}^\\text{defect}(k = 1)$ is a twisted character, and will satisfy corresponding twisted modular differential equations.", "Again, these equations can be obtained from all those in the untwisted sector with all the contributions $E_n \\big [ \\begin{array}{c}+1 \\\\ \\ldots \\end{array} \\big ]$ from the bifundamentals turned into $E_n \\big [ \\begin{array}{c}-1\\\\ \\ldots \\end{array} \\big ]$ .", "For example, at weight-2, there is $0 = & \\ \\bigg [ D_q^{(1)}-\\frac{1}{2}D_{b_2}^{2}-\\frac{1}{2}\\left( E_1\\left[ \\begin{matrix}- 1\\\\b_1b_2\\\\\\end{matrix} \\right] +E_1\\left[ \\begin{matrix}- 1\\\\\\frac{b_1}{b_2}\\\\\\end{matrix} \\right] \\right) D_{b_1}-\\frac{1}{2}\\left( E_1\\left[ \\begin{matrix}- 1\\\\b_1b_2\\\\\\end{matrix} \\right] -E_1\\left[ \\begin{matrix}- 1\\\\\\frac{b_1}{b_2}\\\\\\end{matrix} \\right] +E_1\\left[ \\begin{matrix}- 1\\\\b_{2}^{2}\\\\\\end{matrix} \\right] \\right) \\nonumber \\\\& \\ +2\\left( E_2\\left[ \\begin{matrix}- 1\\\\b_1b_2\\\\\\end{matrix} \\right] +E_2\\left[ \\begin{matrix}- 1\\\\\\frac{b_1}{b_2}\\\\\\end{matrix} \\right] +2E_2\\left[ \\begin{matrix}1\\\\b_{2}^{2}\\\\\\end{matrix} \\right] +E_2 \\right) \\bigg ]\\mathcal {I}_{1,2} \\ ,$ and one with $b_1 \\leftrightarrow b_2$ ." ], [ "Other examples", "In the previous subsections we have discussed a few simplest theories with low $g, n$ , where we have studied their flavored modular differential equations and their solutions besides the Schur index.", "In the following we comment on theories with higher $g, n$ .", "For simplicity, we shall focus on the unflavored index and the unflavored modular differential equations [6] they satisfy." 
], [ "$\\mathcal {T}_{0,5}$", "We start with the theory $\\mathcal {T}_{0,5}$ .", "The unflavored Schur index satisfies a weigh-8 modular differential equation, $0 = \\bigg [D_q^{(4)} - 220E_4 D_q^{(2)}- & \\ \\left(3020E_6 + 3840 E_6 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}\\right)D_q^{(1)}\\nonumber \\\\& \\ - 144\\bigg (-35E_8 + 224 E_8 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}+ 144 E_4 \\begin{bmatrix}-1 \\\\ 1\\end{bmatrix}^2\\bigg )\\bigg ] \\mathcal {I}_{0,5} \\ .$ This equation has four independent solutions.", "The indicial equation for the anzatz $q^h\\sum _{n} a_n q^n$ reads $(h - 1)h^3 = 0 \\qquad \\Rightarrow \\qquad h = 1, 0, 0, 0\\ .$ The integral spacing suggests the presence of two or three logarithmic solutions.", "Clearly the $h = 1$ solution is given already by the original unflavored Schur index.", "It turns out that there is an additional non-logarithmic solution given by the unflavored vortex defect index with vorticity $k = 2$ , $\\mathcal {I}_{0,5}^\\text{defect}(k = 2)= & \\ \\frac{-5 \\vartheta _4(0) \\vartheta ^{(2)}_4(0) \\left(3 \\pi ^2 (12 E_2(q)+5) \\vartheta ^{(4)}_4(0)-2 \\vartheta ^{(6)}_4(0)\\right)}{1024 \\pi ^8 \\eta (q)^{12} \\vartheta _4(0)^3} \\nonumber \\\\& \\ + \\frac{\\vartheta _4(0)^2 \\left(\\pi ^2 (12 E_2(q)+5) \\vartheta ^{(6)}_4(0)-\\vartheta _4^{(8)}(0)\\right)+5 \\vartheta _4(0)\\vartheta ^{(4)}_4(0)^2}{1024 \\pi ^8 \\eta (q)^{12} \\vartheta _4(0)^3} \\nonumber \\\\& \\ + \\frac{30 \\pi ^2 (12 E_2(q)+5) \\vartheta ^{(2)}_4(0)^3-30 (\\vartheta ^{(2)}_4(0))^2 \\vartheta ^{(4)}_4(0)}{1024 \\pi ^8 \\eta (q)^{12} \\vartheta _4(0)^3} \\ .$ Note that there are only $\\lceil \\frac{n + 2g - 2}{2} \\rceil = 2 $ defect indices with even vorticity, including the original Schur index $\\mathcal {I}_{0,5}$ at $k = 0$ .", "The remaining two solutions are logarithmic, given by the $S$ -transformation of $\\mathcal {I}_{0,5}$ and $\\mathcal {I}^\\text{defect}_{0,5}(k = 2)$ , where a basis can be chosen to be $\\mathcal {I}_{0,5}, \\qquad STS \\mathcal {I}_{0,5}, \\qquad TST \\mathcal {I}_{0,5}, \\qquad \\mathcal {I}^\\text{defect}_{0,5}(k = 2) \\ ,$ or $\\mathcal {I}_{0,5}, \\qquad STS \\mathcal {I}_{0,5}, \\qquad \\mathcal {I}^\\text{defect}_{0,5}(k = 2) , \\qquad TST \\mathcal {I}^\\text{defect}_{0,5}(k = 2)\\ .$ There are two independent vortex defect indices with odd vorticity, $\\mathcal {I}^\\text{defect}_{0,5}(k = 1)$ and $\\mathcal {I}^\\text{defect}_{0,5}(k = 3)$ .", "Let us first look at the unflavoring limit of the first defect index, $\\mathcal {I}_{0,5}^\\text{defect}(k = 1)= & \\ - \\frac{1}{\\eta (\\tau )^{12}} (60E_2^2 E_4 - 420 E_2 E_6 + 700 E_8)\\nonumber \\\\= & \\ q^{\\frac{1}{2}}(1 + 48 q + 774 q^2 + 7952 q^3 + 61101 q^4 + 385200 q^5 + \\ldots ) \\ .$ It is easy to check that it satisfies an equation in the twisted sector, $(D_q^{(4)} - 220 E_4 D_q^{(2)} - 6860 E_6 D_q^{(1)} - 75600 E_8)\\mathcal {I}^\\text{defect}_{0,5}(k = 1) = 0 \\ .$ Apparently this equation is just the twisted version of (REF ), where $E_k \\big [ \\begin{array}{c}-1 \\\\ 1\\end{array} \\big ]$ are replaced by $E_k \\big [ \\begin{array}{c}+1 \\\\ 1\\end{array} \\big ]$ and applying the relation $E_4 ^2 = \\frac{7}{3}E_8$ .", "To study the second defect index $\\mathcal {I}^\\text{defect}_{0,5}(k = 3)$ , one has to turn on flavor fugacities since it does not have a smooth unflavoring limit.", "The simplest partial flavoring is $b_1 = b, b_{2,3,4,5} = 1$ .", "It turns out that in this limit there are no flavored modular differential equations below weight-eight, and 
all the weight-eight equations satisfied by $\mathcal {I}^\text{defect}_{0,5}(k = 1)$ will also have $\mathcal {I}^\text{defect}_{0,5}(k = 3)$ as an additional solution.", "We refrain from showing the details of these equations due to their complexity." ], [ "$\mathcal {T}_{0,6}$", "For $\mathcal {T}_{0,6}$ , the unflavored Schur index $\mathcal {I}_{0,6}$ satisfies a weight-twelve, 6$^\text{th}$ order equation, $0 = \Big [D_{q}^{(6)} -545 E_{4} D_{q}^{(4)}-15260 E_{6} D_{q}^{(3)} & \ -164525 E_{4}^{2} D_{q}^{(2)} - 2775500 E_{4} E_{6} D_{q}^{(1)} \nonumber \\& \ - 26411000 E_{6}^{2}+1483125 E_{4}^{3} \Big ]\mathcal {I}_{0,6} \ .$ The indicial equation gives $(5 - 12h)^4 (144h^2 - 120h - 119) = 0 \qquad \Rightarrow \qquad h = \frac{5}{12}, \frac{5}{12}, \frac{5}{12}, \frac{5}{12}, - \frac{7}{12}, \frac{17}{12} \ .$ Obviously, the solution with $h = \frac{17}{12}$ corresponds to the Schur index.", "Another non-logarithmic solution comes from the non-trivial defect index: in this case, there are $\lceil \frac{n + 2g - 2}{2} \rceil = 2$ independent defect indices with even $k$ , and indeed $\mathcal {I}_{0,6}$ and $\mathcal {I}^\text{defect}_{0,6}(2)$ are the two independent non-logarithmic solutions to the modular differential equation (REF ), where the latter corresponds to one of the $h = \frac{5}{12}$ solutions.", "The remaining four solutions are logarithmic, obtained from modular transformations of $\mathcal {I}_{0,6}$ and $\mathcal {I}^\text{defect}_{0,6}(2)$ .", "One independent basis can be chosen to be $\mathcal {I}_{0,6}, \quad S\mathcal {I}_{0,6}, \quad TS\mathcal {I}_{0,6}, \quad T^2S\mathcal {I}_{0,6}, \quad \mathcal {I}_{0,6}^\text{defect}(2), \quad S\mathcal {I}_{0,6}^\text{defect}(2) \ ,$ spanning the space of solutions.", "There are also two linearly independent defect indices with odd vorticity, $\mathcal {I}^\text{defect}_{0,6}(k = 1)$ and $\mathcal {I}^\text{defect}_{0,6}(k = 3)$ , both with smooth unflavoring limits $\mathcal {I}^\text{defect}_{0,6}(k = 1) = & \ q^{\frac{11}{12}}(1 + 19 q+ 64 q^{3/2} + 203 q^2 + 896 q^{5/2} + 2320q^3 + \ldots ) \ ,\\\mathcal {I}^\text{defect}_{0,6}(k = 3) = & \ q^{\frac{11}{12}}(1 + 64 q^{1/2} + 748 q + 4992 q^{3/2} + 26035 q^2 + 111936 q^{5/2} + \ldots ) \ .$ They both satisfy a $6^\text{th}$ order unflavored modular differential equation whose explicit form will not be included here."
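The indicial equations quoted in this and the previous subsection follow from acting with the modular differential operator on $q^h$ and keeping only the $q^0$ coefficient of every (twisted) Eisenstein series, namely $-B_n(\lambda )/n!$ with $\lambda = 0$ for the untwisted and $\lambda = 1/2$ for the $\phi = -1$ series, together with $E_2 \rightarrow -1/12$ inside each Serre derivative. A minimal SymPy sketch of this step for the weight-eight $\mathcal {T}_{0,5}$ equation quoted above (the leading-coefficient bookkeeping is ours; the operator is the one in the text) reproduces $(h-1)h^3 = 0$:

```python
import sympy as sp

h = sp.symbols('h')

def eis0(n, twisted=False):
    # q^0 coefficient of E_n[[phi],[1]]: -B_n(lambda)/n!, lambda = 0 (phi = +1) or 1/2 (phi = -1)
    lam = sp.Rational(1, 2) if twisted else sp.Integer(0)
    return -sp.bernoulli(n, lam) / sp.factorial(n)

def Dq_leading(k):
    # leading action of D_q^{(k)} = del_(2k-2) o ... o del_(0) on q^h, using E_2 -> -1/12
    return sp.Mul(*[h - sp.Rational(j, 6) for j in range(k)])

E4, E6, E8 = eis0(4), eis0(6), eis0(8)
E4t, E6t, E8t = eis0(4, twisted=True), eis0(6, twisted=True), eis0(8, twisted=True)

indicial = (Dq_leading(4) - 220 * E4 * Dq_leading(2)
            - (3020 * E6 + 3840 * E6t) * Dq_leading(1)
            - 144 * (-35 * E8 + 224 * E8t + 144 * E4t ** 2))
print(sp.factor(sp.expand(indicial)))   # -> h**3*(h - 1), i.e. h = 1, 0, 0, 0
```

The weight-twelve $\mathcal {T}_{0,6}$ operator above involves only untwisted $E_4$ and $E_6$ and can be treated in the same way.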
], [ "$\\mathcal {T}_{g,n = 0}$", "The unflavored index $\\mathcal {I}_{2,0}$ of the genus-two theory $\\mathcal {T}_{2,0}$ can be written in terms of the standard Eisenstein series, $\\mathcal {I}_{2,0} = \\frac{1}{2} \\eta (\\tau )^2 \\left(E_2 + \\frac{1}{12}\\right) \\ .$ It satisfies a 6$^\\text{th}$ order equation $0 = \\Big [D_q^{(6)} - 305 E_4 D_q^{(4)} - 4060E_6 D_q^{(3)}+ 20275E_4^2 & \\ D_q^{(2)} + 2100E_4 E_6 D_q^{(1)} \\nonumber \\\\& \\ - 68600(E_6^2 - 49125E_4^3) \\Big ]\\mathcal {I}_{2,0} \\ .$ Following from seciton REF , there is an additional vortex defect index $\\mathcal {I}_{2,0}^\\text{defect}(k=2) = \\eta (\\tau )^2 \\ .$ Similar to the $g = 0, n = 6$ case, the $SL(2, \\mathbb {Z})$ orbit of $\\mathcal {I}_{2,0}$ and $\\eta (\\tau )^2$ form the complete set of solutions of the 6$^\\text{th}$ order equation, where an independent basis can be chosen as $\\mathcal {I}_{2,0}, \\quad S\\mathcal {I}_{2,0}, \\quad TS\\mathcal {I}_{2,0}, \\quad T^2S\\mathcal {I}_{2,0}, \\quad \\mathcal {I}_{2,0}^\\text{defect}(2), \\quad S\\mathcal {I}_{2,0}^\\text{defect}(2) \\ .$ Note that since $\\eta (\\tau )^2$ is a term in the index $\\mathcal {I}_{2,0}$ itself, and consequently, the other term $\\eta (\\tau )^2E_2$ naturally forms another solution.", "Similarly, the indices $\\mathcal {I}_{3,0}$ and $\\mathcal {I}_{4,0}$ of the genus-three and -four theories satisfy a 20$^\\text{th}$ and 43$^\\text{th}$ order modular differential equation respectively, whose expressions will not be included here.", "By direct computation, it can be shown that the Schur index itself and the defect indices $\\mathcal {I}^\\text{defect}_{g,0}(k = \\text{even})$ provide a collection of solutions.", "Note that this equivalently implies that $\\eta (\\tau )^{2g - 2}, \\eta (\\tau )^{2g - 2} E_2, \\ldots , \\eta (\\tau )^{2g - 2} E_{2g - 2}$ are $g$ independent solutions to these equations.", "Their $SL(2, \\mathbb {Z})$ -orbit will supply additional logarithmic solutions to the equations." ], [ "Acknowledgments", "The authors would like to thank Wolfger Peelaers for sharing important observations and Mathematica notebooks on modular differential equations.", "Y.P.", "is supported by the National Natural Science Foundation of China (NSFC) under Grant No.", "11905301, the Fundamental Research Funds for the Central Universities, Sun Yat-sen University under Grant No.", "2021qntd27." 
], [ "Special Functions", "In this appendix we collect the definitions and a few useful properties of the special functions that appear in the maintext.", "We often use letters in straight and fraktur font which are related by $a = e^{2\\pi i \\mathfrak {a}}, \\qquad b = e^{2\\pi i \\mathfrak {b}}, \\qquad \\ldots \\qquad y = e^{2\\pi i \\mathfrak {y}}, \\qquad z = e^{2\\pi i \\mathfrak {z}}\\ .$" ], [ "Jacobi theta functions", "The Jacobi theta functions are defined as Fourier series $\\vartheta _1(\\mathfrak {z}|\\tau ) & \\ -i \\sum _{r \\in \\mathbb {Z} + \\frac{1}{2}} (-1)^{r-\\frac{1}{2}} e^{2\\pi i r \\mathfrak {z}} q^{\\frac{r^2}{2}} ,\\\\\\vartheta _2(\\mathfrak {z}|\\tau ) & \\sum _{r \\in \\mathbb {Z} + \\frac{1}{2}} e^{2\\pi i r \\mathfrak {z}} q^{\\frac{r^2}{2}} \\ ,\\\\\\vartheta _3(\\mathfrak {z}|\\tau ) & \\ \\sum _{n \\in \\mathbb {Z}} e^{2\\pi i n \\mathfrak {z}} q^{\\frac{n^2}{2}},\\\\\\vartheta _4(\\mathfrak {z}|\\tau ) & \\sum _{n \\in \\mathbb {Z}} (-1)^n e^{2\\pi i n \\mathfrak {z}} q^{\\frac{n^2}{2}} \\ .$ Through out this paper we denote $q e^{2\\pi i \\tau }$ .", "For brevity we will frequently omit $|\\tau $ in the notation of the Jacobi theta functions.", "The Jacobi-theta functions can be rewritten as triple product of the $q$ -Pochhammer symbol, for example, $\\vartheta _1(\\mathfrak {z}) = - i z^{\\frac{1}{2}}q^{\\frac{1}{8}}(q;q)(zq;q)(z^{-1};q) \\ , \\qquad \\vartheta _4(\\mathfrak {z}) = (q;q)(zq^{\\frac{1}{2}};q)(z^{-1}q^{\\frac{1}{2}};q) \\ ,$ where $(z;q) \\prod _{k = 0}^{+\\infty }(1 - zq)$ .", "The functions $\\vartheta _i(z)$ almost return to themselves under full-period shifts by $m + n \\tau $ , $\\vartheta _{1,2}(\\mathfrak {z} + 1) = & - \\vartheta _{1,2}(\\mathfrak {z}) , &\\vartheta _{3,4}(\\mathfrak {z} + 1) = & + \\vartheta _{3,4}(\\mathfrak {z}) , & \\\\\\vartheta _{1,4}(\\mathfrak {z} + \\tau ) = & - \\lambda \\vartheta _{1,4}(\\mathfrak {z}), &\\vartheta _{2,3}(\\mathfrak {z} + \\tau ) = & + \\lambda \\vartheta _{2,3}(\\mathfrak {z}) , &$ where $\\lambda \\equiv e^{-2\\pi i \\mathfrak {z}}e^{- \\pi i \\tau }$ .", "The above can be combined, for example, into $\\vartheta _1(\\mathfrak {z} + m \\tau + n) = (-1)^{m + n} e^{-2\\pi i m \\mathfrak {z}} q^{ - \\frac{1}{2}m^2}\\vartheta _1(\\mathfrak {z})\\ .$ Moreover, the four Jacobi theta functions are related by half-period shifts which can be summarized as in the following diagram, Figure: NO_CAPTION where $\\mu = e^{- \\pi i \\mathfrak {z}} e^{- \\frac{\\pi i}{4}}$ , and $f \\xrightarrow{} g$ means $\\text{either}\\qquad f\\left(\\mathfrak {z} + \\frac{1}{2}\\right) = a g(\\mathfrak {z}) \\qquad \\text{or} \\qquad f\\left(\\mathfrak {z} + \\frac{\\tau }{2}\\right) = a g(\\mathfrak {z}) \\ ,$ depending on whether the arrow is horizontal or (slanted) vertical respectively.", "The functions $\\vartheta _i(z | \\tau )$ transform nicely under the modular $S$ and $T$ transformations, which act, as usual, on the nome and flavor fugacity as $(\\frac{\\mathfrak {z}}{\\tau }, - \\frac{1}{\\tau })\\xleftarrow{}(\\mathfrak {z}, \\tau ) \\xrightarrow{} (\\mathfrak {z}, \\tau + 1).$ In summary Figure: NO_CAPTION where $\\alpha = \\sqrt{-i \\tau }e^{\\frac{\\pi i z^2}{\\tau }}$ ." 
], [ "Eisenstein series", "The twisted Eisenstein series (with characteristics $\\big [\\begin{array}{c}\\phi \\\\\\theta \\end{array}\\big ]$ ) $E_k\\big [\\begin{array}{c}\\phi \\\\\\theta \\end{array}\\big ]$ are defined as series in $q$ , $E_{k \\ge 1}\\left[\\begin{matrix}\\phi \\\\ \\theta \\end{matrix}\\right] & \\ - \\frac{B_k(\\lambda )}{k!}", "\\\\& \\ + \\frac{1}{(k-1)!", "}\\sum _{r \\ge 0}^{\\prime } \\frac{(r + \\lambda )^{k - 1}\\theta ^{-1} q^{r + \\lambda }}{1 - \\theta ^{-1}q^{r + \\lambda }}+ \\frac{(-1)^k}{(k-1)!", "}\\sum _{r \\ge 1} \\frac{(r - \\lambda )^{k - 1}\\theta q^{r - \\lambda }}{1 - \\theta q^{r - \\lambda }} \\ .$ Here $0 \\le \\lambda < 1$ is determined by $\\phi \\equiv e^{2\\pi i \\lambda }$ , $B_k(x)$ denotes the $k$ -th Bernoulli polynomial, and the prime in the sum means that when $\\phi = \\theta = 1$ the $r = 0$ term should be omitted.", "We also define $E_0\\left[\\begin{matrix}\\phi \\\\ \\theta \\end{matrix}\\right] = -1 \\ .$ The standard (untwisted) Eisenstein series $E_{2n}$ are given by the $\\theta , \\phi \\rightarrow 1$ limit of $E_{2n}\\big [\\begin{array}{c}\\phi \\\\ \\theta \\end{array}\\big ]$ , $E_{2n}(\\tau ) = E_{2n}\\begin{bmatrix}+1 \\\\ +1\\end{bmatrix} \\ .$ When $k$ is odd, we have instead $E_1 \\begin{bmatrix}1 \\\\ e^{2\\pi i \\mathfrak {z}}\\end{bmatrix} = \\frac{1}{2\\pi i} \\frac{\\vartheta ^{\\prime }_1(\\mathfrak {z})}{\\vartheta _1(\\mathfrak {z})} \\xrightarrow{} \\frac{1}{2\\pi i \\mathfrak {z}}, \\qquad E_{k > 1} \\begin{bmatrix}+1 \\\\ +1\\end{bmatrix} = 0\\ .$ The Eisenstein series with $\\phi = \\pm 1$ enjoy a useful symmetry property $E_k\\left[\\begin{matrix}\\pm 1 \\\\ z^{-1}\\end{matrix}\\right] = (-1)^k E_k\\left[\\begin{matrix}\\pm 1 \\\\ z\\end{matrix}\\right] \\ .$ They also transform nicely under $z \\rightarrow qz$ or $z \\rightarrow q^{\\frac{1}{2}}z$ , for example, $E_n \\begin{bmatrix}\\pm 1\\\\ z q^{\\frac{k}{2}}\\end{bmatrix}= \\sum _{\\ell = 0}^{n} \\left(\\frac{k}{2}\\right)^\\ell \\frac{1}{\\ell !}", "E_{n - \\ell } \\begin{bmatrix}(-1)^k (\\pm 1)\\\\z\\end{bmatrix} \\ .$ The Eisenstein series are closely related to the Jacobi theta function.", "Let us define an elliptic $P$ -function $P_2(y) - \\sum _{n = 1}^{+\\infty } \\frac{1}{2n}E_{2n}(\\tau )y^{2n} \\ .$ Then we have a simple translation $E_k \\begin{bmatrix}+1 \\\\ + z\\end{bmatrix}= & \\left[e^{- \\frac{y}{2\\pi i}\\mathcal {D}_\\mathfrak {z} - P_2(y)}\\right]_k \\vartheta _1(\\mathfrak {z}), &E_k \\begin{bmatrix}-1 \\\\ + z\\end{bmatrix}= & \\left[e^{- \\frac{y}{2\\pi i}\\mathcal {D}_\\mathfrak {z} - P_2(y)}\\right]_k \\vartheta _4(\\mathfrak {z}) \\\\E_k \\begin{bmatrix}+1 \\\\ -z\\end{bmatrix}= & \\left[e^{- \\frac{y}{2\\pi i}\\mathcal {D}_\\mathfrak {z} - P_2(y)}\\right]_k \\vartheta _2(\\mathfrak {z}), &E_k \\begin{bmatrix}-1 \\\\ -z\\end{bmatrix}= & \\left[e^{- \\frac{y}{2\\pi i}\\mathcal {D}_\\mathfrak {z} - P_2(y)}\\right]_k \\vartheta _3(\\mathfrak {z}) \\ ,$ where $[\\ldots ]_k$ means taking the coefficient of $y^k$ when expanding $\\ldots $ in $y$ -series around $y = 0$ , and $\\mathcal {D}_\\mathfrak {z}^n$ acting on $\\vartheta _i$ is defined to be $\\mathcal {D}_\\mathfrak {z}^n \\vartheta _i (\\mathfrak {z}) \\frac{\\vartheta _i^{(n)}(\\mathfrak {z})}{\\vartheta _i(\\mathfrak {z})} \\ .$ The Eisenstein series contains simple poles whose residues are easy to work out from the definition.", "For example, $\\mathop {\\operatorname{Res}}_{z \\rightarrow q^{k + \\frac{1}{2}}} \\frac{1}{z} E_n \\begin{bmatrix}- 1 \\\\ 
z\\end{bmatrix} = \\frac{1}{(n - 1)!}", "(k + \\frac{1}{2})^{n - 1}\\ ,\\qquad \\mathop {\\operatorname{Res}}_{z \\rightarrow q^k} \\frac{1}{z} E_n \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix} = \\frac{1}{(n - 1)!}", "k^{n - 1}\\ .$ The relation between the Eisenstein series and Jacobi theta functions are helpful in working out the modular transformation of the former.", "In detail, we consider the following transformations, $S: \\tau \\rightarrow - \\frac{1}{\\tau }, \\ \\mathfrak {z} \\rightarrow \\frac{\\mathfrak {z}}{\\tau }, \\qquad \\qquad T: \\tau \\rightarrow \\tau + 1 , \\ \\mathfrak {z} \\rightarrow \\mathfrak {z} \\ .$ Under the $S$ -transformation, $E_n \\begin{bmatrix}+1 \\\\ +z\\end{bmatrix} \\xrightarrow{} &\\left(\\frac{1}{2\\pi i}\\right)^n\\left[\\bigg (\\sum _{k \\ge 0}\\frac{1}{k!", "}(- \\log z)^k y^k\\bigg )\\bigg (\\sum _{\\ell \\ge 0}(\\log q)^\\ell y^\\ell E_\\ell \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix}\\bigg )\\right]_n\\ ,\\\\E_n \\begin{bmatrix}-1 \\\\ +z\\end{bmatrix} \\xrightarrow{} &\\left(\\frac{1}{2\\pi i}\\right)^n\\left[\\bigg (\\sum _{k \\ge 0}\\frac{1}{k!", "}(- \\log z)^k y^k\\bigg )\\bigg (\\sum _{\\ell \\ge 0}(\\log q)^\\ell y^\\ell E_\\ell \\begin{bmatrix}+ 1 \\\\ -z\\end{bmatrix}\\bigg )\\right]_n\\ ,\\\\E_n \\begin{bmatrix}1 \\\\ -z\\end{bmatrix} \\xrightarrow{} &\\left(\\frac{1}{2\\pi i}\\right)^n\\left[\\bigg (\\sum _{k \\ge 0}\\frac{1}{k!", "}(- \\log z)^k y^k\\bigg )\\bigg (\\sum _{\\ell \\ge 0}(\\log q)^\\ell y^\\ell E_\\ell \\begin{bmatrix}-1 \\\\ +z\\end{bmatrix}\\bigg )\\right]_n\\ ,\\\\E_n \\begin{bmatrix}-1 \\\\ -z\\end{bmatrix} \\xrightarrow{} &\\left(\\frac{1}{2\\pi i}\\right)^n\\left[\\bigg (\\sum _{k \\ge 0}\\frac{1}{k!", "}(- \\log z)^k y^k\\bigg )\\bigg (\\sum _{\\ell \\ge 0}(\\log q)^\\ell y^\\ell E_\\ell \\begin{bmatrix}-1 \\\\ -z\\end{bmatrix}\\bigg )\\right]_n\\ ,$ where $[ \\ldots ]_n$ extracts the coefficient of $y^n$ .", "For readers' convenience, we collect here the $S$ -transformation of several lower-weight Eisenstein series, $E_1 \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix} \\xrightarrow{}& \\ \\tau E_1 \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix} + \\mathfrak {z} \\\\E_2 \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix} \\xrightarrow{}& \\ \\tau ^2 E_2 \\begin{bmatrix}1 \\\\ z\\end{bmatrix}- \\mathfrak {z}\\tau E_1 \\begin{bmatrix}1 \\\\ z\\end{bmatrix}- \\frac{\\mathfrak {z}^2}{2}\\\\E_3 \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix}\\xrightarrow{}& \\ \\tau ^3 E_3 \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix}- \\mathfrak {z}\\tau ^2 E_2 \\begin{bmatrix}+1 \\\\ z\\end{bmatrix}+ \\frac{1}{2} \\mathfrak {z}^2\\tau E_1 \\begin{bmatrix}+ 1 \\\\ z\\end{bmatrix}+ \\frac{\\mathfrak {z}^3}{6} \\ .$ Under the $T$ -transformation, $E_n \\begin{bmatrix}+ 1 \\\\ + z\\end{bmatrix} \\xrightarrow{}& \\ E_n \\begin{bmatrix}+ 1 \\\\ + z\\end{bmatrix}, &E_n \\begin{bmatrix}- 1 \\\\ + z\\end{bmatrix} \\xrightarrow{}& \\ E_n \\begin{bmatrix}- 1 \\\\ - z\\end{bmatrix} \\\\E_n \\begin{bmatrix}+ 1 \\\\ - z\\end{bmatrix} \\xrightarrow{}& \\ E_n \\begin{bmatrix}+ 1 \\\\ - z\\end{bmatrix}, &E_n \\begin{bmatrix}- 1 \\\\ - z\\end{bmatrix} \\xrightarrow{}& \\ E_n \\begin{bmatrix}- 1 \\\\ + z\\end{bmatrix} \\ .$ Combined together, $E_n \\begin{bmatrix}-1 \\\\ z\\end{bmatrix} \\xrightarrow{}\\left(\\frac{1}{2\\pi i}\\right)^n\\left[\\bigg (\\sum _{k \\ge 0}\\frac{1}{k!", "}(- \\log z)^k y^k\\bigg )\\bigg (\\sum _{\\ell \\ge 0}(\\log q - 2\\pi i)^\\ell y^\\ell E_\\ell \\begin{bmatrix}-1 \\\\ +z\\end{bmatrix}\\bigg )\\right]_n\\ .$ The transformation of the Eisenstein under $SL(2, 
\\mathbb {Z})$ implies that they are generalization of the well known modular forms and Jacobi forms.", "In the theory of modular/Jacobi forms, there are a few important differential operators that change the modular weight of a form.", "The Serre derivative $\\partial _{(k)}$ is defined to be $\\partial _{(k)} q \\partial _q + k E_2\\ .$ It maps a weight-$k$ modular form to a weight-$(k + 1)$ form.", "For example, $\\partial _{(2)} E_2 = 5 E_4 + E_2^2 \\ ,\\qquad \\partial _{(4)} E_4 = 14 E_6, \\qquad \\partial _{(6)} E_6 = 20 E_8\\ .$ One can compose the Serre derivative into the modular differential operators $D_q^{(k)}$ , $D_q^{(k)} \\partial _{(2k - 2)} \\circ \\ldots \\circ \\partial _{(2)} \\circ \\partial _{(0)}\\ .$ Such operator turns a weight-zero form to a weight-$2k$ form.", "It transform covariantly under the standard $SL(2, \\mathbb {Z})$ transformation $\\tau \\rightarrow \\tau ^{\\prime } \\frac{a\\tau + b}{c \\tau + d}$ , $D^{(k)}_{q^{\\prime }} = (c \\tau + d)^{2k} D^{(k)}_q \\ .$" ], [ "Vertex operator algebra ", "In this section we briefly summarize some notions and formula concerning vertex operator algebras (VOAs).", "For more rigorous account for the subject, see for example [89], [2].", "A VOA $\\mathbb {V}$ is characterized by a linear space of states $V$ (i.e., the vacuum module), containing a unique vacuum state $|0\\rangle $ and a special state $T$ corresponding to the stress tensor.", "There is a state-operator correspondence $Y$ that builds a local field $Y(a, z)$ out of any state $a \\in V$ .", "We often simply denote the field as $a(z)$ and expand it in a Fourier seriesIn math literature the expansion is often taken to be $\\sum _{n \\in \\mathbb {Z}} a_n z^{-n - 1}$ .", "$a(z) Y(a, z) = \\sum _{n \\in \\mathbb {Z} - h_a} a_n z^{- n - h_a} \\ ,\\qquad T(z) = \\sum _{n \\in \\mathbb {Z}} L_n z^{-n - 2} \\ .$ Here the Fourier modes $a_n$ are linear operators that act on $V$ , $L_n$ form a Virasoro algebra with central charge $c$ , and $h_a$ is the eigenvalue in $L_0 a = h_a a$ .", "The vacuum state $|0\\rangle $ is such that $Y(|0\\rangle , z) = \\operatorname{id}_V$ and $a(0) |0\\rangle = a$ .", "For a state $a$ with integer weight $h_a$ , one defines its zero mode $o(a) a_0$ , whereas $o(a) = 0$ when $h_a$ is non-integral.", "To compute torus correlation functions, it is a common practice to consider [2] $a[z] e^{i z h_a}Y(a, e^{i z} - 1) = \\sum _{n}a_{[n]}z^{-n - h_a}$ where the “square modes” $a_{[n]}$ are defined by the expansion.", "Explicitly, $a_{[n]} = \\sum _{j \\ge n} c(j, n, h_a)a_j$ where the coefficients $c$ are given by the coefficients of the expansion $(1 + z)^{h - 1}[\\log (1 + z)]^n = \\sum _{j \\ge n} c(j, n, h)z^j\\ .$ It is worth noting that $o(a_{[-h_a - n]}) = 0$ , $\\forall n \\in \\mathbb {N}_{\\ge 1}$ .", "Recursion relations for unflavored torus correlation functions were first studied in [2], and later generalized to $\\mathbb {R}$ -graded super-VOAs [30] and flavored correlation functions [32].", "They are the crucial tools for deriving flavored modular differential equations.", "Consider a $\\frac{1}{2}\\mathbb {Z}$ -graded super-VOA $\\mathbb {V}$ containing a $\\widehat{\\mathfrak {u}}(1)$ current $J$ with zero mode $J_0$ , $M$ a module of $\\mathbb {V}$ and $a, b \\in \\mathbb {V}$ are two states of weights $h_a, h_b$ .", "If $J_0 a = 0$ , then Here all modes are the “square modes”, which are suitable for torus correlation functions.", "[30], [6] $& \\ \\operatorname{str}_M o(a_{[- h_a]}b)x^{J_0}q^{L_0} \\\\= & \\ 
\\operatorname{str}_M o(a_{[-h_a]} |0\\rangle )o(b) x^{J_0} q^{L_0}+ \\sum _{n = 1}^{+\\infty } E_{2k}\\left[\\begin{matrix}e^{2\\pi i h_a} \\\\ 1\\end{matrix}\\right]\\operatorname{str}_M o(a_{[-h_a + 2k]}b)x^{J_0}q^{L_0} \\ .$ Recall that when $a$ is a conformal descendant, $o(a_{[-h_a]}) = 0$ .", "On the other hand, if the state $a$ is charged with $J_0 a = Qa$ and $Q \\ne 0$ , then the recursion formula reads [32], [33] $\\operatorname{str}_M & \\ o(a_{[- h_a]}b)x^{J_0}q^{L_0}= \\sum _{n = 1}^{+\\infty } E_n\\left[ \\begin{matrix}e^{2\\pi i h_a} \\\\ x^Q\\end{matrix} \\right]\\operatorname{str}_M o(a_{[- h_a+n]}b)x^{J_0}q^{L_0}\\ .$ Another frequently encounter insertion is $o(L_{[-2]}^k |0\\rangle )$ .", "In particular, $\\operatorname{str} o( (L_{[-2]})^k |0\\rangle )q^{L_0 - \\frac{c}{24}}= \\mathcal {P}_{k} \\operatorname{str}q^{L_0 - \\frac{c}{24}} \\ .$ Here $\\mathcal {P}_k$ denotes a $k$ -th order (and weight-$2k$ ) differential operator on $q$ , $\\mathcal {P}_1 = D_q^{(1)}, \\quad \\mathcal {P}_2 = D_q^{(2)} + \\frac{c}{4}E_4 , \\quad \\mathcal {P}_3 = D_q^{(3)} + (8 + \\frac{3c}{2})E_4 D_q^{(1)} + 10 c E_6 \\ , \\quad \\ldots \\ .$" ], [ "Null states in $\\mathfrak {so}(8)_{-2}$", "The Lagrangian of $\\mathcal {N}=2$ su(2) super QCD is (we denote both hypermultiplet and its scaler components by $Q$ and $\\tilde{Q}$ ): $\\mathcal {L}=\\Im \\left(\\tau \\int d^2\\theta d^2\\bar{\\theta }\\operatorname{tr}\\left(\\Phi ^{\\dagger }e^{V}\\Phi +Q^{\\dagger }_i e^{V}Q^i+\\tilde{Q}^{\\dagger i}e^V \\tilde{Q}_i\\right)+\\tau \\int d^2\\theta \\left(\\frac{1}{2}\\operatorname{tr}\\mathcal {W}^{\\alpha }\\mathcal {W}_{\\alpha }+\\sqrt{2}\\tilde{Q}^a_{i}\\Phi _a^b Q_b^i\\right)\\right)$ Since the fundamental representation of $SU(2)$ is pseudo-real,the hypermultiplet scalars: $Q^i_a\\qquad \\tilde{Q}^a_i\\qquad i=1...4\\quad a=1,2$ which transform under fundamental representation of flavor group $SU(4)$ , can be recombined into a single $Q^i_a$ , with $i=1,...,8$ , transformed under $8_{V}$ of $SO(8)$ .", "From now we collectively use $Q^i_a$ , $i=1,..,8$ and $a=1,2$ to denote $Q$ and $\\tilde{Q}$ .", "More explicitly, $Q^{i}_a$ refers to $Q$ when $i$ ranges from 1 to 4, otherwise to $\\tilde{Q}$ , when $i$ ranges from 5 to 8.", "The moment map operator of the enhanced flavor group $SO(8)$ is: $M^{[ij]}=Q^i_a Q^{a j}$ It gives $\\mathfrak {so}(8)_{-2}$ currents of the corresponding 2d chiral algebra, as is conjectured in [1] $J^{[ij]}=\\chi \\left(M^{[ij]}\\right)$ where $\\chi $ is the map from operators in the same $SU(2)_R$ multiplet with Schur operators to the generators in 2d chiral algebra, as is defined by ....", "There are totally 9 independent null states in 2d chiral algebra $\\mathfrak {so}(8)_{-2}$ .", "The first three of them have symmetric indices[1]: $&J^{[1j]}J^{[1j]}+J^{[5j]}J^{[5j]}-\\frac{1}{4}J^{[mn]}J^{[mn]}\\\\&J^{[2j]}J^{[2j]}+J^{[6j]}J^{[6j]}-\\frac{1}{4}J^{[mn]}J^{[mn]}\\\\&J^{[3j]}J^{[3j]}+J^{[7j]}J^{[7j]}-\\frac{1}{4}J^{[mn]}J^{[mn]}$ The other six of null states have totally antisymmetric indices[82]: $&J^{[12]}J^{[56]}-J^{[15]}J^{[26]}+J^{[25]}J^{[16]}\\\\&J^{[13]}J^{[57]}-J^{[15]}J^{[37]}+J^{[35]}J^{[17]}\\\\&J^{[23]}J^{[68]}-J^{[26]}J^{[38]}+J^{[36]}J^{[28]}\\\\&J^{[14]}J^{[58]}-J^{[15]}J^{[48]}+J^{[45]}J^{[18]}\\\\&J^{[24]}J^{[68]}-J^{[26]}J^{[48]}+J^{[46]}J^{[28]}\\\\&J^{[34]}J^{[78]}-J^{[37]}J^{[48]}+J^{[47]}J^{[38]}$ There are four commuting $\\mathfrak {su}(2)$ subalgebra in $\\mathfrak {so}\\left(8\\right)_{-2}$ .", "We can choose them to be the 
Chavalley bases of four simple roots which are not connected in the extended dynkin diagram of $\\mathfrak {so}\\left(8\\right)$ .", "Roughly speaking, generators of $\\mathfrak {so}\\left(8\\right)$ gain charges under the Cartan of four $\\mathfrak {su}\\left(2\\right)$ subalgebra.", "Unfortunately, the generators $J^{[ij]}$ of $\\mathfrak {so}\\left(8\\right)_{-2}$ are not eigenvectors of the four $\\mathfrak {su}\\left(2\\right)$ .", "Therefore we use another definition of $\\mathfrak {so}\\left(8\\right)$ : $M S+S M^{t}=0$ All $8\\times 8$ matrices $M$ form a Lie algebra isormorphic to $\\mathfrak {so}(8)$ .", "Note that $S=\\left(\\begin{array}{cc}0 & \\mathbf {1}_{4\\times 4} \\\\\\mathbf {1}_{4\\times 4} & 0\\end{array}\\right)$ The isomorphism between $J^{[ij]}$ and $M$ is: $J=TMT^{-1}$ where $T=\\left(\\begin{array}{cc}\\frac{\\mathbf {1}_{4\\times 4}}{\\sqrt{2}} & \\frac{\\mathbf {1}_{4\\times 4}}{\\sqrt{2}}\\\\\\frac{i \\mathbf {1}_{4\\times 4}}{\\sqrt{2}} &\\frac{-i \\mathbf {1}_{4\\times 4}}{\\sqrt{2}}\\end{array}\\right)$ In the algebra defined by $M$ , the four Cartans in commuting $\\mathfrak {su}\\left(2\\right)$ we choose are listed below: $&h_1=\\left(\\begin{array}{cccccccc}\\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & \\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & -\\frac{1}{2} & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & -\\frac{1}{2} & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\\end{array}\\right)\\quad h_2=\\left(\\begin{array}{cccccccc}\\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & -\\frac{1}{2} & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & -\\frac{1}{2} & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & \\frac{1}{2} & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\\end{array}\\right)\\\\&h_3=\\left(\\begin{array}{cccccccc}0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & \\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & -\\frac{1}{2} & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & -\\frac{1}{2} & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & \\frac{1}{2} \\\\\\end{array}\\right)\\quad h_4=\\left(\\begin{array}{cccccccc}0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & \\frac{1}{2} & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & \\frac{1}{2} & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & -\\frac{1}{2} & 0 \\\\0 & 0 & 0 & 0 & 0 & 0 & 0 & -\\frac{1}{2} \\\\\\end{array}\\right)\\\\$ The raising operators are chosen to be: $\\left(\\begin{array}{cc}e_{ij} & o\\\\0 & -e_{ji}\\\\\\end{array}\\right)\\quad \\left(\\begin{array}{cc}0 & e_{ij}-e_{ji} \\\\0 & 0\\\\\\end{array}\\right) \\quad 0\\le i<j\\le 4$ $e_{ij}$ means a four by four matrix only have 1 at the position $(i,j)$ and 0 at other positions.", "The lowering operators are their transpose matrices.", "We can easily find they are eigenvetors of $h_1$ , $h_2$ , and $h_3$ , $h_4$ .", "To derive the flavored MDE, we use the isomorphism $T$ to transform from the bases $J^{[ij]}$ to the bases $M$ .", "We give here the flavor modular differential equations respectively to the last three null relations: $\\bigg [ & - D_{b_1}D_{b_3} + D_{b_1}D_{b_4} - D_{b_2}D_{b_3} + D_{b_2}D_{b_4}\\\\& \\ - 2 E_1 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_2 b_3}{b_4}\\end{bmatrix} (D_{b_1} + D_{b_2} + 
D_{b_3} - D_{b_4}){red}{+} 2E_1 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_2 b_4}{b_3}\\end{bmatrix}(D_{b_2} + D_{b_1} - D_{b_3} + D_{b_4}) \\\\& \\ + 8 \\bigg (E_2 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_2 b_3}{b_4}\\end{bmatrix}- E_2 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_2 b_4}{b_3}\\end{bmatrix}\\bigg )\\bigg ] \\mathcal {I}_{0,4} = 0 \\ .$ And $\\bigg [ & - D_{b_1}D_{b_3} + D_{b_1}D_{b_4} + D_{b_2}D_{b_3} - D_{b_2}D_{b_4}\\\\& \\ - 2 E_1 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_3}{b_2 b_4}\\end{bmatrix} (D_{b_1} - D_{b_2} + D_{b_3} - D_{b_4}){red}{+} 2E_1 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_4}{b_2 b_3}\\end{bmatrix}(D_{b_1} - D_{b_2} - D_{b_3} + D_{b_4}) \\\\& \\ + 8 \\bigg (E_2 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_3}{b_2 b_4}\\end{bmatrix}- E_2 \\begin{bmatrix}1 \\\\ \\frac{b_1 b_4}{b_2 b_3}\\end{bmatrix}\\bigg )\\bigg ] \\mathcal {I}_{0,4} = 0 \\ .$ And $\\left(D_{b_4}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_4^2\\end{bmatrix} D_{b_4}- 8 E_2 \\begin{bmatrix}1 \\\\ b_4^2\\end{bmatrix}\\right) \\mathcal {I}_{0,4}= \\left(D_{b_3}^2 + 4 E_1 \\begin{bmatrix}1 \\\\ b_3^2\\end{bmatrix} D_{b_3}- 8 E_2 \\begin{bmatrix}1 \\\\ b_3^2\\end{bmatrix}\\right) \\mathcal {I}_{0,4} \\ .$" ], [ "Flavored modular differential equations", "In this appendix we collect a few long equations explicitly that were omited in the maintext." ], [ "$\\mathcal {T}_{1,1}$", "In the class-$\\mathcal {S}$ limit $b_i \\rightarrow b$ of the theory $\\mathcal {T}_{1,1}$ , the Schur index is given by the formula (REF ).", "It satisfies several flavored modular differential equations of different weights.", "At weight-two, there is one equation that corresponds to the total stress tensor $T = T_\\text{Sug} + T_{\\beta \\gamma }$ , $0 = \\left[D^{(1)}_q-\\frac{D_b^2}{2}-\\left(2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+ E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}\\right)D_b-2 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+3 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+2 E_2\\right]\\mathcal {I}_{1,1} \\ .$ At weight-three, we have $0 = \\Bigg [& D_b^3-4 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_q^{(1)}+8 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}D_q^{(1)}+6 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b^2 \\nonumber \\\\& \\ -16 E_2\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}D_b-12 E_2 D_b-12 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b-32 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b \\nonumber \\\\& \\ -12 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}^2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-24 E_2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}\\nonumber \\\\& \\ -8 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}-8 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+48 E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} \\Bigg ] \\mathcal {I}_{1,1} \\ ,$ and $0 = \\Bigg [D_q^{(1)} D_b+2 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}D_q^{(1)}-2 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b+E_2 D_b-2 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+6 E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} \\Bigg ] \\mathcal {I}_{1,1}\\ .$ At weight-four, there are several fairely complicated equations.", "$0 = \\Bigg [ & \\ D_b^4+8E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}^3 D_b+ 48 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_1\\begin{bmatrix}1 \\\\b^2 
\\\\\\end{bmatrix}D_q^{(1)}-72 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_q^{(1)}\\nonumber \\\\& \\ +48 D_b \\left(E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}\\right)^2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+264 D_b E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+192 D_b E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}\\nonumber \\\\& \\ +336 D_b E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+48 \\left(E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}\\right)^3 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-192 E_2 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} \\nonumber \\\\& +336 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+288 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_3\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}-784 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}\\nonumber \\\\& \\ -432 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-2136 E_4\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+96 D_q^{(1)} E_2\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}+288 D_b E_3\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}\\nonumber \\\\& \\ +1472 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_3\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}-384 E_4\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}-48 E_2^2 \\Bigg ]\\mathcal {I}_{1,1} \\ ,$ $0 = \\Bigg [ & D_b^2 D_q^{(1)} -12 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_q^{(1)}-16 E_2\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}D_q^{(1)}-10 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_q^{(1)}-8 E_2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b\\nonumber \\\\& \\ -4 E_2 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}D_b+6 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b+12 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b+18 E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b\\nonumber \\\\& \\ +12 E_2 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+24 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+32 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_3\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}\\nonumber \\\\& \\ -4 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-24 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-74 E_4\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} \\nonumber \\\\& \\ +24 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_3\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}+64 E_4\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}+4 E_2^2-8 E_2 E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}\\Bigg ] \\mathcal {I}_{1,1} \\ ,$ $0 = \\Bigg [ D_q^{(2)}& + 2 E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b-4 E_2\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}D_q^{(1)}-4 E_3\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}D_b\\\\& \\ +\\frac{8}{3} E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_3\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}+16 E_4\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix}+\\frac{2}{3} E_1\\begin{bmatrix}-1 \\\\b \\\\\\end{bmatrix} E_3\\begin{bmatrix}1 \\\\b^2 
\\\\\\end{bmatrix}-11 E_4\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} \\Bigg ]\\mathcal {I}_{1,1} \\ .", "\\nonumber $" ], [ "$\\mathcal {T}_{1,2}$", "Here we consider the $b_i \\rightarrow b$ limit.", "There are two equations at weight-three, $0 = \\Bigg [ D_b^3-32 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_q^{(1)}& +16 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b^2+48 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}^2D_b-96 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b \\nonumber \\\\& \\ -32 E_2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-288 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+96 E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} \\Bigg ] \\mathcal {I}_{1,2} \\ ,$ and $0 = \\Bigg [ D_b D_q-6 D_b E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}& +4 E_2 D_b \\nonumber \\\\& \\ +12 E_2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-12 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+12 E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} \\Bigg ]\\mathcal {I}_{1,2} \\ .$ At weight-four, there are two equations, $0 = \\Bigg [ & D_q^{(2)} -\\frac{1}{8} D_b^2 D_q^{(1)}-\\frac{3}{2} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b D_q^{(1)}+4 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_q^{(1)}+2 E_2 D_q^{(1)}\\nonumber \\\\& \\ +\\frac{1}{4} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b^2+3 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b\\\\& \\ -8 E_2 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-24 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix} E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-24 E_4\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-E_2^2-31 E_4 \\Bigg ] \\mathcal {I}_{1,2} \\ , \\nonumber $ and $0 = \\Bigg [ D_q^{(2)} & -D_b D_q^{(1)} E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}+4 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_q^{(1)}+2 E_2 D_q^{(1)}-\\frac{1}{2} E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b^2+\\frac{1}{2} E_2 D_b^2\\\\& \\ +2 E_2 E_1\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b+6 E_3\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}D_b-8 E_2 E_2\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-24 E_4\\begin{bmatrix}1 \\\\b^2 \\\\\\end{bmatrix}-4 E_2^2-16 E_4 \\Bigg ]\\mathcal {I}_{1,2} \\ .", "\\nonumber $" ] ]
2207.10463
[ [ "Theoretical Models of the Atomic Hydrogen Content in Dark Matter Halos" ], [ "Abstract Atomic hydrogen (H I) gas, mostly residing in dark matter halos after cosmic reionization, is the fuel for star formation.", "Its relation with properties of host halo is the key to understand the cosmic H I distribution.", "In this work, we propose a flexible, empirical model of H I-halo relation.", "In this model, while the H I mass depends primarily on the mass of host halo, there is also secondary dependence on other halo properties.", "We apply our model to the observation data of the Arecibo Fast Legacy ALFA Survey (ALFALFA), and find it can successfully fit to the cosmic H I abundance ($\\Omega_{\\rm HI}$), average H I-halo mass relation $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle$, and the H I clustering.", "The bestfit of the ALFALFA data rejects with high confidence level the model with no secondary halo dependence of H I mass and the model with secondary dependence on halo spin parameter ($\\lambda$), and shows strong dependence on halo formation time ($a_{1/2}$) and halo concentration ($c_{\\rm vir}$).", "In attempt to explain these findings from the perspective of hydrodynamical simulations, the IllustrisTNG simulation confirms the dependence of H I mass on secondary halo parameters.", "However, the IllustrisTNG results show strong dependence on $\\lambda$ and weak dependence on $c_{\\rm vir}$ and $a_{1/2}$, and also predict a much larger value of H I clustering on large scales than observations.", "This discrepancy between the simulation and observation calls for improvements in understanding the H I-halo relation from both theoretical and observational sides." ], [ "Introduction", "Atomic hydrogen (H i) gas is the fuel for early formation of stars and galaxies.", "The H i gas is harbored mainly in the cold and dense regions of dark matter halos after cosmic reionization, where it is self-shielded from the UV background with high recombination rates , , making H i emission a biased tracer of the dark matter distribution in the universe .", "The 21 cm radiation due to the hyperfine transition of atomic hydrogen is a novel probe of the large-scale structure of the universe , .", "Intensity mapping technology has been proposed and applied in many 21 cm surveys as a powerful tool to probe the large scale structure without resolving individual sources in modest 3D pixels, which implies a shorter integration time and lower angular resolution to get reasonable statistics on large scale .", "Based on this technology, some pioneering experimental 21 cm intensity mapping surveys have been conducted , .", "On the basis of these preceding attempts, the next generation telescopes and surveys have been proposed or already under construction to detect the post-reionization universe, such as the BAO from Integrated Neutral Gas Observations , Tianlai , , Canadian Hydrogen Intensity Mapping Experiment and the Square Kilometre Array , .", "One key element to understand the large-scale structure using the statistics of the 21 cm observations, especially the H i power spectrum, is the relation between the H i gas and the dark matter halo masses.", "In principle, the H i gas mass within and around a given galaxy is affected by the star formation and various feedback processes in the galaxy baryon cycle .", "In reality, considering the total H i mass in a dark matter halo, it is still largely determined by the halo mass as found by recent hydrodynamic simulations and semi-analytical models (SAMs) , , , .", 
"With an accurate H i-halo mass relation measured and modeled, it can be used to understand the evolution of the H i with the environment, as well as the large-scale structure.", "Mock catalogs for the future surveys could also be conveniently constructed by applying this relation to the cosmological simulations , , .", "However, hydrodynamic simulations and SAMs employ different galaxy formation models, and thus predict different H i-halo mass relations .", "Various empirical laws have also been proposed to describe the H i-halo mass relation , , , , but they are typically designed to fit to specific simulations or SAMs.", "It is thus important to measure the H i-halo mass relation in observations, which, however, is challenging, because the individual measurements of the H i mass in galaxies can suffer from the effect of flux limits of the 21 cm surveys, i.e.", "only H i-rich galaxies can be observed .", "Recently, (hereafter Guo2020) presented a direct observational measurement of the H i-halo mass relation at $z\\simeq 0$ by stacking the overall H i signals from the Arecibo Fast Legacy ALFA Survey for halos of different masses, constructed from the galaxy group catalog .", "They found that the H i mass is not a simple monotonically increasing function of halo mass, but shows strong, additional dependence on the halo richness.", "That is consistent with the finding of (hereafter Guo2017) that H i-rich galaxies tend to live in halos with late-formation time, using the spatial clustering measurements, as confirmed by .", "Such a halo assembly bias effect , in the H i-halo mass relation is generally not taken into account in most of the previous empirical models.", "In this paper, we extend the work of Guo2020 by proposing a more flexible, empirical model for the H i-halo relation at $z\\simeq 0$ , which includes the halo assembly bias effect.", "With the information of the H i abundance from the H i-halo mass relation and the H i bias from the spatial clustering of H i-selected galaxies, we aim to obtain an accurate theoretical model for the H i-halo mass relation, which can be extended to the range of smaller halo mass than what is probed observationally.", "Also, the dependence on the halo assembly history encodes information about the evolution of the total H i gas with the halo mass, which can potentially provide forecasts for the future H i surveys at redshifts higher than $z\\simeq 0$ .", "The rest of this paper is organized as follows.", "We introduce the observational measurements and our empirical model in Section , and present the results of fitting our model to the observation data in Section .", "These results are compared with the hydrodynamic simulations and the previous works in Section .", "We make concluding remarks in Section .", "We use three sets of measurements to constrain the H i-halo mass relation — the cosmic H i abundance ($\\Omega _{\\rm HI}$ ), the average H i-halo mass relation ($\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ ), and the H i clustering measurements ($w_{\\rm p}(r_{\\rm p})$ and $\\xi (s)$ ).", "The dimensionless cosmic H i abundance, $\\Omega _{\\rm HI}$ , can be calculated as $\\Omega _{\\rm HI}=\\frac{1}{\\rho _{\\rm c}}\\int \\langle M_{\\rm HI}|M_{\\rm h}\\rangle n(M_{\\rm h})dM_{\\rm h}, $ where $n(M_{\\rm h})$ is the halo mass function, and $\\rho _{\\rm c}$ is the critical density.", "It describes the total amount of H i in the universe.", "We adopt the value of $\\Omega _{\\rm HI}=(3.5\\pm 0.6)\\times 10^{-4}$ from .", "It was obtained from 
the observed H i mass function of ALFALFA 100% sample by integrating the assumed Schechter function.", "The measurement of $\\Omega _{\\rm HI}$ is used to constrain the low mass part of the H i-halo mass relation.", "We note that it is the original value before applying the correction for H i self-absorption, to be consistent with the other two measurements of $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ and observed H i masses of individual galaxies in ALFALFA, where self-absorption is also not corrected.", "The H i self-absorption is difficult to be accurately estimated in observations and thus controversial." ], [ "H", "We use the measured H i-halo mass relation $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ of Guo2020 in the halo mass range of $10^{11}$ –$10^{14}h^{-1}M_{}$ , with a bin size of $\\Delta \\log M_{\\rm h}=0.25$ (shown as open circles in the left panel of Figure REF ).", "To fit the observational measurements with a finite bin size, the average H i mass in halo mass bins of $M_{\\rm h1}<M_{\\rm h}<M_{\\rm h2}$ should be modeled as, $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle _{\\rm obs}\\big |^{M_{\\rm h2}}_{M_{\\rm h1}}=\\dfrac{\\int _{M_{\\rm h1}}^{M_{\\rm h2}}\\left<M_{\\rm HI}|M_{\\rm h}\\right> n(M_{\\rm h})d M_{\\rm h}}{\\int _{M_{\\rm h1}}^{M_{\\rm h2}}n(M_{\\rm h})dM_{\\rm h}}.$ As will be shown below, the large halo mass bin size will smooth the peak feature of the H i-halo mass relation at the low mass end, which is more apparent in halos of higher richness (see Figure 2 of Guo2020).", "As the observed H i-halo mass relation is only limited to halos above $10^{11}h^{-1}M_{}$ , the low-mass behavior of $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ can be well constrained with $\\Omega _{\\rm HI}$ .", "The contribution to H i mass from halos below $10^{11}h^{-1}M_{}$ is estimated to be around 30% in Guo2020, which cannot be simply ignored." 
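Once a model $\langle M_{\rm HI}|M_{\rm h}\rangle $ and a halo mass function are specified, both integrals above are one-dimensional and straightforward to evaluate. The Python sketch below only illustrates the bookkeeping: the mass function and the H i-halo relation used here are simple placeholders (not the simulation mass function or the best-fit model of this work), and we take $\rho _{\rm c}\simeq 2.775\times 10^{11}\,h^{2}M_{\odot }\,{\rm Mpc}^{-3}$.

```python
import numpy as np

rho_crit = 2.775e11   # critical density, (h^-1 Msun) per (h^-1 Mpc)^3

def trapz(y, x):
    """Simple trapezoidal rule, to avoid depending on a specific NumPy version."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hmf_dndlnM(Mh):
    """Placeholder dn/dlnM_h [h^3 Mpc^-3]; to be replaced by the simulation mass function."""
    return 4e-3 * (Mh / 1e12) ** -0.9 * np.exp(-Mh / 10 ** 13.5)

def mhi_mean(Mh):
    """Placeholder <M_HI|M_h> [h^-1 Msun]; to be replaced by the model introduced below."""
    return 5e9 * (Mh / 1e11) ** 0.4 * np.exp(-(1e10 / Mh))

lnM = np.linspace(np.log(1e9), np.log(1e15), 4096)
Mh = np.exp(lnM)

# Omega_HI = (1/rho_c) * int <M_HI|M_h> n(M_h) dM_h, written as an integral over ln M_h
omega_hi = trapz(mhi_mean(Mh) * hmf_dndlnM(Mh), lnM) / rho_crit

# bin-averaged relation for one observed halo-mass bin, e.g. 10^11 < M_h < 10^11.25
sel = (Mh > 1e11) & (Mh < 10 ** 11.25)
mhi_binned = (trapz((mhi_mean(Mh) * hmf_dndlnM(Mh))[sel], lnM[sel])
              / trapz(hmf_dndlnM(Mh)[sel], lnM[sel]))
```

With the real ingredients in place, the first quantity is the one compared against $\Omega _{\rm HI}=(3.5\pm 0.6)\times 10^{-4}$, and the second against the stacked measurements of Guo2020 in each halo-mass bin.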
], [ "H", "The spatial clustering of H i gas can provide additional constraints on the halo assembly bias effect, as shown in Guo2017 using the 70% complete sample of ALFALFA.", "However, we only have the total H i mass information from the H i-halo mass relation, without the accurate distribution of H i mass within the halos or the measurements of H i gas in subhalos of different masses.", "As demonstrated in , the galaxy H i mass generally depends on the stellar mass and star formation rate, so baryon physics needs to be carefully taken into account.", "It is thus difficult to accurately model the clustering measurements of galaxies with different H i mass thresholds as in Guo2017.", "To the first order, it is easier to measure and model the clustering of the H i gas itself, by assigning the H i mass as the weight for each observed galaxy in the ALFALFA survey, analogous to the measurement of dark matter clustering.", "Since the ALFALFA galaxy sample selection depends on both the H i flux and line width, special care must be taken to ensure that accurate clustering measurements are made.", "We follow the correction method of Guo2017 by assigning each galaxy pair with an additional weight of the effective volume ($V_{\\rm eff}$ ) probed by the two galaxies .", "We use the Landy-Szalay estimator to measure the redshift-space 3D two-point correlation function $\\xi (r_{\\rm p},r_{\\rm \\pi })$ , where $r_{\\rm \\pi }$ and $r_{\\rm p}$ are the separations of galaxy pairs along and perpendicular to the line-of-sight, respectively, $\\xi (r_{\\rm p},r_{\\rm \\pi })={\\rm (DD-2DR+RR)/RR}$ .", "The galaxy pair counts of data-data ($\\rm {DD}$ ), data-random ($\\rm {DR}$ ), and random-random ($\\rm {RR}$ ) are calculated as follows, $\\rm {DD}(r_{\\rm p},r_{\\rm \\pi })&=&\\sum _{(i,j)\\in V_{ij}}\\frac{M_{{\\rm HI},i}M_{{\\rm HI},j}}{V_{ij}} , \\\\\\rm {DR}(r_{\\rm p},r_{\\rm \\pi })&=&\\sum _{(i,j)\\in V_{ij}}\\frac{M_{{\\rm HI},i}M_{{\\rm HI},j}}{V_{ij}} ,\\\\\\rm {RR}(r_{\\rm p},r_{\\rm \\pi })&=&\\sum _{(i,j)\\in V_{ij}}\\frac{M_{{\\rm HI},i}M_{{\\rm HI},j}}{V_{ij}}, $ where $V_{ij}={\\rm min}(V_{{\\rm eff},i},V_{{\\rm eff},j})$ , with $V_{{\\rm eff},i}$ and $V_{{\\rm eff},j}$ being the effective volumes accessible to the $i^{\\rm th}$ and $j^{\\rm th}$ galaxies, respectively.", "Similarly, we also compute the redshift-space two-point correlation function $\\xi (s)$ , with $s$ being the redshift-space pair separation.", "To reduce the effect of redshift-space distortion (RSD), we also measure the projected two-point correlation function $w_{\\rm p}(r_{\\rm p})$ , $w_{\\rm p}(r_{\\rm p})=\\int _{-r_{\\pi ,{\\rm max}}}^{r_{\\pi ,{\\rm max}}} \\xi (r_{\\rm p},r_{\\rm \\pi })dr_{\\rm \\pi }.", "$ where $r_{\\pi ,{\\rm max}}$ is set to be $20\\,h^{-1}{\\rm {Mpc}}$ as in Guo2017.", "The residual RSD effect can be further accounted for in our model.", "Our galaxy sample comes from the 100% complete catalog of the ALFALFA survey .", "The final sample includes 22330 galaxies with reliable H i mass measurements, covering 6518 $\\deg ^2$ in the redshift range of $0.0025<z<0.06$ .", "The random catalog is constructed similarly as in Guo2017, with the weights assigned by randomly selecting from the galaxy sample.", "We will use the measurements of $w_{\\rm p}(r_{\\rm p})$ and $\\xi (s)$ to constrain the H i-halo mass relation, with logarithmic $r_{\\rm p}$ bins of a constant width $\\Delta \\log r_{\\rm p}=0.2$ ranging from $0.13$ to $12.92\\,h^{-1}{\\rm {Mpc}}$ (the same for $s$ bins) and linear 
$r_{\\rm \\pi }$ bins of width $\\Delta r_{\\rm \\pi }=2\\,h^{-1}{\\rm {Mpc}}$ from 0 to 20$\\,h^{-1}{\\rm {Mpc}}$ .", "The error covariance matrices are estimated using the jackknife re-sampling method of 100 subsamples.", "However, as we lack the information of H i mass distribution within the halo, we only use the H i clustering measurements above $1\\,h^{-1}{\\rm {Mpc}}$ to avoid the scales dominated by the one-halo contribution.", "The measurements of $w_{\\rm p}(r_{\\rm p})$ and $\\xi (s)$ are shown as the open circles in the middle and right panels of Figure REF , respectively.", "In summary, for the observational data, we have one data point in $\\Omega _{\\rm HI}$ , 10 data points in $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ , 6 data points in $w_{\\rm p}(r_{\\rm p})$ and 6 data points in $\\xi (s)$ .", "The total number of data points is 23." ], [ "H", "The observed $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ as shown in Figure REF is not just a smoothly increasing function of $M_{\\rm h}$ .", "There is an apparent bump feature around $M_{\\rm h}\\sim 10^{11.5}h^{-1}M_{}$ , which is not caused by the measurement errors of the H i stacking, since such a feature becomes more significant for halos of higher richness.", "As discussed in Guo2020, it is possibly related to the virial shock-heating and active galactic neuclei (AGN) feedback happening in halos around this mass.", "Therefore, we propose a flexible, two-component model for $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ , as a combination of a lognormal distribution ($f_{\\rm gau}$ ) at the low mass end and a double power-law (hereafter DPL) distribution $f_{\\rm dpl}$ , $&&f(M_{\\rm HI}|M_{\\rm h})= f_{\\rm gau}(M_{\\rm h}) + f_{\\rm dpl}(M_{\\rm h}) \\\\&&f_{\\rm gau}/(h^{-1}M_{}) = 10^{A_{\\rm gau}\\exp {[-(\\log (M_{\\rm h}/M_{\\rm gau})/\\sigma _{\\rm gau})^2}]}\\\\&&f_{\\rm dpl} = A_{\\rm dpl}M_{\\rm h}/[(M_{\\rm h}/M_{\\rm dpl})^{-\\alpha }+(M_{\\rm h}/M_{\\rm dpl})^\\beta ]$ where $A_{\\rm gau}$ , $M_{\\rm gau}$ , $\\sigma _{\\rm gau}$ , $A_{\\rm dpl}$ , $M_{\\rm dpl}$ , $\\alpha $ and $\\beta $ are the model parameters.", "As will be shown below, such a model is flexible enough to fit the observational measurements, as well as the simulation predictions.", "Apart from the halo mass dependence, as we mentioned above, the additional dependence of $M_{\\rm HI}$ on the halo assembly history is required to explain the observed clustering measurements.", "Following Guo2017, we investigate three halo properties in this paper — formation time ($a_{1/2}$ ), concentration parameter ($c_{\\rm vir}$ ) and spin parameter ($\\lambda $ ), as commonly used.", "The final theoretical model for the H i-halo relation is formulated as $\\langle M_{\\rm HI}|M_{\\rm h}, P\\rangle =f(M_{\\rm HI}|M_{\\rm h})\\cdot P^\\gamma ,$ where $\\gamma $ is the model parameter and the parameter $P$ can be either $a_{1/2}$ , $c_{\\rm vir}$ or $\\lambda $ .", "The simple power-law dependence on $P$ is motivated by the results in the hydrodynamic simulations, and the fitting to observation seems quite reasonable." 
], [ "H", "The model prediction for the cosmic H i abundance $\\Omega _{\\rm HI}$ can be directly obtained by integrating Eq.", "(REF ), where the input of halo mass function $n(M_{\\rm h})$ is necessary.", "Since we also need to include the secondary halo parameters in Eq.", "(REF ), the analytical halo model which assumes a specific halo mass function form is not directly applicable.", "We follow the strategy of Guo2017 by directly populating the dark matter halos in the $N$ -body simulations using Eq.", "(REF ).", "Then $\\Omega _{\\rm HI}$ can be obtained by summing up the H i mass in the whole simulation volume.", "We use the dark matter halo catalog from the Small MultiDark simulation of Planck cosmology , covering a volume of $400^3h^{-3}{\\rm Mpc}^3$ and assuming the cosmological parameters of $\\Omega _m=0.307$ , $\\Omega _b=0.048$ , $n_s=0.96$ , $h=0.678$ and $\\sigma _8=0.823$ .", "The particle mass resolution is $9.6\\times 10^7h^{-1}M_{}$ , which is good enough to fully resolve the host halos of H i-rich galaxies.", "The halos are identified by the ROCKSTAR halo finder and we only apply our model to the simulation output at $z=0$ .", "The halo properties of formation time, concentration and spin parameter are also available from the ROCKSTAR halo catalog.", "The halo formation time $a_{1/2}$ is defined as the scale factor at which the halo mass first reaches half of the peak value over the whole merger history.", "The concentration parameter $c_{\\rm vir}$ is the ratio between halo virial radius and scale radius.", "The spin parameter $\\lambda $ is calculated using the definition of .", "Since the ALFALFA survey is limited to the local universe with $z<0.06$ , the sample variance effect for the clustering measurements is still severe and will cause a systematic underestimation of the large-scale clustering measurements.", "Therefore, we correct for this finite volume effect, known as the “integral constraint”, following the method presented in Section 4.4 of Guo2017.", "Briefly speaking, for each run of the model parameters in Eq.", "(REF ), we construct 64 mock galaxy catalogs from the simulation box with the same geometry as the ALFALFA survey, calculate the clustering measurements (with the RSD effect automatically included using halo peculiar velocities), and use the average of the 64 mocks as the model prediction." 
], [ "Model Fitting", "To find the bestfit model parameters to the observational data described in Section REF , we apply the Monte Carlo Markov Chain (MCMC) technique, using the Bayesian inference tool of MultiNest .", "The likelihood surface is determined by $\\chi ^2$ , $&&\\chi ^2=\\chi ^2_1+\\chi ^2_2+\\chi ^2_3\\\\&&\\chi ^2_1=(\\Omega _{\\rm HI}-\\Omega ^*_{\\rm HI})^2/\\sigma _{\\Omega _{\\rm HI}}^2\\\\&&\\chi ^2_2=(\\langle M_{\\rm HI}|M_{\\rm h}\\rangle -\\langle M_{\\rm HI}|M_{\\rm h}\\rangle ^*)^2/\\sigma _{\\langle M_{\\rm HI}|M_{\\rm h}\\rangle }^2\\\\&&\\chi ^2_3=(\\mathbf {\\xi _{\\rm all}}-\\mathbf {\\xi ^*_{\\rm all}})^T\\mathbf {C}^{-1}(\\mathbf {\\xi _{\\rm all}}-\\mathbf {\\xi ^*_{\\rm all}})$ where the data vector $\\mathbf {\\xi _{\\rm all}}=[w_{\\rm p}(r_{\\rm p}),\\xi (s)]$ (i.e., the combination of $w_{\\rm p}(r_{\\rm p})$ and $\\xi (s)$ ), and $\\mathbf {C}$ is the full error covariance matrix for $\\xi _{\\rm all}$ .", "The quantity with (without) a superscript `*' is the one from the measurement (model).", "The cross-correlation between $w_{\\rm p}(r_{\\rm p})$ and $\\xi (s)$ has been fully accounted for in $\\mathbf {C}$ .", "We also assume that the measurements of $\\Omega _{\\rm HI}$ , $\\langle M_{\\rm HI}|M_{\\rm h}\\rangle $ , and $\\xi _{\\rm all}$ are independent of each other.", "Also, note that the error $\\sigma _{\\langle M_{\\rm HI}|M_{\\rm h}\\rangle }$ was estimated for the mean of H i mass for a given halo mass bin using the jackknife method, and therefore smaller than the error that would be estimated as the standard deviation of the sample of H i mass at individual halos.", "With 23 data points and 8 model parameters, the final degree-of-freedom (dof) for the model fitting is 15, unless otherwise noted." ] ]
2207.10414
[ [ "Riesz transforms and Sobolev spaces associated to the partial harmonic\n oscillator" ], [ "Abstract In this paper, our goal is to establish the Sobolev space associated to the partial harmonic oscillator.", "Based on its heat kernel estimate, we firstly give the definition of the fractional powers of the partial harmonic oscillator $$\\AH=-\\partial_{\\rho}^2-\\Delta_x+|x|^2,$$ and show that its negative powers are well defined on $L^p(\\mathbb R^{d+1})$ for $p\\in [1,\\infty]$.", "We then define associated Riesz transforms and show that they are bounded on classical Sobolev spaces by the calculus of symbols.", "Secondly, by a factorization of the operator $\\AH$, we define two families of Sobolev spaces with positive integer indices, and show the equivalence between them by the boundedness of Riesz transforms.", "Moreover, the adapted symbolic calculus also implies the boundedness of Riesz type transforms on the Sobolev spaces associated to the partial harmonic oscillator $\\AH$.", "Lastly, as applications of our results, we obtain the revised Hardy--Littlewood--Sobolev inequality, the Gagliardo--Nirenberg--Sobolev inequality, and Hardy's inequality in the potential space $L_{\\AH}^{\\alpha, p}$." ], [ "Introduction", "In this paper, we consider the following Schrödinger operator in $\\mathbb {R}^{d+1}$ , $H_{\\textup {par}}=-\\partial _{\\rho }^2-\\partial _{x_1}^2-\\dots -\\partial _{x_d}^2+|x|^2,$ which we will call a partial harmonic oscillator.", "The anisotropy of the operator is used in [2], [9] to model magnetic traps in the Bose–Einstein condensation.", "It is well known in [2] that $H_{\\textup {par}}$ is a self-adjoint operator, $\\sigma ( H_{\\textup {par}})=\\sigma _{\\textup {ac}}( H_{\\textup {par}})=[d, \\infty ) $ with embedded generalized eigenvalues $2k+d$ for $k\\ge 1$ , and that there is no singular spectrum.", "In this paper, we investigate the Riesz transforms and Sobolev spaces adapted to the operator $H_{\\textup {par}}$ .", "The fractional integrals and the Riesz transform associated to the classical Laplacian $-\\Delta _{\\mathbb {R}^{d+1}}$ are fundamental in harmonic analysis and have significant applications in the study of partial differential equations, see [8], [18], [20] for example.", "Our work is motivated by the series of papers [4], [10], [11], [12], [13], [14], [15], [17], [24], [25], [27].", "Especially in [10], R. 
Killip, etc, studied the Sobolev spaces adapted to the Schrödinger equation with the inverse-square potential by using the heat kernel estimates.", "These results were crucially used in [11], [17] to obtain the scattering result of the solution for nonlinear Schrödinger and wave equations with the inverse-square potential.", "Riesz transforms outside a convex obstacle were studied in [13].", "Other works can be found in [12], [14], [15].", "The Riesz transforms for the Hermite operators were also investigated in [25], [27], some techniques in this paper are also inspired by that in [4] where the Sobolev spaces adapted to the Hermite operator are discussed.", "Firstly, based on the Fourier–Hermite expansion and the heat kernel estimate for the operator $H_{\\textup {par}}$ , the fractional powers $H_{\\textup {par}}^\\alpha $ can be well-defined on $C_0^\\infty (\\mathbb {R}^{d+1})$ for $\\alpha \\in \\mathbb {R}$ .", "For the operator $H_{\\textup {par}}^{\\alpha }$ with the negative powers ($\\alpha <0$ ), we can obtain that its integral kernel $K_\\alpha (z,z^{\\prime })$ is controlled by an integrable function of $z-z^{\\prime }$ , and therefore $H_{\\textup {par}}^{\\alpha }$ can be extended to an operator on $L^p(\\mathbb {R}^{d+1})$ for $p\\in [1, \\infty ]$ .", "After the study of fractional powers, we can consider the potential spaces $L_{H_{\\textup {par}}}^{\\alpha , p}=H_{\\textup {par}}^{-\\alpha /2}(L^p(\\mathbb {R}^{d+1}))$ , and define the Sobolev spaces $W_{H_{\\textup {par}}}^{k,p}$ , $k\\in \\mathbb {N}$ by using a factorization of the operator $H_{\\textup {par}}$ .", "It turns out that they are equivalent for $\\alpha =k\\in \\mathbb {N}$ , due to the $L^p$ -boundedness of Riesz transforms in the classical Sobolev spaces.", "Moreover, we can also show that the Riesz transforms are bounded on $L_{H_{\\textup {par}}}^{\\alpha , p}$ by the symbolic calculus in class $G^m$ .", "We will also compare three Sobolev spaces associated to $-\\Delta _{\\mathbb {R}^{d+1}}$ , the Hermite operator $H=-\\Delta _{\\mathbb {R}^{d+1}}+\\rho ^2+|x| ^2$ , and the operator $H_{\\textup {par}}$ respectively.", "Finally, we use the above results to show the revised Hardy–Littlewood–Sobolev inequality, Gagliardo–Nirenberg–Sobolev inequality and Hardy's inequality in the potential space $L_{H_{\\textup {par}}}^{\\alpha , p}$ .", "We remark that the operator $H_{\\textup {par}}$ is a polynomial perturbation of the Laplacian and some results have been established in [6] by using the singular integral operators on nilpotent Lie groups.", "Instead, our results closely rely on the heat kernel estimate for the operator $H_{\\textup {par}}$ and Mehler's formula.", "By the way, the authors will establish a Mikhlin–Hörmander multiplier theorem for the operator $H_{\\textup {par}}$ and obtain its applications in the Littlewood–Paley square function estimate together with the Khintchine inequality in the subsequent paper [21].", "The results in this paper can be adapted to obtain analogous results for general partial harmonic oscillators $ L=-\\Delta _x-\\Delta _y+|x|^2, \\ (x,y) \\in \\mathbb {R}^{d_1} \\times \\mathbb {R}^{d_2}.$ There are some scattering results for nonlinear Schrödinger equations with partial harmonic potentials, see [1], [5], and some regularity problem of the fundamental solutions for the Schrödinger flows with partial harmonic oscillator has been discussed by Zelditch in [29].", "Lastly, this paper is organized as follows: some preliminary results and adapted symbol class are 
introduced in Section .", "In Section , by the symbolic calculus, we discuss the fractional powers and the Riesz transforms associated to the operator $H_{\\textup {par}}$ .", "In Section , we define the Sobolev spaces and discuss some inclusion properties.", "In Section , we show the revised Hardy–Littlewood–Sobolev inequality, Gagliardo–Nirenberg–Sobolev inequality, and Hardy's inequality in the space $L_{H_{\\textup {par}}}^{\\alpha , p}$ ." ], [ "Acknowledgements", "The authors would like to thank Professor Changxing Miao for his valuable comments and suggestions.", "G. Xu was partly supported by National Key Research and Development Program of China (No.", "2020YFA0712900) and by NSFC (No.", "11831004)." ], [ "Notation", "Let $D\\mathrel {\\mathop :}=\\partial /i$ .", "We write $z=(\\rho , x)$ or $z^{\\prime }=(\\rho ^{\\prime }, x^{\\prime })$ to denote elements in $\\mathbb {R}^{d+1}$ with $\\rho , \\rho ^{\\prime } \\in \\mathbb {R}$ and $x, x^{\\prime } \\in \\mathbb {R}^d$ .", "We write $A\\lesssim B$ if $A \\le C B$ for some constant $C>0$ and $A \\gtrsim B$ if $A \\ge c B$ for some constant $c>0$ .", "We write the Japanese bracket as $\\langle x\\rangle =(1+|x|^2)^{1/2}$ .", "We use $H$ and $\\Delta _{\\mathbb {R}^{d+1}}$ to denote the Hermite and Laplacian operators in $\\mathbb {R}^{d+1}$ respectively, and $\\Delta $ to denote the Laplacian operator in $\\mathbb {R}^d$ , i.e.", "$H=-\\partial _\\rho ^2-\\Delta +\\rho ^2+|x|^2, \\quad -\\Delta _{\\mathbb {R}^{d+1}}=-\\partial _\\rho ^2-\\Delta .$ For any $\\alpha >0$ , we define the classical Sobolev space $W^{\\alpha ,p}(\\mathbb {R}^{d+1})$ and the Hermite–Sobolev space $L_H^{\\alpha ,p}(\\mathbb {R}^{d+1})$ as those in [4], [8], [18], [20] as follows: $W^{\\alpha , p}(\\mathbb {R}^{d+1})&=\\lbrace f\\in L^p(\\mathbb {R}^{d+1}): (1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} f\\in L^p(\\mathbb {R}^{d+1})\\rbrace ,\\\\L_{H}^{\\alpha , p}(\\mathbb {R}^{d+1})&=\\lbrace f\\in L^p(\\mathbb {R}^{d+1}):H^{\\alpha /2} f\\in L^p(\\mathbb {R}^{d+1})\\rbrace .$ The Fourier transform and the inverse Fourier transform of a Schwartz function $f\\in \\mathcal {S}(\\mathbb {R})$ are defined by $\\mathcal {F}^{\\pm }_{\\rho }f (\\tau ) =\\frac{1}{(2\\pi )^{1/2}}\\int _{\\mathbb {R}} e^{\\mp i \\rho \\cdot \\tau } f(\\rho ) \\mathop {}\\!\\mathrm {d}\\rho ,$ and they are similar in the higher dimensional cases."
], [ "Hermite functions", "We recall some basic results about Hermite functions from [3], [7], [26].", "For $k=0, 1, \\ldots ,$ the Hermite functions $h_k(x)$ on $\\mathbb {R}$ are defined by $h_k(x)=(2^k k!", "\\sqrt{\\pi })^{-1/2} (-1)^k \\frac{\\mathop {}\\!\\mathrm {d}^k}{\\mathop {}\\!\\mathrm {d}x^k}(\\textup {e}^{-x^2}) \\textup {e}^{-x^2/2},$ which form a complete orthonormal basis for $L^2(\\mathbb {R})$ .", "For $\\mu \\in \\mathbb {N}^d$ , the Hermite functions $\\Phi _\\mu (x)$ on $\\mathbb {R}^d$ are defined by taking the product of the 1-dimensional Hermite functions $h_{\\mu _j}$ , $\\Phi _\\mu (x)=\\prod _{j=1}^d h_{\\mu _j}(x_j).$ It is well known that they are eigenfunctions of the operator $-\\Delta +|x|^2$ with eigenvalues $2|\\mu |+d$ , $(- \\Delta +|x|^2 )\\Phi _\\mu (x)= (2|\\mu |+d)\\Phi _\\mu (x).$ Let us define the following first order differential operators for $1\\le j \\le d$ as $A_0=-\\partial _{\\rho }, \\quad A^*_0=\\partial _{\\rho }, \\ A_j=-\\frac{\\partial }{\\partial x_j}+x_j, \\ A_{-j}= A_j^*=\\frac{\\partial }{\\partial {x_j}}+x_j.$ Then we have $A_j \\Phi _{\\mu } =\\sqrt{2(\\mu _j+1)} \\Phi _{\\mu +e_j}, \\;\\text{and}\\; A_{-j} \\Phi _{\\mu } =\\sqrt{2\\mu _j} \\Phi _{\\mu -e_j},$ where $e_j$ is the $j$ th unit vector in $\\mathbb {N}^d$ .", "We use $A_j$ and its adjoint operator $A^{*}_j$ to factorize the operator $ H_{\\textup {par}}$ as follows, $H_{\\textup {par}}=\\frac{1}{2} \\sum _{j=0} ^d (A_j A_j^*+A_j^* A_j).$ Denote by $P_k$ the spectral projection to the $k$ th eigenspace of $-\\Delta +|x|^2$ , $P_kf(x)=\\int _{\\mathbb {R}^d} \\sum _{|\\mu |=k}\\Phi _\\mu (x)\\Phi _\\mu (x^{\\prime }) f(x^{\\prime })\\mathop {}\\!\\mathrm {d}x^{\\prime }.", "$ These projections are integral operators with kernels $\\Phi _k(x,x^{\\prime })=\\sum _{|\\mu |=k}\\Phi _\\mu (x)\\Phi _\\mu (x^{\\prime }).$ The useful Mehler formula for $ \\Phi _k(x,x^{\\prime })$ is $\\sum _{k=0} ^\\infty r^k \\Phi _k(x,x^{\\prime }) =\\pi ^{-d/2}(1-r^2)^{-d/2} e^{-\\frac{1}{2}\\frac{1+r^2}{1-r^2}(|x|^2+|x^{\\prime }|^2)+\\frac{2r x \\cdot x^{\\prime }}{1-r^2}},$ for $0<r<1$ , please refer to [3], [26]." 
], [ "Symbol class", "First, we recall the definition of the standard symbol class.", "We denote the spatial variable by $z=(\\rho ,x)$ , and the frequency variable by $\\omega = (\\tau ,\\xi )$ .", "Definition 2.1 ([22], [23]) Let $0\\le \\epsilon , \\delta \\le 1$ .", "We say a function $\\sigma (z,\\omega ) \\in C^{\\infty }(\\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1})$ is a symbol of order $m$ with type $(\\epsilon , \\delta )$ , written $\\sigma \\in S^m_{\\epsilon ,\\delta }$ if for all multi-indices $\\beta , \\gamma $ , there is a constant $C_{\\beta , \\gamma }$ such that $|D_{z}^{\\beta } D_{\\omega }^\\gamma \\sigma (z,\\omega )| \\le C_{\\beta , \\gamma }\\langle \\omega \\rangle ^{m-\\epsilon |\\gamma |+\\delta |\\beta |}.$ Definition 2.2 ([19]) We say a function $\\sigma (z, \\omega ) \\in C^{\\infty }(\\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1})$ belongs to $\\Gamma ^m$ if for all multi-indices $\\beta , \\gamma $ , there is constant $C_{\\beta , \\gamma }$ such that $|D_{z}^{\\beta } D_{\\omega }^\\gamma \\sigma ( z,\\omega )| \\le C_{\\beta , \\gamma }\\langle |z|+|\\omega |\\rangle ^{m- |\\beta |-|\\gamma |}.$ Proposition 2.3 ([27]) Let $\\alpha <0$ .", "The symbols of the negative fractional powers $H^{\\alpha }$ of the Hermite operators belong to $\\Gamma ^{2\\alpha }$ .", "For later use, we need introduce a new symbol class, adapted to the operator $H_{\\textup {par}}$ , in which the upper bounds also depend on partial spatial variable $x$ .", "Definition 2.4 We say that a function $\\sigma (\\rho , x, \\tau , \\xi ) \\in C^{\\infty }(\\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1})$ belongs to $G^m$ if for all multi-indices $\\beta , \\gamma $ , there is constant $C_{\\beta , \\gamma }$ such that $|D_{z}^{\\beta } D_{\\omega }^\\gamma \\sigma (\\rho , x, \\tau , \\xi )| \\le C_{\\beta , \\gamma }\\langle |x|+|\\omega |\\rangle ^{m- |\\beta |-|\\gamma |}.$ It is easy to see that $\\Gamma ^m \\subset G^m \\subset S_{1,0}^{m}$ for $m\\le 0$ .", "We quantize a symbol in $ \\sigma \\in G^m$ by defining $T_{\\sigma } f(z)=\\frac{1}{(2\\pi )^{d+1}} \\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{{\\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}}}\\textup {e}^{i (z-z^{\\prime }) \\omega } \\sigma (z, \\omega ) f(z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega .$ Using integration by parts, we know that $T_{\\sigma }$ is bounded from $\\mathcal {S}(\\mathbb {R} ^{d+1})$ to $\\mathcal {S}(\\mathbb {R} ^{d+1})$ if $ \\sigma \\in G^m$ .", "For $p\\in G^{m_1}, q \\in G^{m_2}$ , define $ p \\# q(z,\\omega )$ by the oscillatory integral $p \\# q(z, \\omega ) \\mathrel {\\mathop :}=&\\frac{1}{(2\\pi )^{d+1}} \\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{\\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1}} \\textup {e}^{-i z^{\\prime } \\omega ^{\\prime }} p(z, \\omega +\\omega ^{\\prime }) q(z+z^{\\prime }, \\omega ) \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }.$ Proposition 2.5 If $p\\in G^{m }$ , $q \\in G^{m^{\\prime }} $ then $ p \\# q\\in G^{m+m^{\\prime }}$ .", "In particular, if $m+m^{\\prime }\\le 0$ , then $p\\#q \\in S^0_{1,0}$ .", "It suffices to prove that $|p\\#q| \\lesssim \\langle |x|+|\\omega |\\rangle ^{m+m^{\\prime }}$ .", "In fact, the derivative estimates of $p\\#q$ will follow from the fact that $D^\\alpha p \\in G^{ 
m-|\\alpha |}$ and $D^\\beta q \\in G^{m^{\\prime }-|\\beta |}$ .", "By definition, we have for any sufficiently large $k$ that, $&p\\# q(z,\\omega ) = \\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{ \\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}} F \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }\\\\&= \\frac{1}{(2\\pi )^{d+1}} \\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{\\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}} \\!\\!\\textup {e}^{-i z^{\\prime } \\omega ^{\\prime }} \\langle \\nabla _{z^{\\prime }}\\rangle ^{2k} \\frac{ q(z+z^{\\prime }, \\omega )}{\\langle z^{\\prime }\\rangle ^{2k} }\\langle \\nabla _{\\omega ^{\\prime }}\\rangle ^{2k} \\frac{p(z, \\omega +\\omega ^{\\prime })}{ \\langle \\omega ^{\\prime }\\rangle ^{2k}} \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }.$ The integrand $F$ is controlled by $ \\frac{ p(z,\\omega +\\omega ^{\\prime }) q(z+z^{\\prime }, \\omega )}{\\langle z^{\\prime }\\rangle ^{2k} \\langle \\omega ^{\\prime }\\rangle ^{2k} } \\lesssim \\frac{\\langle |x| + |\\omega +\\omega ^{\\prime }| \\rangle ^m \\langle |x+x^{\\prime }|+|\\omega |\\rangle ^{m^{\\prime }} }{\\langle z^{\\prime }\\rangle ^{2k} \\langle \\omega ^{\\prime }\\rangle ^{2k} } .", "$ Case 1: $m\\ge 0,m^{\\prime }\\ge 0$ Peetre's inequality immediately gives for $k\\gg 1$ $ |p\\# q| \\lesssim \\langle |x|+|\\omega |\\rangle ^{m+m^{\\prime }} \\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{ \\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}}\\!", "\\langle z^{\\prime }\\rangle ^{m^{\\prime }-2k}\\langle \\omega ^{\\prime }\\rangle ^{m-2k}\\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime } \\lesssim \\langle |x|+|\\omega |\\rangle ^{m+m^{\\prime }} .", "$ Case 2: $m\\ge 0,m^{\\prime }<0$ We partition $ \\mathbb {R}^{d+1}_{z^{\\prime }} = A_< \\cup A_\\ge ,$ where $A_{<} = \\left\\lbrace |z^{\\prime }|<\\frac{|x|+|\\omega |}{2}\\right\\rbrace ,\\qquad A_{\\ge } = \\left\\lbrace |z^{\\prime }|\\ge \\frac{|x|+|\\omega |}{2}\\right\\rbrace ,$ Estimate On $A_<$: note that $|x+x^{\\prime }|+|\\omega |\\ge |x|-|x^{\\prime }|+|\\omega | \\ge \\frac{|x|+|\\omega |}{2}$ , we have $ \\langle |x+x^{\\prime }|+|\\omega |\\rangle ^{m^{\\prime }} \\lesssim \\langle |x| + |\\omega |\\rangle ^{m^{\\prime }} $ and $ \\left|\\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{A_<\\times \\mathbb {R}^{d+1}} F \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }\\right| \\lesssim \\langle |x| + |\\omega |\\rangle ^{m+m^{\\prime }} \\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{\\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}} \\langle z^{\\prime }\\rangle ^{-2k}\\langle \\omega ^{\\prime }\\rangle ^{-2k+m} \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }.", "$ Estimate On $A_\\ge $: we have $\\langle |x+x^{\\prime }|+|\\omega |\\rangle ^{m^{\\prime }}\\le 1$ and use the denominator to control the $\\langle |x| + |\\omega | \\rangle ^m$ term: $ \\left|\\int \\i in 
{2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{A_\\ge \\times \\mathbb {R}^{d+1} } F \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime } \\right|\\lesssim \\langle |x| + |\\omega |\\rangle ^{m+m^{\\prime }}\\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{\\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}} \\langle z^{\\prime }\\rangle ^{-2k-m^{\\prime }}\\langle \\omega ^{\\prime }\\rangle ^{-2k} \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }.$ Both bounds are controlled by $ \\langle |x|+ |\\omega | \\rangle ^{m+m^{\\prime }}$ if $k$ is large enough.", "Case 3: $m<0$ , $m^{\\prime }\\ge 0$ We decompose $\\mathbb {R}_{\\omega ^{\\prime }}^{d+1} = B_< \\cup B_\\ge $ , where $ B_{<} = \\left\\lbrace |\\omega ^{\\prime }|<\\frac{|x|+|\\omega |}{2}\\right\\rbrace , \\qquad B_{\\ge } = \\left\\lbrace |\\omega ^{\\prime }|\\ge \\frac{|x|+|\\omega |}{2}\\right\\rbrace .", "$ On the set $B_<$ , we have $\\langle |x|+|\\omega +\\omega ^{\\prime }|\\rangle ^{m} \\lesssim \\langle |x|+|\\omega |\\rangle ^{m}$ , and on the set $B_\\ge $ we have $\\langle \\omega ^{\\prime }\\rangle ^m \\lesssim \\langle |x|+|\\omega |\\rangle ^{m},$ so we can obtain the result by similar argument in Case 2 (despite the fact that the required upper bounds depend on $\\tau $ , but not on $\\rho $ ).", "Case 4: $m<0$ , $m^{\\prime }<0$ We use the decompositions as follows $ \\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1} =(A_<\\times B_< ) \\cup ( A_{<}\\times B_\\ge ) \\cup ( A_\\ge \\times B_<) \\cup ( A_\\ge \\times B_\\ge ).", "$ For the above decomposition, we have the following estimates: $A_<: &\\ \\langle |x+x^{\\prime }|+|\\omega |\\rangle ^{m^{\\prime }} \\lesssim \\langle |x| + |\\omega |\\rangle ^{m^{\\prime }},\\\\A_>: &\\ \\langle |x+x^{\\prime }|+|\\omega |\\rangle ^{m^{\\prime }} \\lesssim 1,\\\\B_<: & \\ \\langle |x|+|\\omega +\\omega ^{\\prime }|\\rangle ^{m} \\lesssim \\langle |x|+|\\omega |\\rangle ^{m},\\\\B_>: &\\ \\langle |x|+|\\omega +\\omega ^{\\prime }|\\rangle ^{m} \\lesssim 1.$ By combining the above estimates on the respective sets, we have $\\left| \\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{A_< \\times B_< } F \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime } \\right|\\lesssim \\langle |x| + |\\omega |\\rangle ^{m+m^{\\prime }}\\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{\\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}} \\langle z^{\\prime }\\rangle ^{-2k}\\langle \\omega ^{\\prime }\\rangle ^{-2k} \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime },\\\\\\left|\\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{A_\\ge \\times B_\\ge } \\!\\!F \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }\\right| \\lesssim \\langle |x| + |\\omega |\\rangle ^{m+m^{\\prime }}\\int \\i in {2,...,2}{\\mathchoice{\\hspace{-8.00003pt}}{\\hspace{-6.00006pt}}{\\hspace{-3.99994pt}}{\\hspace{-3.00003pt}}\\int }_{\\mathbb {R}^{d+1}\\times \\mathbb {R}^{d+1}}\\!\\!", "\\langle z^{\\prime }\\rangle ^{-2k-m^{\\prime }}\\langle \\omega ^{\\prime }\\rangle 
^{-2k-m} \\mathop {}\\!\\mathrm {d}z^{\\prime } \\mathop {}\\!\\mathrm {d}\\omega ^{\\prime }.$ The integrals over $A_<\\times B_\\ge $ and $A_\\ge \\times B_<$ are similar, and both of them are bounded by $\\langle |x| + |\\omega |\\rangle ^{m+m^{\\prime }}$ if $k$ is sufficiently large.", "This completes the proof.", "Fractional powers of the operator $ H_{\\textup {par}}$ Functional Calculus for the operator $ H_{\\textup {par}}$ Fix $z=(\\rho , x) \\in \\mathbb {R}^{d+1}$ .", "By the continuous Fourier transform in $\\rho \\in \\mathbb {R}$ and the discrete Hermite expansion in $x \\in \\mathbb {R}^{d}$ of the operator $H_{\\textup {par}}$ , we can write $H_{\\textup {par}}f$ for a function $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ as $H_{\\textup {par}}f(\\rho ,x)&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i \\rho \\tau }(\\tau ^2+2|\\mu |+d) (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&= \\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i \\rho \\tau }(\\tau ^2+2k+d)P_k \\mathcal {F}_{\\rho } f(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau ,$ where $\\mathcal {F}_{\\rho } f$ is the Fourier transform with respect to $\\rho $ , and $P_k$ is the projection operator in (REF ).", "Thus, for a Borel measurable function $F$ defined on $\\mathbb {R}_{+}$ , we can define the operator $F(H_{\\textup {par}})$ by the spectral theory as $F(H_{\\textup {par}})f(\\rho , x)=\\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i \\rho \\tau }F(\\tau ^2+2k+d)P_k \\mathcal {F}_{\\rho } f(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau ,$ so long as the right hand side makes sense.", "In particular, the heat semigroup $\\textup {e}^{-t{H_{\\textup {par}}}}$ can be defined by $\\textup {e}^{-t H_{\\textup {par}}}f(\\rho ,x)&= \\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}}\\textup {e}^{i\\tau \\rho } \\textup {e}^{-t(\\tau ^2+2|\\mu |+d) } ((\\mathcal {F}_{\\rho } f)(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{-t(\\tau ^2+2k+d) } {P_k}(\\mathcal {F}_{\\rho } f)(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau $ for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ .", "By Mehler's formula (REF ), the integral kernel of the operator $\\textup {e}^{-t H_{\\textup {par}}}$ is $E(t,z,z^{\\prime })=2^{-\\frac{d+2}{2}}\\pi ^{-\\frac{d+1}{2}}t^{-1/2}(\\sinh 2t)^{-d/2} \\textup {e}^{-B(t,z,z^{\\prime })},$ where $B(t, z, z^{\\prime })=\\frac{1}{4}(2\\coth 2t-\\tanh t)|x-x^{\\prime }|^2+\\frac{\\tanh t }{4}|x+x^{\\prime }|^2+\\frac{(\\rho -\\rho ^{\\prime })^2}{4t}.$ Fractional powers of the operator $H_{\\textup {par}}$ and the heat semigroup On the one hand, for $\\alpha \\in \\mathbb {R}$ , we can define the fractional powers $H_{\\textup {par}}^\\alpha $ on $C_0^\\infty (\\mathbb {R}^{d+1})$ by $H_{\\textup {par}}^{\\alpha }f(\\rho ,x)&=\\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2k+d)^{\\alpha } {P_k} (\\mathcal {F}_{\\rho } f)(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau .$ Simple calculation shows that the identity $H_{\\textup {par}}^{\\alpha } \\cdot H_{\\textup {par}}^{\\beta }f = H_{\\textup {par}}^{\\alpha +\\beta } f,$ holds for $ f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ and $\\alpha ,\\beta \\in \\mathbb {R}$ 
.", "On the other hand, we can also formulate the powers $ H_{\\textup {par}}^{ \\alpha }$ with $\\alpha \\in \\mathbb {R}$ by the semigroup $\\textup {e}^{-t H_{\\textup {par}}}$ and the Gamma function.", "Firstly, for any $\\alpha >0$ , the negative powers $ H_{\\textup {par}}^{-\\alpha }$ can be written as $H_{\\textup {par}}^{-\\alpha } f(\\rho , x) =\\frac{1}{ \\Gamma (\\alpha )} \\int _0^\\infty t^{\\alpha -1} \\textup {e}^{-t H_{\\textup {par}}} f(\\rho , x) \\mathop {}\\!\\mathrm {d}t.$ Notice that the integral kernel of the operator $H_{\\textup {par}}^{-\\alpha }$ is positive since the integral kernel (REF ) of $\\textup {e}^{-t H_{\\textup {par}}} $ is positive.", "Similarly for any $a\\in \\mathbb {R}$ and $d>- a$ , we have $(H_{\\textup {par}}+a)^{-\\alpha } f(\\rho , x) =\\frac{1}{ \\Gamma (\\alpha )} \\int _0^\\infty t^{\\alpha -1}\\textup {e}^{-t a} \\textup {e}^{-t H_{\\textup {par}}} f(\\rho , x) \\mathop {}\\!\\mathrm {d}t$ so long as the integral exists.", "We now express the positive fractional powers in terms of derivatives of the semigroup.", "Let $N$ be the smallest integer which is larger than $\\alpha $ .", "Using the identity $-H_{\\textup {par}}\\textup {e}^{-tH_{\\textup {par}}}=\\frac{\\mathop {}\\!\\mathrm {d}}{\\mathop {}\\!\\mathrm {d}t} \\textup {e}^{-tH_{\\textup {par}}}$ and (REF ), we have for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ that $H_{\\textup {par}}^{\\alpha } f(\\rho , x)=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{0}^\\infty t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} \\textup {e}^{-tH_{\\textup {par}}}f(\\rho , x) \\mathop {}\\!\\mathrm {d}t.$ We have the following properties between $H_{\\textup {par}}^{\\alpha }$ and $A_{\\pm j}$ for $1\\le j\\le d$ .", "Lemma 3.1 For any $\\alpha \\in \\mathbb {R}$ , $d\\ge 3$ , $1\\le j\\le d$ , and $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ , we have $A_0 H_{\\textup {par}}^{\\alpha } f &= H_{\\textup {par}}^{\\alpha } A_0f,\\\\A_j H_{\\textup {par}}^{\\alpha }f &=( H_{\\textup {par}}-2)^{\\alpha } A_j f, & A_{-j} H_{\\textup {par}}^{\\alpha } f =( H_{\\textup {par}}+2)^\\alpha A_{-j} f,\\\\H_{\\textup {par}}^{\\alpha } A_j f&= A_j ( H_{\\textup {par}}+2)^\\alpha f,& H_{\\textup {par}}^{\\alpha } A_{-j} f =A_{-j} ( H_{\\textup {par}}-2)^{\\alpha }f.$ The first result is trivial.", "For $1\\le j\\le d$ , we only give the details for $ A_j H_{\\textup {par}}^{\\alpha }f =( H_{\\textup {par}}-2)^{\\alpha } A_j f$ , as the other cases are dealt with in analoge argument.", "By the definition of $H_{\\textup {par}}^{\\alpha }$ , we have $H_{\\textup {par}}^{\\alpha } f(\\rho , x) =\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau .$ On one hand, by (REF ), we have $&A_j H_{\\textup {par}}^{\\alpha } f(\\rho , x)\\\\&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot ))A_j \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\cdot \\sqrt{2(\\mu _j+1)}\\Phi _{\\mu +e_j}(x).$ On the other hand, by the fact that $A_{-j}=A_j^*$ , we 
obtain $A_jf(\\rho , x)&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(A_j\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), A_{-j}\\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu -e_j}(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\cdot \\sqrt{2\\mu _j} \\Phi _{\\mu } (x).$ By (REF ), we have for a function $g\\in C_0^\\infty (\\mathbb {R}^{d+1})$ that $( H_{\\textup {par}}-2)^{\\alpha } g=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d-2)^\\alpha (\\mathcal {F}_\\rho g(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau ,$ which implies by choosing $g=A_jf$ that $& ( H_{\\textup {par}}-2)^{\\alpha } A_jf(\\rho , x)\\\\&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d-2)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu -e_j}(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\sqrt{2\\mu _j} \\Phi _{\\mu } (x)\\\\&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\sqrt{2\\mu _j+2} \\Phi _{\\mu +e_j} (x)\\\\&= A_j H_{\\textup {par}}^{\\alpha }f(\\rho , x).$ We finish the proof.", "Properties of negative fractional powers of $H_{\\textup {par}}$ In this subsection, we explore the properties of the negative powers of the operator $H_{\\textup {par}}$ .", "The first result is: Proposition 3.2 Given $\\alpha >0$ , the operator $ H_{\\textup {par}}^{-\\alpha }$ has the integral representation $H_{\\textup {par}}^{-\\alpha } f(z)=\\int _{\\mathbb {R}^{d+1}}{K_{\\alpha }}(z,z^{\\prime }) f(z^{\\prime })\\mathop {}\\!\\mathrm {d}z^{\\prime } $ for all $f \\in C_0^\\infty (\\mathbb {R}^{d+1})$ .", "Moreover, there exist a functions $\\Psi _\\alpha \\in L^1(\\mathbb {R}^{d+1})$ and a constant $C>0$ such that ${K_{\\alpha }}(z, z^{\\prime }) \\le C \\Psi _\\alpha (z-z^{\\prime }), \\ \\text{for all}\\ z, z^{\\prime } \\in \\mathbb {R}^{d+1}.", "$ Hence, $H_{\\textup {par}}^{-\\alpha }$ is well defined and bounded on $L^p(\\mathbb {R}^{d+1})$ for $p \\in [1, +\\infty ]$ .", "By (REF ) and (REF ), we have $ K_{\\alpha }(\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) = \\frac{C_d}{\\Gamma (\\alpha )} \\int _0^\\infty t^{\\alpha -1-1/2} (\\sinh 2t)^{-d/2} \\textup {e}^{-B(t, z, z^{\\prime })} \\mathop {}\\!\\mathrm {d}t.$ We decompose ${K_{\\alpha }}(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ into two parts, $K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _0^1 t^{\\alpha -1} E(t,\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\mathop {}\\!\\mathrm {d}t, \\\\K_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _1^\\infty t^{\\alpha -1}E(t,\\rho ,x,\\rho ^{\\prime },x^{\\prime }) \\mathop {}\\!\\mathrm {d}t.$ We firstly estimate the term $ K_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })$ .", "Together the inequality that $2\\coth 2t -\\tanh t >\\coth 2t > 
1$ , with the fact that $\\tanh t \\sim 1, \\; \\sinh 2t \\sim \\textup {e}^{2t },\\; \\coth 2t \\sim \\textup {e}^{2t},\\;\\text{as}\\; t \\rightarrow \\infty ,$ we have $|K_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })|&\\le C \\textup {e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2} \\int _1^\\infty t^{\\alpha -3/2}\\textup {e}^{-td}\\mathop {}\\!\\mathrm {d}t\\\\&\\le C \\textup {e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2}.$ We next estimate $K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ .", "We further split it into two cases.", "Case 1: $(z, z^{\\prime }) \\in D_{+}\\mathrel {\\mathop :}=\\lbrace (z, z^{\\prime }) \\in \\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1}; \\; |z-z^{\\prime }|\\ge 1\\rbrace $ .", "Using the fact that $\\sinh 2t \\sim 2t$ , $\\coth 2t \\sim \\frac{1}{2t}$ and $\\tanh t\\sim t$ as $t\\rightarrow 0$ , we have $|K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })|&\\lesssim \\int _0^1 t^{\\alpha -1-\\frac{d+1}{2}} \\textup {e}^{-\\frac{1}{8t}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]} \\mathop {}\\!\\mathrm {d}t\\\\&\\lesssim \\textup {e}^{-\\frac{1}{16}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]} \\int _{0}^1 t^{\\alpha -1-\\frac{d+1}{2}} \\textup {e}^{-\\frac{1}{16 t}} \\mathop {}\\!\\mathrm {d}t\\\\& \\lesssim \\textup {e}^{-\\frac{1}{16}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]}.$ Case 2: $(z, z^{\\prime }) \\in D_{-}\\mathrel {\\mathop :}=\\lbrace (z, z^{\\prime }) \\in \\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1}; \\; |z-z^{\\prime }|<1\\rbrace $ .", "If $\\alpha <\\frac{d+1}{2}$ , we have $|K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })|&\\le \\int _0^1 t^{\\alpha -1-\\frac{d+1}{2}} \\textup {e}^{-\\frac{1}{8t}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]} \\mathop {}\\!\\mathrm {d}t\\\\&\\lesssim \\frac{1}{[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]^{(d+1)/2-\\alpha }};$ if $\\alpha =\\frac{d+1}{2}$ , we have $|K_{\\alpha }^1(\\rho , ,x, \\rho ^{\\prime }, x^{\\prime })| \\lesssim \\log [|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2];$ and finally, if $\\alpha >\\frac{d+1}{2}$ , we have $|K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })|\\lesssim 1.$ It follows that (REF ) holds with the integrable function $\\Psi _\\alpha $ defined by $\\Psi _\\alpha (z-z^{\\prime })={\\left\\lbrace \\begin{array}{ll}\\mathbf {1}_{D_-}|z-z^{\\prime }|^{2\\alpha -(d+1)}+ \\mathbf {1}_{D_+}\\textup {e}^{-\\frac{1}{16}|z-z^{\\prime }|^2}, & \\alpha <\\frac{d+1}{2},\\\\\\mathbf {1}_{D_-} \\log |z-z^{\\prime }|+ \\mathbf {1}_{D_+}\\textup {e}^{-\\frac{1}{16}|z-z^{\\prime }|^2}, & \\alpha =\\frac{d+1}{2},\\\\\\mathbf {1}_{D_-} + \\mathbf {1}_{D_+}\\textup {e}^{-\\frac{1}{16}|z-z^{\\prime }|^2}, & \\alpha >\\frac{d+1}{2},\\end{array}\\right.", "}$ which completes the proof.", "By (REF ), we can obtain similar estimates for the integral kernels of the operators $(H_{\\textup {par}}+2)^{-\\alpha }$ and $(H_{\\textup {par}}-2)^{-\\alpha }$ : Corollary 3.3 Assume that $\\alpha >0$ .", "Let $M_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ and $N_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ be the integral kernels of the operators $(H_{\\textup {par}}+2)^{-\\alpha }$ and $(H_{\\textup {par}}-2)^{-\\alpha }$ respectively.", "Then, we have $M_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\le {K_{\\alpha }}(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })\\le \\Psi _\\alpha (z-z^{\\prime }),$ and $N_\\alpha (\\rho , x, \\rho 
^{\\prime } ,x^{\\prime }) \\le \\Psi _\\alpha (z-z^{\\prime }), \\quad \\text{if}\\;\\; d\\ge 3.$ By (REF ), the estimate of $M_\\alpha $ is trivial.", "It suffices to show the estimate of $N_\\alpha $ .", "By definition, we have $N_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime })= \\frac{1}{\\Gamma (\\alpha )}\\int _0^\\infty t^{\\alpha -1} e^{2t} E(t,\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\mathop {}\\!\\mathrm {d}t.$ We once again decompose $N_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ into two parts as follows.", "$N_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _0^1 t^{\\alpha -1} e^{2t}E(t,\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\mathop {}\\!\\mathrm {d}t, \\\\N_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _1^\\infty t^{\\alpha -1}e^{2t}E(t,\\rho ,x,\\rho ^{\\prime },x^{\\prime }) \\mathop {}\\!\\mathrm {d}t.$ For the term $ N_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ , we can proceed by the same way as the estimate of $K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ , we omit the details here.", "For $N_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ term, we obtain for $d\\ge 3$ that $|N_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })|&\\le C \\textup {e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2} \\int _1^\\infty t^{\\alpha -3/2}\\textup {e}^{-td}e^{2t}\\mathop {}\\!\\mathrm {d}t\\\\&\\le C \\textup {e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2},$ which completes the proof.", "Remark 3.4 We make some comments about the condition that $d > - a=2$ for the second result of $N_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })$ in the above corollary.", "Since the integral $\\int _1^\\infty t^{\\alpha -3/2}e^{2t}e^{-t} \\mathop {}\\!\\mathrm {d}t$ is divergent, we cannot obtain similar estimates for $(H_{\\textup {par}}-2)^{-\\alpha }$ for the case $d=1$ .", "It is consistent with the spectral property that $\\sigma (H_{\\textup {par}}-2)=[-1, \\infty )$ when $d=1$ .", "When $d=2$ , the spectral property is that $\\sigma (H_{\\textup {par}}-2)=[0, \\infty )$ , and we have different behavior depending on the power $\\alpha $ .", "If $0<\\alpha <1/2$ , the kernel of $(H_{\\textup {par}}-2)^{-\\alpha }$ has exponential decay as $|z-z^{\\prime }| \\rightarrow \\infty $ since $\\int _1^\\infty t^{\\alpha -3/2}e^{2t}e^{-2t} \\mathop {}\\!\\mathrm {d}t$ is convergent, and if $\\alpha \\ge 1/2$ , no useful result can be derived.", "In addition, we also have the following boundedness result for the negative fractional powers of the operator $H_{\\textup {par}}$ .", "Proposition 3.5 Let $p\\in [1, \\infty ]$ and $\\alpha >0$ .", "Then the weighted operator $|x|^{2\\alpha } H_{\\textup {par}}^{-\\alpha }$ is bounded on $L^p(\\mathbb {R}^{d+1})$ .", "By Schur's lemma [8], we only need to verify that $\\sup _{z} |x|^{2\\alpha } \\int _{\\mathbb {R}^{d+1}} |{K_{\\alpha }}(z, z^{\\prime })| \\mathop {}\\!\\mathrm {d}z^{\\prime } \\lesssim 1, $ and $\\sup _{z^{\\prime }} \\int _{\\mathbb {R}^{d+1}} |x|^{2\\alpha } |{K_{\\alpha }}(z, z^{\\prime })| \\mathop {}\\!\\mathrm {d}z \\lesssim 1.$ By the boundedness in Proposition REF , we may assume $|x|\\ge 2$ .", "Firstly, we prove (REF ).", "We partition $\\mathbb {R}^{d}=E_x \\cup E_x^c$ , where $E_x=\\lbrace x^{\\prime }\\in \\mathbb {R}^{d}: |x|>2|x-x^{\\prime }|\\rbrace .$ When $x^{\\prime }\\in E^c_x$ , i.e., $|x|\\le 
2|x-x^{\\prime }|$ .", "By the fact that $2\\coth 2t-\\tanh t \\ge \\frac{1}{2}(2\\coth 2t-\\tanh t)+\\frac{1}{4},$ we have $\\quad K_\\alpha (\\rho , x, \\rho ^{\\prime }, x^{\\prime })& \\lesssim \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2}\\int _0^\\infty t^{\\alpha -1-1/2} (\\sinh 2t)^{-d/2} \\textup {e}^{-\\frac{1}{2}(B(t, \\rho , \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} )} \\mathop {}\\!\\mathrm {d}t\\\\& \\lesssim \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2} {K_{\\alpha }}\\left(\\rho , \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} \\right).$ So it follows that $& \\quad |x|^{2\\alpha } \\int _\\mathbb {R} \\int _{E_x^c} |{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) |\\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\lesssim \\int _{\\mathbb {R}^{d+1}}\\underbrace{ |x-x^{\\prime }|^{2\\alpha } \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2}}_{<\\; \\infty }\\Big |{K_{\\alpha }}\\Big (\\rho , \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} \\Big ) \\Big |\\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\lesssim 1.$ When $x^{\\prime } \\in E_x$ , i.e., $|x|> 2|x-x^{\\prime }|$ , we once again decompose $K_\\alpha =K^1_\\alpha +K^2_\\alpha $ as in the proof of Proposition REF .", "For $K^2_{\\alpha }(z, z^{\\prime })$ term.", "Since $|x|>2|x-x^{\\prime }|$ implies $|x|<|x+x^{\\prime }|$ , we have $&\\quad |x|^{2\\alpha }\\int _{\\mathbb {R}} \\int _{E_x} |x|^{2\\alpha }\\textup {e}^{-\\frac{1}{8}|x+x^{\\prime }|^2} \\textup {e}^{-\\frac{1}{4}|x-x^{\\prime }|^2} \\textup {e}^{-(\\rho -\\rho ^{\\prime })^2} \\mathop {}\\!\\mathrm {d}x^{\\prime }\\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\le \\int _{\\mathbb {R}} \\int _{\\mathbb {R}^d} \\underbrace{|x|^{2\\alpha }\\textup {e}^{-\\frac{1}{8}|x|^2}}_{\\lesssim \\; 1} \\textup {e}^{-\\frac{1}{4}|x-x^{\\prime }|^2} \\textup {e}^{-(\\rho -\\rho ^{\\prime })^2} \\mathop {}\\!\\mathrm {d}x^{\\prime }\\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\lesssim 1.$ As for the $K_\\alpha ^1 (z, z^{\\prime })$ term, we have $&\\int _{\\mathbb {R}} \\int _{E_x} |x|^{2\\alpha } \\int _0^1 t^{\\alpha -\\frac{d+3}{2}} \\textup {e}^{-\\frac{1}{8 t}|x-x^{\\prime }|^2-\\frac{1}{4}t|x|^2 -\\frac{(\\rho -\\rho ^{\\prime })^2}{4t}} \\mathop {}\\!\\mathrm {d}t \\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\le |x| \\int _{\\mathbb {R}} \\int _{0}^{|x|^2} \\int _0^{|x|^2} u^{\\alpha -\\frac{d+3}{2}} \\textup {e}^{-\\frac{|x|^2}{8 u}(\\rho -\\rho ^{\\prime })^2} \\textup {e}^{-\\frac{r^2}{4 u} +\\frac{1}{4}u} \\mathop {}\\!\\mathrm {d}u r^d \\mathop {}\\!\\mathrm {d}r\\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\le |x| \\int _{0}^{|x|^2} \\int _0^{|x|^2} u^{\\alpha -\\frac{d+3}{2}}\\underbrace{ \\int _{\\mathbb {R}} \\textup {e}^{-\\frac{|x|^2}{8 u}(\\rho -\\rho ^{\\prime })^2} \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }}_{\\lesssim \\; u^{1/2}|x|^{-1}}\\textup {e}^{-\\frac{r^2}{4 u} +\\frac{1}{4}u} \\mathop {}\\!\\mathrm {d}u r^d \\mathop {}\\!\\mathrm {d}r\\\\&\\lesssim \\int _{0}^{|x|^2} \\int _0^{|x|^2} u^{\\alpha -\\frac{d+2}{2}} \\textup {e}^{-\\frac{r^2}{4 u} +\\frac{1}{4}u} \\mathop {}\\!\\mathrm {d}u \\, r^d \\mathop {}\\!\\mathrm {d}r <\\infty .$ This completes the proof of (REF ).", "To prove (REF ), we similarly partition $\\mathbb {R}^{d}=E_{x^{\\prime }}\\cup E_{x^{\\prime }}^c$ , with $E_{x^{\\prime }}=\\lbrace x\\in 
\\mathbb {R}^d: |x^{\\prime }|\\ge 2|x-x^{\\prime }|\\rbrace ,$ and write $\\int _{\\mathbb {R}^{d+1}} |x|^{2\\alpha }{K_{\\alpha }}(z, z^{\\prime }) \\mathop {}\\!\\mathrm {d}z=\\bigg (\\int _{\\mathbb {R}} \\int _{E_{x^{\\prime }}}+\\int _{\\mathbb {R}} \\int _{E_{x^{\\prime }}^c}\\bigg )|x|^{2\\alpha }{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho .$ When $x\\in E_{x^{\\prime }}$ , i.e., $|x^{\\prime }| \\ge 2|x-x^{\\prime }|$ , we have $|x| \\le \\frac{3}{2}|x^{\\prime }|$ .", "Since the kernel is symmetric in $z$ and $z^{\\prime }$ , that is, ${K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })={K_{\\alpha }}(\\rho ^{\\prime }, x^{\\prime }, \\rho , x)$ , we have $\\int _{\\mathbb {R}} \\int _{E_{x^{\\prime }}}|x|^{2\\alpha }{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\lesssim |x^{\\prime }|^{2\\alpha } \\int _{\\mathbb {R}} \\int _{|x^{\\prime }|>2|x-x^{\\prime }|} |{K_{\\alpha }}(\\rho ^{\\prime }, x^{\\prime }, \\rho , x) |\\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho .$ By (REF ), the right hand side of the above inequality is finite.", "When $x\\in E^c_{x^{\\prime }}$ , we further split the integral domain into two cases.", "When $x\\in E^c_{x^{\\prime }}$ and $|x-x^{\\prime }|<1$ , it follows that $|x|\\le |x^{\\prime }|+|x-x^{\\prime }| \\le 4$ .", "Hence, $\\int _{\\mathbb {R}} \\int _{|x^{\\prime }|<2|x-x^{\\prime }|}|x|^{2\\alpha }|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })| \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho &\\lesssim \\int _{\\mathbb {R}} \\int _{|x^{\\prime }|<2|x-x^{\\prime }|}|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })| \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\\\&\\lesssim 1.$ When $x\\in E^c_{x^{\\prime }}$ and $|x-x^{\\prime }|>1$ , we have $&\\quad \\int _{\\mathbb {R}} \\int _{\\lbrace |x^{\\prime }|<2|x-x^{\\prime }|, \\ |x-x^{\\prime }|\\ge 1\\rbrace }|x|^{2\\alpha }|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })| \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\\\&\\lesssim \\int _{\\mathbb {R}} \\int _{|x-x^{\\prime }|\\ge 1}|x-x^{\\prime }|^{2\\alpha }|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) |\\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\\\&\\lesssim \\int _{\\mathbb {R}^{d+1}}\\underbrace{ |x-x^{\\prime }|^{2\\alpha } \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2}}_{\\lesssim \\; 1}\\Big |{K_{\\alpha }}\\Big ({\\rho }, \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} \\Big ) \\Big |\\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime } \\\\&\\lesssim 1.$ This proves (REF ) and completes the proof.", "Symbols for fractional powers $H_{\\textup {par}}^{\\alpha }$ Using the heat kernel representation of the fractional powers operator $H_{\\textup {par}}^{\\alpha }$ in Subsection REF , we firstly calculate the symbol of the heat semigroup $\\textup {e}^{-t H_{\\textup {par}}}$ .", "Together (REF ), the fact that $\\widehat{\\Phi }_{\\mu }=(-i)^{|\\mu |} \\Phi _{\\mu }$ , with Plancherel's theorem, we have $&\\textup {e}^{-t H_{\\textup {par}}}f(\\rho , x)\\\\&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{-\\tau ^2 t} \\textup {e}^{-(2|\\mu |+d)t} \\textup {e}^{-\\frac{\\pi i}{4}2|\\mu |}(\\mathcal {F}_{\\rho , x}f(\\tau , \\xi ), \\Phi _{\\mu }(\\xi 
))\\Phi _{\\mu }(x) \\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\frac{1}{(2\\pi )^{(d+1)/2}}\\int _{\\mathbb {R}^{d+1}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{i\\xi \\cdot x} p_t(\\rho , x, \\tau , \\xi ) \\mathcal {F}_{\\rho , x}f(\\tau , \\xi ) \\mathop {}\\!\\mathrm {d}\\tau \\mathop {}\\!\\mathrm {d}\\xi ,$ where $p_t(\\rho , x, \\tau , \\xi )=\\sum _{\\mu } \\textup {e}^{-i \\xi \\cdot x}\\textup {e}^{-\\tau ^2 t} \\textup {e}^{-(2|\\mu |+d)t} \\textup {e}^{-\\frac{\\pi i}{4}2|\\mu |} \\Phi _{\\mu }(\\xi ) \\Phi _{\\mu }(x).$ In view of Mehler's formula (REF ), we have $p_t(\\rho , x, \\tau , \\xi )= c_d (\\cosh 2t )^{-\\frac{d}{2}} \\textup {e}^{-b(t, x, \\tau , \\xi )},$ where $c_d={(2\\pi )^{-d/2}}$ and $b(t, x, \\tau , \\xi ) &=\\frac{1}{2} (|x|^2+|\\xi |^2)\\tanh 2t +2i x \\cdot \\xi \\operatorname{sech}2t (\\sinh t)^2 +t \\tau ^2.$ Now we have Lemma 3.6 Let $\\alpha \\in \\mathbb {R}$ .", "The symbol $\\sigma _{\\alpha }(\\rho , x, \\tau , \\xi )$ of the operator $H_{\\textup {par}}^{\\alpha }$ belongs to the symbol class $G^{2\\alpha }$ defined by Definition REF .", "From the explicit formula of $p_t$ , we know that $\\sigma _\\alpha $ does not depend on $\\rho $ .", "Thus, the estimates for $\\sigma _\\alpha $ do not depend on $\\rho $ and its derivatives on $\\rho $ are zero.", "For brevity, we will write $\\sigma _{\\alpha }( x, \\tau , \\xi )$ instead of $\\sigma _{\\alpha }(\\rho , x, \\tau , \\xi )$ , and similarly for other terms which are independent of $\\rho $ .", "The case $\\alpha =0$ is obvious.", "For the case $\\alpha <0$ .", "Using the equality of (REF ), we have $\\sigma _{\\alpha }( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _0^\\infty t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t.$ We split $ \\sigma _{\\alpha }( x, \\tau , \\xi )$ into two parts as follows.", "$\\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _0^1 t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t,\\\\\\sigma _{\\alpha }^2( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _1^\\infty t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t.$ To estimate $\\sigma _{\\alpha }^1$ , on the one hand, by the lower bound estimate for the real part of $b$ as $t\\in (0, 1)$ , $\\operatorname{Re}b(t, x, \\tau , \\xi ) \\ge c t (|x|^2+|\\xi |^2+\\tau ^2),$ we have $|\\sigma _{\\alpha }^1( x, \\tau , \\xi )|\\le \\; \\frac{1}{ \\Gamma (-\\alpha )} \\int _0^1 t^{-\\alpha -1} e^{- c(|x|^2+|\\xi |^2+\\tau ^2) t} \\mathop {}\\!\\mathrm {d}t\\;&\\le (1+|x|^2+|\\xi |^2+\\tau ^2)^\\alpha .$ On the other hand, by (REF ), the derivatives of $b(t, x, \\tau , \\xi )$ satisfy $\\partial _{x_j} b(t, x, \\tau , \\xi )&=x_j \\tanh 2t +2i \\xi _j\\operatorname{sech}2t (\\sinh t)^2 \\sim x_j t + \\xi _jt^2,\\\\\\partial _{x_j} ^2b(t, x, \\tau , \\xi )&= \\tanh 2t \\sim t,\\\\\\partial _{\\xi _j} b(t, x, \\tau , \\xi )&= \\xi _j \\tanh 2t +2i x_j\\operatorname{sech}2t (\\sinh t)^2 \\sim \\xi _jt + x_jt^2,\\\\\\partial _{\\xi _j} ^2b(t, x, \\tau , \\xi )&= \\tanh 2t \\sim t,\\\\\\partial _{\\tau }b(t, x, \\tau , \\xi )&= 2\\tau t, \\quad \\partial _{\\tau }^2b(t, x, \\tau , \\xi )= 2 t,$ as $t\\rightarrow 0$ .", "In conclusion, we obtain with the shorthand $X=(x,\\tau ,\\xi )$ that $|\\partial _X^\\beta b(t, x, \\tau , \\xi ) | \\lesssim {\\left\\lbrace \\begin{array}{ll}|X|^{2-|\\beta |}t, &|\\beta | \\le 2, \\\\0, & |\\beta | \\ge 3.\\end{array}\\right.", "}$ By using Faà di Bruno's formula, we obtain $\\partial _X^{\\beta } 
\\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _0^1 t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\prod _{\\begin{array}{c}1\\le j \\le |\\beta | \\\\\\sum _j|\\beta _j|n_j=|\\beta |\\end{array}} ( \\partial _{X}^{\\beta _j}b)^{n_j}\\mathop {}\\!\\mathrm {d}t.$ Hence, for any $|\\beta | \\ge 1$ , we get $|\\partial _X^{\\beta } \\sigma _{\\alpha }^1( x, \\tau , \\xi )|&\\lesssim \\int _0^1 t^{-\\alpha -1} e^{-c t |X|^2} \\prod _{j=1}^{|\\beta |}t^{n_j} \\mathop {}\\!\\mathrm {d}t \\prod _{\\begin{array}{c}1\\le j \\le |\\beta | \\\\\\sum _j|\\beta _j|n_j=|\\beta |\\end{array}} |X|^{2n_j-|\\beta _j|n_j}\\\\&\\lesssim \\langle X\\rangle ^{2\\alpha -|\\beta |}.$ That is, $\\sigma _{\\alpha }^1 \\in G^{2\\alpha }$ for all $\\alpha <0$ .", "To estimate $\\sigma _{\\alpha }^2$ , since $\\cosh t \\sim e^{t}$ , and $\\tanh t \\ge t$ as $t\\rightarrow \\infty $ , we have $\\operatorname{Re}b(t, x, \\tau , \\xi ) \\ge ct |X|^2,$ which implies that $| \\sigma _{\\alpha }^2( x, \\tau , \\xi )| \\lesssim \\int _1^\\infty t^{-\\alpha -1} e^{-td } e^{-c |X|^2} \\mathop {}\\!\\mathrm {d}t\\lesssim e^{-|X|^2}.$ When we take partial derivatives in $X=(x,\\tau ,\\xi )$ , we only change the degree of the polynomials in $X$ .", "The dominating term is still $e^{-|X|^2}.$ Hence, we have $\\sigma _\\alpha ^2 \\in G^{\\alpha }$ .", "For the case $\\alpha >0$ .", "By (REF ), we obtain the symbol of the operator $H_{\\textup {par}}^\\alpha $ , $\\sigma _{\\alpha }( x, \\tau , \\xi )=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{0}^\\infty t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} p_t( x, \\tau , \\xi )\\mathop {}\\!\\mathrm {d}t.$ Arguing as in the case $\\alpha <0$ , we split it into two parts $\\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{0}^1t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} p_t( x, \\tau , \\xi )\\mathop {}\\!\\mathrm {d}t,\\\\\\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{1}^\\infty t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} p_t( x, \\tau , \\xi )\\mathop {}\\!\\mathrm {d}t.$ Note that $&\\quad \\partial _t b (t, x, \\tau , \\xi )\\\\&= 2 (|x|^2+|\\xi |^2) \\operatorname{sech}^2 2t+\\tau ^2+ 2ix\\cdot \\xi [ -2 \\operatorname{sech}^2 2t \\sinh 2t \\sinh ^2 t+\\tanh 2t] \\\\&= 2 (|x|^2+|\\xi |^2) \\operatorname{sech}^2 2t+\\tau ^2 + 2ix\\cdot \\xi \\operatorname{sech}^2 2t \\sinh 2t.$ Thus, for $t\\in (0,1)$ we have $|\\partial _t (\\cosh 2t )^{-d/2}| \\lesssim 1, \\quad |\\partial _t b (t, x, \\tau , \\xi )| \\lesssim |X|^2,$ and $|\\partial _t^N (\\cosh 2t )^{-d/2}|&\\lesssim 1, \\quad |\\partial _t ^N b (t, x, \\tau , \\xi ) | \\lesssim |X|^{2N}.$ To compute the derivative estimates of $p_t( x, \\tau , \\xi )$ in the term $ \\sigma _{\\alpha }^1( x, \\tau , \\xi )$ , we use the above estimates, Leibniz rule and Faà di Bruno's formula, and obtain that $| \\sigma _{\\alpha }^1( x, \\tau , \\xi )| &\\lesssim \\int _{0}^1 t^{N-\\alpha -1} e^{-c|X|^2} |X|^{2N}\\mathop {}\\!\\mathrm {d}t\\lesssim \\langle X\\rangle ^{2\\alpha }.$ For the derivative estimates, we have $|\\partial _X^{\\beta } \\sigma _{\\alpha }^1( x, \\tau , \\xi )|&\\lesssim \\int _0^1 t^{N-\\alpha -1} e^{-c t |X|^2} \\prod _{\\begin{array}{c}1\\le j \\le |\\beta | \\\\\\sum _j|\\beta _j|n_j=|\\beta |\\end{array}}|X|^{2N} |X|^{2n_j-|\\beta _j|n_j} \\\\&\\lesssim \\langle X\\rangle ^{2\\alpha -|\\beta |}.$ For 
the term $\\sigma _{\\alpha }^2$ , the exponential decay can be obtained as the case $\\alpha <0$ .", "Summing up, we have $\\sigma _\\alpha \\in G^{2\\alpha }$ for all $\\alpha \\in \\mathbb {R}$ , and this concludes the proof.", "Riesz transforms and symbols The $j$ th Riesz transforms associated with the operator $ H_{\\textup {par}}$ are defined as $R_j= A_jH_{\\textup {par}}^{-1/2}, \\quad -d\\le j\\le d.$ In general, for any $m\\in \\mathbb {N}$ and $\\mathbf {j}=(j_{1},\\dots ,j_m)$ , $-d \\le j_l\\le d$ , the $\\mathbf {j}$ th Riesz transform of order $m$ is the operator $R_{\\mathbf {j}} = R_{j_1,\\dots ,j_m} &=A_{j_1}A_{j_2}\\dots A_{j_m} H_{\\textup {par}}^{-m/2}\\nonumber \\\\&=P_m(\\partial _\\rho , \\partial _{x}, x) H_{\\textup {par}}^{-m/2},$ where $P_m$ is a polynomial of degree $m$ .", "In this subsection, we will prove that the Riesz transforms defined by (REF ) and (REF ) are bounded on classical Sobolev spaces by verifying that their symbols belong to the symbol class $S^0_{1,0}$ .", "There are two ways to show this.", "The most obvious way would be a direct calculation by the heat semigroup.", "The symbol of the operator $A_0 \\textup {e}^{-t H_{\\textup {par}}}$ is $-i\\tau p_t( x, \\tau , \\xi )$ , so the symbol of Riesz transform $R_0$ is $ \\sigma _{R_0}(\\rho , x, \\tau , \\xi )=\\frac{-1}{\\sqrt{\\pi }} \\int _0^\\infty t^{-\\frac{1}{2}}i\\tau p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t.$ For the symbol of the operator $A_j \\textup {e}^{-t H_{\\textup {par}}}$ , $1\\le j\\le d$ , we use the formula (REF ) to get $A_j \\textup {e}^{-t H_{\\textup {par}}} f(\\rho , x)=\\int _{\\mathbb {R}^{d+1}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{i\\xi \\cdot x} p_t( x, \\tau , \\xi )(-i\\xi _j+x_j+\\partial _{x_j} b ) \\widehat{f}(\\tau , \\xi ) \\mathop {}\\!\\mathrm {d}\\tau \\mathop {}\\!\\mathrm {d}\\xi .$ This shows that the symbol of the operator $A_j \\textup {e}^{-t H_{\\textup {par}}}$ is $p_t( x, \\tau , \\xi )(-i\\xi _j+x_j+\\partial _{x_j} b(t,\\rho ,x,\\tau ,\\xi )).$ So, the symbols for Riesz transforms $R_j$ for $1\\le j \\le d$ are $ \\quad \\sigma _{R_j}(\\rho , x, \\tau , \\xi )=\\frac{-1}{\\sqrt{\\pi }} \\int _0^\\infty t^{-\\frac{1}{2}}(i\\xi _j-x_j+\\partial _{x_j} b) p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}\\tau .$ Similar calculations give the symbols for Riesz transforms $R_j $ for $-d \\le j \\le -1$ .", "It is then possible to use the integral forms (REF ) and (REF ) in a lengthy calculation like in the proof of Lemma REF to show that they belong to the symbol class $S^0_{1,0}$ .", "The second, simpler and shorter way is to take advantage of the symbol calculus for compositions in $G^m$ , which we present below Proposition 3.7 The symbols $\\sigma _{R_j}$ of Riesz transforms $R_j$ for $0\\le |j |\\le d$ belongs to the symbol class $S^0_{1,0}$ , hence they are bounded on classical Sobolev spaces $W^{\\alpha ,p}(\\mathbb {R}^{d+1})$ for any $\\alpha \\in \\mathbb {R}$ and $1<p<\\infty $ .", "In addition, the same result holds for Riesz transforms $R_{\\mathbf {j}}$ of high order.", "The symbols of the operators $A_j$ are given by either $i\\tau $ or $\\pm i\\xi _j+x_j $ , which belong to class $G^{1}$ .", "From Proposition REF , the symbol of the operator $H_{\\textup {par}}^{-1/2}$ belongs to class $G^{-1}$ .", "By symbolic calculus in class $G^m$ (Proposition REF ), we obtain that symbols of Riesz transform $R_j$ for $0\\le |j |\\le d$ belong to class $S_{1,0}^{0}$ , which implies the boundedness of Riesz transform $R_j$ for 
$0\\le |j |\\le d$ on $W^{\\alpha , p}$ for all $1<p<\\infty $ , (see [22], [23]).", "For Riesz transform $R_{\\mathbf {j}}$ with high order, By Proposition REF and the fact that the symbols of the operators $A_{j_1}A_{j_2}\\dots A_{j_m} $ and $H_{\\textup {par}}^{-m/2}$ belongs to symbol classes $G^{m}$ and $G^{-m}$ respectively.", "Hence, the symbol of $R_{\\mathbf {j}}$ belongs to the symbol the symbol class $S_{1,0}^0$ , which implies the result and completes the proof.", "Sobolev spaces associated to the partial harmonic oscillator Given any $p\\in [1, \\infty )$ , and $\\alpha >0$ , we define the potential spaces associated to $ H_{\\textup {par}}$ by $L_{ H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1}) = H_{\\textup {par}}^{-\\alpha /2} (L^p(\\mathbb {R}^{d+1})),$ with the norm $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{\\alpha , p}} =\\Vert g\\Vert _{L^p(\\mathbb {R}^{d+1})},$ where $g\\in L^p(\\mathbb {R}^{d+1})$ satisfies $H_{\\textup {par}}^{-\\alpha /2}g=f$ .", "Remark 4.1 The norm is well defined since $ H_{\\textup {par}}^{-\\alpha /2}$ is one-to-one and bounded in $L^p(\\mathbb {R}^{d+1})$ .", "Also, $C_0^\\infty (\\mathbb {R}^{d+1}) $ is dense in $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1}) $ .", "For any nonnegative integer $k\\ge 0$ , we can also define the Sobolev spaces associated to $ H_{\\textup {par}}$ by the differential operators $A_j$ as follows: $W_{H_{\\textup {par}}}^{k, p}=\\left\\lbrace f\\in L^p(\\mathbb {R}^{d+1}) | \\begin{array}{c} \\displaystyle A_{j_1}A_{j_2}\\dots A_{j_m} f \\in L^p(\\mathbb {R}^{d+1}), \\\\[0.5em]\\displaystyle \\text{for any}\\;\\;1\\le m \\le k, \\ 0\\le |j_1|, \\dots , |j_m| \\le d\\end{array}\\right\\rbrace ,$ with the norm $\\Vert f\\Vert _{W_{H_{\\textup {par}}}^{k, p}}=\\sum _{m=1}^k\\Bigg (\\sum _{j_1=-d}^d\\dots \\sum _{j_m=-d}^d\\Vert A_{j_1}A_{j_2}\\dots A_{j_m} f\\Vert _{L^p}\\Bigg )+\\Vert f\\Vert _{L^p}.$ Theorem 4.2 Let $k \\in \\mathbb {N}$ and $p\\in (1, \\infty )$ .", "Then we have $W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})= L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ with equivalence of norms.", "We firstly prove that $ L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ .", "For any function $f\\in L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ , there exists a function $ g\\in L^{p}(\\mathbb {R}^{d+1})$ , such that $f=H_{\\textup {par}}^{-k/2} g$ .", "Hence, by the $L^p$ boundedness of Riesz transforms in Proposition REF , we have $\\Vert A_{j_1}A_{j_2}\\dots A_{j_m} f \\Vert _p = \\Vert A_{j_1}A_{j_2}\\dots A_{j_m} H_{\\textup {par}}^{-k/2} g\\Vert _{p}\\lesssim \\Vert g\\Vert _{p} \\le C \\Vert f\\Vert _{L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})},$ which implies that $ L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ .", "Next, we show that $ W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ by induction.", "First, it is easy to check that for any $f, g\\in C_0^\\infty (\\mathbb {R}^{d+1})$ , we have $\\int _{\\mathbb {R}^{d+1}} f g =2 \\int _{\\mathbb {R}^{d+1}} \\sum _{-d\\le j\\le d} R_j f R_j g.$ Thus, by duality and the boundedness of Riesz transform, we obtain for any $g\\in L^p(\\mathbb {R}^d)$ that $\\Vert g\\Vert _p \\lesssim \\sum _{-d\\le j\\le d} \\Vert R_j g\\Vert _p.$ Hence, we obtain by choosing $g=H_{\\textup {par}}^{1/2}f$ that $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{1, 
p}(\\mathbb {R}^{d+1})} =\\Vert H_{\\textup {par}}^{1/2}f\\Vert _p\\lesssim \\sum _{-d\\le j\\le d} \\Vert A_j f\\Vert _p \\lesssim \\Vert f\\Vert _{W_{H_{\\textup {par}}}^{1,p}(\\mathbb {R}^{d+1})}.$ That is, $ W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ for $k=1$ .", "Suppose that for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ and any $1\\le m<k$ , we have $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{m, p}(\\mathbb {R}^{d+1})}\\le \\Vert f\\Vert _{W^{m,p}_{H_{par}}(\\mathbb {R}^{d+1})}.$ It follows by duality that $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})}&=\\Vert H_{\\textup {par}}^{k/2}f\\Vert _{L^p(\\mathbb {R}^{d+1})} =\\sup _{g\\in C_0^\\infty ; \\Vert g\\Vert _{p^{\\prime }}=1} \\int _{\\mathbb {R}^{d+1}}H_{\\textup {par}}^{k/2}f g\\mathop {}\\!\\mathrm {d}z\\\\&=\\sup _{g\\in C_0^\\infty : \\Vert g\\Vert _{p^{\\prime }}=1} \\int _{\\mathbb {R}^{d+1}}H_{\\textup {par}}^{k}f H_{\\textup {par}}^{-k/2}g\\mathop {}\\!\\mathrm {d}z.$ Since there exist constants $c_1, c_2, \\dots , c_{k-1}$ such that $\\sum _{0\\le |j_1|, \\dots , |j_k| \\le d} A_{j_k}^*\\dots A_{j_1}^* A_{j_1}\\dots A_{j_k} = 2^k H_{\\textup {par}}^{k}+\\sum _{m=1}^{k-1} c_m H_{\\textup {par}}^m,$ we obtain that $&\\quad 2^k \\int _{\\mathbb {R}^{d+1}}H_{\\textup {par}}^{k}f H_{\\textup {par}}^{-k/2}g\\mathop {}\\!\\mathrm {d}z\\\\&= \\int _{\\mathbb {R}^{d+1}} \\sum _{0\\le |j_1|, \\dots , |j_k| \\le d}\\left( A_{j_k}^*\\dots A_{j_1}^* A_{j_1}\\dots A_{j_k}-\\sum _{m=1}^{k-1} c_m H_{\\textup {par}}^m\\right) f H_{\\textup {par}}^{-k/2} g\\\\&=\\!\\sum _{0\\le |j_1|, \\dots , |j_k| \\le d} \\int _{\\mathbb {R}^{d+1}}\\!\\!A_{j_1}\\dots A_{j_k} f R_{j_1}\\dots R_{j_k} g-\\sum _{m=1}^{k-1} c_m \\int _{\\mathbb {R}^{d+1}} H_{\\textup {par}}^{m/2} fH_{\\textup {par}}^{-\\frac{k-m}{2}} g\\\\&\\le \\!\\sum _{0\\le |j_1|, \\dots , |j_k| \\le d} \\!", "\\Vert A_{j_1}\\dots A_{j_k} f\\Vert _{p} \\Vert R_{j_1\\dots j_k} g\\Vert _{p^{\\prime }} +\\sum _{m=1}^{k-1} |c_m|\\Vert H_{\\textup {par}}^{m/2 } f\\Vert _p \\Vert H_{\\textup {par}}^{-\\frac{k-m}{2}} g\\Vert _{p^{\\prime }}\\\\&\\le C \\Vert f\\Vert _{W_{H_{\\textup {par}}}^{k,p}(\\mathbb {R}^{d+1})},$ where we used (REF ) and the boundedness of Riesz transforms in the last inequality.", "This completes the proof that $ W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ for $k\\ge 1$ .", "Proposition 4.3 Let $p \\in (1, \\infty ) $ .", "The Riesz transforms $R_j$ $(-d\\le j\\le d)$ are bounded on the space $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ .", "By the definition of $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ , it suffices to show for $0\\le |j |\\le d $ that the operators $T_j&= H_{\\textup {par}}^{-\\alpha /2}A_j H_{\\textup {par}}^{-1/2} H_{\\textup {par}}^{\\alpha /2}$ are bounded on $L^p(\\mathbb {R}^{d+1})$ .", "By Proposition REF , Lemma REF and the fact that the symbol of the operators $A_j$ belongs to the symbol class $G^1$ , the symbols of the operator $T_j$ belong to the symbol class $S^0_{1,0}$ .", "Hence, the operators $T_j$ are bounded on $L^p(\\mathbb {R}^{d+1})$ for $1<p<\\infty $ (see [22], [23]), which proves the result.", "A direct consequence of Proposition REF is that any function in $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ enjoys some decay in the $x$ direction.", "Corollary 4.4 If $p\\in [1, \\infty )$ , $\\alpha >0$ and $f\\in L_{ H_{\\textup 
{par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ , then $|x|^\\alpha f $ belongs to $L^{p}(\\mathbb {R}^{d+1})$ .", "Next, we show the relations between the space $L_{H_{\\textup {par}}}^{\\alpha ,p}$ and the spaces $W^{\\alpha , p}(\\mathbb {R}^{d+1})$ , $L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1}) $ adapted to the Laplacian and Hermite operators, respectively.", "Theorem 4.5 Let $\\alpha >0$ and $p\\in (1, \\infty )$ , then $L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1}) \\subsetneq L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1}) \\subsetneq W^{\\alpha , p}(\\mathbb {R}^{d+1})$ .", "If $ f\\in W^{\\alpha , p}(\\mathbb {R}^{d+1})$ and has compact support, then $f\\in L_{ H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1}) $ .", "We first show the inclusions in $(1)$ .", "It suffices to verify that the symbols of $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} H_{\\textup {par}}^{-\\alpha /2},\\;\\; \\text{and} \\;\\; H_{\\textup {par}}^{\\alpha /2} H^{-\\alpha /2}$ belong to the symbol class $S_{1,0}^0$ , so that they define bounded operators on $L^p(\\mathbb {R}^{d+1})$ (see [22], [23]).", "For the former: since the symbol of the operator $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}$ belongs to class $S^\\alpha _{1,0}$ and the symbol of the operator $H_{\\textup {par}}^{-\\alpha /2} $ belongs to class $ G^{-\\alpha } $ by Lemma REF , and hence to class $ S^{-\\alpha }_{1,0}$ , we obtain that the symbol of the operator $ (1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} H_{\\textup {par}}^{-\\alpha /2}$ belongs to $S_{1,0}^0$ .", "Therefore, the operator $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} H_{\\textup {par}}^{-\\alpha /2}$ is bounded on $L^p(\\mathbb {R}^{d+1})$ .", "For the latter: it is known from [26] that the symbol $q_\\alpha (\\rho , x, \\tau , \\xi )$ of the operator $H^{-\\alpha /2}$ satisfies $q_\\alpha \\in G^{-\\alpha }$ , since $|D_z^\\beta D_{\\tau }^\\gamma D_{\\xi }^{\\delta }q_{\\alpha }(\\rho , x, \\tau , \\xi ) |&\\lesssim (1+|\\tau |+|\\xi |+|\\rho |+|x|) ^{-\\alpha -|\\beta |-|\\gamma |-|\\delta |}\\\\&\\lesssim (1+|\\tau |+|\\xi |+|x|) ^{-\\alpha -|\\beta |-|\\gamma |-|\\delta |}.$ As the symbol of the operator $H_{\\textup {par}}^{\\alpha /2} $ belongs to class $ G^{\\alpha }$ , the symbol of the operator $H_{\\textup {par}}^{\\alpha /2} H^{-\\alpha /2}$ belongs to $S_{1,0}^0$ by Proposition REF , which implies that the operator $H_{\\textup {par}}^{\\alpha /2} H^{-\\alpha /2}$ is bounded on $L^p(\\mathbb {R}^{d+1})$ .", "Next, we show that both inclusions are strict.", "Define $g_1(\\rho , x)&=\\frac{1}{(1+\\rho )^{\\frac{1}{p}+\\alpha }}\\frac{1}{(1+|x|)^{\\frac{1}{p}+\\alpha }}, & \\quad f_1(\\rho , x)&=(I-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2} g_1(\\rho , x),\\\\g_2(\\rho , x)&=\\frac{1}{(1+\\rho )^{\\frac{1}{p}+\\alpha }} \\Phi _{\\mu }(x), & \\quad f_2(\\rho , x)&=H_{\\textup {par}}^{-\\alpha /2} g_2(\\rho , x).$ We claim that $f_1\\in W^{\\alpha , p}(\\mathbb {R}^{d+1})\\setminus L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1}),\\; \\; \\text{and} \\;\\;\\ f_2 \\in L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1})\\setminus L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1}).$ Let $G_\\alpha (z)$ be the kernel of the operator $(I-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2}$ , that is, $G_\\alpha (z)=\\frac{1}{(4\\pi )^{\\alpha /2} \\Gamma (\\alpha /2)} \\int _0^\\infty \\textup {e}^{-\\frac{\\pi |z|^2}{t}-\\frac{t}{4\\pi }} t^{\\alpha /2-(d+1)/2} \\mathop {}\\!\\mathrm {d}t.$ On the one hand, it is easy to see that $g_1 \\in L^p(\\mathbb {R}^{d+1})$ , hence $f_1\\in W^{\\alpha , p}(\\mathbb {R}^{d+1})$ .", "On the
other hand, since $ G_\\alpha (z)$ is positive, $f_1(\\rho , x) = \\int _{\\mathbb {R}^{d+1}} G_\\alpha (z^{\\prime } ) g_1(z-z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime }\\ge (2+|\\rho |)^{-1/p-\\alpha }(2+|x|)^{-1/p-\\alpha }\\int _{|z|<1} G_\\alpha (z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime },$ which implies that $|x|^\\alpha f_1 \\notin L^{p}(\\mathbb {R}^{d+1})$ , and therefore we have $f_1\\notin L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1})$ by Corollary REF .", "Similarly, since $g_2 \\in L^p(\\mathbb {R}^{d+1})$ , we have $f_2 \\in L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1})$ .", "At the same time, we have $f_2(\\rho , x) \\ge (2+|\\rho |)^{-1/p-\\alpha } \\int _{|z|<1} \\Psi _\\alpha (z^{\\prime }) \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime } \\mathop {}\\!\\mathrm {d}x^{\\prime },$ which implies that $|\\rho |^\\alpha f_2 \\notin L^{p}(\\mathbb {R}^{d+1})$ , and therefore we have $f_2\\notin L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1})$ .", "Lastly, the statement (REF ) follows from (REF ) and [4].", "Some Integral Inequalities adapted to the operator $H_{\\textup {par}}$ In this section, we obtain some integral inequalities associated to the partial harmonic oscillator $H_{\\textup {par}}$ .", "Hardy–Littlewood–Sobolev inequality.", "Let $0<\\alpha <d+1$ .", "By (REF ), we have $|H_{\\textup {par}}^{-\\alpha /2} f(z)|\\le C \\int _{\\mathbb {R}^{d+1}} \\frac{|f(z^{\\prime })|}{|z-z^{\\prime }|^{d+1-\\alpha }} \\mathop {}\\!\\mathrm {d}z^{\\prime },$ then we have Proposition 5.1 Let $p, q>1$ and $0<\\alpha <d+1$ with $\\frac{1}{q}=\\frac{1}{p}-\\frac{\\alpha }{d+1}$ , then the operator $H_{\\textup {par}}^{-\\alpha /2}$ is bounded from $L^p(\\mathbb {R}^{d+1})$ to $L^q(\\mathbb {R}^{d+1})$ .", "This is obvious from the rough estimate (REF ) and the classical Hardy–Littlewood–Sobolev inequality in [16].", "In fact, by (REF ), the integral kernel of the operator $H_{\\textup {par}}^{-\\alpha /2}$ is controlled by an integrable function which has exponential decay away from the origin, which implies the following refined estimates.", "Theorem 5.2 Let $0<\\alpha <d+1$ .", "Then the following holds: there exists a constant $C>0$ such that $\\Vert H_{\\textup {par}}^{-\\alpha /2} f\\Vert _{q} \\le C \\Vert f\\Vert _1,$ for all $f\\in L^1(\\mathbb {R}^{d+1})$ if and only if $1\\le q <\\frac{d+1}{d+1-\\alpha }$ .", "there exists a constant $C>0$ such that $\\Vert H_{\\textup {par}}^{-\\alpha /2} f\\Vert _{\\infty } \\le C \\Vert f\\Vert _p$ for all $f\\in L^p(\\mathbb {R}^{d+1})$ if and only if $p>\\frac{d+1}{\\alpha }$ .", "If $1<p<\\infty $ , $1<q<\\infty $ and $\\frac{1}{p}-\\frac{\\alpha }{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ , then there exists a constant $C>0$ such that $\\Vert H_{\\textup {par}}^{-\\alpha /2} f\\Vert _{q} \\le C \\Vert f\\Vert _p$ for all $f\\in L^{p}(\\mathbb {R}^{d+1})$ .", "For the case (REF ).", "By the generalized Minkowski inequality, we obtain for any function $f\\in L^1(\\mathbb {R}^{d+1})$ that $\\int _{\\mathbb {R}^{d+1}} ( H_{\\textup {par}}^{-\\alpha /2} f) ^q (z) \\mathop {}\\!\\mathrm {d}z \\le \\left(\\int _{\\mathbb {R}^{d+1}}\\left( \\int _{\\mathbb {R}^{d+1}} K_{\\alpha /2}(z, z^{\\prime })^q \\mathop {}\\!\\mathrm {d}z \\right)^{1/q}| f(z^{\\prime })|\\mathop {}\\!\\mathrm {d}z^{\\prime }\\right)^{q}.$ By (REF ) and (REF ), we have $\\int _{\\mathbb {R}^{d+1}} K_{\\alpha /2}(z, z^{\\prime })^q \\mathop {}\\!\\mathrm {d}z\\lesssim \\int _{D_{-}} \\frac{\\mathop {}\\!\\mathrm {d}z}{|z-z^{\\prime
}|^{q(d+1-\\alpha )}}+\\int _{D_{+}} \\textup {e}^{-q|z-z^{\\prime }|^2/{16}} \\mathop {}\\!\\mathrm {d}z,$ where $ D_{-}=\\lbrace |z-z^{\\prime }|<1\\rbrace $ and $D_{+}=\\lbrace |z-z^{\\prime }|\\ge 1\\rbrace $ .", "The right hand side in the above estimate is finite if $1\\le q <\\frac{d+1}{d+1-\\alpha }$ .", "For the converse, note that for $|z-z^{\\prime }|<1$ we have $K_{\\alpha /2}(z, z^{\\prime }) \\ge C_\\alpha \\textup {e}^{-|x+x^{\\prime }|^2} \\int _0^1 t^{\\frac{\\alpha }{2}-1-\\frac{d+1}{2}} \\textup {e}^{- |z-z^{\\prime }|^2/t}\\mathop {}\\!\\mathrm {d}t \\ge C_\\alpha \\frac{\\textup {e}^{-|x+x^{\\prime }|^2}}{|z-z^{\\prime }|^{d+1-\\alpha }}.$ Let $f_n$ be an approximation of the identity.", "Then we have $\\int _{\\mathbb {R}^{d+1}} \\left(\\int _{\\mathbb {R}^{d+1}} \\frac{\\textup {e}^{-|x+x^{\\prime }|^2}}{|z-z^{\\prime }|^{d+1-\\alpha }} f_n(z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime }\\right)^q \\mathop {}\\!\\mathrm {d}z\\xrightarrow[n\\rightarrow \\infty ]{} \\int _{|z| \\le 1} \\frac{\\textup {e}^{-q|x|^2}}{|z|^{q(d+1-\\alpha )}} \\mathop {}\\!\\mathrm {d}z,$ which is $\\infty $ for $q(d+1-\\alpha ) \\ge d+1$ , and completes the proof of the necessity of $1\\le q<\\frac{d+1}{d+1-\\alpha }$ .", "For the case $(2)$ .", "By Hölder's inequality, we get $\\left| \\int _{\\mathbb {R}^{d+1}} K_{\\alpha /2 }(z, z^{\\prime }) f(z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime }\\right| \\le \\Vert f\\Vert _{p} \\left(\\int _{\\mathbb {R}^{d+1}} K_{\\alpha /2 }(z, z^{\\prime })^{p^{\\prime }} \\mathop {}\\!\\mathrm {d}z^{\\prime }\\right)^{1/p^{\\prime }}.$ Using a similar argument as in the proof of the case $(1)$ , we know that the right hand side is finite when $p>\\frac{d+1}{\\alpha }$ .", "Conversely, by choosing $f(z)={\\left\\lbrace \\begin{array}{ll}|z|^{-\\alpha } (\\log \\frac{1}{|z|})^{-\\frac{\\alpha }{d+1}(1+\\varepsilon )}, & \\text{if } |z|\\le 1/2, \\\\0, & \\text{if } |z|\\ge 1/2,\\end{array}\\right.}$ we have $f\\in L^p(\\mathbb {R}^{d+1})$ for all $p\\le (d+1)/\\alpha $ .", "However, the function $H_{\\textup {par}}^{-\\alpha /2}f$ is essentially unbounded since we have by (REF ) that $H_{\\textup {par}}^{-\\alpha /2} f(0) \\ge C\\int _{|z^{\\prime }|\\le 1/2} |z^{\\prime }|^{-(d+1)}\\left(\\log \\frac{1}{|z^{\\prime }|}\\right)^{-\\frac{\\alpha }{d+1}(1+\\varepsilon )} \\mathop {}\\!\\mathrm {d}z^{\\prime } =\\infty ,$ where $ \\varepsilon >0$ is small.", "The case (REF ) now follows from Proposition REF , the inequality (REF ), the inequality (REF ) and the Riesz–Thorin interpolation theorem.", "Remark 5.3 By Corollary REF and a similar argument as in the above proof, we have the following result: Let $p, q>1$ and $0<\\alpha <d+1$ with $\\frac{1}{p}-\\frac{\\alpha }{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ , then there exists a constant $C$ such that for all $f\\in L^{p}(\\mathbb {R}^{d+1})$ , we have $\\Vert (H_{\\textup {par}}+2)^{-\\alpha /2} f\\Vert _{q} &\\le C \\Vert f\\Vert _p, \\\\\\Vert (H_{\\textup {par}}-2)^{-\\alpha /2} f\\Vert _{q} &\\le C \\Vert f\\Vert _p, \\ d\\ge 3.$ Gagliardo–Nirenberg–Sobolev inequality We define the gradient operator associated to the operator $H_{\\textup {par}}$ as follows: $\\nabla _{H_{\\textup {par}}} f:=(A_0f, A_1 f, \\dots , A_d f, A_{-1} f, \\dots , A_{-d} f).$ Theorem 5.4 Let $d\\ge 3$ and $1<p, q <\\infty $ satisfy $\\frac{1}{p}-\\frac{1}{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ .", "Then for any $f\\in L_{H_{\\textup {par}}}^{1,p}(\\mathbb {R}^{d+1})$ , we have $\\Vert f\\Vert _q \\le C \\Vert \\nabla _{H_{\\textup {par}}}
f\\Vert _p.$ By duality and (REF ), we have $\\Vert f\\Vert _{q} \\le \\sum _{-d\\le j\\le d} \\Vert R_j f\\Vert _{q}.$ By Lemma REF , we obtain $R_{0} f =H_{\\textup {par}}^{-1/2} A_{0} f$ and $R_j f=(H_{\\textup {par}}+2 \\operatorname{sgn}j)^{-1/2} A_j f, \\ \\qquad 1\\le |j|\\le d .$ Hence, by Remark REF and Theorem REF , we obtain for $\\frac{1}{p}-\\frac{1}{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ that $\\Vert f\\Vert _{q} &\\le \\sum _{-d\\le j\\le d} \\Vert R_{j} f\\Vert _{q}\\\\&\\le \\sum _{1\\le |j| \\le d} \\Vert (H_{\\textup {par}}+2\\operatorname{sgn}j)^{-1/2} A_j f\\Vert _{q}+ \\Vert H_{\\textup {par}}^{-1/2} A_{0} f\\Vert _{q}\\\\&\\le \\sum _{-d\\le j\\le d} \\Vert A_jf\\Vert _{p} = \\Vert \\nabla _{H_{\\textup {par}}} f\\Vert _p.$ This completes the proof.", "Remark 5.5 The classical Gagliardo–Nirenberg–Sobolev inequality in $\\mathbb {R}^{d+1}$ holds with $\\frac{1}{q} = \\frac{1}{p} - \\frac{\\alpha }{d+1}$ .", "The result here holds for a larger range of $(p, q)$ due to the extra decay property of functions $f\\in L_{H_{\\textup {par}}}^{1,p}(\\mathbb {R}^{d+1})$ .", "Hardy's inequality Recall that the Hardy's inequality is: $\\Vert |z|^{-\\alpha } f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\Vert (-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} f\\Vert _{L^p(\\mathbb {R}^{d+1})}, \\;\\; 0<\\alpha <\\frac{d+1}{p},$ see [28].", "We extend classical Hardy's inequality for the Laplacian operator $-\\Delta _{\\mathbb {R}^{d+1}}$ to the operator $H_{\\textup {par}}$ as follows: Theorem 5.6 Let $1<p<\\infty $ .", "Then, for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ , we have $\\Vert |z|^{-\\alpha } f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\Vert H_{\\textup {par}}^{\\alpha /2} f\\Vert _{L^p(\\mathbb {R}^{d+1})},\\ 0<\\alpha <\\frac{d+1}{p} .$ In particular, the inequality $\\Vert |z|^{-1} f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\Vert \\nabla _{H_{\\textup {par}}} f\\Vert _{L^p(\\mathbb {R}^{d+1})}$ holds when $1<p<d+1$ .", "Note that the symbols of $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}$ and $H_{\\textup {par}}^{-\\alpha /2}$ belong to $S_{1,0}^\\alpha $ and $S^{-\\alpha }_{1,0}$ respectively.", "The composition law gives that the symbol of $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}H_{\\textup {par}}^{-\\alpha /2} $ belongs to $S^{0}_{1,0}$ , which implies that $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}H_{\\textup {par}}^{-\\alpha /2 }$ are bounded in $L^p(\\mathbb {R}^{d+1})$ for $1<p<\\infty $ , (see [22], [23]).", "In addition, $(-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}(1-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2}$ are bounded in $L^p(\\mathbb {R}^{d+1})$ for $1<p<\\infty $ from [20].", "Hence, by the inequality (REF ) we have $\\Vert |z|^{-\\alpha } f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})}&\\lesssim \\Vert (-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} f\\Vert _{L^p(\\mathbb {R}^{d+1})}\\\\&\\lesssim \\Vert (-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}(1-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2}(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}H_{\\textup {par}}^{-\\alpha /2 }H_{\\textup {par}}^{\\alpha /2 }f\\Vert _{L^p(\\mathbb {R}^{d+1})}\\\\&\\lesssim \\Vert H_{\\textup {par}}^{\\alpha /2 }f\\Vert _{L^p(\\mathbb {R}^{d+1})}.$ For $\\alpha =1$ , using the first inequality in (REF ), we have $\\Vert |z|^{-1} f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})}\\lesssim \\Vert H_{\\textup {par}}^{1/2} f\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\sum _{0<|j|\\le d} \\Vert A_jf \\Vert _{L^p(\\mathbb {R}^{d+1})}\\lesssim \\Vert \\nabla _{H_{\\textup 
{par}}} f\\Vert _{L^p(\\mathbb {R}^{d+1})}.$ Hence, we completes the proof." ], [ "Functional Calculus for the operator $ H_{\\textup {par}}$ ", "Fix $z=(\\rho , x) \\in \\mathbb {R}^{d+1}$ .", "By the continuous Fourier transform in $\\rho \\in \\mathbb {R}$ and the discrete Hermite expansion in $x \\in \\mathbb {R}^{d}$ of the operator $H_{\\textup {par}}$ , we can write $H_{\\textup {par}}f$ for a function $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ as $H_{\\textup {par}}f(\\rho ,x)&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i \\rho \\tau }(\\tau ^2+2|\\mu |+d) (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&= \\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i \\rho \\tau }(\\tau ^2+2k+d)P_k \\mathcal {F}_{\\rho } f(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau ,$ where $\\mathcal {F}_{\\rho } f$ is the Fourier transform with respect to $\\rho $ , and $P_k$ is the projection operator in (REF ).", "Thus, for a Borel measurable function $F$ defined on $\\mathbb {R}_{+}$ , we can define the operator $F(H_{\\textup {par}})$ by the spectral theory as $F(H_{\\textup {par}})f(\\rho , x)=\\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i \\rho \\tau }F(\\tau ^2+2k+d)P_k \\mathcal {F}_{\\rho } f(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau ,$ so long as the right hand side makes sense.", "In particular, the heat semigroup $\\textup {e}^{-t{H_{\\textup {par}}}}$ can be defined by $\\textup {e}^{-t H_{\\textup {par}}}f(\\rho ,x)&= \\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}}\\textup {e}^{i\\tau \\rho } \\textup {e}^{-t(\\tau ^2+2|\\mu |+d) } ((\\mathcal {F}_{\\rho } f)(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{-t(\\tau ^2+2k+d) } {P_k}(\\mathcal {F}_{\\rho } f)(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau $ for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ .", "By Mehler's formula (REF ), the integral kernel of the operator $\\textup {e}^{-t H_{\\textup {par}}}$ is $E(t,z,z^{\\prime })=2^{-\\frac{d+2}{2}}\\pi ^{-\\frac{d+1}{2}}t^{-1/2}(\\sinh 2t)^{-d/2} \\textup {e}^{-B(t,z,z^{\\prime })},$ where $B(t, z, z^{\\prime })=\\frac{1}{4}(2\\coth 2t-\\tanh t)|x-x^{\\prime }|^2+\\frac{\\tanh t }{4}|x+x^{\\prime }|^2+\\frac{(\\rho -\\rho ^{\\prime })^2}{4t}.$" ], [ "Fractional powers of the operator $H_{\\textup {par}}$ and the heat semigroup", "On the one hand, for $\\alpha \\in \\mathbb {R}$ , we can define the fractional powers $H_{\\textup {par}}^\\alpha $ on $C_0^\\infty (\\mathbb {R}^{d+1})$ by $H_{\\textup {par}}^{\\alpha }f(\\rho ,x)&=\\sum _{k=0}^\\infty \\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2k+d)^{\\alpha } {P_k} (\\mathcal {F}_{\\rho } f)(\\tau , x) \\mathop {}\\!\\mathrm {d}\\tau .$ Simple calculation shows that the identity $H_{\\textup {par}}^{\\alpha } \\cdot H_{\\textup {par}}^{\\beta }f = H_{\\textup {par}}^{\\alpha +\\beta } f,$ holds for $ f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ and $\\alpha ,\\beta \\in \\mathbb {R}$ .", "On the other hand, we can also formulate the powers $ H_{\\textup {par}}^{ \\alpha }$ with $\\alpha \\in \\mathbb {R}$ by the semigroup $\\textup {e}^{-t H_{\\textup {par}}}$ and the Gamma function.", "Firstly, for any $\\alpha >0$ , the negative powers $ H_{\\textup 
{par}}^{-\\alpha }$ can be written as $H_{\\textup {par}}^{-\\alpha } f(\\rho , x) =\\frac{1}{ \\Gamma (\\alpha )} \\int _0^\\infty t^{\\alpha -1} \\textup {e}^{-t H_{\\textup {par}}} f(\\rho , x) \\mathop {}\\!\\mathrm {d}t.$ Notice that the integral kernel of the operator $H_{\\textup {par}}^{-\\alpha }$ is positive since the integral kernel (REF ) of $\\textup {e}^{-t H_{\\textup {par}}} $ is positive.", "Similarly for any $a\\in \\mathbb {R}$ and $d>- a$ , we have $(H_{\\textup {par}}+a)^{-\\alpha } f(\\rho , x) =\\frac{1}{ \\Gamma (\\alpha )} \\int _0^\\infty t^{\\alpha -1}\\textup {e}^{-t a} \\textup {e}^{-t H_{\\textup {par}}} f(\\rho , x) \\mathop {}\\!\\mathrm {d}t$ so long as the integral exists.", "We now express the positive fractional powers in terms of derivatives of the semigroup.", "Let $N$ be the smallest integer which is larger than $\\alpha $ .", "Using the identity $-H_{\\textup {par}}\\textup {e}^{-tH_{\\textup {par}}}=\\frac{\\mathop {}\\!\\mathrm {d}}{\\mathop {}\\!\\mathrm {d}t} \\textup {e}^{-tH_{\\textup {par}}}$ and (REF ), we have for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ that $H_{\\textup {par}}^{\\alpha } f(\\rho , x)=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{0}^\\infty t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} \\textup {e}^{-tH_{\\textup {par}}}f(\\rho , x) \\mathop {}\\!\\mathrm {d}t.$ We have the following properties between $H_{\\textup {par}}^{\\alpha }$ and $A_{\\pm j}$ for $1\\le j\\le d$ .", "Lemma 3.1 For any $\\alpha \\in \\mathbb {R}$ , $d\\ge 3$ , $1\\le j\\le d$ , and $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ , we have $A_0 H_{\\textup {par}}^{\\alpha } f &= H_{\\textup {par}}^{\\alpha } A_0f,\\\\A_j H_{\\textup {par}}^{\\alpha }f &=( H_{\\textup {par}}-2)^{\\alpha } A_j f, & A_{-j} H_{\\textup {par}}^{\\alpha } f =( H_{\\textup {par}}+2)^\\alpha A_{-j} f,\\\\H_{\\textup {par}}^{\\alpha } A_j f&= A_j ( H_{\\textup {par}}+2)^\\alpha f,& H_{\\textup {par}}^{\\alpha } A_{-j} f =A_{-j} ( H_{\\textup {par}}-2)^{\\alpha }f.$ The first result is trivial.", "For $1\\le j\\le d$ , we only give the details for $ A_j H_{\\textup {par}}^{\\alpha }f =( H_{\\textup {par}}-2)^{\\alpha } A_j f$ , as the other cases are dealt with in analoge argument.", "By the definition of $H_{\\textup {par}}^{\\alpha }$ , we have $H_{\\textup {par}}^{\\alpha } f(\\rho , x) =\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau .$ On one hand, by (REF ), we have $&A_j H_{\\textup {par}}^{\\alpha } f(\\rho , x)\\\\&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot ))A_j \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\cdot \\sqrt{2(\\mu _j+1)}\\Phi _{\\mu +e_j}(x).$ On the other hand, by the fact that $A_{-j}=A_j^*$ , we obtain $A_jf(\\rho , x)&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(A_j\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{\\mu 
}\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), A_{-j}\\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu -e_j}(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\cdot \\sqrt{2\\mu _j} \\Phi _{\\mu } (x).$ By (REF ), we have for a function $g\\in C_0^\\infty (\\mathbb {R}^{d+1})$ that $( H_{\\textup {par}}-2)^{\\alpha } g=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d-2)^\\alpha (\\mathcal {F}_\\rho g(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot )) \\Phi _{\\mu } (x)\\mathop {}\\!\\mathrm {d}\\tau ,$ which implies by choosing $g=A_jf$ that $& ( H_{\\textup {par}}-2)^{\\alpha } A_jf(\\rho , x)\\\\&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d-2)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu -e_j}(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\sqrt{2\\mu _j} \\Phi _{\\mu } (x)\\\\&=\\sum _{\\mu }\\frac{1}{\\sqrt{2\\pi }} \\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho }(\\tau ^2+2|\\mu |+d)^\\alpha (\\mathcal {F}_{\\rho } f(\\tau , \\cdot ), \\Phi _{\\mu }(\\cdot ))\\mathop {}\\!\\mathrm {d}\\tau \\sqrt{2\\mu _j+2} \\Phi _{\\mu +e_j} (x)\\\\&= A_j H_{\\textup {par}}^{\\alpha }f(\\rho , x).$ We finish the proof." ], [ "Properties of negative fractional powers of $H_{\\textup {par}}$", "In this subsection, we explore the properties of the negative powers of the operator $H_{\\textup {par}}$ .", "The first result is: Proposition 3.2 Given $\\alpha >0$ , the operator $ H_{\\textup {par}}^{-\\alpha }$ has the integral representation $H_{\\textup {par}}^{-\\alpha } f(z)=\\int _{\\mathbb {R}^{d+1}}{K_{\\alpha }}(z,z^{\\prime }) f(z^{\\prime })\\mathop {}\\!\\mathrm {d}z^{\\prime } $ for all $f \\in C_0^\\infty (\\mathbb {R}^{d+1})$ .", "Moreover, there exist a functions $\\Psi _\\alpha \\in L^1(\\mathbb {R}^{d+1})$ and a constant $C>0$ such that ${K_{\\alpha }}(z, z^{\\prime }) \\le C \\Psi _\\alpha (z-z^{\\prime }), \\ \\text{for all}\\ z, z^{\\prime } \\in \\mathbb {R}^{d+1}.", "$ Hence, $H_{\\textup {par}}^{-\\alpha }$ is well defined and bounded on $L^p(\\mathbb {R}^{d+1})$ for $p \\in [1, +\\infty ]$ .", "By (REF ) and (REF ), we have $ K_{\\alpha }(\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) = \\frac{C_d}{\\Gamma (\\alpha )} \\int _0^\\infty t^{\\alpha -1-1/2} (\\sinh 2t)^{-d/2} \\textup {e}^{-B(t, z, z^{\\prime })} \\mathop {}\\!\\mathrm {d}t.$ We decompose ${K_{\\alpha }}(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ into two parts, $K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _0^1 t^{\\alpha -1} E(t,\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\mathop {}\\!\\mathrm {d}t, \\\\K_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _1^\\infty t^{\\alpha -1}E(t,\\rho ,x,\\rho ^{\\prime },x^{\\prime }) \\mathop {}\\!\\mathrm {d}t.$ We firstly estimate the term $ K_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })$ .", "Together the inequality that $2\\coth 2t -\\tanh t >\\coth 2t > 1$ , with the fact that $\\tanh t \\sim 1, \\; \\sinh 2t \\sim \\textup {e}^{2t },\\; \\coth 2t \\sim \\textup {e}^{2t},\\;\\text{as}\\; t \\rightarrow \\infty ,$ we have $|K_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })|&\\le C \\textup 
{e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2} \\int _1^\\infty t^{\\alpha -3/2}\\textup {e}^{-td}\\mathop {}\\!\\mathrm {d}t\\\\&\\le C \\textup {e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2}.$ We next estimate $K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ .", "We further split it into two cases.", "Case 1: $(z, z^{\\prime }) \\in D_{+}\\mathrel {\\mathop :}=\\lbrace (z, z^{\\prime }) \\in \\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1}; \\; |z-z^{\\prime }|\\ge 1\\rbrace $ .", "Using the fact that $\\sinh 2t \\sim 2t$ , $\\coth 2t \\sim \\frac{1}{2t}$ and $\\tanh t\\sim t$ as $t\\rightarrow 0$ , we have $|K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })|&\\lesssim \\int _0^1 t^{\\alpha -1-\\frac{d+1}{2}} \\textup {e}^{-\\frac{1}{8t}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]} \\mathop {}\\!\\mathrm {d}t\\\\&\\lesssim \\textup {e}^{-\\frac{1}{16}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]} \\int _{0}^1 t^{\\alpha -1-\\frac{d+1}{2}} \\textup {e}^{-\\frac{1}{16 t}} \\mathop {}\\!\\mathrm {d}t\\\\& \\lesssim \\textup {e}^{-\\frac{1}{16}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]}.$ Case 2: $(z, z^{\\prime }) \\in D_{-}\\mathrel {\\mathop :}=\\lbrace (z, z^{\\prime }) \\in \\mathbb {R}^{d+1} \\times \\mathbb {R}^{d+1}; \\; |z-z^{\\prime }|<1\\rbrace $ .", "If $\\alpha <\\frac{d+1}{2}$ , we have $|K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })|&\\le \\int _0^1 t^{\\alpha -1-\\frac{d+1}{2}} \\textup {e}^{-\\frac{1}{8t}[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]} \\mathop {}\\!\\mathrm {d}t\\\\&\\lesssim \\frac{1}{[|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2]^{(d+1)/2-\\alpha }};$ if $\\alpha =\\frac{d+1}{2}$ , we have $|K_{\\alpha }^1(\\rho , ,x, \\rho ^{\\prime }, x^{\\prime })| \\lesssim \\log [|x-x^{\\prime }|^2+(\\rho -\\rho ^{\\prime })^2];$ and finally, if $\\alpha >\\frac{d+1}{2}$ , we have $|K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })|\\lesssim 1.$ It follows that (REF ) holds with the integrable function $\\Psi _\\alpha $ defined by $\\Psi _\\alpha (z-z^{\\prime })={\\left\\lbrace \\begin{array}{ll}\\mathbf {1}_{D_-}|z-z^{\\prime }|^{2\\alpha -(d+1)}+ \\mathbf {1}_{D_+}\\textup {e}^{-\\frac{1}{16}|z-z^{\\prime }|^2}, & \\alpha <\\frac{d+1}{2},\\\\\\mathbf {1}_{D_-} \\log |z-z^{\\prime }|+ \\mathbf {1}_{D_+}\\textup {e}^{-\\frac{1}{16}|z-z^{\\prime }|^2}, & \\alpha =\\frac{d+1}{2},\\\\\\mathbf {1}_{D_-} + \\mathbf {1}_{D_+}\\textup {e}^{-\\frac{1}{16}|z-z^{\\prime }|^2}, & \\alpha >\\frac{d+1}{2},\\end{array}\\right.", "}$ which completes the proof.", "By (REF ), we can obtain similar estimates for the integral kernels of the operators $(H_{\\textup {par}}+2)^{-\\alpha }$ and $(H_{\\textup {par}}-2)^{-\\alpha }$ : Corollary 3.3 Assume that $\\alpha >0$ .", "Let $M_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ and $N_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ be the integral kernels of the operators $(H_{\\textup {par}}+2)^{-\\alpha }$ and $(H_{\\textup {par}}-2)^{-\\alpha }$ respectively.", "Then, we have $M_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\le {K_{\\alpha }}(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })\\le \\Psi _\\alpha (z-z^{\\prime }),$ and $N_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\le \\Psi _\\alpha (z-z^{\\prime }), \\quad \\text{if}\\;\\; d\\ge 3.$ By (REF ), the estimate of $M_\\alpha $ is trivial.", "It suffices to show the estimate of $N_\\alpha $ .", "By definition, we have $N_\\alpha (\\rho , 
x, \\rho ^{\\prime } ,x^{\\prime })= \\frac{1}{\\Gamma (\\alpha )}\\int _0^\\infty t^{\\alpha -1} e^{2t} E(t,\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\mathop {}\\!\\mathrm {d}t.$ We once again decompose $N_\\alpha (\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ into two parts as follows.", "$N_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _0^1 t^{\\alpha -1} e^{2t}E(t,\\rho , x, \\rho ^{\\prime } ,x^{\\prime }) \\mathop {}\\!\\mathrm {d}t, \\\\N_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })&=\\frac{1}{\\Gamma (\\alpha )}\\int _1^\\infty t^{\\alpha -1}e^{2t}E(t,\\rho ,x,\\rho ^{\\prime },x^{\\prime }) \\mathop {}\\!\\mathrm {d}t.$ For the term $ N_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ , we can proceed in the same way as in the estimate of $K_{\\alpha }^1(\\rho , x, \\rho ^{\\prime } ,x^{\\prime })$ ; we omit the details here.", "For the term $N_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })$ , we obtain for $d\\ge 3$ that $|N_{\\alpha }^2(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })|&\\le C \\textup {e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2} \\int _1^\\infty t^{\\alpha -3/2}\\textup {e}^{-td}e^{2t}\\mathop {}\\!\\mathrm {d}t\\\\&\\le C \\textup {e}^{-|x-x^{\\prime }|^2-(\\rho -\\rho ^{\\prime })^2-|x+x^{\\prime }|^2},$ which completes the proof.", "Remark 3.4 We make some comments about the condition $d\\ge 3$ , that is, $d > - a=2$ , for the second estimate of the above corollary, which concerns $N_{\\alpha }(\\rho ,x, \\rho ^{\\prime }, x^{\\prime })$ .", "Since the integral $\\int _1^\\infty t^{\\alpha -3/2}e^{2t}e^{-t} \\mathop {}\\!\\mathrm {d}t$ is divergent, we cannot obtain similar estimates for $(H_{\\textup {par}}-2)^{-\\alpha }$ for the case $d=1$ .", "This is consistent with the spectral property that $\\sigma (H_{\\textup {par}}-2)=[-1, \\infty )$ when $d=1$ .", "When $d=2$ , the spectral property is that $\\sigma (H_{\\textup {par}}-2)=[0, \\infty )$ , and we have different behavior depending on the power $\\alpha $ .", "If $0<\\alpha <1/2$ , the kernel of $(H_{\\textup {par}}-2)^{-\\alpha }$ has exponential decay as $|z-z^{\\prime }| \\rightarrow \\infty $ since $\\int _1^\\infty t^{\\alpha -3/2}e^{2t}e^{-2t} \\mathop {}\\!\\mathrm {d}t$ is convergent, and if $\\alpha \\ge 1/2$ , no useful result can be derived.", "In addition, we have the following boundedness result for the negative fractional powers of the operator $H_{\\textup {par}}$ .", "Proposition 3.5 Let $p\\in [1, \\infty ]$ and $\\alpha >0$ .", "Then the weighted operator $|x|^{2\\alpha } H_{\\textup {par}}^{-\\alpha }$ is bounded on $L^p(\\mathbb {R}^{d+1})$ .", "By Schur's lemma [8], we only need to verify that $\\sup _{z} |x|^{2\\alpha } \\int _{\\mathbb {R}^{d+1}} |{K_{\\alpha }}(z, z^{\\prime })| \\mathop {}\\!\\mathrm {d}z^{\\prime } \\lesssim 1, $ and $\\sup _{z^{\\prime }} \\int _{\\mathbb {R}^{d+1}} |x|^{2\\alpha } |{K_{\\alpha }}(z, z^{\\prime })| \\mathop {}\\!\\mathrm {d}z \\lesssim 1.$ By the boundedness in Proposition REF , we may assume $|x|\\ge 2$ .", "First, we prove (REF ).", "We partition $\\mathbb {R}^{d}=E_x \\cup E_x^c$ , where $E_x=\\lbrace x^{\\prime }\\in \\mathbb {R}^{d}: |x|>2|x-x^{\\prime }|\\rbrace .$ When $x^{\\prime }\\in E^c_x$ , i.e., $|x|\\le 2|x-x^{\\prime }|$ , we use the fact that $2\\coth 2t-\\tanh t \\ge \\frac{1}{2}(2\\coth 2t-\\tanh t)+\\frac{1}{4}$ to obtain $\\quad K_\\alpha (\\rho , x, \\rho ^{\\prime }, x^{\\prime })& \\lesssim \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2}\\int _0^\\infty
t^{\\alpha -1-1/2} (\\sinh 2t)^{-d/2} \\textup {e}^{-\\frac{1}{2}(B(t, \\rho , \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} )} \\mathop {}\\!\\mathrm {d}t\\\\& \\lesssim \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2} {K_{\\alpha }}\\left(\\rho , \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} \\right).$ So it follows that $& \\quad |x|^{2\\alpha } \\int _\\mathbb {R} \\int _{E_x^c} |{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) |\\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\lesssim \\int _{\\mathbb {R}^{d+1}}\\underbrace{ |x-x^{\\prime }|^{2\\alpha } \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2}}_{<\\; \\infty }\\Big |{K_{\\alpha }}\\Big (\\rho , \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} \\Big ) \\Big |\\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\lesssim 1.$ When $x^{\\prime } \\in E_x$ , i.e., $|x|> 2|x-x^{\\prime }|$ , we once again decompose $K_\\alpha =K^1_\\alpha +K^2_\\alpha $ as in the proof of Proposition REF .", "For $K^2_{\\alpha }(z, z^{\\prime })$ term.", "Since $|x|>2|x-x^{\\prime }|$ implies $|x|<|x+x^{\\prime }|$ , we have $&\\quad |x|^{2\\alpha }\\int _{\\mathbb {R}} \\int _{E_x} |x|^{2\\alpha }\\textup {e}^{-\\frac{1}{8}|x+x^{\\prime }|^2} \\textup {e}^{-\\frac{1}{4}|x-x^{\\prime }|^2} \\textup {e}^{-(\\rho -\\rho ^{\\prime })^2} \\mathop {}\\!\\mathrm {d}x^{\\prime }\\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\le \\int _{\\mathbb {R}} \\int _{\\mathbb {R}^d} \\underbrace{|x|^{2\\alpha }\\textup {e}^{-\\frac{1}{8}|x|^2}}_{\\lesssim \\; 1} \\textup {e}^{-\\frac{1}{4}|x-x^{\\prime }|^2} \\textup {e}^{-(\\rho -\\rho ^{\\prime })^2} \\mathop {}\\!\\mathrm {d}x^{\\prime }\\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\lesssim 1.$ As for the $K_\\alpha ^1 (z, z^{\\prime })$ term, we have $&\\int _{\\mathbb {R}} \\int _{E_x} |x|^{2\\alpha } \\int _0^1 t^{\\alpha -\\frac{d+3}{2}} \\textup {e}^{-\\frac{1}{8 t}|x-x^{\\prime }|^2-\\frac{1}{4}t|x|^2 -\\frac{(\\rho -\\rho ^{\\prime })^2}{4t}} \\mathop {}\\!\\mathrm {d}t \\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\le |x| \\int _{\\mathbb {R}} \\int _{0}^{|x|^2} \\int _0^{|x|^2} u^{\\alpha -\\frac{d+3}{2}} \\textup {e}^{-\\frac{|x|^2}{8 u}(\\rho -\\rho ^{\\prime })^2} \\textup {e}^{-\\frac{r^2}{4 u} +\\frac{1}{4}u} \\mathop {}\\!\\mathrm {d}u r^d \\mathop {}\\!\\mathrm {d}r\\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }\\\\&\\le |x| \\int _{0}^{|x|^2} \\int _0^{|x|^2} u^{\\alpha -\\frac{d+3}{2}}\\underbrace{ \\int _{\\mathbb {R}} \\textup {e}^{-\\frac{|x|^2}{8 u}(\\rho -\\rho ^{\\prime })^2} \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime }}_{\\lesssim \\; u^{1/2}|x|^{-1}}\\textup {e}^{-\\frac{r^2}{4 u} +\\frac{1}{4}u} \\mathop {}\\!\\mathrm {d}u r^d \\mathop {}\\!\\mathrm {d}r\\\\&\\lesssim \\int _{0}^{|x|^2} \\int _0^{|x|^2} u^{\\alpha -\\frac{d+2}{2}} \\textup {e}^{-\\frac{r^2}{4 u} +\\frac{1}{4}u} \\mathop {}\\!\\mathrm {d}u \\, r^d \\mathop {}\\!\\mathrm {d}r <\\infty .$ This completes the proof of (REF ).", "To prove (REF ), we similarly partition $\\mathbb {R}^{d}=E_{x^{\\prime }}\\cup E_{x^{\\prime }}^c$ , with $E_{x^{\\prime }}=\\lbrace x\\in \\mathbb {R}^d: |x^{\\prime }|\\ge 2|x-x^{\\prime }|\\rbrace ,$ and write $\\int _{\\mathbb {R}^{d+1}} |x|^{2\\alpha }{K_{\\alpha }}(z, z^{\\prime }) \\mathop {}\\!\\mathrm {d}z=\\bigg (\\int _{\\mathbb {R}} \\int _{E_{x^{\\prime }}}+\\int _{\\mathbb {R}} \\int 
_{E_{x^{\\prime }}^c}\\bigg )|x|^{2\\alpha }{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho .$ When $x\\in E_{x^{\\prime }}$ , i.e., $|x^{\\prime }| \\ge 2|x-x^{\\prime }|$ , we have $|x| \\le \\frac{3}{2}|x^{\\prime }|$ .", "Since the kernel is symmetric in $z$ and $z^{\\prime }$ , that is, ${K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })={K_{\\alpha }}(\\rho ^{\\prime }, x^{\\prime }, \\rho , x)$ , we have $\\int _{\\mathbb {R}} \\int _{E_{x^{\\prime }}}|x|^{2\\alpha }{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\lesssim |x^{\\prime }|^{2\\alpha } \\int _{\\mathbb {R}} \\int _{|x^{\\prime }|>2|x-x^{\\prime }|} |{K_{\\alpha }}(\\rho ^{\\prime }, x^{\\prime }, \\rho , x) |\\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho .$ By (REF ), the right hand side of the above inequality is finite.", "When $x\\in E^c_{x^{\\prime }}$ , we further split the integral domain into two cases.", "When $x\\in E^c_{x^{\\prime }}$ and $|x-x^{\\prime }|<1$ , it follows that $|x|\\le |x^{\\prime }|+|x-x^{\\prime }| \\le 4$ .", "Hence, $\\int _{\\mathbb {R}} \\int _{|x^{\\prime }|<2|x-x^{\\prime }|}|x|^{2\\alpha }|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })| \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho &\\lesssim \\int _{\\mathbb {R}} \\int _{|x^{\\prime }|<2|x-x^{\\prime }|}|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })| \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\\\&\\lesssim 1.$ When $x\\in E^c_{x^{\\prime }}$ and $|x-x^{\\prime }|>1$ , we have $&\\quad \\int _{\\mathbb {R}} \\int _{\\lbrace |x^{\\prime }|<2|x-x^{\\prime }|, \\ |x-x^{\\prime }|\\ge 1\\rbrace }|x|^{2\\alpha }|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime })| \\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\\\&\\lesssim \\int _{\\mathbb {R}} \\int _{|x-x^{\\prime }|\\ge 1}|x-x^{\\prime }|^{2\\alpha }|{K_{\\alpha }}(\\rho , x, \\rho ^{\\prime }, x^{\\prime }) |\\mathop {}\\!\\mathrm {d}x \\mathop {}\\!\\mathrm {d}\\rho \\\\&\\lesssim \\int _{\\mathbb {R}^{d+1}}\\underbrace{ |x-x^{\\prime }|^{2\\alpha } \\textup {e}^{-\\frac{1}{16}|x-x^{\\prime }|^2}}_{\\lesssim \\; 1}\\Big |{K_{\\alpha }}\\Big ({\\rho }, \\frac{x}{\\sqrt{2}}, \\rho ^{\\prime }, \\frac{x^{\\prime }}{\\sqrt{2}} \\Big ) \\Big |\\mathop {}\\!\\mathrm {d}x^{\\prime } \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime } \\\\&\\lesssim 1.$ This proves (REF ) and completes the proof." 
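, "For orientation, we record an illustrative worked example; it is not part of the proofs above and serves only as a consistency check on the explicit kernel used throughout this section.", "When $d=1$ , the identity $\\coth 2t-\\tanh t=\\frac{1}{\\sinh 2t}$ shows that the heat kernel (REF ) factorizes as the product of the Gauss–Weierstrass kernel in $\\rho $ and the Mehler kernel in $x$ , namely $E(t,z,z^{\\prime })=\\frac{1}{\\sqrt{4\\pi t}}\\textup {e}^{-\\frac{(\\rho -\\rho ^{\\prime })^2}{4t}}\\cdot \\frac{1}{\\sqrt{2\\pi \\sinh 2t}}\\exp \\Big (-\\frac{\\coth 2t}{2}(x^2+x^{\\prime 2})+\\frac{x x^{\\prime }}{\\sinh 2t}\\Big ),$ since expanding $B(t,z,z^{\\prime })$ gives $\\frac{1}{4}(2\\coth 2t-\\tanh t)(x-x^{\\prime })^2+\\frac{\\tanh t}{4}(x+x^{\\prime })^2=\\frac{\\coth 2t}{2}(x^2+x^{\\prime 2})-\\frac{x x^{\\prime }}{\\sinh 2t}$ and $2^{-3/2}\\pi ^{-1}t^{-1/2}(\\sinh 2t)^{-1/2}=(4\\pi t)^{-1/2}(2\\pi \\sinh 2t)^{-1/2}$ .", "In particular, as $t\\rightarrow 0$ this kernel behaves like the Euclidean heat kernel on $\\mathbb {R}^{2}$ , which is the mechanism behind the local estimates for $K_{\\alpha }^1$ , $M_{\\alpha }$ and $N_{\\alpha }^1$ above."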
], [ "Symbols for fractional powers $H_{\\textup {par}}^{\\alpha }$", "Using the heat kernel representation of the fractional powers operator $H_{\\textup {par}}^{\\alpha }$ in Subsection REF , we firstly calculate the symbol of the heat semigroup $\\textup {e}^{-t H_{\\textup {par}}}$ .", "Together (REF ), the fact that $\\widehat{\\Phi }_{\\mu }=(-i)^{|\\mu |} \\Phi _{\\mu }$ , with Plancherel's theorem, we have $&\\textup {e}^{-t H_{\\textup {par}}}f(\\rho , x)\\\\&=\\sum _{\\mu } \\frac{1}{\\sqrt{2\\pi }}\\int _{\\mathbb {R}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{-\\tau ^2 t} \\textup {e}^{-(2|\\mu |+d)t} \\textup {e}^{-\\frac{\\pi i}{4}2|\\mu |}(\\mathcal {F}_{\\rho , x}f(\\tau , \\xi ), \\Phi _{\\mu }(\\xi ))\\Phi _{\\mu }(x) \\mathop {}\\!\\mathrm {d}\\tau \\\\&=\\frac{1}{(2\\pi )^{(d+1)/2}}\\int _{\\mathbb {R}^{d+1}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{i\\xi \\cdot x} p_t(\\rho , x, \\tau , \\xi ) \\mathcal {F}_{\\rho , x}f(\\tau , \\xi ) \\mathop {}\\!\\mathrm {d}\\tau \\mathop {}\\!\\mathrm {d}\\xi ,$ where $p_t(\\rho , x, \\tau , \\xi )=\\sum _{\\mu } \\textup {e}^{-i \\xi \\cdot x}\\textup {e}^{-\\tau ^2 t} \\textup {e}^{-(2|\\mu |+d)t} \\textup {e}^{-\\frac{\\pi i}{4}2|\\mu |} \\Phi _{\\mu }(\\xi ) \\Phi _{\\mu }(x).$ In view of Mehler's formula (REF ), we have $p_t(\\rho , x, \\tau , \\xi )= c_d (\\cosh 2t )^{-\\frac{d}{2}} \\textup {e}^{-b(t, x, \\tau , \\xi )},$ where $c_d={(2\\pi )^{-d/2}}$ and $b(t, x, \\tau , \\xi ) &=\\frac{1}{2} (|x|^2+|\\xi |^2)\\tanh 2t +2i x \\cdot \\xi \\operatorname{sech}2t (\\sinh t)^2 +t \\tau ^2.$ Now we have Lemma 3.6 Let $\\alpha \\in \\mathbb {R}$ .", "The symbol $\\sigma _{\\alpha }(\\rho , x, \\tau , \\xi )$ of the operator $H_{\\textup {par}}^{\\alpha }$ belongs to the symbol class $G^{2\\alpha }$ defined by Definition REF .", "From the explicit formula of $p_t$ , we know that $\\sigma _\\alpha $ does not depend on $\\rho $ .", "Thus, the estimates for $\\sigma _\\alpha $ do not depend on $\\rho $ and its derivatives on $\\rho $ are zero.", "For brevity, we will write $\\sigma _{\\alpha }( x, \\tau , \\xi )$ instead of $\\sigma _{\\alpha }(\\rho , x, \\tau , \\xi )$ , and similarly for other terms which are independent of $\\rho $ .", "The case $\\alpha =0$ is obvious.", "For the case $\\alpha <0$ .", "Using the equality of (REF ), we have $\\sigma _{\\alpha }( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _0^\\infty t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t.$ We split $ \\sigma _{\\alpha }( x, \\tau , \\xi )$ into two parts as follows.", "$\\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _0^1 t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t,\\\\\\sigma _{\\alpha }^2( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _1^\\infty t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t.$ To estimate $\\sigma _{\\alpha }^1$ , on the one hand, by the lower bound estimate for the real part of $b$ as $t\\in (0, 1)$ , $\\operatorname{Re}b(t, x, \\tau , \\xi ) \\ge c t (|x|^2+|\\xi |^2+\\tau ^2),$ we have $|\\sigma _{\\alpha }^1( x, \\tau , \\xi )|\\le \\; \\frac{1}{ \\Gamma (-\\alpha )} \\int _0^1 t^{-\\alpha -1} e^{- c(|x|^2+|\\xi |^2+\\tau ^2) t} \\mathop {}\\!\\mathrm {d}t\\;&\\le (1+|x|^2+|\\xi |^2+\\tau ^2)^\\alpha .$ On the other hand, by (REF ), the derivatives of $b(t, x, \\tau , \\xi )$ satisfy $\\partial _{x_j} b(t, x, \\tau , \\xi )&=x_j \\tanh 2t +2i \\xi _j\\operatorname{sech}2t (\\sinh t)^2 \\sim x_j t + \\xi 
_jt^2,\\\\\\partial _{x_j} ^2b(t, x, \\tau , \\xi )&= \\tanh 2t \\sim t,\\\\\\partial _{\\xi _j} b(t, x, \\tau , \\xi )&= \\xi _j \\tanh 2t +2i x_j\\operatorname{sech}2t (\\sinh t)^2 \\sim \\xi _jt + x_jt^2,\\\\\\partial _{\\xi _j} ^2b(t, x, \\tau , \\xi )&= \\tanh 2t \\sim t,\\\\\\partial _{\\tau }b(t, x, \\tau , \\xi )&= 2\\tau t, \\quad \\partial _{\\tau }^2b(t, x, \\tau , \\xi )= 2 t,$ as $t\\rightarrow 0$ .", "In conclusion, we obtain with the shorthand $X=(x,\\tau ,\\xi )$ that $|\\partial _X^\\beta b(t, x, \\tau , \\xi ) | \\lesssim {\\left\\lbrace \\begin{array}{ll}|X|^{2-|\\beta |}t, &|\\beta | \\le 2, \\\\0, & |\\beta | \\ge 3.\\end{array}\\right.", "}$ By using Faà di Bruno's formula, we obtain $\\partial _X^{\\beta } \\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{1}{ \\Gamma (-\\alpha )} \\int _0^1 t^{-\\alpha -1} p_t( x, \\tau , \\xi ) \\prod _{\\begin{array}{c}1\\le j \\le |\\beta | \\\\\\sum _j|\\beta _j|n_j=|\\beta |\\end{array}} ( \\partial _{X}^{\\beta _j}b)^{n_j}\\mathop {}\\!\\mathrm {d}t.$ Hence, for any $|\\beta | \\ge 1$ , we get $|\\partial _X^{\\beta } \\sigma _{\\alpha }^1( x, \\tau , \\xi )|&\\lesssim \\int _0^1 t^{-\\alpha -1} e^{-c t |X|^2} \\prod _{j=1}^{|\\beta |}t^{n_j} \\mathop {}\\!\\mathrm {d}t \\prod _{\\begin{array}{c}1\\le j \\le |\\beta | \\\\\\sum _j|\\beta _j|n_j=|\\beta |\\end{array}} |X|^{2n_j-|\\beta _j|n_j}\\\\&\\lesssim \\langle X\\rangle ^{2\\alpha -|\\beta |}.$ That is, $\\sigma _{\\alpha }^1 \\in G^{2\\alpha }$ for all $\\alpha <0$ .", "To estimate $\\sigma _{\\alpha }^2$ , since $\\cosh t \\sim e^{t}$ , and $\\tanh t \\ge t$ as $t\\rightarrow \\infty $ , we have $\\operatorname{Re}b(t, x, \\tau , \\xi ) \\ge ct |X|^2,$ which implies that $| \\sigma _{\\alpha }^2( x, \\tau , \\xi )| \\lesssim \\int _1^\\infty t^{-\\alpha -1} e^{-td } e^{-c |X|^2} \\mathop {}\\!\\mathrm {d}t\\lesssim e^{-|X|^2}.$ When we take partial derivatives in $X=(x,\\tau ,\\xi )$ , we only change the degree of the polynomials in $X$ .", "The dominating term is still $e^{-|X|^2}.$ Hence, we have $\\sigma _\\alpha ^2 \\in G^{\\alpha }$ .", "For the case $\\alpha >0$ .", "By (REF ), we obtain the symbol of the operator $H_{\\textup {par}}^\\alpha $ , $\\sigma _{\\alpha }( x, \\tau , \\xi )=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{0}^\\infty t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} p_t( x, \\tau , \\xi )\\mathop {}\\!\\mathrm {d}t.$ Arguing as in the case $\\alpha <0$ , we split it into two parts $\\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{0}^1t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} p_t( x, \\tau , \\xi )\\mathop {}\\!\\mathrm {d}t,\\\\\\sigma _{\\alpha }^1( x, \\tau , \\xi )=\\frac{(-1)^N}{\\Gamma (N-\\alpha )}\\int _{1}^\\infty t^{N-\\alpha -1} \\frac{\\mathop {}\\!\\mathrm {d}^N }{\\mathop {}\\!\\mathrm {d}t^N} p_t( x, \\tau , \\xi )\\mathop {}\\!\\mathrm {d}t.$ Note that $&\\quad \\partial _t b (t, x, \\tau , \\xi )\\\\&= 2 (|x|^2+|\\xi |^2) \\operatorname{sech}^2 2t+\\tau ^2+ 2ix\\cdot \\xi [ -2 \\operatorname{sech}^2 2t \\sinh 2t \\sinh ^2 t+\\tanh 2t] \\\\&= 2 (|x|^2+|\\xi |^2) \\operatorname{sech}^2 2t+\\tau ^2 + 2ix\\cdot \\xi \\operatorname{sech}^2 2t \\sinh 2t.$ Thus, for $t\\in (0,1)$ we have $|\\partial _t (\\cosh 2t )^{-d/2}| \\lesssim 1, \\quad |\\partial _t b (t, x, \\tau , \\xi )| \\lesssim |X|^2,$ and $|\\partial _t^N (\\cosh 2t )^{-d/2}|&\\lesssim 1, \\quad |\\partial _t ^N b (t, x, \\tau , \\xi ) | \\lesssim |X|^{2N}.$ 
To compute the derivative estimates of $p_t( x, \\tau , \\xi )$ in the term $ \\sigma _{\\alpha }^1( x, \\tau , \\xi )$ , we use the above estimates, Leibniz rule and Faà di Bruno's formula, and obtain that $| \\sigma _{\\alpha }^1( x, \\tau , \\xi )| &\\lesssim \\int _{0}^1 t^{N-\\alpha -1} e^{-c|X|^2} |X|^{2N}\\mathop {}\\!\\mathrm {d}t\\lesssim \\langle X\\rangle ^{2\\alpha }.$ For the derivative estimates, we have $|\\partial _X^{\\beta } \\sigma _{\\alpha }^1( x, \\tau , \\xi )|&\\lesssim \\int _0^1 t^{N-\\alpha -1} e^{-c t |X|^2} \\prod _{\\begin{array}{c}1\\le j \\le |\\beta | \\\\\\sum _j|\\beta _j|n_j=|\\beta |\\end{array}}|X|^{2N} |X|^{2n_j-|\\beta _j|n_j} \\\\&\\lesssim \\langle X\\rangle ^{2\\alpha -|\\beta |}.$ For the term $\\sigma _{\\alpha }^2$ , the exponential decay can be obtained as the case $\\alpha <0$ .", "Summing up, we have $\\sigma _\\alpha \\in G^{2\\alpha }$ for all $\\alpha \\in \\mathbb {R}$ , and this concludes the proof." ], [ "Riesz transforms and symbols", "The $j$ th Riesz transforms associated with the operator $ H_{\\textup {par}}$ are defined as $R_j= A_jH_{\\textup {par}}^{-1/2}, \\quad -d\\le j\\le d.$ In general, for any $m\\in \\mathbb {N}$ and $\\mathbf {j}=(j_{1},\\dots ,j_m)$ , $-d \\le j_l\\le d$ , the $\\mathbf {j}$ th Riesz transform of order $m$ is the operator $R_{\\mathbf {j}} = R_{j_1,\\dots ,j_m} &=A_{j_1}A_{j_2}\\dots A_{j_m} H_{\\textup {par}}^{-m/2}\\nonumber \\\\&=P_m(\\partial _\\rho , \\partial _{x}, x) H_{\\textup {par}}^{-m/2},$ where $P_m$ is a polynomial of degree $m$ .", "In this subsection, we will prove that the Riesz transforms defined by (REF ) and (REF ) are bounded on classical Sobolev spaces by verifying that their symbols belong to the symbol class $S^0_{1,0}$ .", "There are two ways to show this.", "The most obvious way would be a direct calculation by the heat semigroup.", "The symbol of the operator $A_0 \\textup {e}^{-t H_{\\textup {par}}}$ is $-i\\tau p_t( x, \\tau , \\xi )$ , so the symbol of Riesz transform $R_0$ is $ \\sigma _{R_0}(\\rho , x, \\tau , \\xi )=\\frac{-1}{\\sqrt{\\pi }} \\int _0^\\infty t^{-\\frac{1}{2}}i\\tau p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}t.$ For the symbol of the operator $A_j \\textup {e}^{-t H_{\\textup {par}}}$ , $1\\le j\\le d$ , we use the formula (REF ) to get $A_j \\textup {e}^{-t H_{\\textup {par}}} f(\\rho , x)=\\int _{\\mathbb {R}^{d+1}} \\textup {e}^{i\\tau \\rho } \\textup {e}^{i\\xi \\cdot x} p_t( x, \\tau , \\xi )(-i\\xi _j+x_j+\\partial _{x_j} b ) \\widehat{f}(\\tau , \\xi ) \\mathop {}\\!\\mathrm {d}\\tau \\mathop {}\\!\\mathrm {d}\\xi .$ This shows that the symbol of the operator $A_j \\textup {e}^{-t H_{\\textup {par}}}$ is $p_t( x, \\tau , \\xi )(-i\\xi _j+x_j+\\partial _{x_j} b(t,\\rho ,x,\\tau ,\\xi )).$ So, the symbols for Riesz transforms $R_j$ for $1\\le j \\le d$ are $ \\quad \\sigma _{R_j}(\\rho , x, \\tau , \\xi )=\\frac{-1}{\\sqrt{\\pi }} \\int _0^\\infty t^{-\\frac{1}{2}}(i\\xi _j-x_j+\\partial _{x_j} b) p_t( x, \\tau , \\xi ) \\mathop {}\\!\\mathrm {d}\\tau .$ Similar calculations give the symbols for Riesz transforms $R_j $ for $-d \\le j \\le -1$ .", "It is then possible to use the integral forms (REF ) and (REF ) in a lengthy calculation like in the proof of Lemma REF to show that they belong to the symbol class $S^0_{1,0}$ .", "The second, simpler and shorter way is to take advantage of the symbol calculus for compositions in $G^m$ , which we present below Proposition 3.7 The symbols $\\sigma _{R_j}$ of Riesz transforms $R_j$ for $0\\le |j 
|\\le d$ belongs to the symbol class $S^0_{1,0}$ , hence they are bounded on classical Sobolev spaces $W^{\\alpha ,p}(\\mathbb {R}^{d+1})$ for any $\\alpha \\in \\mathbb {R}$ and $1<p<\\infty $ .", "In addition, the same result holds for Riesz transforms $R_{\\mathbf {j}}$ of high order.", "The symbols of the operators $A_j$ are given by either $i\\tau $ or $\\pm i\\xi _j+x_j $ , which belong to class $G^{1}$ .", "From Proposition REF , the symbol of the operator $H_{\\textup {par}}^{-1/2}$ belongs to class $G^{-1}$ .", "By symbolic calculus in class $G^m$ (Proposition REF ), we obtain that symbols of Riesz transform $R_j$ for $0\\le |j |\\le d$ belong to class $S_{1,0}^{0}$ , which implies the boundedness of Riesz transform $R_j$ for $0\\le |j |\\le d$ on $W^{\\alpha , p}$ for all $1<p<\\infty $ , (see [22], [23]).", "For Riesz transform $R_{\\mathbf {j}}$ with high order, By Proposition REF and the fact that the symbols of the operators $A_{j_1}A_{j_2}\\dots A_{j_m} $ and $H_{\\textup {par}}^{-m/2}$ belongs to symbol classes $G^{m}$ and $G^{-m}$ respectively.", "Hence, the symbol of $R_{\\mathbf {j}}$ belongs to the symbol the symbol class $S_{1,0}^0$ , which implies the result and completes the proof." ], [ "Sobolev spaces associated to the partial harmonic oscillator", "Given any $p\\in [1, \\infty )$ , and $\\alpha >0$ , we define the potential spaces associated to $ H_{\\textup {par}}$ by $L_{ H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1}) = H_{\\textup {par}}^{-\\alpha /2} (L^p(\\mathbb {R}^{d+1})),$ with the norm $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{\\alpha , p}} =\\Vert g\\Vert _{L^p(\\mathbb {R}^{d+1})},$ where $g\\in L^p(\\mathbb {R}^{d+1})$ satisfies $H_{\\textup {par}}^{-\\alpha /2}g=f$ .", "Remark 4.1 The norm is well defined since $ H_{\\textup {par}}^{-\\alpha /2}$ is one-to-one and bounded in $L^p(\\mathbb {R}^{d+1})$ .", "Also, $C_0^\\infty (\\mathbb {R}^{d+1}) $ is dense in $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1}) $ .", "For any nonnegative integer $k\\ge 0$ , we can also define the Sobolev spaces associated to $ H_{\\textup {par}}$ by the differential operators $A_j$ as follows: $W_{H_{\\textup {par}}}^{k, p}=\\left\\lbrace f\\in L^p(\\mathbb {R}^{d+1}) | \\begin{array}{c} \\displaystyle A_{j_1}A_{j_2}\\dots A_{j_m} f \\in L^p(\\mathbb {R}^{d+1}), \\\\[0.5em]\\displaystyle \\text{for any}\\;\\;1\\le m \\le k, \\ 0\\le |j_1|, \\dots , |j_m| \\le d\\end{array}\\right\\rbrace ,$ with the norm $\\Vert f\\Vert _{W_{H_{\\textup {par}}}^{k, p}}=\\sum _{m=1}^k\\Bigg (\\sum _{j_1=-d}^d\\dots \\sum _{j_m=-d}^d\\Vert A_{j_1}A_{j_2}\\dots A_{j_m} f\\Vert _{L^p}\\Bigg )+\\Vert f\\Vert _{L^p}.$ Theorem 4.2 Let $k \\in \\mathbb {N}$ and $p\\in (1, \\infty )$ .", "Then we have $W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})= L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ with equivalence of norms.", "We firstly prove that $ L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ .", "For any function $f\\in L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ , there exists a function $ g\\in L^{p}(\\mathbb {R}^{d+1})$ , such that $f=H_{\\textup {par}}^{-k/2} g$ .", "Hence, by the $L^p$ boundedness of Riesz transforms in Proposition REF , we have $\\Vert A_{j_1}A_{j_2}\\dots A_{j_m} f \\Vert _p = \\Vert A_{j_1}A_{j_2}\\dots A_{j_m} H_{\\textup {par}}^{-k/2} g\\Vert _{p}\\lesssim \\Vert g\\Vert _{p} \\le C \\Vert f\\Vert _{L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})},$ which implies that $ 
L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ .", "Next, we show that $ W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ by induction.", "First, it is easy to check that for any $f, g\\in C_0^\\infty (\\mathbb {R}^{d+1})$ , we have $\\int _{\\mathbb {R}^{d+1}} f g =2 \\int _{\\mathbb {R}^{d+1}} \\sum _{-d\\le j\\le d} R_j f R_j g.$ Thus, by duality and the boundedness of Riesz transform, we obtain for any $g\\in L^p(\\mathbb {R}^d)$ that $\\Vert g\\Vert _p \\lesssim \\sum _{-d\\le j\\le d} \\Vert R_j g\\Vert _p.$ Hence, we obtain by choosing $g=H_{\\textup {par}}^{1/2}f$ that $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{1, p}(\\mathbb {R}^{d+1})} =\\Vert H_{\\textup {par}}^{1/2}f\\Vert _p\\lesssim \\sum _{-d\\le j\\le d} \\Vert A_j f\\Vert _p \\lesssim \\Vert f\\Vert _{W_{H_{\\textup {par}}}^{1,p}(\\mathbb {R}^{d+1})}.$ That is, $ W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ for $k=1$ .", "Suppose that for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ and any $1\\le m<k$ , we have $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{m, p}(\\mathbb {R}^{d+1})}\\le \\Vert f\\Vert _{W^{m,p}_{H_{par}}(\\mathbb {R}^{d+1})}.$ It follows by duality that $\\Vert f\\Vert _{L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})}&=\\Vert H_{\\textup {par}}^{k/2}f\\Vert _{L^p(\\mathbb {R}^{d+1})} =\\sup _{g\\in C_0^\\infty ; \\Vert g\\Vert _{p^{\\prime }}=1} \\int _{\\mathbb {R}^{d+1}}H_{\\textup {par}}^{k/2}f g\\mathop {}\\!\\mathrm {d}z\\\\&=\\sup _{g\\in C_0^\\infty : \\Vert g\\Vert _{p^{\\prime }}=1} \\int _{\\mathbb {R}^{d+1}}H_{\\textup {par}}^{k}f H_{\\textup {par}}^{-k/2}g\\mathop {}\\!\\mathrm {d}z.$ Since there exist constants $c_1, c_2, \\dots , c_{k-1}$ such that $\\sum _{0\\le |j_1|, \\dots , |j_k| \\le d} A_{j_k}^*\\dots A_{j_1}^* A_{j_1}\\dots A_{j_k} = 2^k H_{\\textup {par}}^{k}+\\sum _{m=1}^{k-1} c_m H_{\\textup {par}}^m,$ we obtain that $&\\quad 2^k \\int _{\\mathbb {R}^{d+1}}H_{\\textup {par}}^{k}f H_{\\textup {par}}^{-k/2}g\\mathop {}\\!\\mathrm {d}z\\\\&= \\int _{\\mathbb {R}^{d+1}} \\sum _{0\\le |j_1|, \\dots , |j_k| \\le d}\\left( A_{j_k}^*\\dots A_{j_1}^* A_{j_1}\\dots A_{j_k}-\\sum _{m=1}^{k-1} c_m H_{\\textup {par}}^m\\right) f H_{\\textup {par}}^{-k/2} g\\\\&=\\!\\sum _{0\\le |j_1|, \\dots , |j_k| \\le d} \\int _{\\mathbb {R}^{d+1}}\\!\\!A_{j_1}\\dots A_{j_k} f R_{j_1}\\dots R_{j_k} g-\\sum _{m=1}^{k-1} c_m \\int _{\\mathbb {R}^{d+1}} H_{\\textup {par}}^{m/2} fH_{\\textup {par}}^{-\\frac{k-m}{2}} g\\\\&\\le \\!\\sum _{0\\le |j_1|, \\dots , |j_k| \\le d} \\!", "\\Vert A_{j_1}\\dots A_{j_k} f\\Vert _{p} \\Vert R_{j_1\\dots j_k} g\\Vert _{p^{\\prime }} +\\sum _{m=1}^{k-1} |c_m|\\Vert H_{\\textup {par}}^{m/2 } f\\Vert _p \\Vert H_{\\textup {par}}^{-\\frac{k-m}{2}} g\\Vert _{p^{\\prime }}\\\\&\\le C \\Vert f\\Vert _{W_{H_{\\textup {par}}}^{k,p}(\\mathbb {R}^{d+1})},$ where we used (REF ) and the boundedness of Riesz transforms in the last inequality.", "This completes the proof that $ W_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})\\subset L_{H_{\\textup {par}}}^{k, p}(\\mathbb {R}^{d+1})$ for $k\\ge 1$ .", "Proposition 4.3 Let $p \\in (1, \\infty ) $ .", "The Riesz transforms $R_j$ $(-d\\le j\\le d)$ are bounded on the space $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ .", "By the definition of $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ , it suffices to show for $0\\le |j |\\le d $ 
that the operators $T_j&= H_{\\textup {par}}^{-\\alpha /2}A_j H_{\\textup {par}}^{-1/2} H_{\\textup {par}}^{\\alpha /2}$ are bounded on $L^p(\\mathbb {R}^{d+1})$ .", "By Proposition REF , Lemma REF and the fact that the symbol of the operators $A_j$ belongs to the symbol class $G^1$ , the symbols of the operator $T_j$ belong to the symbol class $S^0_{1,0}$ .", "Hence, the operators $T_j$ are bounded on $L^p(\\mathbb {R}^{d+1})$ for $1<p<\\infty $ (see [22], [23]), which proves the result.", "A direct consequence of Proposition REF is that any function in $L_{H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ enjoys some decay in the $x$ direction.", "Corollary 4.4 If $p\\in [1, \\infty )$ , $\\alpha >0$ and $f\\in L_{ H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1})$ , then $|x|^\\alpha f $ belongs to $L^{p}(\\mathbb {R}^{d+1})$ .", "Next, we show the relations between space $L_{H_{\\textup {par}}}^{\\alpha ,p}$ and spaces $W^{\\alpha , p}(\\mathbb {R}^{d+1})$ , $L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1}) $ adapted to the Laplacian and Hermite operators, respectively.", "Theorem 4.5 Let $\\alpha >0$ and $p\\in (1, \\infty ),$ then $L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1}) L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1}) W^{\\alpha , p}(\\mathbb {R}^{d+1})$ .", "If $ f\\in W^{\\alpha , p}(\\mathbb {R}^{d+1})$ and has compact support, then $f\\in L_{ H_{\\textup {par}}}^{\\alpha , p}(\\mathbb {R}^{d+1}) $ .", "We firstly show the inclusion in $(1)$ .", "It suffices to verify that the symbols of $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} H_{\\textup {par}}^{-\\alpha /2},\\;\\; \\text{and} \\;\\; H_{\\textup {par}}^{\\alpha /2} H^{-\\alpha /2}$ belong to the symbol class $S_{1,0}^0$ and so they define bounded operators on $L^p(\\mathbb {R}^{d+1})$ , (see [22], [23]).", "For the former: since the symbol of the operator $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}$ belongs to class $S^\\alpha _{1,0}$ and the symbol of the operator $H_{\\textup {par}}^{-\\alpha /2} $ belongs to class $ G^{-\\alpha } $ due to Lemma REF and class $ S^{-\\alpha }_{1,0}$ , we obtain that the symbol of the operator $ (1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} H_{\\textup {par}}^{-\\alpha /2}$ belongs to $S_{1,0}^0$ .", "Therefore, the operator $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} H_{\\textup {par}}^{-\\alpha /2}$ is bounded on $L^p(\\mathbb {R}^{d+1})$ .", "For the later: it is known for the symbol $q_\\alpha (\\rho , x, \\tau , \\xi )$ of the operator $H^{-\\alpha /2}$ that $q_\\alpha \\in G^{-\\alpha }$ in [26] since $|D_z^\\beta D_{\\tau }^\\gamma D_{\\xi }^{\\delta }q_{\\alpha }(\\rho , x, \\tau , \\xi ) |&\\lesssim (1+|\\tau |+|\\xi |+|\\rho |+|x|) ^{-\\alpha -|\\beta |-|\\gamma |-|\\delta |}\\\\&\\lesssim (1+|\\tau |+|\\xi |+|x|) ^{-\\alpha -|\\beta |-|\\gamma |-|\\delta |}.$ As the symbol of the operator $H_{\\textup {par}}^\\alpha $ belong to class $ G^{\\alpha }$ , the symbol of the operator $H_{\\textup {par}}^{\\alpha /2} H^{-\\alpha /2}$ belongs to $S_{1,0}^0$ by Proposition REF , which implies that the operator $H_{\\textup {par}}^{\\alpha /2} H^{-\\alpha /2}$ is bounded on $L^p(\\mathbb {R}^{d+1})$ .", "Next, we show the nonequivalence between them.", "Define $g_1(\\rho , x)&=\\frac{1}{(1+\\rho )^{\\frac{1}{p}+\\alpha }}\\frac{1}{(1+|x|)^{\\frac{1}{p}+\\alpha }}, & \\quad f_1(\\rho , x)&=(I-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2} g_1(\\rho , x),\\\\g_2(\\rho , x)&=\\frac{1}{(1+\\rho )^{\\frac{1}{p}+\\alpha }} \\Phi _{\\mu }(x), & \\quad f_2(\\rho , x)&=H^{-\\alpha 
/2} g_2(\\rho , x).$ We claim that $f_1\\in W^{\\alpha , p}(\\mathbb {R}^{d+1})\\setminus L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1}),\\; \\; \\text{and} \\;\\;\\ f_2 \\in L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1})\\setminus L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1}).$ Let $G_\\alpha (z)$ be the kernel of the opertor $(I-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2}$ , that is, $G_\\alpha (z)=\\frac{1}{(4\\pi )^{\\alpha /2} \\Gamma (\\alpha /2)} \\int _0^\\infty \\textup {e}^{-\\frac{\\pi |z|^2}{t}-\\frac{t}{4\\pi }} t^{\\alpha /2-(d+1)/2} \\mathop {}\\!\\mathrm {d}t.$ On the one hand, it is easy to see that $g_1 \\in L^p(\\mathbb {R}^{d+1})$ , hence $f_1\\in W^{\\alpha , p}(\\mathbb {R}^{d+1})$ .", "On the other hand, since $ G_\\alpha (z)$ is positive, $f_1(\\rho , x) = \\int _{\\mathbb {R}^{d+1}} G_\\alpha (z^{\\prime } ) g_1(z-z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime }\\ge (2+|\\rho |)^{-1/p-\\alpha }(2+|x|)^{-1/p-\\alpha }\\int _{|z|<1} G_\\alpha (z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime },$ which implies that $|x|^\\alpha f_1 \\notin L^{p}(\\mathbb {R}^{d+1})$ , and therefore we have $f_1\\notin L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1})$ by Corollary REF .", "Similarly, using the fact that $g_2 \\in L^p(\\mathbb {R}^{d+1})$ holds, we have $f_2 \\in L^{\\alpha , p}_{H_{\\textup {par}}}(\\mathbb {R}^{d+1})$ .", "At the same time, we have $f_2(\\rho , x) \\ge (2+|\\rho |)^{-1/p-\\alpha } \\int _{|z|<1} \\Psi _\\alpha (z^{\\prime }) \\mathop {}\\!\\mathrm {d}\\rho ^{\\prime } \\mathop {}\\!\\mathrm {d}x^{\\prime },$ which implies that $|\\rho |^\\alpha f_2 \\notin L^{p}(\\mathbb {R}^{d+1})$ , and therefore we have $f_2\\notin L^{\\alpha , p}_{H}(\\mathbb {R}^{d+1})$ .", "Lastly, the statement (REF ) follows from (REF ) and [4]." ], [ "Some Integral Inequalities adapted to the operator $H_{\\textup {par}}$", "In this section, we obtain some integral inequalities associated to the partial harmonic oscillator $H_{\\textup {par}}$ ." ], [ "Hardy–Littlewood–Sobolev inequality. 
", "Let $0<\\alpha <d+1$ .", "By (REF ), we have $H_{\\textup {par}}^{-\\alpha /2} f(z)\\le C \\int _{\\mathbb {R}^{d+1}} \\frac{|f(z^{\\prime })|}{|z-z^{\\prime }|^{d+1-\\alpha }} \\mathop {}\\!\\mathrm {d}z^{\\prime },$ then we have Proposition 5.1 Let $p, q>1$ and $0<\\alpha <d+1$ with $\\frac{1}{q}=\\frac{1}{p}-\\frac{\\alpha }{d+1}$ , then the operator $H_{\\textup {par}}^{-\\alpha /2}$ is bounded from $L^p(\\mathbb {R}^{d+1})$ to $L^q(\\mathbb {R}^{d+1})$ .", "This is obvious from the rough estimate (REF ) and the classical Hardy–Littlewood–Sobolev inequality in [16].", "In fact, by (REF ), the integral kerel of the operator $H_{\\textup {par}}^{-\\alpha /2} f$ is controlled by an integral function which has exponential decay away from the zero, which implies the following refined estimates.", "Theorem 5.2 Let $0<\\alpha <d+1$ .", "Then the following holds: there exists a constant $C>0$ such that $\\Vert H_{\\textup {par}}^{-\\alpha /2} f\\Vert _{q} \\le C \\Vert f\\Vert _1,$ for all $f\\in L^1(\\mathbb {R}^{d+1})$ if and only if $1\\le q <\\frac{d+1}{d+1-\\alpha }$ .", "there exists a constant $C>0$ such that $\\Vert H_{\\textup {par}}^{-\\alpha /2} f\\Vert _{\\infty } \\le C \\Vert f\\Vert _p$ for all $f\\in L^p(\\mathbb {R}^{d+1})$ if and only if $p>\\frac{d+1}{\\alpha }$ .", "If $1<p<\\infty $ , $1<q<\\infty $ and $\\frac{1}{p}-\\frac{\\alpha }{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ , then there exists a constant $C>0$ such that $\\Vert H_{\\textup {par}}^{-\\alpha /2} f\\Vert _{q} \\le C \\Vert f\\Vert _p$ for all $f\\in L^{p}(\\mathbb {R}^{d+1})$ .", "For the case (REF ).", "By generalized Minkowski's inequality, we obtain for function $f\\in L^1(\\mathbb {R}^{d+1})$ that $\\int _{\\mathbb {R}^{d+1}} ( H_{\\textup {par}}^{-\\alpha /2} f) ^q (z) \\mathop {}\\!\\mathrm {d}z \\le \\left(\\int _{\\mathbb {R}^{d+1}}\\left( \\int _{\\mathbb {R}^{d+1}} K_{\\alpha /2}(z, z^{\\prime })^q \\mathop {}\\!\\mathrm {d}z \\right)^{1/q}| f(z^{\\prime })|\\mathop {}\\!\\mathrm {d}z^{\\prime }\\right)^{q}$ By (REF ) and (REF ), we have $\\int _{\\mathbb {R}^{d+1}} K_{\\alpha /2}(z, z^{\\prime })^q \\mathop {}\\!\\mathrm {d}z\\lesssim \\int _{D_{-}} \\frac{\\mathop {}\\!\\mathrm {d}z}{|z-z^{\\prime }|^{q(d+1-\\alpha )}}+\\int _{D_{+}} \\textup {e}^{-q|z-z^{\\prime }|^2/{16}} \\mathop {}\\!\\mathrm {d}z,$ where $ D_{-}=\\lbrace |z-z^{\\prime }|<1\\rbrace $ and $D_{+}=\\lbrace |z-z^{\\prime }|\\ge 1\\rbrace $ .", "The right hand side in the above estimate is finite if $1\\le q <\\frac{d+1}{d+1-\\alpha }$ .", "For the converse, note that as $|z-z^{\\prime }|<1$ , we have $K_{\\alpha /2}(z, z^{\\prime }) \\ge C_\\alpha \\textup {e}^{-|x+x^{\\prime }|^2} \\int _0^1 t^{\\alpha -1-\\frac{d+1}{2}} \\textup {e}^{- |z-z^{\\prime }|^2/t}\\mathop {}\\!\\mathrm {d}t \\ge C_\\alpha \\frac{\\textup {e}^{-|x+x^{\\prime }|^2}}{|z-z^{\\prime }|^{d+1-\\alpha }}.$ Let $f_n$ be an approximation of identity.", "Then we have $\\int _{\\mathbb {R}^{d+1}} \\left(\\int _{\\mathbb {R}^{d+1}} \\frac{\\textup {e}^{-|x+x^{\\prime }|^2}}{|z-z^{\\prime }|^{d-\\alpha }} f_n(z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime }\\right)^q \\mathop {}\\!\\mathrm {d}z\\xrightarrow[n\\rightarrow \\infty ]{} \\int _{|z| \\le 1} \\frac{\\textup {e}^{-q|x|^2}}{|z|^{q(d+1-\\alpha )}} \\mathop {}\\!\\mathrm {d}z,$ which is $\\infty $ for $q(d+1-\\alpha ) \\ge d$ , and completes the proof of the necessity that $1\\le q\\le d/(d+1-\\alpha )$ .", "For the case $(2)$ .", "By Hölder's inequality, we get $\\left| \\int _{\\mathbb {R}^{d+1}} 
K_{\\alpha /2 }(z, z^{\\prime }) f(z^{\\prime }) \\mathop {}\\!\\mathrm {d}z^{\\prime }\\right| \\le \\Vert f\\Vert _{p} \\left(\\int _{\\mathbb {R}^{d+1}} K_{\\alpha /2 }(z, z^{\\prime })^{p^{\\prime }} \\mathop {}\\!\\mathrm {d}z^{\\prime }\\right)^{1/p^{\\prime }}.$ Using a similar argument as in the proof of the case $(1)$ , we know that the right hand side is finite when $p>\\frac{d+1}{\\alpha }$ .", "Conversely, by choosing $f(z)={\\left\\lbrace \\begin{array}{ll}|z|^{-\\alpha } (\\log \\frac{1}{|z|})^{-\\frac{\\alpha }{d+1}(1+\\varepsilon )}, & \\text{if } |z|\\le 1/2, \\\\0, & \\text{if } |z|\\ge 1/2,\\end{array}\\right.", "}$ then we have $f\\in L^p(\\mathbb {R}^{d+1})$ for all $p\\le (d+1)/\\alpha $ .", "However, the function $H_{\\textup {par}}^{-\\alpha /2}f$ is essentially unbounded since we have by (REF ) that $H_{\\textup {par}}^{-\\alpha /2} f(0) \\ge C\\int _{|z^{\\prime }|\\le 1/2} |z|^{-(d+1)}\\left(\\log \\frac{1}{|z^{\\prime }|}\\right)^{-\\frac{\\alpha }{d+1}(1+\\varepsilon )} \\mathop {}\\!\\mathrm {d}z^{\\prime } =\\infty ,$ where $ \\varepsilon $ is small.", "The case (REF ) now follows from Proposition REF , the inequality (REF ), the inequality (REF ) and the Riesz–Thorin interpolation theorem.", "Remark 5.3 By Corollary REF and the similar argument as in the above proof, we have the following result: Let $p, q>1$ and $0<\\alpha <d+1$ with $\\frac{1}{p}-\\frac{\\alpha }{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ , then there exists a constant $C$ such that for all $f\\in L^{p}(\\mathbb {R}^{d+1})$ , we have $\\Vert (H_{\\textup {par}}+2)^{-\\alpha /2} f\\Vert _{q} &\\le C \\Vert f\\Vert _p, \\\\\\Vert (H_{\\textup {par}}-2)^{-\\alpha /2} f\\Vert _{q} &\\le C \\Vert f\\Vert _p, \\ d\\ge 3.$" ], [ "Gagliardo–Nirenberg–Sobolev inequality", "We define the gradient operator associated to the operator $H_{\\textup {par}}$ as follows: $\\nabla _{H_{\\textup {par}}} f:=(A_0f, A_1 f, \\dots , A_d f, A_{-1} f, \\dots , A_{-d} f).$ Theorem 5.4 Let $d\\ge 3$ and $1<p, q <\\infty $ satisfy $\\frac{1}{p}-\\frac{1}{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ .", "Then for any $f\\in L_{H_{\\textup {par}}}^{1,p}(\\mathbb {R}^{d+1})$ , we have $\\Vert f\\Vert _q \\le C \\Vert \\nabla _{H_{\\textup {par}}} f\\Vert _p.$ By duality and (REF ), we have $\\Vert f\\Vert _{q} \\le \\sum _{-d\\le j\\le d} \\Vert R_j f\\Vert _{q}.$ By Lemma REF , we obtain $R_{0} f =H_{\\textup {par}}^{-1/2} A_{0} f$ and $R_j f=(H_{\\textup {par}}+2 \\operatorname{sgn}j)^{-1/2} A_j f, \\ \\qquad 1\\le |j|\\le d .$ Hence, by Remark REF and Theorem REF , we obtain for $\\frac{1}{p}-\\frac{1}{d+1} \\le \\frac{1}{q}<\\frac{1}{p}$ that $\\Vert f\\Vert _{q} &\\le \\sum _{-d\\le j\\le d} \\Vert R_{j} f\\Vert _{q}\\\\&\\le \\sum _{1\\le |j| \\le d} \\Vert (H_{\\textup {par}}+2\\operatorname{sgn}j)^{-1/2} A_j f\\Vert _{q}+ \\Vert H_{\\textup {par}}^{-1/2} A_{0} f\\Vert _{q}\\\\&\\le \\sum _{-d\\le j\\le d} \\Vert A_jf\\Vert _{p} = \\Vert \\nabla _{H_{\\textup {par}}} f\\Vert _p.$ This completes the proof.", "Remark 5.5 The classical Gagliardo–Nirenberg–Sobolev inequality in $\\mathbb {R}^{d+1}$ holds with $\\frac{1}{q} = \\frac{1}{p} - \\frac{\\alpha }{d+1}$ .", "The result here holds for a larger range of $(p, q)$ due to the extra decay property of functions $f\\in L_{H_{\\textup {par}}}^{1,p}(\\mathbb {R}^{d+1})$ ." 
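As a concrete arithmetic illustration of this enlarged range (our own example, using only the stated conditions): for $d=3$ and $p=2$ , the classical relation $\\frac{1}{q}=\\frac{1}{p}-\\frac{1}{d+1}$ forces $q=4$ , whereas Theorem 5.4 allows every $q$ with $\\frac{1}{4}\\le \\frac{1}{q}<\\frac{1}{2}$ , that is, every $q\\in (2,4]$ .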
], [ "Hardy's inequality ", "Recall that the Hardy's inequality is: $\\Vert |z|^{-\\alpha } f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\Vert (-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} f\\Vert _{L^p(\\mathbb {R}^{d+1})}, \\;\\; 0<\\alpha <\\frac{d+1}{p},$ see [28].", "We extend classical Hardy's inequality for the Laplacian operator $-\\Delta _{\\mathbb {R}^{d+1}}$ to the operator $H_{\\textup {par}}$ as follows: Theorem 5.6 Let $1<p<\\infty $ .", "Then, for any $f\\in C_0^\\infty (\\mathbb {R}^{d+1})$ , we have $\\Vert |z|^{-\\alpha } f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\Vert H_{\\textup {par}}^{\\alpha /2} f\\Vert _{L^p(\\mathbb {R}^{d+1})},\\ 0<\\alpha <\\frac{d+1}{p} .$ In particular, the inequality $\\Vert |z|^{-1} f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\Vert \\nabla _{H_{\\textup {par}}} f\\Vert _{L^p(\\mathbb {R}^{d+1})}$ holds when $1<p<d+1$ .", "Note that the symbols of $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}$ and $H_{\\textup {par}}^{-\\alpha /2}$ belong to $S_{1,0}^\\alpha $ and $S^{-\\alpha }_{1,0}$ respectively.", "The composition law gives that the symbol of $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}H_{\\textup {par}}^{-\\alpha /2} $ belongs to $S^{0}_{1,0}$ , which implies that $(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}H_{\\textup {par}}^{-\\alpha /2 }$ are bounded in $L^p(\\mathbb {R}^{d+1})$ for $1<p<\\infty $ , (see [22], [23]).", "In addition, $(-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}(1-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2}$ are bounded in $L^p(\\mathbb {R}^{d+1})$ for $1<p<\\infty $ from [20].", "Hence, by the inequality (REF ) we have $\\Vert |z|^{-\\alpha } f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})}&\\lesssim \\Vert (-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2} f\\Vert _{L^p(\\mathbb {R}^{d+1})}\\\\&\\lesssim \\Vert (-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}(1-\\Delta _{\\mathbb {R}^{d+1}})^{-\\alpha /2}(1-\\Delta _{\\mathbb {R}^{d+1}})^{\\alpha /2}H_{\\textup {par}}^{-\\alpha /2 }H_{\\textup {par}}^{\\alpha /2 }f\\Vert _{L^p(\\mathbb {R}^{d+1})}\\\\&\\lesssim \\Vert H_{\\textup {par}}^{\\alpha /2 }f\\Vert _{L^p(\\mathbb {R}^{d+1})}.$ For $\\alpha =1$ , using the first inequality in (REF ), we have $\\Vert |z|^{-1} f(z)\\Vert _{L^p(\\mathbb {R}^{d+1})}\\lesssim \\Vert H_{\\textup {par}}^{1/2} f\\Vert _{L^p(\\mathbb {R}^{d+1})} \\lesssim \\sum _{0<|j|\\le d} \\Vert A_jf \\Vert _{L^p(\\mathbb {R}^{d+1})}\\lesssim \\Vert \\nabla _{H_{\\textup {par}}} f\\Vert _{L^p(\\mathbb {R}^{d+1})}.$ Hence, we completes the proof." ] ]
2207.10461
[ [ "Emergence of molecular-type characteristic spectrum of hidden-charm\n pentaquark with strangeness embodied in the $P_{\\psi s}^\\Lambda(4338)$ and\n $P_{cs}(4459)$" ], [ "Abstract Inspired by the newly observed $P_{\\psi s}^\\Lambda(4338)$ and the reported $P_{cs}(4459)$, we indicate the existence of the molecular-type characteristic mass spectrum for hidden-charm pentaquark with strangeness.", "It shows that the $P_{cs}(4459)$ may contain two substructures corresponding to the $\\Xi_c\\bar{D}^*$ molecular states with $J^P=1/2^-$ and $3/2^-$, while there exists the corresponding $\\Xi_c \\bar{D}$ molecular state with $J^P=1/2^-$.", "As the prediction, we present another characteristic mass spectrum of the $\\Xi_c^\\prime\\bar{D}^{(*)}$ molecular states.", "Experimental confirmation of these characteristic mass spectra is a crucial step of constructing hadronic molecular family." ], [ "Introduction", "Since the birth of quark model [1], [2], the concept of exotic hadronic matter has been proposed, which has attracted extensive attention from both experimentalists and theorists.", "Especially, with the accumulation of experimental data, the observations of charmoniumlike $XYZ$ states and $P_c$ states in the past around twenty years make this issue become hot spot of hadron physics up till now [3], [4], [5], [6], [7], [11], [10], [13], [8], [9], [12].", "These studies are helpful to deepen our understanding of non-perturbative quantum chromodynamics.", "Among different exotic hadronic matters, the hadronic molecular state was popularly applied to explain these novel phenomena relevant to new hadronic states since most of observed hadronic states are close the thresholds of hadron pair.", "A typical example is the LHCb's discoveries of the $P_c(4312)$ , $P_c(4440)$ , and $P_c(4457)$ in the $J/\\psi p$ invariant mass spectrum of the $\\Lambda _b\\rightarrow J/\\psi p K$ [14] weak decay, which show a characteristic mass spectrum consistent with that of hidden-charm molecular pentaquark, which was predicted in Refs.", "[19], [21], [15], [16], [17], [18], [20].", "Finally, hadronic molecular state assignment to the $P_c$ states was established [3], [4], [5], [6], [7], [11], [10], [13], [8], [9], [12].", "Very recently, the LHCb Collaboration announced the observed of a $J/\\psi \\Lambda $ resonance in the $B^-\\rightarrow J/\\psi \\Lambda \\bar{p}$ process [22], which has resonance parameter $M=4338\\pm 0.7\\, \\mathrm {MeV},\\quad {\\mathrm {and}}\\quad \\Gamma =7.0\\pm 1.2\\, \\mathrm {MeV},\\nonumber $ and significance larger than $10\\sigma $ .", "Thus, this $J/\\psi \\Lambda $ resonance is refereed to be the $P_{\\psi s}^\\Lambda (4338)$ , which can be as the candidate of hidden-charm pentaquark with strangeness, as predicted by former theoretical studies [30], [24], [23], [25], [26], [27], [28], [32], [33], [29], [35], [34], [36], [37], [39], [40], [44], [46], [38], [45], [49], [47], [42], [48], [41], [43], [31].", "Before observing the $P_{\\psi s}^\\Lambda (4338)$ , LHCb once reported an evidence of the enhancement structure ($P_{cs}(4459)$ ) in the $J/\\psi \\Lambda $ invariant mass spectrum of $\\Xi _b^-\\rightarrow J/\\psi \\Lambda K$ [50], which is the candidate of hidden-charm pentaquark with strangeness [30], [24], [23], [25], [26], [27], [28], [32], [33], [29], [35], [34], [36], [37], [39], [40], [44], [46], [38], [45], [49], [47], [42], [48], [41], [43], [31].", "Figure: Comparison of the mass spectrum of three P c P_c states and that of the P ψs Λ (4338)P_{\\psi s}^\\Lambda 
(4338) and the $P_{cs}(4459)$ .", "Here, we also list the experimental data of the $P_{cs}(4459)$ from LHCb.", "In this work, we indicate that there exists a new characteristic mass spectrum that can be applied to identify molecular-type hidden-charm pentaquarks with strangeness, which is reflected by the mass behavior of the observed $P_{\\psi s}^\\Lambda (4338)$ and the reported $P_{cs}(4459)$ enhancement.", "As shown in Fig. REF , the gap between the $P_c(4312)$ mass and the average mass of the $P_c(4440)$ and $P_c(4457)$ is 137 MeV, which is similar to the mass gap (121 MeV) between the $P_{cs}(4459)$ and the $P_{\\psi s}^\\Lambda (4338)$ .", "It is obvious that SU(3) symmetry plays the key role in producing this similarity of the mass gaps.", "We also notice an interesting phenomenon.", "There exist clear correspondence relations among these states, i.e., the $P_{\\psi s}^\\Lambda (4338)$ corresponds to the $P_c(4312)$ , while the $P_{cs}(4459)$ structure should correspond to the $P_c(4440)$ and $P_c(4457)$ .", "This fact makes us conjecture that the $P_{cs}(4459)$ should contain two substructures.", "If checking the LHCb data of the $J/\\psi \\Lambda $ invariant mass spectrum of $\\Xi _b^-\\rightarrow J/\\psi \\Lambda K$ [50], we find that there exists the possibility of a double-peak structure slightly below the $\\Xi _c\\bar{D}^*$ threshold, as indicated in Fig. REF , which would just correspond to two $\\Xi _c\\bar{D}^*$ molecular states with $J^{P}=1/2^-$ and $3/2^-$ .", "Such a scenario can be tested in future experiments like LHCb.", "And then, the newly reported $P_{\\psi s}^\\Lambda (4338)$ near the $\\Xi _c \\bar{D}$ threshold is a good candidate for the $\\Xi _c\\bar{D}$ molecular state with $J^P=1/2^-$ .", "Inspired by the molecular-type characteristic mass spectrum of the three $P_c$ states established in 2019 [14] and the molecular-type characteristic mass spectrum of the $P_{\\psi s}^\\Lambda (4338)$ and the $P_{cs}(4459)$ enhancement structure found in this work, we further give a new characteristic mass spectrum of the $\\Xi _c^\\prime \\bar{D}^{(*)}$ molecular states, which will be accessible in experiments."
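The two mass gaps quoted above can be reproduced with a few lines of arithmetic. The sketch below uses approximate LHCb central values for the resonance masses as numerical input; these particular numbers are quoted here only for illustration and are not part of the analysis of this work.

```python
# Quick check of the mass-gap similarity discussed in the text (all masses in MeV).
# Inputs are approximate LHCb central values, used only for illustration.
m_Pc4312, m_Pc4440, m_Pc4457 = 4311.9, 4440.3, 4457.3
m_Pcs4459, m_Ppsis4338 = 4458.8, 4338.2

gap_Pc = (m_Pc4440 + m_Pc4457) / 2.0 - m_Pc4312   # gap for the P_c triplet
gap_Pcs = m_Pcs4459 - m_Ppsis4338                 # gap for the strange partners
print(f"P_c gap ~ {gap_Pc:.0f} MeV, P_cs gap ~ {gap_Pcs:.0f} MeV")  # ~137 vs ~121
```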
], [ "The obtained/predicted characteristic mass spectra of the $\\Xi _c\\bar{D}^{*}/\\Xi _c^\\prime \\bar{D}^{(*)}$ molecular pentaquarks", "For getting the information of the interaction of the $\\Xi _c^{(\\prime )}\\bar{D}^{(*)}$ systems, the one boson exchange (OBE) model is adopted, which is an effective way to study the hadron interactions [5], [10].", "In realistic calculation, the $S$ -$D$ wave mixing effect is taken into account.", "In general, there exists three typical steps for deducing the effective potentials of these discussed $\\Xi _c^{(\\prime )} \\bar{D}_s^{(*)}$ systems within the OBE model.", "As the first step, we should write out the scattering amplitudes $\\mathcal {M}(h_1h_2\\rightarrow h_3h_4)$ of the scattering process $h_1h_2\\rightarrow h_3h_4$ by considering the effective Lagrangian approach.", "The involved effective Lagrangians for depicting the heavy hadrons $\\mathcal {B}_c/\\bar{D}^{(*)}$ coupling with the light scalar, pesudoscalar, and vector mesons read as [51], [52], [53], [54], [55], [56], [57], [58] $\\mathcal {L}_{\\mathcal {B}_{\\bar{3}}\\mathcal {B}_{\\bar{3}}\\sigma } &=& l_B\\langle \\bar{\\mathcal {B}}_{\\bar{3}}\\sigma \\mathcal {B}_{\\bar{3}}\\rangle ,\\\\\\mathcal {L}_{\\mathcal {B}_{\\bar{3}}\\mathcal {B}_{\\bar{3}}\\mathbb {V}}&=&\\frac{1}{\\sqrt{2}}\\beta _Bg_V\\langle \\bar{\\mathcal {B}}_{\\bar{3}}v\\cdot \\mathbb {V}\\mathcal {B}_{\\bar{3}}\\rangle ,\\\\\\mathcal {L}_{\\mathcal {B}_{6}\\mathcal {B}_{6}\\sigma } &=&-l_S\\langle \\bar{\\mathcal {B}}_6\\sigma \\mathcal {B}_6\\rangle ,\\\\\\mathcal {L}_{\\mathcal {B}_6\\mathcal {B}_6\\mathbb {P}} &=&i\\frac{g_1}{2f_{\\pi }}\\varepsilon ^{\\mu \\nu \\lambda \\kappa }v_{\\kappa }\\langle \\bar{\\mathcal {B}}_6\\gamma _{\\mu }\\gamma _{\\lambda }\\partial _{\\nu }\\mathbb {P}\\mathcal {B}_6\\rangle ,\\\\\\mathcal {L}_{\\mathcal {B}_6\\mathcal {B}_6\\mathbb {V}}&=&-\\frac{\\beta _Sg_V}{\\sqrt{2}}\\langle \\bar{\\mathcal {B}}_6v\\cdot \\mathbb {V}\\mathcal {B}_6\\rangle \\nonumber \\\\&&-i\\frac{\\lambda _S g_V}{3\\sqrt{2}}\\langle \\bar{\\mathcal {B}}_6\\gamma _{\\mu }\\gamma _{\\nu }\\left(\\partial ^{\\mu }\\mathbb {V}^{\\nu }-\\partial ^{\\nu }\\mathbb {V}^{\\mu }\\right)\\mathcal {B}_6\\rangle ,\\\\\\mathcal {L}_{{\\bar{D}}^{*}{\\bar{D}}^{*}\\sigma } &=&2g_{\\sigma } {\\bar{D}}_{a\\mu }^* {\\bar{D}}_a^{*\\mu \\dag }\\sigma ,\\\\\\mathcal {L}_{{\\bar{D}}^{*}{\\bar{D}}^{*}\\mathbb {P}}&=&\\frac{2ig}{f_{\\pi }}v^{\\alpha }\\varepsilon _{\\alpha \\mu \\nu \\lambda }{\\bar{D}}_a^{*\\mu \\dag }{\\bar{D}}_b^{*\\lambda }\\partial ^{\\nu }{\\mathbb {P}}_{ab},\\\\\\mathcal {L}_{{\\bar{D}}^{*}{\\bar{D}}^{*}\\mathbb {V}}&=&-\\sqrt{2}\\beta g_V {\\bar{D}}_{a\\mu }^* {\\bar{D}}_b^{*\\mu \\dag }v\\cdot \\mathbb {V}_{ab}\\nonumber \\\\&&-2\\sqrt{2}i\\lambda g_V {\\bar{D}}_a^{*\\mu \\dag }{\\bar{D}}_b^{*\\nu }\\left(\\partial _{\\mu }\\mathbb {V}_{\\nu }-\\partial _{\\nu }\\mathbb {V}_{\\mu }\\right)_{ab}.$ Here, the matrices $\\mathcal {B}_{\\bar{3}}$ , $\\mathcal {B}_6$ , ${\\mathbb {P}}$ , and $\\mathbb {V}_{\\mu }$ have the standard forms as listed below $\\mathcal {B}_{\\bar{3}} &=& \\left(\\begin{array}{ccc}0 &\\Lambda _c^+ &\\Xi _c^+\\\\-\\Lambda _c^+ &0 &\\Xi _c^0\\\\-\\Xi _c^+ &-\\Xi _c^0 &0\\end{array}\\right),$ $\\mathcal {B}_6&=& \\left(\\begin{array}{ccc}\\Sigma _c^{++} &\\frac{\\Sigma _c^{+}}{\\sqrt{2}} &\\frac{\\Xi _c^{(^{\\prime })+}}{\\sqrt{2}}\\\\\\frac{\\Sigma _c^{+}}{\\sqrt{2}} &\\Sigma _c^{0} &\\frac{\\Xi _c^{(^{\\prime })0}}{\\sqrt{2}}\\\\\\frac{\\Xi _c^{(^{\\prime })+}}{\\sqrt{2}} 
&\\frac{\\Xi _c^{(^{\\prime })0}}{\\sqrt{2}} &\\Omega _c^{0}\\end{array}\\right),$ ${\\mathbb {P}} &=& {\\left(\\begin{array}{cc}\\frac{\\pi ^0}{\\sqrt{2}}+\\frac{\\eta }{\\sqrt{6}} &\\pi ^+ \\\\\\pi ^- &-\\frac{\\pi ^0}{\\sqrt{2}}+\\frac{\\eta }{\\sqrt{6}} \\end{array}\\right)},$ ${\\mathbb {V}}_{\\mu }&=& {\\left(\\begin{array}{cc}\\frac{\\rho ^0}{\\sqrt{2}}+\\frac{\\omega }{\\sqrt{2}} &\\rho ^+ \\\\\\rho ^- &-\\frac{\\rho ^0}{\\sqrt{2}}+\\frac{\\omega }{\\sqrt{2}} \\end{array}\\right)}_{\\mu },$ respectively.", "In the above expression, these coupling constants can be extracted from the experimental data or by the theoretical model, and the corresponding signs between these coupling constants can be fixed by the quark model [59].", "Thus, we take $g_{\\sigma }=0.76$ , $l_B=-3.65$ , $l_S=6.20$ , $g=0.59$ , $g_1=0.94$ , $f_\\pi =132~\\rm {MeV}$ , $\\beta g_V=-5.25$ , $\\beta _B g_V=-6.00$ , $\\beta _S g_V=12.00$ , $\\lambda g_V =-3.27~\\rm {GeV}^{-1}$ , and $\\lambda _S g_V=19.20~\\rm {GeV}^{-1}$ in the following numerical analysis [58], [60], [38].", "With the above preparation, the obtained effective potentials in the momentum space can be related to the corresponding scattering amplitudes via the Breit approximation [61], [62] and the non-relativistic normalization.", "Finally, the effective potentials in the coordinate space can be obtained by the Fourier transformation.", "Since the discussed hadrons are not pointlike particles, we should introduce the monopole type form factor $\\mathcal {F}(q^2,m_E^2) = (\\Lambda ^2-m_E^2)/(\\Lambda ^2-q^2)$ [63], [64] in each interaction vertex, which can compensate the effect from the off-shell exchanged mesons and depict the structure effect of interaction vertex.", "In order to obtain the concrete effective potentials for these discussed hidden-charm molecular pentaquark systems with strangeness, we need to construct their wave functions.", "For these discussed $\\Xi _c^{(\\prime )}\\bar{D}^{(*)}$ systems, their spin-orbital wave functions can be expressed as $|\\Xi _c^{(\\prime )}\\bar{D}({}^{2S+1}L_{J})\\rangle &=&\\sum _{m,m_L}C^{J,M}_{\\frac{1}{2}m,Lm_L}\\chi _{\\frac{1}{2}m}|Y_{L,m_L}\\rangle ,\\nonumber \\\\|\\Xi _c^{(\\prime )}\\bar{D}^*({}^{2S+1}L_{J})\\rangle &=&\\sum _{m,m^{\\prime },m_S,m_L}C^{S,m_S}_{\\frac{1}{2}m,1m^{\\prime }}C^{J,M}_{Sm_S,Lm_L}\\chi _{\\frac{1}{2}m}\\nonumber \\\\&&\\times \\epsilon _{m^{\\prime }}^{\\mu }|Y_{L,m_L}\\rangle .$ In the above expressions, the constant $C^{e,f}_{ab,cd}$ denotes the Clebsch-Gordan coefficient, and $|Y_{L,m_L}\\rangle $ is the spherical harmonics function.", "The polarization vector $\\epsilon _{m}^{\\mu }\\,(m=0,\\,\\pm 1)$ with spin-1 field is written as $\\epsilon _{\\pm }^{\\mu }= \\left(0,\\,\\pm 1,\\,i,\\,0\\right)/\\sqrt{2}$ and $\\epsilon _{0}^{\\mu }= \\left(0,0,0,-1\\right)$ in the static limit, and the $\\chi _{\\frac{1}{2}m}$ stands for the spin wave function for the charmed baryons $\\Xi _c^{(\\prime )}$ .", "And then, we summarize the flavor wave functions $|I,I_{3}\\rangle $ for these discussed $\\Xi _c^{(\\prime )}\\bar{D}^{(*)}$ systems in Table REF .", "Table: Flavor wave functions for these discussed Ξ c (') D ¯ (*) \\Xi _c^{(\\prime )}\\bar{D}^{(*)} systems.", "Here, II and I 3 I_3 are their isospin and the third component, respectively.With the standard procedures of the OBE model, the expressions of the effective potentials in the coordinate space for these investigated isoscalar $\\Xi _c^{(\\prime )} \\bar{D}_s^{(*)}$ systems are given by $\\mathcal {V}^{\\Xi 
_{c}\\bar{D}}&=&2l_{B}g_{\\sigma }Y_\\sigma -\\frac{3\\beta \\beta _B g_{V}^2}{4}Y_{\\rho }+\\frac{\\beta \\beta _B g_{V}^2}{4}Y_{\\omega },\\\\\\mathcal {V}^{\\Xi _{c}^{\\prime }\\bar{D}}&=&-l_Sg_{\\sigma }Y_\\sigma +\\frac{3\\beta \\beta _S g_{V}^2}{8}Y_{\\rho }-\\frac{\\beta \\beta _S g_{V}^2}{8}Y_{\\omega },\\\\\\mathcal {V}^{\\Xi _{c}\\bar{D}^*}&=&2l_{B}g_{\\sigma }\\mathcal {A}_{1}Y_\\sigma -\\frac{3\\beta \\beta _B g_{V}^2}{4}\\mathcal {A}_{1}Y_{\\rho }+\\frac{\\beta \\beta _B g_{V}^2}{4}\\mathcal {A}_{1}Y_{\\omega },\\\\\\mathcal {V}^{\\Xi _{c}^{\\prime }\\bar{D}^*}&=&-l_Sg_{\\sigma }\\mathcal {A}_{1}Y_\\sigma -\\frac{g_1 g}{4f_\\pi ^2}\\left[\\mathcal {A}_{2}\\mathcal {O}_r+\\mathcal {A}_{3}\\mathcal {P}_r\\right]Y_{\\pi }\\nonumber \\\\&&-\\frac{g_1 g}{36f_\\pi ^2}\\left[\\mathcal {A}_{2}\\mathcal {O}_r+\\mathcal {A}_{3}\\mathcal {P}_r\\right]Y_{\\eta }\\nonumber \\\\&&+\\frac{3\\beta \\beta _S g_{V}^2}{8}\\mathcal {A}_{1}Y_{\\rho }+\\frac{3\\lambda \\lambda _S g_V^2}{18}\\left[2\\mathcal {A}_{2}\\mathcal {O}_r-\\mathcal {A}_{3}\\mathcal {P}_r\\right]Y_{\\rho }\\nonumber \\\\&&-\\frac{\\beta \\beta _S g_{V}^2}{8}\\mathcal {A}_{1}Y_{\\omega }-\\frac{\\lambda \\lambda _S g_V^2}{18}\\left[2\\mathcal {A}_{2}\\mathcal {O}_r-\\mathcal {A}_{3}\\mathcal {P}_r\\right]Y_{\\omega }.\\\\$ Here, $\\mathcal {O}_r = \\frac{1}{r^2}\\frac{\\partial }{\\partial r}r^2\\frac{\\partial }{\\partial r}$ and $\\mathcal {P}_r = r\\frac{\\partial }{\\partial r}\\frac{1}{r}\\frac{\\partial }{\\partial r}$ .", "The function $Y_i$ is defined as $Y_i\\equiv \\dfrac{e^{-m_ir}-e^{-\\Lambda r}}{4\\pi r}-\\dfrac{\\Lambda ^2-m_i^2}{8\\pi \\Lambda }e^{-\\Lambda r}.$ Additionally, we also define several operators, which include $\\mathcal {A}_{1}=\\chi ^{\\dagger }_3\\left({\\mathbf {\\epsilon }^{\\dagger }_{4}}\\cdot {\\mathbf {\\epsilon }_{2}}\\right)\\chi _1$ , $\\mathcal {A}_{2}=\\chi ^{\\dagger }_3\\left[{\\mathbf {\\sigma }}\\cdot \\left(i{\\mathbf {\\epsilon }_{2}}\\times {\\mathbf {\\epsilon }^{\\dagger }_{4}}\\right)\\right]\\chi _1$ , and $\\mathcal {A}_{3}=\\chi ^{\\dagger }_3T({\\mathbf {\\sigma }},i{\\mathbf {\\epsilon }_{2}}\\times {\\mathbf {\\epsilon }^{\\dagger }_{4}})\\chi _1$ .", "Here, the tensor force operator $T({\\mathbf {x}},{\\mathbf {y}})$ is expressed as $T({\\mathbf {x}},{\\mathbf {y}})= 3\\left(\\hat{\\mathbf {r}} \\cdot {\\mathbf {x}}\\right)\\left(\\hat{\\mathbf {r}} \\cdot {\\mathbf {y}}\\right)-{\\mathbf {x}} \\cdot {\\mathbf {y}}$ with $\\hat{\\mathbf {r}}={\\mathbf {r}}/|{\\mathbf {r}}|$ .", "In Table REF , we collect the obtained relevant operator matrix elements $\\langle f|\\mathcal {A}_k|i\\rangle $ , which are used in our calculation.", "Table: The relevant operator matrix elements 〈f|𝒜 k |i〉\\langle f|\\mathcal {A}_k|i\\rangle , which are obtained by sandwiching these operators between the relevant spin-orbit wave functions.Based on the obtained effective potentials in the coordinate space for these discussed hidden-charm pentaquarks with strangeness, we can obtain the bound state solutions by solving the coupled channel Schr$\\rm {\\ddot{o}}$ dinger equation, where the obtained bound state solutions mainly include the binding energy $E$ , the root-mean-square radius $r_{\\rm RMS}$ , and the probabilities for different components $P_i$ , which may provide us the critical information to judge whether these discussed hidden-charm molecular pentaquark states with strangeness exist or not.", "Of course, when judging whether the loosely bound state is a possible hadronic molecular 
candidate, we also need to specify two points: (1) a loosely bound state with the cutoff value close to 1.00 GeV is more likely to be regarded as a possible hadronic molecular candidate, since this cutoff is widely accepted as a reasonable parameter from the experience of studying the deuteron [63], [64], [65], [66]; (2) the reasonable binding energy for a possible hadronic molecular candidate should be at most tens of MeV, and the corresponding typical size should be larger than the size of all the included component hadrons [58].", "These criteria may provide useful hints to identify the hidden-charm molecular pentaquark candidates with strangeness.", "In our calculation, the masses of the involved hadrons are $m_\\sigma =600.00$ MeV, $m_\\pi =137.27$ MeV, $m_\\eta =547.86$ MeV, $m_\\rho =775.26$ MeV, $m_\\omega =782.66$ MeV, $m_{D^*}=2008.56$ MeV, $m_{\\Xi _{c}}=2469.08$ MeV, and $m_{\\Xi _{c}^{\\prime }}=2578.45$ MeV, which are taken from the Particle Data Group [67].", "Figure: Bound state properties for the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}^{(*)}$ systems. Here, the cutoff $\\Lambda $ , binding energy $E$ , and root-mean-square radius $r_{\\rm RMS}$ are in units of GeV, MeV, and fm, respectively.", "For the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}^{(*)}$ systems, we list the numerical results of finding bound state solutions in Fig. REF .", "For the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}$ state with $J^P=1/2^-$ , we can obtain the bound state solution by setting the cutoff parameter $\\Lambda $ around 1.41 GeV.", "For the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}^*$ states with $J^P=1/2^-$ and $J^P=3/2^-$ , the bound state solutions appear at the cutoff parameter around 1.39 GeV.", "(For the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}^*$ states with $J^P=1/2^-$ and $J^P=3/2^-$ , there exists mass degeneracy if taking the same cutoff; in reality, the cutoff values for the two cases can be slightly different, which may result in a small difference between the masses of the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}^*$ pentaquark states with $J^P=1/2^-$ and $J^P=3/2^-$ .)", "However, we also notice that the probabilities for the $D$ -wave components are zero for the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}^*$ states with $J^P=1/2^-$ and $J^P=3/2^-$ , which is not surprising since the contribution of the tensor forces from the $S$ -$D$ wave mixing effect disappears for the isoscalar $\\Xi _{c}\\bar{D}^*$ interactions.", "In addition, the corresponding thresholds of the $\\Xi _{c}\\bar{D}$ and $\\Xi _{c}\\bar{D}^*$ channels are 4336.33 MeV and 4477.64 MeV, so the masses of the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}^*$ states are larger than that of the $S$ -wave isoscalar $\\Xi _{c}\\bar{D}$ state.", "Based on the analysis mentioned above, it is clear that the masses of these discussed hidden-charm molecular pentaquarks with strangeness satisfy the relation $m[\\Xi _{c}\\bar{D}]<m[\\Xi _{c}\\bar{D}^*]$ .", "Generally, the above analysis shows a characteristic mass spectrum of hidden-charm pentaquark with strangeness.", "Here, the $\\Xi _c\\bar{D}$ pentaquark with $J^P=1/2^-$ may correspond to the newly observed $P_{\\psi s}^\\Lambda (4338)$ , while the two $\\Xi _c\\bar{D}^*$ pentaquarks with $J^P=1/2^-$ and $3/2^-$ can be related to the event accumulation (the $P_{cs}(4459)$ ) existing in the $J/\\psi \\Lambda $ invariant mass spectrum of $\\Xi _b^-\\rightarrow J/\\psi \\Lambda K$ [50].", "Figure: Bound state properties for the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}^{(*)}$ systems. Here, the cutoff $\\Lambda $ , binding energy $E$ , and root-mean-square radius $r_{\\rm RMS}$ are in units of GeV, MeV, and fm, respectively.", "Following the procedure discussed above, we present the binding energy, RMS radius, and probabilities for different components for the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}^{(*)}$ systems in Fig. REF .", "For the $S$ -wave isoscalar $\\Xi _{c}^{\\prime } \\bar{D}$ state with $J^P=1/2^-$ , the bound state solution can be found when the cutoff parameter is fixed to be larger than 1.45 GeV.", "For the $S$ -wave isoscalar $\\Xi _{c}^{\\prime } \\bar{D}^*$ system, the $\\pi $ , $\\sigma $ , $\\eta $ , $\\rho $ , and $\\omega $ exchanges contribute to the total effective potentials.", "For the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}^*$ state with $J^P=1/2^-$ , there exist bound state solutions with the cutoff parameter around 0.92 GeV, while we can obtain loosely bound state solutions for the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}^*$ state with $J^P=3/2^-$ when we tune the cutoff parameter to be around 1.63 GeV.", "Thus, we can conclude that these three states are possible isoscalar hidden-charm molecular pentaquark candidates with strangeness, which is consistent with the conclusions in Refs. [43], [34], [45], [46].", "Additionally, the largest mass among these possible isoscalar hidden-charm molecular pentaquark candidates with strangeness belongs to the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}^*$ state with $J^P=3/2^-$ , followed by the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}^*$ state with $J^P=1/2^-$ and the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}$ state with $J^P=1/2^-$ .", "Here, we need to point out that the masses of the $S$ -wave isoscalar $\\Xi _{c}^{\\prime }\\bar{D}^*$ states with $J^P=3/2^-$ and $J^P=1/2^-$ are 4582.3 MeV and 4568.7 MeV based on the chiral effective field theory [34], and 4582.1 MeV and 4564.9 MeV with a quark level interaction [45], which are comparable with our results.", "For the $\\Xi _c^\\prime \\bar{D}^{(*)}$ pentaquark systems, there also exists a characteristic mass spectrum as predicted above (see Fig. REF )."
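To illustrate how a binding energy follows from such OBE-type potentials, the following is a minimal single-channel sketch that diagonalizes the $S$ -wave radial Schrödinger equation on a grid for a $\\Xi _{c}^{\\prime }\\bar{D}^{*}$ -like system, keeping only a $\\sigma $ -exchange-type term built from the function $Y_i$ defined above. The overall strength C_eff and the cutoff value used here are illustrative placeholders rather than the couplings and cutoffs of this work, and a realistic calculation uses the full coupled-channel potentials with the $S$ -$D$ wave mixing effect.

```python
# Minimal sketch: S-wave radial Schroedinger equation for a Xi_c'-Dbar*-like system
# with a single regularized-Yukawa (sigma-exchange-type) term. Masses in MeV, r in fm.
import numpy as np

hbarc = 197.327                                   # MeV*fm
m1, m2 = 2578.45, 2008.56                         # Xi_c' and D* masses quoted above
mu = m1 * m2 / (m1 + m2)                          # reduced mass

def Y(r, m_ex, lam):
    """Regularized Yukawa function Y_i of the OBE potential (m_ex, lam in MeV)."""
    m, L = m_ex / hbarc, lam / hbarc              # convert to fm^-1
    return (np.exp(-m * r) - np.exp(-L * r)) / (4.0 * np.pi * r) \
        - (L**2 - m**2) / (8.0 * np.pi * L) * np.exp(-L * r)

C_eff = -1000.0    # hypothetical overall strength in MeV*fm (illustration only)
cutoff = 1400.0    # hypothetical cutoff in MeV (illustration only)

N, rmax = 1500, 30.0
r = np.linspace(rmax / N, rmax, N)
dr = r[1] - r[0]
V = C_eff * Y(r, 600.0, cutoff)                   # sigma-exchange-like term only

kin = hbarc**2 / (2.0 * mu * dr**2)               # finite-difference kinetic scale (MeV)
H = (np.diag(2.0 * kin + V)
     + np.diag(-kin * np.ones(N - 1), 1)
     + np.diag(-kin * np.ones(N - 1), -1))
E0 = np.linalg.eigvalsh(H)[0]                     # lowest eigenvalue for u(r) = r*R(r)
print(f"lowest eigenvalue E = {E0:.2f} MeV (E < 0 would signal a bound state)")
```

Scanning C_eff (or the cutoff) until the lowest eigenvalue drops slightly below zero mimics how the cutoff dependence of the binding energies shown in Fig. REF is generated.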
], [ "Summary", "Very recently, the LHCb Collaboration announced the observation of the $P_{\\psi s}^\\Lambda (4338)$ in the $J/\\psi \\Lambda $ invariant mass spectrum of the $B^-\\rightarrow J/\\psi \\Lambda \\bar{p}$ process [22].", "The $P_{\\psi s}^\\Lambda (4338)$ associated with the previously reported evidence of the $P_{cs}(4459)$ by the LHCb shows a molecular-type characteristic mass spectrum of hidden-charm pentaquark with strangeness.", "Before that, a molecular-type characteristic mass spectrum of hidden-charm pentaquark was observed by discovering the $P_c(4312)$ , $P_c(4440)$ , and $P_c(4457)$ in the $J/\\psi p$ invariant mass spectrum of the $\\Lambda _b\\rightarrow J/\\psi p K$ [14].", "In this work, we indicate the similarity of two molecular-type characteristic mass spectra mentioned above, which is due to SU(3) symmetry.", "Figure: The possible evidence of exiting substructures contained in the P cs (4459)P_{cs}(4459) enhancement structure.", "Here, the data of J/ψΛJ/\\psi \\Lambda invariant mass spectrum are from the LHCb .", "The red solid line is the Ξ c D ¯ * \\Xi _c\\bar{D}^* threshold.", "Near this threshold, there should exist two Ξ c D ¯ * \\Xi _c\\bar{D}^* molecular states with J P =1/2 - J^P=1/2^- and 3/2 - 3/2^-.", "The event cluster in the yellow ban shows possible double-peak evidence but it is not obvious, which can be tested by more precise data.For quantitatively depicting this characteristic mass spectrum of hidden-charm pentaquark with strangeness, we apply the OBE model to obtain the interactions of the $\\Xi _c\\bar{D}^{(*)}$ molecular pentaquark systems, where the effective potentials of the focused systems can be deduced.", "By solving the Schrödinger equation, we find out the bound state solution of the $\\Xi _c\\bar{D}^{(*)}$ pentaquark systems.", "The numerical result shows that the $\\Xi _c\\bar{D}$ molecular pentaquark with $J^P=1/2^-$ corresponds to the newly observed $P_{\\psi s}^\\Lambda (4338)$ , while the calculated two $\\Xi _c\\bar{D}^*$ molecular pentaquark with $J^P=1/2^-$ and $3/2^-$ may exist, which can be contained by the $P_{cs}(4459)$ structure.", "This phenomenon has happened to the $P_c(4450)$ structure by the LHCb in 2015 [68].", "When analyzing more precise data of $\\Lambda _b\\rightarrow J/\\psi p K$ in 2019, the $P_c(4450)$ structure was replaced by two substructures $P_c(4440)$ and $P_c(4457)$ [14].", "We may conjecture that the double-peak phenomenon can be happened to the $P_{cs}(4459)$ structure as illustrated in Fig.", "REF .", "Thus, we strongly suggest experimental colleagues to reanalyze the $J/\\psi \\Lambda $ invariant mass spectrum of $\\Xi _b^-\\rightarrow J/\\psi \\Lambda K$ based on more precis data collected with the running of high-luminosity LHC, by which such scenario can be tested further.", "As a prediction, in this work we also give another molecular-type characteristic mass spectrum of hidden-charm $\\Xi _c^{\\prime }\\bar{D}^{(*)}$ pentaquark systems.", "Our result shows that there exist a $\\Xi _c^{\\prime }\\bar{D}$ molecular pentaquark with $J^P=1/2^-$ and two $\\Xi _c^{\\prime }\\bar{D}^*$ molecular pentaquarks with $J^P=1/2^-$ and $3/2^-$ , which are near the thresholds of $\\Xi _c^{\\prime }\\bar{D}$ and $\\Xi _c^{\\prime }\\bar{D}^{*}$ , respectively.", "New task of searching for this predicted molecular-type characteristic mass spectrum of hidden-charm $\\Xi _c^{\\prime }\\bar{D}^{(*)}$ pentaquark systems is waiting for futher experimental exploration.", "Note added.–When preparing the 
present paper, we noticed that a similar work by Karliner and Rosner [69] appeared on arXiv." ], [ "Acknowledgement", "This work is supported by the China National Funds for Distinguished Young Scientists under Grant No.", "11825503, the National Key Research and Development Program of China under Contract No.", "2020YFA0406400, the 111 Project under Grant No.", "B20063, and the National Natural Science Foundation of China under Grant No.", "12175091." ] ]
2207.10493
[ [ "KD-MVS: Knowledge Distillation Based Self-supervised Learning for MVS" ], [ "Abstract Supervised multi-view stereo (MVS) methods have achieved remarkable progress in terms of reconstruction quality, but suffer from the challenge of collecting large-scale ground-truth depth.", "In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed \\textit{KD-MVS}, which mainly consists of self-supervised teacher training and distillation-based student training.", "Specifically, the teacher model is trained in a self-supervised fashion using both photometric and featuremetric consistency.", "Then we distill the knowledge of the teacher model to the student model through probabilistic knowledge transferring.", "With the supervision of validated knowledge, the student model is able to outperform its teacher by a large margin.", "Extensive experiments performed on multiple datasets show our method can even outperform supervised methods." ], [ "Introduction", "The task of multi-view stereo (MVS) is to reconstruct a dense 3D presentation of the observed scene using a series of calibrated images, which plays an important role in a variety of applications, e.g.", "augmented and virtual reality, robotics and computer graphics.", "Recently, learning-based MVS networks [43], [44], [11], [7], [6], [21] have obtained impressive results.", "However, supervised methods require dense depth annotations as explicit supervision, the acquisition of which is still an expensive challenge.", "Subsequent attempts [18], [38], [37], [41], [14] have made efforts to train MVS networks in a self-supervised manner by using photometric consistency [18], [4], optical flow [38] or reconstructed 3D models [14], [41].", "Though great improvement has been made, there is a significant gap in either reconstruction completeness or accuracy compared to supervised methods.", "In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation [13], named KD-MVS.", "The pipeline of KD-MVS mainly consists of (a) self-supervised teacher training and (b) distillation-based student training.", "In the self-supervised teacher training stage, the teacher model is trained by enforcing both the photometric consistency [18] and featuremetric consistency between the reference view and the reconstructed views, which can be obtained via homography warping according to the estimated depth.", "Unlike the existing self-supervised MVS methods [18], [41], [4] that use only photometric consistency, we propose to use the internally extracted features to utilize the featuremetric consistency, which is different from the externally extracted features-based loss, e.g.", "perceptual loss [16].", "We analyze and show that the proposed internal featuremetric loss is more suitable for MVS and is able to help the self-supervised teacher model yield relatively complete and accurate depth maps.", "The distillation-based student training stage consists of two main steps: the pseudo probabilistic knowledge generation and the student training.", "We first use the teacher model to infer raw depth maps on unlabeled training data and perform cross-view check to filter unreliable samples.", "We then generate the pseudo probability distribution of the teacher model by probabilistic encoding.", "The probabilistic knowledge can be transferred to the student model by forcing the predicted probability distribution of the student model to be similar to the pseudo probability 
distribution.", "As a result, the student model can surpass its teacher and even outperform supervised methods.", "Extensive experiments on DTU dataset [2], Tanks and Temples benchmark [1] and BlendedMVS dataset [45] show that KD-MVS brings significant improvement to off-the-shelf MVS networks, even outperforming supervised methods, as is shown in fig:teaser.", "Our main contributions are four-fold as follows: We propose a novel self-supervised training pipeline named KD-MVS based on knowledge distillation.", "We design an internal featuremetric consistency loss to perform robust self-supervised training of the teacher model.", "We propose to perform knowledge distillation to transfer validated knowledge from the self-supervised teacher to a student model for boosting performance.", "Our method achieves state-of-the-art performance on Tanks and Temples benchmark [1], DTU [2] dataset and BlendedMVS [45] dataset.", "Learning-based methods for MVS have achieved impressive reconstruction quality.", "MVSNet [43] transforms the MVS task to a per-view depth estimation task and encodes camera parameters via differentiable homography to build 3D cost volumes, which will be regularized by a 3D CNN to obtain a probability volume for pixel-wise depth distribution.", "However, at cost volume regularization, 3D tensors occupy massive memory for processing.", "To alleviate this problem, some attempts [44], [40], [35] replace the 3D CNN by 2D CNNs and a RNN and some other methods [11], [47], [3], [42] use a multi-stage approach and predict depth in a coarse-to-fine manner." ], [ "Self-supervised MVS", "The key of self-supervised MVS methods is how to make use of prior multi-view information and transform the problem of depth prediction into other forms of problems.", "Unsup-MVS [18] firstly handles MVS as an image reconstruction problem by warping pixels to neighboring views with estimated depth values.", "Given multiple images, MVS$^2$  [4] predicts each view's depth simultaneously and trains the model using cross-view consistency.", "M$^3$ VSNet [14] makes use of the consistency between the surface normal and depth map to enhance the training pipeline and JDACS [37] proposes a unified framework to improve the robustness of self-supervisory signals against natural color disturbance in multi-view images.", "U-MVS [38] utilizes the pseudo optical flow generated by off-the-shelf methods to improve the self-supervised model's performance.", "[41] renders pseudo depth labels from reconstructed mesh models and continues to train the self-supervised model.", "Knowledge distillation [13] aims to transfer knowledge from a teacher model to a student model, so that a powerful and lightweight student model can be obtained.", "[25], [34], [28], [33], [26] consider knowledge at feature space and transfer it to the student model's feature space.", "Born-Again Networks (BAN) [8] trains a student model similarly parameterized as the teacher model and makes the trained student be a teacher model in a new round.", "The self-training scheme [36] generates distillation labels for unlabeled data and trains the student model with these labels.", "Probabilistic knowledge transfer (PKT) [27], [26] trains the student model via matching the probability distribution of the teacher model.", "Since labeled data are not required to minimize the difference of probability distribution, PKT can also be applied to unsupervised learning.", "In this work, we are inspired by PKT and offline distillation [29], [46], [15], [24], [20] and 
propose to transfer the response-based knowledge [10] by forcing the predicted probability distribution of the student model to be similar to the probability distribution of the teacher model in an offline manner.", "Figure: Overview of KD-MVS.", "The first stage is self-supervised teacher training.", "The second stage is distillation-based student training, including pseudo probabilistic knowledge generation and student training." ], [ "Methodology", "In this section, we elaborate the proposed training framework as illustrated in fig:overview.", "KD-MVS mainly consists of self-supervised teacher training (sec:unsup-training) and distillation-based student training (sec:sd-training).", "Specifically, we first train a teacher model in a self-supervised manner by using both the photometric and featuremetric consistency between the reference view and the reconstructed views.", "We then generate the pseudo probability distribution of the teacher model via cross-view check and probabilistic encoding.", "With the supervision of the pseudo probability, the student model is trained with distillation loss in an offline distillation manner.", "It is worth noting that the proposed KD-MVS is a general pipeline for training MVS networks, it can be easily adapted to arbitrary learning-based MVS networks.", "In this paper, we mainly study KD-MVS with CasMVSNet [11]." ], [ "Self-supervised Teacher Training", "In addition to conventional photometric consistency [18] used in self-supervised MVS, we propose to use internal features and featuremetric consistency as an additional supervisory signal.", "Both the photometric and featuremetric consistency are obtained by calculating the distance between the reference view and the reconstructed views.", "The following is the introduction to view reconstruction and loss formulation.", "Given a reference image $\\mathbf {I}_0$ and its neighboring source images $\\lbrace \\mathbf {I}_i\\rbrace _{i=1}^{N-1}$ , the common coarse-to-fine MVS network (e.g.", "CasMVSNet [11]) extracts features for all $N$ images at three different resolution levels ($1/4$ , $1/2$ , 1), denoted as $\\lbrace \\mathbf {F}_i^{1/4}, \\mathbf {F}_i^{1/2}, \\mathbf {F}_i\\rbrace _{i=0}^{N-1}$ , and estimates the depth maps at these three corresponding levels, as $\\mathbf {D}_0^{1/4}$ , $\\mathbf {D}_0^{1/2}$ and $\\mathbf {D}_0$ .", "Taking $\\mathbf {F}_0$ and $\\mathbf {D}_0$ as an example, the warping between a pixel $\\mathbf {p}$ at the reference view and its corresponding pixel $\\hat{\\mathbf {p}}_i$ at the $i$ -th source view under estimated depth $d=\\mathbf {D}_0(\\mathbf {p})$ is defined as: $\\hat{\\mathbf {p}}_{i} = \\mathbf {K}_i[\\mathbf {R_i}(\\mathbf {K}_0^{-1}\\mathbf {p}d)+\\mathbf {t_i}],$ where $\\mathbf {R}_i$ and $\\mathbf {t}_i$ denote the relative rotation and translation from the reference view to the $i$ -th source view.", "$\\mathbf {K}_0$ and $\\mathbf {K}_i$ are the intrinsic matrices of the reference and the $i$ -th source camera.", "According to Eq.warp, we are able to get the reconstructed images $\\hat{\\mathbf {I}}_i$ and features $\\hat{\\mathbf {I}}_i$ corresponding to the $i$ -th source view.", "fig:warp shows a photometric warping process from the $i$ -th source view to the reference view." 
], [ "Loss Formulation", "Our self-supervised training loss consists of two components: photometric loss $\\mathcal {L}_\\textrm {photo}$ and featuremetric loss $\\mathcal {L}_\\textrm {fea}$ .", "Following [18], the $\\mathcal {L}_\\textrm {photo}$ is based on the $\\ell $ -1 distance between the raw RGB reference image and the reconstructed images.", "However, we find that the photometric loss is sensitive to lighting conditions and shooting angles, resulting in poor completeness of predictions.", "To overcome this problem, we use the featuremetric loss to construct a more robust loss function.", "Given the extracted features $\\lbrace \\mathbf {F}_i\\rbrace _{i=0}^{N-1}$ from the feature net of MVS network, and the reconstructed feature maps $\\hat{\\mathbf {F}}_{i}$ generated from the $i$ -th view, our featuremetric loss between $\\hat{\\mathbf {F}}_{i}$ and $\\mathbf {F}_{0}$ is obtained by: $\\mathcal {L}_{\\textrm {fea}}^{(i)} = \\Vert \\mathbf {\\hat{F}}_{i}-\\mathbf {F}_0\\Vert .$ It is worth noting that we put forward to use the internal features extracted by the internal feature net of the online training MVS network instead of the external features (e.g.", "extracted by a pre-trained backbone network [16]) to compute featuremetric loss.", "Our insight is that the nature of MVS is multi-view feature matching along epipolar lines, so the features are supposed to be locally discriminative.", "The pre-trained backbone networks, e.g.", "ResNet [12] and VGG-Net [31], are usually trained with image classification loss, so that their features are not locally discriminative.", "As shown in fig:feature, we compare the features extracted by an external pre-trained backbone (ResNet [12]) and by the internal encoder of the MVS network during online self-supervised training.", "These two options lead to completely different feature representation and we study it in sec:ablation-feature with experiments.", "To summarize, the final loss function for self-supervised teacher training is $\\mathcal {L}_S = \\frac{1}{|\\mathbf {V}|} \\sum _{\\mathbf {p}\\in \\mathbf {V}} \\sum _{i=1}^{N-1} (\\lambda _{\\textrm {fea}}\\mathcal {L}_{\\textrm {fea}}^{(i)}+\\lambda _{\\textrm {photo}}\\mathcal {L}_{\\textrm {photo}}^{(i)}),$ where $\\mathbf {V}$ is the valid subset of image pixels.", "$\\lambda _{\\textrm {fea}}$ and $\\lambda _{\\textrm {photo}}$ are the two manually tuned weights, and in our experiments, we set them as 4 and 1 respectively.", "For coarse-to-fine networks, e.g.", "CasMVSNet, the loss function is applied to each of the regularization steps.", "Figure: Cross-view check.To further stimulate the potential of the self-supervised MVS network, we adopt the idea of knowledge distillation and transfer the probabilistic knowledge of the teacher to a student model.", "This process mainly consists of two steps, namely pseudo probabilistic knowledge generation and student training." 
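As a rough illustration of the training objective above, the sketch below combines a per-pixel $\ell $-1 photometric term with a per-pixel featuremetric distance over the valid-pixel set, using the weights $\lambda _{\textrm {fea}}=4$ and $\lambda _{\textrm {photo}}=1$ stated above. The function name, the L2 reduction over feature channels, and the tensor layout are assumptions for illustration only; the actual training code operates on warped (bilinearly sampled) images and features inside the computation graph at every regularization stage.

```python
import numpy as np

def self_supervised_loss(I0, I_rec, F0, F_rec, valid, w_fea=4.0, w_photo=1.0):
    """Photometric + featuremetric consistency loss for one reference view.

    I0    : (H, W, 3)      reference RGB image
    I_rec : (N-1, H, W, 3) images reconstructed by warping each source view
    F0    : (H, W, C)      reference feature map from the internal feature net
    F_rec : (N-1, H, W, C) reconstructed (warped) source feature maps
    valid : (H, W)         boolean mask of the valid pixel set V
    """
    loss = 0.0
    for i in range(I_rec.shape[0]):
        l_photo = np.abs(I_rec[i] - I0).sum(axis=-1)    # per-pixel l1 photometric distance
        l_fea = np.linalg.norm(F_rec[i] - F0, axis=-1)  # per-pixel featuremetric distance
        loss += (w_fea * l_fea + w_photo * l_photo)[valid].sum()
    return loss / max(int(valid.sum()), 1)              # average over |V|

# Toy usage with random tensors.
H, W, C, N = 8, 8, 16, 3
rng = np.random.default_rng(0)
print(self_supervised_loss(rng.random((H, W, 3)), rng.random((N - 1, H, W, 3)),
                           rng.random((H, W, C)), rng.random((N - 1, H, W, C)),
                           np.ones((H, W), dtype=bool)))
```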
], [ "Pseudo Probabilistic Knowledge Generation", "We consider the knowledge transfer is done through the probability distribution as is done in [15], [29], [34].", "However, we face two main problems when applying distillation in MVS.", "(a) The raw per-view depth generated from the teacher model contains a lot of outliers, which is harmful to training student model.", "Thus we perform cross-view check to filter outliers.", "(b) The real probabilistic knowledge of the teacher model cannot be used directly to train the student model.", "That is because the depth hypotheses in the coarse-to-fine MVS network need to be dynamically sampled according to the results of the previous stage, and we cannot guarantee that the teacher model and student model always share the same depth hypotheses.", "To solve this problem, we propose to generate the pseudo probability distribution by probabilistic encoding.", "is used to filter outliers in the raw depth maps, which are inferred by the self-supervised teacher model on the unlabeled training data.", "Naturally, the outputs of the teacher model are per-view depth maps and the corresponding confidence maps.", "For coarse-to-fine methods, e.g.", "CasMVSNet [11], we multiply confidence maps of all three stages to obtain the final confidence map and take the depth map in the finest resolution as the final depth prediction.", "We denote the final confidence map of reference view as $\\mathbf {C}_0$ and the final depth prediction as $\\mathbf {D}_0$ , the depth maps of source views as $\\lbrace \\mathbf {D}_i\\rbrace _{i=1}^{N-1}$ .", "As is illustrated in fig:filter, considering an arbitrary pixel $\\mathbf {p}_0$ in the reference image coordinate, we cast the 2D point $\\mathbf {p}_0$ to a 3D point $\\mathbf {P}_0$ with the depth value $\\mathbf {D}_0(\\mathbf {p}_0)$ .", "We then back-project $\\mathbf {P}_0$ to $i$ -th source view and obtain the point $\\mathbf {p}_i$ in the source view.", "Using its estimated depth $\\mathbf {D}_i(\\mathbf {p}_i)$ , we can cast the $\\mathbf {p}_i$ to the 3D point $\\mathbf {P}_i$ .", "Finally, we back project $\\mathbf {P}_i$ to the reference view and get $\\hat{\\mathbf {p}}_{0,i}$ .", "Then the reprojection error at $\\mathbf {p}_0$ can be written as $e_{\\textrm {reproj}}^{\\textrm {i}}=\\Vert \\mathbf {p}_0-\\hat{\\mathbf {p}}_{0,i}\\Vert $ .", "A geometric error $e_{\\textrm {geo}}^{\\textrm {i}}$ is also defined to measure the relative depth error of $\\mathbf {P}_0$ and $\\mathbf {P}_i$ observed from the reference camera as $e_{\\textrm {geo}}^{\\textrm {i}}=\\Vert \\tilde{D}_0(\\mathbf {P}_0)-\\tilde{D}_0(\\mathbf {P}_i)\\Vert /\\tilde{D}_0(\\mathbf {P}_0)$ , where the $\\tilde{D}_0(\\mathbf {P}_0)$ and $\\tilde{D}_0(\\mathbf {P}_i)$ are the projected depth of $\\mathbf {P}_0$ and $\\mathbf {P}_i$ in the reference view.", "Accordingly, the validated subset of pixels with regard to the $i$ -th source view is defined as $\\lbrace \\mathbf {p}_0\\rbrace _i=\\lbrace \\mathbf {p}_0|\\mathbf {C}_0(\\mathbf {p}_0)>\\tau _{\\textrm {conf}},e_{\\textrm {reproj}}^{\\textrm {i}}<\\tau _{\\textrm {reproj}},e_{\\textrm {geo}}^{\\textrm {i}} <\\tau _{\\textrm {geo}}\\rbrace ,$ where $\\tau $ represents threshold values, we set $\\tau _{\\textrm {conf}}$ , $\\tau _{\\textrm {reproj}}$ and $\\tau _{\\textrm {geo}}$ to 0.15, 1.0 and 0.01 respectively.", "The final validated mask is the intersection of all $\\lbrace \\mathbf {p}_0\\rbrace _i$ across $N$ -1 source views.", "The obtained $\\lbrace \\tilde{D}_0(\\mathbf 
{P}_i)\\rbrace _{i=0}^{N-1}$ and validated mask will be further used to generate the pseudo probability distribution." ], [ "Probabilistic Encoding", "uses the $\\lbrace \\tilde{D}_0(\\mathbf {P}_i)\\rbrace _{i=0}^{N-1}$ to generate the pseudo probability distribution $P_{\\mathbf {p}_0}(d)$ of depth value $d$ for each validated pixel $\\mathbf {p}_0$ in reference view.", "We model $P_{\\mathbf {p}_0}$ as a Gaussian distribution with a mean depth value of $\\mu ({\\mathbf {p}_0})$ and a variance of $\\sigma ^2({\\mathbf {p}_0})$ , which can be obtained by maximum likelihood estimation (MLE): $\\mu ({\\mathbf {p}_0}) = \\frac{1}{N} \\sum _{i=0}^{N-1}\\tilde{D}_0(\\mathbf {P}_i), \\ \\ \\ \\sigma ^2({\\mathbf {p}_0}) = \\frac{1}{N}\\sum _{i=0}^{N-1}\\left(\\tilde{D}_0(\\mathbf {P}_i) - \\mu ({\\mathbf {p}_0})\\right)^2.$ The $\\mu ({\\mathbf {p}_0})$ fuses the depth information from multiple views, while the $\\sigma ^2({\\mathbf {p}_0})$ reflects the uncertainty of the teacher model at $\\mathbf {p}_0$ , which will provide probabilistic knowledge for the student model during distillation training.", "With the pseudo probability distribution $P$ , we are able to train a student model from scratch via forcing its predicted probability distribution $\\hat{P}$ to be similar with $P$ .", "For the discrete depth hypotheses $\\lbrace d_k\\rbrace _{k=0}^D$ , we obtain their pseudo probability $\\lbrace P(d_k)\\rbrace _{k=0}^D$ on the continuous probability distribution $P$ and normalize $\\lbrace P(d_k)\\rbrace _{k=0}^D$ using SoftMax, taking the result as the final discrete pseudo probability value.", "We use Kullback–Leibler divergence to measure the distance between the student model's predicted probability and the pseudo probability.", "The distillation loss $\\mathcal {L}_{D}$ is defined as $\\mathcal {L}_{D} = \\mathcal {L}_{KL}(P||\\hat{P}) = \\sum _{\\mathbf {p}\\in \\lbrace \\mathbf {p}_{v}\\rbrace } \\left(P_\\mathbf {p} - \\hat{P}_\\mathbf {p}\\right)log\\left(\\frac{P_\\mathbf {p}}{\\hat{P}_\\mathbf {p}}\\right),$ where $\\lbrace \\mathbf {p}_{v}\\rbrace $ represents the subset of valid pixels after cross-view check.", "In experiments, we find that the trained student model also has the potential of becoming a teacher and further distilling its knowledge to another student model.", "As a trade-off between training time and performance, we perform the process of knowledge distillation once more.", "More details can be found in sec:iter.", "Table: Quantitative results on DTU evaluation set  (lower is better).", "Sup.", "indicates whether the method is supervised or not." ], [ "Datasets", "DTU dataset [2] is captured under well-controlled laboratory conditions with a fixed camera rig, containing 128 scans with 49 views under 7 different lighting conditions.", "We split the dataset into 79 training scans, 18 validation scans, and 22 evaluation scans by following the practice of MVSNet [43].", "BlendedMVS dataset [45] is a large-scale dataset for multi-view stereo and contains objects and scenes of varying complexity and scale.", "This dataset is split into 106 training scans and 7 validation scans.", "Tanks and Temples benchmark [19] is a public benchmark acquired in realistic conditions, which contains 8 scenes for the intermediate subset and 6 for the advanced subset." 
], [ "Implementation Details", "In the phase of self-supervised teacher training on DTU dataset [2], we set the number of input images $N=5$ and the image resolution as $512\times 640$ .", "For the coarse-to-fine regularization of CasMVSNet [11], the settings of the depth range and the number of depth hypotheses are consistent with [11]; the depth interval decays by 0.25 and 0.5 from the coarsest stage to the finest stage.", "The teacher model is trained with Adam for 5 epochs with a learning rate of 0.001.", "In the phase of distillation-based student training, we train the student model with the pseudo probability distribution for 10 epochs.", "Model training of all experiments is carried out on 8 NVIDIA RTX 2080 GPUs." ], [ "DTU Dataset", "We evaluate KD-MVS, applied to MVSNet [43] and CasMVSNet [11], on DTU dataset [2].", "We set $N = 5$ and the input resolution as $864 \times 1152$ at evaluation.", "Quantitative comparisons are shown in table:dtu-results.", "Accuracy, Completeness and Overall are the three official metrics from [2].", "Our method outperforms all self-supervised methods by a large margin and even the supervised ones.", "fig:pcd-dtu shows a visual comparison of reconstructed point clouds.", "Our method achieves much better reconstruction quality when compared with the baseline network and the state-of-the-art self-supervised method.", "Table: Quantitative results on the intermediate set of Tanks and Temples benchmark .", "Sup.", "indicates whether the method is supervised or not.", "Bold and underlined figures indicate the best and the second best results.", "Table: Quantitative results on the advanced set of Tanks and Temples benchmark ." ], [ "Tanks and Temples Benchmark", "We test our method on Tanks and Temples benchmark [19] to demonstrate its ability to generalize to varying data.", "For a fair comparison with state-of-the-art methods, we fine-tune our model on the training set of the BlendedMVS dataset [45] using the original image resolution ($576\times 768$ ) and $N = 5$ .", "More details about the fine-tuning process can be found in supp.", "materials.", "Similar to other methods [11], [38], the camera parameters, depth ranges, and neighboring view selection are aligned with [44].", "We use images of the original resolution for inference.", "Quantitative results are shown in table:tnt-inter and table:tnt-adv, and the qualitative comparisons are shown in fig:pcd-tnt.", "Figure: Comparison of reconstructed results with the supervised baseline CasMVSNet  and the state-of-the-art self-supervised method  on Tanks and Temples benchmark .", "$\tau =3mm$ is the distance threshold determined officially, and darker regions indicate larger error with regard to $\tau $ ."
], [ "BlendedMVS Dataset", "We further demonstrate the quality of depth maps on the validation set of BlendedMVS dataset [45].", "The details of the training process can be found in supp.", "materials.", "We set $N=5$ , image resolution as $512\\times 640$ , and apply the evaluation metrics described in [5] where depth values are normalized to make depth maps with different depth ranges comparable.", "Quantitative results are illustrated in table:bld-results.", "EPE stands for the endpoint error, which is the average $\\ell $ -1 distance between the prediction and the ground truth depth; $e_1$ and $e_3$ represent the percentage of pixels with depth error larger than 1 and larger than 3.", "Table: Ablation study on loss for self-supervised training stage (teacher model).", "ℒ fea \\mathcal {L}_{\\textrm {fea}} and ℒ fea * \\mathcal {L}_{\\textrm {fea}}^* denotes featuremetric loss by the internal feature encoder and by an external pretrained encoder (ResNet-18 ) respectively.Table: Ablation study on the main factor of effectiveness.", "Mask indicates whether to use the validated mask.", "Depth indicates using ground truth depth or validated depth.", "Loss indicates which loss is used." ], [ "Implementation of Featuremetric Loss", "As analyzed in sec:unsup-training, we consider the nature of the MVS is multi-view feature matching along epipolar lines, where the features are supposed to be relatively locally discriminative.", "table:feature shows the quantitative results of different settings.", "Compared with using photometric loss only, both internal featuremetric and external featuremetric loss can boost the performance.", "And our proposed internal featuremetric loss shows superiority over the external featuremetric loss with external features by a pre-trained ResNet.", "It is worth noting that it is not feasible to adopt our featuremetric loss alone.", "The reason is that the feature network is online trained within the MVS network and thus applying featuremetric loss alone will lead to failure of training where features tend to be a constant (typically 0)." ], [ "Number of Self-training Iterations", "Given the scheme of knowledge distillation via generating pseudo probability, we can iterate the distillation-based student training for an arbitrary number of loops.", "Here we study the performance gain when the number of iterations increases in Tab.", "6.", "As a trade-off of efficiency and accuracy, we set the number of iterations to be 2." 
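For completeness, a small sketch of the depth-map metrics used in the BlendedMVS evaluation above (EPE, $e_1$ and $e_3$ on normalised depths) is given below. The function name and, in particular, the normalisation factor are assumptions; the exact rescaling protocol of [5] may differ from this sketch.

```python
import numpy as np

def depth_metrics(pred, gt, valid, scale):
    """EPE / e1 / e3 on normalised depth errors.  `scale` maps raw depth units
    to the normalised units in which the >1 and >3 thresholds are applied;
    the exact normalisation of the protocol in [5] may differ."""
    err = np.abs(pred - gt)[valid] * scale
    epe = err.mean()                      # average l1 depth error
    e1 = (err > 1.0).mean() * 100.0       # % of pixels with error > 1
    e3 = (err > 3.0).mean() * 100.0       # % of pixels with error > 3
    return epe, e1, e3

# Toy usage; the depth range and the range-to-128 rescaling are assumptions.
rng = np.random.default_rng(1)
gt = rng.uniform(425.0, 935.0, size=(64, 80))
pred = gt + rng.normal(0.0, 2.0, size=gt.shape)
print(depth_metrics(pred, gt, np.ones_like(gt, dtype=bool), 128.0 / (935.0 - 425.0)))
```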
], [ "Insights of Effectiveness", "We attribute the effectiveness of KD-MVS to the following four parts.", "(a) The first one is multi-view consistency as introduced in sec:cross-view-check, which can be used to filter the outliers in noisy raw depth maps.", "The remaining inliers are relatively accurate and are equivalent to ground-truth depth to a certain extent.", "(b) The probabilistic knowledge brings performance gain to the student model.", "Compared with using hard labels such as $\\ell $ -1 loss and depth labels, applying soft probability distribution to student model brings additional inter-depth relationships and thus reduces the ambiguity of noisy 3D points.", "(c) The validated depth contains less perspective error than rendered ground truth labels.", "As shown in the last row of fig:depth (marked with a red box), there are some incorrect values in the ground-truth depth maps of DTU dataset [2] caused by perspective error, which is harmful to training MVS models.", "(d) The validated masks of the teacher model reduce the ambiguity of prediction by filtering the samples which are hard to learn, benefiting the convergence of the model.", "We perform an ablation study on these parts as shown in table:dark-knowledge.", "(1) and (2) show that the validated mask is helpful, (3) and (4) show that enforcing the probability distribution can bring significant improvement.", "More details can be found in supp.", "materials.", "Figure: Visualization of depth maps and errors.", "(a) RGB reference images; (b) ground-truth depth maps; (c) rendered depth maps by ; (d) errors between (b) and (c); (e) pseudo labels in KD-MVS; (f) errors between (b) and (e).", "We apply the same mask on (b)(c)(e) for better visualization.", "[38] leverages optical flow to compute a depth-flow consistency loss.", "To get reliable optical flow labels, U-MVS trains a PWC-Net [32] on DTU dataset [2] in a self-supervised manner, which costs additional training time and needs storage space for the pseudo optical flow labels (more than 120$GB$ )." ], [ "Self-supervised-CVP-MVSNet", "[41] renders depth maps from the reconstructed meshes, which brings in error during Poisson reconstruction [17].", "We compare the rendered depth maps [41] and our validated depth maps in fig:depth." ], [ "Limitations", "- The quality of pseudo probability distribution highly depends on the cross-view check stage and relevant hyperparameters need to be tuned carefully.", "- Knowledge distillation is known as data-hungry and it may not work as expected with a relatively small-scale dataset." ], [ "Conclusion", "In this paper, we propose KD-MVS, which is a general self-supervised pipeline for MVS networks without any ground-truth depth as supervision.", "In the self-supervised teacher training stage, we leverage a featuremetric loss term, which is more robust than photometric loss alone.", "The features are yielded internally by the MVS network itself, which is end-to-end trained under implicit supervision.", "To explore the potential of self-supervised MVS, we adopt the idea of knowledge distillation and distills the teacher's knowledge to a student model by generating pseudo probability distribution.", "Experimental results indicate that the self-supervised training pipeline has the potential to obtain reconstruction quality equivalent to supervised ones." 
], [ "More Insights of KD-MVS", "In this section, we discuss the potential reasons why self-supervised methods can obtain comparable (even better) results compared to supervised methods.", "Here we elaborate this from the following two perspectives.", "(a) Self-supervised methods are able to generate accurate pseudo labels with cross-view check.", "According to Eq.", "(4) of the paper, only the inliers (whose depth prediction is accurate) can be kept given a strict threshold.", "We also visualize the depth error of the pseudo depth in Fig.", "8, which is relevant to the overall error of point cloud results.", "The first and the second row indicate the pseudo depth has already been to some extent accurate in most scenes.", "Similar conclusion can also be inferred from the Tab.", "7(2)(3) of the paper.", "(b) The pseudo labels have advantages over GT in some aspects.", "Generally speaking, the datasets contain many pixels (namely training samples) that are normally textureless and considered as “toxic\" for training.", "For example, if we force the network to estimate depth values for a purely white region, which should be inherently unpredictable, the network will be confused since there are simply no valid features extracted.", "fig:depthsupp shows visualized comparisons of GT depth and pseudo depth, the red boxes highlight the textureless regions, which are nearly unpredictable.", "The pseudo depth maps contain fewer misleading regions by filtering the outliers with cross-view check.", "By reducing the toxic samples, the training process will be more stable and the performance will be improved.", "fig:figureloss shows the comparison of several metrics in the training phase.", "Compared with using the original depth (the orange curve), reducing the toxic samples by using the mask of the pseudo depth makes the training phase more stable and helps the student model converge faster.", "It is worth noting that similar results can be found in [38] and [41], which also use the pseudo depth labels to train MVS network in a self-supervised manner and achieve better performance than the supervised baseline methods.", "Besides, we propose to leverage the probabilistic knowledge, which is verified to be effective in classification task [20], [15], [26], [34].", "Some works [13], [36] call this probabilistic knowledge as dark knowledge and believe they contain inter-class information.", "As MVS can also be handled as a classification task, it can benefit from probabilistic knowledge too.", "Figure: Comparisons of training processes in several metrics.", "Original GT indicates training with the original ground-truth depth of DTU dataset .", "Masked GT is obtained by masking the original GT with the mask of pseudo depth.", "Reducing the toxic samples makes the training phase more stable.Figure: Visualized comparisons between GT depth and pseudo depth.", "Red boxes indicate textureless regions, which are nearly unpredictable.", "Pseudo depth maps contain less textureless (misleading) regions by filtering the outliers with cross-view check." 
], [ "Fine-tuning on BlendedMVS", "As done in many supervised methods [35], [47], [7], we train the student model on BlendedMVS dataset [45] for better performance and fair comparison.", "Concretely, we use the student model trained on DTU dataset [2] as the teacher model and generate pseudo probabilistic labels for the BlendedMVS dataset.", "A new student model is then trained from scratch using the pseudo labels of both BlendedMVS and DTU.", "The fine-tuning process still follows the self-supervised scheme of KD-MVS and leverages no extra manual annotation." ], [ "Ablation Study on the Insights of Effectiveness", "We perform an ablation study to verify the insights of effectiveness in Sec.", "5.1 of the paper, and the results are listed in Tab.", "7.", "The $Mask$ indicates whether to use the validated masks, which are generated by performing cross-view check on the raw depth maps of the teacher model.", "For (2) in Tab.", "7, since the GT depth itself has a mask, we take the intersection of the two masks as the final mask.", "The $Depth$ indicates which kind of depth is used to train the student model.", "Comparing the results of (2) and (3), it can be found that the pseudo depth label is accurate.", "The $Loss$ indicates which kind of loss function is used to train the student model.", "When the GT depth is used, we can only use the $\ell $ -1 loss.", "When the distillation loss is used, we use the probabilistic encoding to generate the pseudo probability distribution.", "Comparing the results of (3) and (4), we can find that the distillation loss brings a significant performance gain." ], [ "Number of Views & Input Resolution", "We perform an ablation study on the number of input views $N$ and the input resolution $H\times W$ on DTU evaluation set [2], and the results are shown in tab:NHW.", "Table: Ablation study on the number of input views $N$ and image resolution $H\times W$ on DTU evaluation set  (lower is better).", "We use the same student model and keep the other settings fixed." ], [ "Thresholds in Cross-view Check", "As introduced in Sec.", "5.3 of the paper and sec:1.1 of the supp.", "materials, the quality of the pseudo probability distribution depends on the cross-view check strategy and the relevant hyper-parameters.", "We perform an ablation study on the three threshold parameters in cross-view check, and the results are shown in tab:thershold.", "Table: Ablation study on different settings of the threshold parameters in cross-view check.", "We use the same raw depth of the self-supervised teacher model to generate the pseudo labels, and show the results of the student model on DTU evaluation set  (lower is better)." ], [ "More Point Cloud Results", "We visualize more point cloud results of KD-MVS (applied with CasMVSNet [11]) on DTU evaluation set [2] and Tanks and Temples benchmark [19] in fig:dtu-pcd and fig:tnt-pcd, respectively."
], [ "Failure Cases", "As discussed in Sec.", "5.3 of the paper, KD-MVS may face challenges when training student models under the following situations:" ], [ "(a) When KD-MVS is applied on a relatively small-scale dataset.", "We attempted to generate the pseudo probability distribution on the intermediate set of the Tanks and Temples dataset [19] ($\\sim 2K$ samples) for training student models.", "However, when trained on the dataset alone, the performance of the student model degrades significantly compared to the model trained on BlendedMVS dataset [45] ($\\sim 17K$ samples) alone.", "The potential reason is that small-scale datasets cannot provide sufficient data diversity for knowledge distillation, so the student model is not able to learn robust feature representations and performs unsatisfactorily." ], [ "(b) When the thresholds in cross-view check are inappropriate.", "The performance of the student model relies on the quality of the generated pseudo labels, which is greatly affected by the thresholds set in cross-view check.", "tab:thershold shows when these thresholds are inappropriate, the performance of the student model will degrade.", "Figure: Point cloud results on DTU dataset .Figure: Point cloud results on Tanks and Temples benchmark ." ] ]
2207.10425
[ [ "Stochastic Particle-Based Variational Bayesian Inference for Multi-band\n Radar Sensing" ], [ "Abstract Multi-band fusion is an important technology to improve the radar sensing performance.", "In the multi-band radar sensing signal model, the associated likelihood function has oscillation phenomenon, which makes it difficult to obtain high-accuracy parameter estimation.", "To cope with this challenge, we divide the radar target parameter estimation into two stages of coarse estimation and refined estimation, where the coarse estimation is used to narrow down the search range for the refined estimation, and the refined estimation is based on the Bayesian approach to avoid the convergence to a bad local optimum of the likelihood function.", "Specifically, in the coarse estimation stage, we employ a root MUSIC algorithm to achieve initial estimation.", "Then, we apply the block stochastic successive convex approximation (SSCA) approach to derive a novel stochastic particle-based variational Bayesian inference (SPVBI) algorithm for the Bayesian estimation of the radar target parameters in the refined stage.", "Unlike the conventional particle-based VBI (PVBI) in which only the probability of each particle is optimized and the per-iteration computational complexity increases exponentially with the number of particles, the proposed SPVBI optimizes both the position and probability of each particle, and it adopts the block SSCA to significantly improve the sampling efficiency by averaging over iterations.", "As such, it is shown that the proposed SPVBI can achieve a better performance than the conventional PVBI with a much smaller number of particles and per-iteration complexity.", "Finally, extensive simulations verify the advantage of the proposed algorithm over various baseline algorithms." 
], [ "Introduction", "Multi-band fusion technology is widely applied to high-precision radar sensing, which can achieve the monitoring and identification of targets by obtaining the precise information of target parameters such as the range and scattering coefficient to identify the size, shape, structure and movement of the target.", "Application scenarios include vehicle-mounted radar imaging, security inspection, medical imaging, remote sensing, inverse synthetic aperture (ISAR) radar imaging, etc.", "In addition, with the rise of integrated sensing and communication (ISAC) [1], how to use multiple limited communication bands to achieve higher estimation accuracy has become a big challenge.", "To meet these application requirements, the radar needs to have a high enough resolution and accuracy.", "The resolution of radar sensing depends on the bandwidth of the transmitted signal.", "However, we cannot straightforwardly increase the bandwidth due to the limited spectrum resources.", "Besides, large bandwidth will bring great pressure to signal acquisition, data transmission and processing, which leads to a high hardware cost.", "Although there are many improvements in the super-resolution method of single-band signal processing, it is still limited by the fixed signal bandwidth.", "To address these challenges, the technology of multi-band signal fusion processing has been proposed, that improves the resolution and range accuracy by coherent processing of radar signals in non-overlapping bands.", "The existing multi-band radar sensing methods can be divided into three categories [2], i.e., spectral estimation method, sparse signal reconstruction method, and probabilistic inference method.", "Spectral Estimation: In [3], the authors proposed bandwidth extrapolation (BWE) and ultra wide band coherent synthesis techniques for multi-band fusion.", "They first determine the parameters of an all-pole model, which is an approximation of the signal model.", "Coherent processing is then performed for the two bands to compensate for their phase differences.", "After that is the band interpolation or extrapolation.", "Finally, the range measure of super-resolution can be obtained by using the fitted full-band data.", "Although this method can effectively improve the range resolution with low complexity, it is sensitive to model error.", "For the determination of model parameters, some subspace decomposition methods can be used such as Estimating Signal Parameters via Rotational Invariance Technique (ESPRIT) [4]–[5], Matrix Pencil algorithm [6]–[8] and RELAX algorithm [2], as well as transform domain methods such as apFFT [9], which are essentially based on spectral search.", "In addition, aiming at each processing flow of the algorithm, some methods of frequency band interpolation and extension are proposed, such as auto-regressive (AR) gap filling [10]–[12], minimum entropy criterion [13], CLEAN [14] and GAPES [15]–[16], as well as some methods of estimating the number of scattering centers (i.e.", "multi-path number), such as [17]–[18].", "Sparse Signal Reconstruction: In this method, the signal model is expressed as a linear sparse form, in which the frequency-related scattering coefficient and delay term are discretized into an observation matrix, and the complex scalar is expressed as sparse vectors.", "The resulting compressed sensing problem can be solved by basis pursuit (BP), orthogonal matching pursuit (OMP) [19], FOCUSS [20] and so on.", "In the corresponding solution vector, the number 
of non-zero elements is the number of scattering centers.", "In addition, the initial phase and linear phase (i.e.", "timing synchronization error) can also be aligned by extracting the phase and position of non-zero elements in the solution vector [21].", "Compared with spectral estimation method, sparse reconstruction method does not need to determine the number of scattering centers in advance.", "However, in order to obtain high accuracy estimation, it needs to adopt a large scale observation matrix, which leads to a sharp increase in computational complexity.", "Probabilistic Inference: In this category, probability estimates of target parameters are inferred based on statistical models.", "For example, we can obtain the maximum a posterior (MAP) estimate by methods like expectation-maximum (EM) [22] , support vector regression (SVR) [23] and conventional particle-based VBI [24].", "Besides, sparse Bayesian learning (SBL) algorithm can also be used to solve the problem under certain sparse prior assumptions [25], and several algorithms have been proposed to improve the SBL, e.g., [26]–[28].", "These algorithms usually require additional assumptions about priors, and it is not easy to achieve a good trade-off between estimation accuracy and computational complexity.", "To overcome the drawbacks of the existing algorithms, in this paper, we propose a Stochastic Particle-based Variational Bayesian Inference (SPVBI) algorithm for high-accuracy multi-band radar sensing.", "The main contributions are summarized as follows.", "Two-stage parameter estimation framework: To reduce the computational complexity of the SPVBI algorithm, we adopt a two-stage parameter estimation framework.", "Specifically, a simple but stable MUSIC-based coarse estimation is used to narrow down the search range, so that the complexity of the refined stage can be greatly reduced.", "In addition, we find that the likelihood function oscillates violently, which is unfavorable for estimation.", "Therefore, we propose a signal model with bandwidth aperture structure [29] to reduce the degree of oscillation in the coarse estimation stage.", "Then with the particle positions initialized using the coarse estimation result, the SPVBI fully exploits the high resolution provided by the signal model with the band gap aperture structure [29] to produce an accurate estimation of the target parameters.", "Stochastic particle-based variational Bayesian inference: In the proposed SPVBI, three innovative ideas are used to achieve an accurate posterior estimation of the target parameters with reduced complexity and fast convergence speed.", "First, to avoid making any additional assumptions on the prior/posterior distributions of the target parameters, we adopt the particle-based approximation to transform the multiple integral operation in VBI into multiple weighted summation.", "Second, the particle positions are also updated in each iteration to minimize the VBI objective function.", "Such improved degree of freedom can further enhance the performance and accelerate the convergence speed.", "Finally, to avoid the exponential complexity with the particle number, we extend the SSCA approach in [30] to block SSCA and apply the block SSCA to significantly improve the sampling efficiency of the expectation operator in the VBI iteration by using the average-over-iteration technique.", "Rigorous convergence analysis of SPVBI: We prove that SPVBI is guaranteed to converge to a stationary point of the VBI problem, even though the number of 
samples used to calculate the expectation in each iteration is fixed as a constant that does not increase with the number of adopted particles.", "The rest of the paper is organized as follows.", "In section $\\text{\\mbox{II}}$ , we formulate the system model in the multi-band radar sensing scenario.", "In Section $\\text{\\mbox{III}}$ , we propose a two-stage estimation framework.", "In section $\\text{\\mbox{IV}}$ , we propose the SPVBI algorithm, together with the analysis of convergence and complexity.", "In Section $\\text{\\mbox{V}}$ , we present numerical simulations and performance analysis.", "Finally, conclusions are presented in Section $\\text{\\mbox{VI}}$ ." ], [ "System Model", "In the scene of multi-band sensing, we consider two radars that transmit linear frequency modulation (LFM) signals in non-contiguous frequency bands.", "The discrete received signal model in frequency domain can be formulated as [31] $\\begin{split}r_{m}^{\\left(n\\right)} & =\\sum \\limits _{k=1}^{K}\\alpha _{k}\\left(j\\frac{f_{c,m}+nf_{s,m}}{f_{c,m}}\\right)^{\\beta _{k}}e^{-j2\\pi \\left(f_{c,m}+nf_{s,m}\\right)\\tau _{k}}\\\\e^{j\\phi _{m}} & e^{-j2\\pi \\left(f_{c,m}+nf_{s,m}\\right)\\delta _{m}}+w_{m}^{\\left(n\\right)},\\end{split}$ where $m=1,2,\\ldots M$ is the frequency band index, $N_{m}$ denotes the number of data samples in each band, $n=0,1,\\ldots N_{m}-1$ denotes sample index and $k=1,2,\\ldots K$ denotes the $k$ -th scattering center.", "$w_{m}^{\\left(n\\right)}$ denotes an additive white Gaussian noise (AWGN) following the distribution $\\mathcal {CN}\\left(0,\\eta _{w}^{2}\\right)$ .", "$\\alpha _{k}$ is a complex scalar carrying the amplitude and phase information of a scattering center, and $\\beta _{k}$ is the scattering coefficient that characterizes the geometry of scattering center, which is usually an integer multiple of $0.5$ , e.g., $\\beta _{k}\\in \\left\\lbrace -1,-1/2,0,1/2,1\\right\\rbrace $ .", "$f_{s,m}$ and $f_{c,m}$ are the frequency interval and initial frequency of $m$ -th frequency band, respectively.", "$\\tau _{k}$ is the time delay of the $k$ -th scattering center.", "Figure: An illustration of multi-band distribution in frequencydomain.Due to the instability of Pulse Repeat Frequency (PRF) and transmission delay of synchronization signal, the timing synchronization error $\\delta _{m}$ between two radars exists.", "And because of the hardware difference, the signals received by different radars are superimposed with a random initial phase $\\phi _{m}$ .", "These two imperfect factors are the main obstacles for multi-band signal fusion and need to be calibrated." ], [ "A Two-stage Estimation Framework", "In the original signal model, the carrier frequency term $f_{c,m}$ will lead to the violent oscillation of the likelihood function, where the main-lobe of the likelihood function will become sharp, accompanied by many fluctuating side-lobes, as illustrated by the solid pink line in Fig REF .", "Consequently, it will be extremely intractable to find the global optimum.", "When the signal-to-noise ratio (SNR) is not high and there are non-ideal factors, the estimate may deviate to the locally optimal side-lobe, resulting in the degradation of the estimation performance.", "Therefore, we customize a two-stage estimation framework, which contains two signal models with bandwidth aperture and band gap aperture, respectively.", "The explanation of these two apertures will be given in the following sections.", "Figure: An illustration of oscillation phenomena." 
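The multi-band received-signal model above can be simulated directly, which is also useful for reproducing the oscillation behaviour of the likelihood just discussed. The helper below is a hypothetical NumPy sketch; the parameter values in the toy usage (carrier frequencies, delays, noise level) are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_band(alpha, beta, tau, f_c, f_s, N, phi, delta, noise_std, rng):
    """Frequency-domain samples r_m^(n) of one sub-band, following the signal model above.

    alpha, beta, tau : (K,) complex gains, scattering coefficients and delays
    f_c, f_s         : carrier frequency and frequency step of this band
    phi, delta       : random initial phase and timing synchronization error
    """
    n = np.arange(N)
    f = f_c + n * f_s                                          # sampled frequencies
    terms = (alpha[:, None]                                    # (K, N) per-scatterer terms
             * (1j * f[None, :] / f_c) ** beta[:, None]
             * np.exp(-2j * np.pi * f[None, :] * tau[:, None]))
    r = terms.sum(axis=0) * np.exp(1j * phi) * np.exp(-2j * np.pi * f * delta)
    noise = noise_std * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    return r + noise

# Toy usage: two scattering centres observed by two non-contiguous bands.
rng = np.random.default_rng(0)
alpha = np.array([1.0 + 0.5j, 0.6 - 0.2j])
beta = np.array([0.0, 0.5])
tau = np.array([20e-9, 60e-9])                                 # delays in seconds
r1 = simulate_band(alpha, beta, tau, 3.0e9, 1e6, 64, 0.0, 0.0, 0.05, rng)
r2 = simulate_band(alpha, beta, tau, 3.5e9, 1e6, 64, 0.7, 2e-10, 0.05, rng)
print(r1[:2], r2[:2])
```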
], [ "Coarse Estimation", "In the coarse estimation stage, the original signal model can be simplified into the following form.", "Signal model with the bandwidth aperture structure: $\begin{split}r_{m}^{\left(n\right)} & =\sum \limits _{k=1}^{K}\alpha _{k,m}^{^{\prime }}e^{-j2\pi (nf_{s,m})\left(\tau _{k}+\delta _{m}\right)}+w_{m}^{\left(n\right)},\end{split}$ where $\alpha _{k,m}^{^{\prime }}=\alpha _{k}\left(j\right)^{\beta _{k}}e^{j\phi _{m}}e^{-j2\pi f_{c,m}\left(\tau _{k}+\delta _{m}\right)}$ .", "In the coarse estimation stage, the carrier phase $e^{-j2\pi f_{c,m}\left(\tau _{k}+\delta _{m}\right)}$ and the random phase $e^{j\phi _{m}}$ of each band are absorbed into the complex scalar $\alpha _{k}$ , while the bandwidth term $nf_{s,m}$ is retained, so that all sub-bands share a bandwidth-dependent delay domain, which we call the bandwidth aperture.", "In addition, the exponential term $\left(j\right)^{\beta _{k}}$ of the scattering coefficient relating to the phase can also be absorbed into $\alpha _{k}$ .", "Generally, $\left(\frac{f_{c,m}+nf_{s,m}}{f_{c,m}}\right)^{\beta _{k}}$ can be approximated as 1 with negligible effect on the coarse estimation since $f_{c,m}\gg nf_{s,m}$ .", "As shown by the dotted blue line in Fig REF , the bandwidth aperture smoothes the likelihood function, so that the true value can most likely be found in the peak region of the main lobe, thus yielding a relatively rough but stable estimate.", "Then, we use the root-MUSIC [32] algorithm to roughly estimate the delay, and the complex scalar after absorption is denoted as $\alpha _{k,m}^{^{\prime }}$ .", "The coarse estimate signal model can be written in the following form: $\mathbf {Y}_{m}=\mathbf {X}_{m}\mathbf {A}_{m}+w_{m},$ where ${\left\lbrace \begin{array}{ll}\mathbf {Y}_{m}{\rm =}\left[r_{m}^{\left(0\right)},r_{m}^{\left(1\right)},\ldots ,r_{m}^{\left(N_{m}-1\right)}\right]^{T},\\\mathbf {A}_{m}{\rm =}[\alpha _{1,m}^{^{\prime }},\alpha _{2,m}^{^{\prime }},\cdots ,\alpha _{K,m}^{^{\prime }}]^{T},\\\mathbf {X}_{m}(\mathbf {\tau }){\rm =}[\mathbf {x}(\tau _{1}+\delta _{m}),\mathbf {x}(\tau _{2}+\delta _{m}),\cdots ,\mathbf {x}(\tau _{K}+\delta _{m})],\\\mathbf {x}(\mathring{\tau }){\rm =}[1,e^{-j2\pi f_{s,m}\mathring{\tau }},\cdots ,e^{-j2\pi (N_{m}-1)f_{s,m}\mathring{\tau }}]^{T},\\w_{m}=[w_{m}^{\left(0\right)},w_{m}^{\left(1\right)},\cdots ,w_{m}^{\left(N_{m}-1\right)}]^{T}.\end{array}\right.", "}$ and $\mathbf {\tau }=\left[\tau _{1},...,\tau _{K}\right]$ .", "Next, we can construct a Hankel matrix for subspace decomposition of the received signal [33], in the following form: $H_{m}=\left[\begin{array}{cccc}r_{m}^{\left(0\right)} & r_{m}^{\left(1\right)} & \cdots & r_{m}^{\left(L-1\right)}\\r_{m}^{\left(1\right)} & r_{m}^{\left(2\right)} & \cdots & r_{m}^{\left(L\right)}\\\vdots & \vdots & \ddots & \vdots \\r_{m}^{\left(N_{m}-L\right)} & r_{m}^{\left(N_{m}-L+1\right)} & \cdots & r_{m}^{\left(N_{m}-1\right)}\end{array}\right],$ where $L$ is the length of the correlation window, empirically taken as $\frac{N_{m}}{3}$ [3].", "After that, we apply the eigenvalue decomposition to the Hankel matrix of the signal, $H_{m}=U_{m}D_{m}V_{m}^{H}$.", "The eigenspace composed of the eigenvectors corresponding to the largest $K$ (i.e.", "the number of scattering centers) eigenvalues is called the signal subspace and is denoted as $\mathbf {S}_{signal}$ .", "The
eigenspace composed of the eigenvectors corresponding to the remaining $(N_{m}-K)$ eigenvalues is called the noise subspace and denoted as $\\mathbf {S}_{noise}$ .", "For determining the number of scattering centers $K$ , Akaike Information Criterion (AIC) [34] or the Minimum Description Length (MDL) [35] are both efficient methods that generally work well, which will not be described here for conciseness.", "Essentially, the idea of root-MUSIC algorithm [32] is to apply the polynomial root-finding method to replace the spectral search of zeros in conventional MUSIC algorithm.", "Define the polynomial: $f_{l}\\left(z\\right)=\\mathbf {u}_{l}^{H}\\mathbf {p}\\left(z\\right),l=K+1,\\ldots N_{m},$ where $\\mathbf {u}_{l}^{H}$ is the $l$ -th eigenvector of noise subspace $\\mathbf {S}_{noise}$ , $\\mathbf {p}\\left(z\\right)=[1,z,\\cdots ,z^{N_{m}-1}]^{T}$ , and $z=e^{-j2\\pi f_{s,m}\\tau }$ .", "In order to utilize all noise eigenvectors, we wish to find the zeros of the following polynomials: $f\\left(z\\right)=\\mathbf {p}^{H}\\left(z\\right)\\mathbf {S}_{noise}\\mathbf {S}_{noise}^{H}\\mathbf {p}\\left(z\\right).$ We rewrite (REF ) to get the polynomial in terms of $z$ as: $f\\left(z\\right)=z^{N_{m}-1}\\mathbf {p}^{T}\\left(z^{-1}\\right)\\mathbf {S}_{noise}\\mathbf {S}_{noise}^{H}\\mathbf {p}\\left(z\\right).$ Find the roots of the above polynomial, wherein the $K$ roots in the unit circle whose moduli are closest to 1 contain the information about delay.", "Denote those roots as $p_{k,m},k=1,2\\ldots K$ , then the coarse estimate of the delay can be obtained: $\\hat{\\tau }_{k,m}=\\frac{arg\\left(p_{k,m}\\right)}{-2\\pi f_{s,m}}.$ We can further improve the delay estimation accuracy by combining the results with different SNR in each frequency band: $\\hat{\\tau }_{k}=\\sum \\limits _{m=1}^{M}\\frac{SNR_{m}}{SNR_{1}+\\ldots +SNR_{M}}\\hat{\\tau }_{k,m}.$ Also, the estimate of $\\alpha _{k,m}^{^{\\prime }}$ can be obtained by the least square (LS) method: $\\mathbf {\\hat{A}}_{m}=\\left(\\mathbf {X}_{m}^{H}(\\hat{\\tau })\\mathbf {X}_{m}(\\hat{\\tau })\\right)^{-1}\\mathbf {X}_{m}^{H}(\\hat{\\tau })\\mathbf {Y}_{m}.$ where $\\hat{\\tau }=\\left[\\hat{\\tau }_{1},...,\\hat{\\tau }_{K}\\right]$ , $\\mathbf {\\hat{A}}_{m}=[\\hat{\\alpha }_{1,m}^{^{\\prime }},\\hat{\\alpha }_{2,m}^{^{\\prime }},\\cdots ,\\hat{\\alpha }_{K,m}^{^{\\prime }}]^{T}$ .", "Then, the signal can be written as an all-pole model [3]: $r_{m}^{\\left(n\\right)}=\\sum \\limits _{k=1}^{K}\\hat{\\alpha }_{k,m}^{^{\\prime }}p_{k,m}^{n}.$ In addition, another advantage of root-MUSIC algorithm is that the phase information can be extracted from the roots to obtain an estimation of those non-ideal factors.", "By comparing the terms in equations (REF ) and (REF ), the difference between the random initial phases of the two bands can be estimated as follows: $\\begin{split}\\hat{\\phi }_{m}^{^{\\prime }}= & \\hat{\\phi }_{m}-\\hat{\\phi }_{1}=\\frac{1}{K}\\sum \\limits _{k=1}^{K}\\left[arg\\left(\\hat{\\alpha }_{k,m}^{^{\\prime }}\\right)-arg\\left(\\hat{\\alpha }_{1,m}^{^{\\prime }}\\right)\\right.\\\\& \\left.-2\\pi f_{c,1}\\left(\\hat{\\tau }_{k,1}\\right)+2\\pi f_{c,m}\\left(\\hat{\\tau }_{k\\text{,m}}\\right)\\right].\\end{split}$ Then we can also get $\\hat{\\delta }_{m}=\\frac{1}{K}\\sum \\limits _{k=1}^{K}\\left[\\frac{arg\\left(p_{k,m}\\right)}{-2\\pi f_{s,m}}-\\hat{\\tau }_{k}\\right].$ Therefore, in the coarse estimation stage, we can obtain coarse estimate of delay $\\hat{\\tau }_{k}$ , complex gain amplitude 
$\\left\\Vert \\widehat{\\alpha }_{k}\\right\\Vert $ (i.e.", "$\\sum \\limits _{m=1}^{M}\\frac{SNR_{m}\\left\\Vert \\hat{\\alpha }_{k,m}^{^{\\prime }}\\right\\Vert }{SNR_{1}+\\ldots +SNR_{M}}$ ), difference between the random initial phases $\\hat{\\phi }_{m}^{^{\\prime }}$ and timing synchronization error $\\hat{\\delta }_{m}$ to serve the refined estimation." ], [ "Refined Estimation", "In the stage of refined estimation, we first absorb the term $e^{-j2\\pi f_{c,m}\\delta _{m}}$ into the initial phase $e^{j\\phi _{m}}$ , namely $e^{j\\tilde{\\phi }_{m}}{\\rm =}e^{j\\left(\\phi _{m}-2\\pi f_{c,m}\\delta _{m}\\right)}$ , which also aims to reduce the oscillation degree of the likelihood function in the estimation of $\\delta _{m}$ .", "Then, if we take the first frequency band as a reference, the rewritten initial phase $e^{j\\tilde{\\phi }_{1}}$ and the carrier phase $e^{-j2\\pi f_{c,1}\\tau _{k}}$ can be absorbed into the complex scalar $\\alpha _{k}$ , and the residual band gap term $e^{-j2\\pi (f_{c,m}-f_{c,1}+nf_{s,m})\\tau _{k}}$ is retained in each sub-band signal, which is therefore called the band gap aperture structure.", "Signal model with the band gap aperture structure: $\\begin{split}r_{m}^{\\left(n\\right)} & =\\sum \\limits _{k=1}^{K}\\alpha _{k}^{^{\\prime }}\\left(j\\frac{f_{c,m}+nf_{s,m}}{f_{c,m}}\\right)^{\\beta _{k}}e^{-j2\\pi (f_{c,m}^{^{\\prime }}+nf_{s,m})\\tau _{k}}\\\\& e^{j\\phi _{m}^{^{\\prime }}}e^{-j2\\pi nf_{s,m}\\delta _{m}}+w_{m}^{\\left(n\\right)},\\end{split}$ where, $\\alpha _{k}^{^{\\prime }}=\\alpha _{k}e^{j\\tilde{\\phi }_{1}}e^{-j2\\pi f_{c,1}\\tau _{k}}$ , $\\phi _{m}^{^{\\prime }}=\\tilde{\\phi }_{m}-\\tilde{\\phi }_{1}$ , $f_{c,m}^{^{\\prime }}=f_{c,m}-f_{c,1}$ , $\\phi _{1}^{^{\\prime }}=0$ , $f_{c,1}^{^{\\prime }}=0$ .", "Fig REF illustrates that the oscillation of likelihood function associated with the refined estimation will not be too violent, at the same time, the main lobe becomes sharper than that of the coarse estimation model.", "In this case, we can exploit the multi-band gain (i.e, the phase rotation caused by the gap between the carrier frequencies) to improve the performance.", "After obtaining the above refined estimation signal model, prior to presenting algorithm details, the probability model and objective function are introduced at first.", "First of all, the coarse estimate of delay $\\tau _{k}$ can be used as the prior information for the refined estimation stage, assuming that the truth value is evenly distributed in the neighborhood of the coarse estimate $[\\hat{\\tau }_{k}-\\Delta \\hat{\\tau }_{k}/2,\\hat{\\tau }_{k}+\\Delta \\hat{\\tau }_{k}/2]$ .", "The interval of refined estimation $\\Delta \\hat{\\tau }_{k}$ can be determined according to the empirical error of the first stage or based on the Cramér-Rao bound (CRB) analysis.", "The prior probability distributions for $\\phi _{m}^{^{\\prime }}$ and $\\delta _{m}$ are similar.", "Although a relatively accurate estimate of the amplitude of the complex scalar $\\left\\Vert \\alpha _{k}\\right\\Vert $ is obtained in the first stage, its phase is still unknown.", "It may be assumed that the amplitude follows a Gaussian distribution with a small variance near the coarse estimate, and the phase is uniformly distributed from 0 to $2\\pi $ .", "In summary, vectorized variables to be estimated in the refined stage are denoted as $\\mathbf {\\Lambda } & {\\rm =}\\left[\\alpha _{1}^{^{\\prime }},\\ldots ,\\alpha _{K}^{^{\\prime }},\\tau _{1},\\ldots ,\\tau _{K},\\beta 
_{1},\\ldots ,\\beta _{K},\\right.\\nonumber \\\\& \\left.\\phi _{2}^{^{\\prime }},\\ldots ,\\phi _{M}^{^{\\prime }},\\delta _{2},\\ldots ,\\delta _{M}\\right],$ the total number of variables is denoted as $\\left|\\Lambda \\right|=J$ , and frequency domain measurement of the received signal is denoted as $r=\\left[r_{1}^{\\left(0\\right)},r_{1}^{\\left(1\\right)},\\ldots ,r_{1}^{\\left(N_{1}-1\\right)},\\ldots ,r_{M}^{\\left(0\\right)},r_{M}^{\\left(1\\right)},\\ldots ,r_{M}^{\\left(N_{M}-1\\right)}\\right]^{T}.$ In the case of AWGN, the logarithmic likelihood function can be written as follow $\\begin{split} & \\ln p\\left(r|\\Lambda \\right)=\\ln \\prod \\limits _{m=1}^{M}\\prod \\limits _{n=0}^{N_{m}-1}p(r_{m}^{\\left(n\\right)}|\\Lambda )\\\\& =MN_{m}\\ln \\frac{1}{\\sqrt{2\\pi }\\hat{\\eta }_{w}}-\\sum _{m=1}^{M}\\sum _{n=0}^{N_{m}-1}\\frac{1}{2\\hat{\\eta }_{w}^{2}}\\left|r_{m}^{\\left(n\\right)}-s_{m}^{\\left(n\\right)}\\left(\\Lambda \\right)\\right|^{2},\\end{split}$ $\\begin{split}s_{m}^{\\left(n\\right)}\\left(\\Lambda \\right) & =\\sum \\limits _{k=1}^{K}\\alpha _{k}^{^{\\prime }}\\left(j\\frac{f_{c,m}+nf_{s,m}}{f_{c,m}}\\right)^{\\beta _{k}}e^{-j2\\pi (f_{_{c,m}}^{^{\\prime }}+nf_{s,m})\\tau _{k}}\\\\& e^{j\\phi _{m}^{^{\\prime }}}e^{-j2\\pi \\left(nf_{s,m}\\right)\\delta _{m}},\\end{split}$ where $s_{m}^{\\left(n\\right)}\\left(\\Lambda \\right)$ is the received signal reconstructed from the parameter $\\Lambda $ .", "The joint posterior probability can be easily deduced from the Bayes equation $p\\left(\\Lambda |r\\right)\\propto p\\left(r|\\Lambda \\right)p\\left(\\Lambda \\right)$ , but the marginal posterior probability is not.", "It is inevitable to integrate other variables except $\\Lambda _{j}$ to obtain the closed-form expression of the marginal posterior probability $p\\left(\\Lambda _{j}|r\\right)$ .", "$\\Lambda _{j}$ represents the $j$ -th variable to be estimated in $\\Lambda $ .", "In general, closed-form solutions are very difficult to obtain, and the computational complexity is often unacceptable.", "In summary, we provide a two-stage estimation scheme as shown in Algorithm 1, which divides the estimation process into coarse estimation stage and refined estimation stage.", "In the coarse estimation stage, weighted root-MUSIC algorithm and LS method are used to obtain the rough estimates, providing prior information for subsequent estimation and narrowing the search interval.", "In the next section, we shall propose a SPVBI algorithm to find the approximate marginal posterior of the target parameters $\\Lambda $ for the refined estimation stage." 
], [ "Problem Formulation based on Variational Bayesian Inference", "To obtain the marginal posterior probability distribution of the delay, we resort to variational Bayesian inference [36], which can asymptotically approximate the real posterior probability distribution $p\left(\Lambda _{j}|r\right)$ by iteratively updating the variational probability distribution $q\left(\Lambda _{j}\right)$ .", "The expression given by the conventional VBI algorithm involves multiple-integral calculations (i.e., expectations), so it is usually intractable to obtain a closed-form expression.", "To deal with this, most existing works make additional prior assumptions, such as assuming that the distributions of these variables belong to certain distribution families [22] or satisfy conjugacy conditions, but this is usually not accurate enough and subjective.", "In [24], the authors proposed a particle-based VBI (PVBI) algorithm to approximate the calculation of the expectation by means of the importance sampling (IS) method [37], which is a kind of Monte Carlo method.", "By iteratively updating the weights of the particles, the discrete distribution composed of particles can gradually approach the posterior probability distribution.", "However, based on the closed-form update formula ([24], formula (12)) given by VBI, the positions of the particles cannot be updated.", "Besides, only a large number of particles can overcome the instability caused by the initial random sampling and ensure that the estimation locally converges to a “good” stationary point.", "Although parallel computing can speed up the processing, the complexity grows rapidly as the number of variables and particles increases.", "Motivated by the above analysis, we also optimize the particle positions and further design an SPVBI algorithm to solve the new problem with much lower per-iteration complexity than the conventional PVBI algorithm.", "In the proposed PVBI problem formulation, the marginal posterior probability of the $j$ -th variable is approximated as the weighted sum of $N_{p}$ discrete particles: $q\left(\Lambda _{j};\mathbf {x}_{j},\mathbf {y}_{j}\right){\rm =}\sum \limits _{p=1}^{N_{p}}y_{j,p}\delta \left(\Lambda _{j}-x_{j,p}\right),$ where $\delta \left(\cdot \right)$ is the impulse function, and $\mathbf {x}_{j}=\left[x_{j,1},x_{j,2},...,x_{j,N_{p}}\right]^{T}$ and $\mathbf {y}_{j}=\left[y_{j,1},y_{j,2},...,y_{j,N_{p}}\right]^{T}$ are the positions and weights of the particles, respectively.", "Furthermore, according to the mean field assumption [38]–[39], the approximate posterior probabilities of the variables can be assumed to be independent of each other, $q\left(\Lambda ;\mathbf {x},\mathbf {y}\right)=\prod \limits _{j=1}^{J}q\left(\Lambda _{j};\mathbf {x}_{j},\mathbf {y}_{j}\right)$ , where $\mathbf {x}=\left[\mathbf {x}_{1};\mathbf {x}_{2};...;\mathbf {x}_{J}\right]$ and $\mathbf {y}=\left[\mathbf {y}_{1};\mathbf {y}_{2};...;\mathbf {y}_{J}\right]$ .", "Therefore, the positions and weights $\mathbf {x},\mathbf {y}$ should be chosen to minimize the Kullback-Leibler (KL) divergence between the variational probability distribution $q\left(\Lambda ;\mathbf {x},\mathbf {y}\right)$ and the real posterior probability distribution $p\left(\Lambda |r\right)$ [36], which is defined as $\begin{split}D_{KL}\left[q\left\Vert p\right.\right] & {\rm =}\int q\left(\Lambda ;\mathbf {x},\mathbf {y}\right)\ln \frac{q\left(\Lambda ;\mathbf {x},\mathbf {y}\right)}{p\left(\Lambda 
|r\\right)}d\\Lambda \\\\& {\\rm =}\\int q\\left(\\Lambda ;\\mathbf {x},\\mathbf {y}\\right)\\ln \\frac{q\\left(\\Lambda ;\\mathbf {x},\\mathbf {y}\\right)p\\left(r\\right)}{p\\left(r|\\Lambda \\right)p\\left(\\Lambda \\right)}d\\Lambda .\\end{split}$ Considering that $p\\left(r\\right)$ is a constant independent of $q\\left(\\Lambda ;\\mathbf {x},\\mathbf {y}\\right)$ , minimizing the KL divergence is equivalent to solving the following optimization problem: $\\begin{split}\\mathcal {P}:\\mathop {\\min }\\limits _{\\mathbf {x},\\mathbf {y}}\\quad & L\\left(\\mathbf {x},\\mathbf {y}\\right)\\triangleq \\int q\\left(\\Lambda ;\\mathbf {x},\\mathbf {y}\\right)\\ln \\frac{q\\left(\\Lambda ;\\mathbf {x},\\mathbf {y}\\right)}{p\\left(r|\\Lambda \\right)p\\left(\\Lambda \\right)}d\\Lambda \\\\\\text{s.t.", "}\\quad & \\sum \\limits _{p=1}^{N_{p}}y_{j,p}=1,\\quad \\epsilon \\le y_{j,p}\\le 1,\\quad \\forall j,p,\\\\& \\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2\\le x_{j,p}\\le \\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2,\\quad \\forall j,p,\\end{split}$ where $\\hat{\\Lambda }_{j}$ is the coarse estimation obtained in the first stage and $\\Delta \\hat{\\Lambda }_{j}$ is the range of the estimation error of coarse estimation, and $\\epsilon >0$ is a small number.", "The normalized weight $y_{j,p}$ represents the probability that the particle is located at position $x_{j,p}$ .", "Note that it does not make sense to generate particles with very small probabilities in approximate posterior since these particles contribute very little to the MAP/MMSE estimator.", "Therefore, we restrict the probability of each particle $y_{j,p}$ to be larger than a small number $\\epsilon $ .", "The truth value is highly likely to be located in the prior interval $\\left[\\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2,\\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2\\right]$ , so the particle position is searched wherein to accelerate the convergence rate.", "Initial particles can be generated by sampling according to the prior distribution obtained in the coarse estimation stage.", "Adding the optimization of particle positions $\\mathbf {x}$ can improve the effectiveness of characterizing the target distribution by discrete particles, and avoid the estimation result falling into the local optimum due to poor initial sampling.", "Furthermore, Particle position updating can reduce the number of sampled particles, and thus effectively reduce the computational overhead, which will be discussed in detail in the following sections." ], [ "SPVBI Algorithm Design based on Block SSCA", "Although the particle-based approximation can effectively simplify the integral operation in the objective function $L\\left(\\mathbf {x},\\mathbf {y}\\right)$ of $\\mathcal {P}$ , the number of summations in $L\\left(\\mathbf {x},\\mathbf {y}\\right)$ is still as high as $N_{p}^{J}$ , that is, with exponential computational complexity.", "To solve this problem, we propose a stochastic particle-based variational Bayesian inference (SPVBI) algorithm based on the block SSCA to find stationary points of $\\mathcal {P}$ with lower computational complexity.", "Specifically, we divided the optimization variables into $2J$ blocks $\\mathbf {x}_{1}$ , $\\mathbf {y}_{1}$ , $\\mathbf {x}_{2}$ , $\\mathbf {y}_{2}$ , ... 
,$\\mathbf {x}_{J}$ , $\\mathbf {y}_{J}$ .", "Starting from an initial point $\\mathbf {x}^{\\left(0\\right)},\\mathbf {y}^{\\left(0\\right)}$ , the SPVBI algorithm alternately optimizes each block until convergence.", "Let $\\mathbf {x}_{j}^{\\left(t\\right)},\\mathbf {y}_{j}^{\\left(t\\right)}$ and $\\mathbf {x}_{j}^{\\left(t+1\\right)},\\mathbf {y}_{j}^{\\left(t+1\\right)}$ denote the blocks $\\mathbf {x}_{j},\\mathbf {y}_{j}$ before and after the update in the $t$ -th iteration, respectively.", "Then in the $t$ -th iteration, the $\\left(2j-1\\right)$ -th block $\\mathbf {x}_{j}$ is updated by solving the following subproblem: $\\begin{split}\\mathcal {P}_{x_{j}}:\\mathop {\\min }\\limits _{\\mathbf {x}_{j}}\\quad & L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)\\\\\\text{s.t.", "}\\quad & \\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2\\le x_{j,p}\\le \\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2,\\forall p\\end{split}$ where $L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}\\right)=L\\left(\\mathbf {x}^{\\left(t,j\\right)},\\mathbf {y}^{\\left(t,j\\right)}\\right)$ with $\\mathbf {x}^{\\left(t,j\\right)}=\\left[\\mathbf {x}_{1}^{\\left(t+1\\right)}\\right.$ $\\left.;...", ";\\mathbf {x}_{j-1}^{\\left(t+1\\right)};\\mathbf {x}_{j};\\mathbf {x}_{j+1}^{\\left(t\\right)};...;\\mathbf {x}_{J}^{\\left(t\\right)}\\right]$ and $\\mathbf {y}^{\\left(t,j\\right)}=\\left[\\mathbf {y}_{1}^{\\left(t+1\\right)};...;\\mathbf {y}_{j-1}^{\\left(t+1\\right)}\\right.$ $\\left.", ";\\mathbf {y}_{j};\\mathbf {y}_{j+1}^{\\left(t\\right)};...;\\mathbf {y}_{J}^{\\left(t\\right)}\\right]$ .", "In other words, $L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}\\right)$ is the objective function $L\\left(\\mathbf {x},\\mathbf {y}\\right)$ when all other variables are fixed to their latest iterates and only $\\mathbf {x}_{j},\\mathbf {y}_{j}$ are treated as variables.", "It can be shown that $& L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}\\right)=\\sum \\limits _{p=1}^{N_{p}}y_{j,p}\\ln y_{j,p}-\\sum \\limits _{p=1}^{N_{p}}y_{j,p}\\left[\\ln p\\left(x_{j,p}\\right)+\\right.\\nonumber \\\\& \\left.\\sum \\limits _{p_{1}=1}^{N_{p}}\\cdots \\sum \\limits _{p_{j-1}=1}^{N_{p}}\\sum \\limits _{p_{j+1}=1}^{N_{p}}\\cdots \\sum \\limits _{p_{J}=1}^{N_{p}}\\widetilde{y}_{\\sim j,p_{\\sim j}}\\ln p\\left(r\\left|\\widetilde{x}_{\\sim j,p_{\\sim j}},x_{j,p}\\right.\\right)\\right],$ where $\\widetilde{y}_{\\sim j,p_{\\sim j}}=\\underset{i\\ne j}{\\prod }y_{i,p_{i}}$ and $\\widetilde{x}_{\\sim j,p_{\\sim j}}=\\left\\lbrace x_{i,p_{i}}\\right\\rbrace _{i\\ne j}$ , which involves a summation of $N_{p}^{J-1}$ terms.", "As such, the complexity of directly solving $\\mathcal {P}_{x_{j}}$ is unacceptable.", "To overcome this challenge, we reformulate $\\mathcal {P}_{x_{j}}$ as the following stochastic optimization problem: $\\begin{split}\\mathcal {P}_{x_{j}}:\\mathop {\\min }\\limits _{\\mathbf {x}_{j}}\\quad & L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)\\triangleq {\\rm \\mathbb {E}}_{\\Lambda _{\\sim j}}\\left[g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}^{\\left(t\\right)};\\Lambda _{\\sim j}\\right)\\right]\\\\\\text{s.t.", "}\\quad & \\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2\\le x_{j,p}\\le \\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2,\\forall p,\\end{split}$ where $\\Lambda _{\\sim j}$ represents all the other variables except $\\Lambda _{j}$ and 
${\\rm \\mathbb {E}}_{\\Lambda _{\\sim j}}\\left[\\cdot \\right]$ represents the expectation operator over the probability distribution of variable $\\Lambda _{\\sim j}$ , and $& g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j};\\Lambda _{\\sim j}\\right){\\rm =}\\sum \\limits _{p=1}^{N_{p}}y_{j,p}\\ln y_{j,p}-\\sum \\limits _{p=1}^{N_{p}}y_{j,p}\\nonumber \\\\& \\times \\left[\\ln p\\left(x_{j,p}\\right)+\\ln p\\left(r\\left|\\Lambda _{{\\rm \\sim }j},x_{j,p}\\right.\\right)\\right].$ Then following the idea of SSCA, we replace the objective function $L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)$ in $\\mathcal {P}_{x_{j}}$ with a simple quadratic surrogate objective function $\\overline{f}_{x_{j}}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}\\right)\\triangleq \\left(\\mathbf {{\\rm f}}_{x_{j}}^{\\left(t\\right)}\\right)^{T}\\left(\\mathbf {x}_{j}-\\mathbf {x}_{j}^{\\left(t\\right)}\\right)+\\Gamma _{x_{j}}\\left\\Vert \\mathbf {x}_{j}-\\mathbf {x}_{j}^{\\left(t\\right)}\\right\\Vert ^{2},$ and obtain an intermediate variable $\\overline{\\mathbf {x}}_{j}^{\\left(t\\right)}$ by minimizing the surrogate objective function as $\\overline{\\mathbf {x}}_{j}^{\\left(t\\right)} & =\\arg \\min \\overline{f}_{x_{j}}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}\\right)\\\\\\text{s.t.}", "& \\quad \\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2\\le x_{j,p}\\le \\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2,\\forall p,\\nonumber $ where $\\Gamma _{x_{j}}$ can be any positive number, $\\mathbf {{\\rm f}}_{x_{j}}^{\\left(t\\right)}$ is an unbiased estimator of the gradient $\\nabla _{\\mathbf {x}_{j}}L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)$ , which is updated recursively as follows: $\\begin{split}\\mathbf {{\\rm f}}_{x_{j}}^{\\left(t\\right)} & =\\left(1-\\rho ^{\\left(t\\right)}\\right)\\mathbf {{\\rm f}}_{x_{j}}^{\\left(t-1\\right)}+\\frac{\\rho ^{\\left(t\\right)}}{B}\\sum _{b=1}^{B}\\nabla _{\\mathbf {x}_{j}}g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t\\right)},\\mathbf {y}_{j}^{\\left(t\\right)};\\Lambda _{\\sim j}^{\\left(b\\right)}\\right),\\end{split}$ where $\\left\\lbrace \\Lambda _{\\sim j}^{\\left(b\\right)},b=1,...,B\\right\\rbrace $ is a mini-batch of $B$ samples generated by the distribution $\\prod _{j^{^{\\prime }}\\ne j}q\\left(\\Lambda _{j^{^{\\prime }}};\\mathbf {x}_{j^{^{\\prime }}},\\mathbf {y}_{j^{^{\\prime }}}\\right)$ with $\\mathbf {x}_{j^{^{\\prime }}}=\\mathbf {x}_{j^{^{\\prime }}}^{\\left(t+1\\right)},\\mathbf {y}_{j^{^{\\prime }}}=\\mathbf {y}_{j^{^{\\prime }}}^{\\left(t+1\\right)},\\forall j^{^{\\prime }}<j$ and $\\mathbf {x}_{j^{^{\\prime }}}=\\mathbf {x}_{j^{^{\\prime }}}^{\\left(t\\right)},\\mathbf {y}_{j^{^{\\prime }}}=\\mathbf {y}_{j^{^{\\prime }}}^{\\left(t\\right)},\\forall j^{^{\\prime }}>j$ , and $\\rho ^{\\left(t\\right)}$ is a decreasing step size that will be discussed later, and we set $\\mathbf {{\\rm f}}_{x_{j}}^{\\left(-1\\right)}=0$ .", "Finally, the updated $\\mathbf {x}_{j}$ is given by $\\mathbf {x}_{j}^{\\left(t+1\\right)}=\\left(1-\\gamma ^{\\left(t\\right)}\\right)\\mathbf {x}_{j}^{\\left(t\\right)}+\\gamma ^{\\left(t\\right)}\\overline{\\mathbf {x}}_{j}^{\\left(t\\right)},$ where $\\gamma ^{\\left(t\\right)}$ is another decreasing step size that will be discussed later, and there is a closed-form solution for $\\overline{\\mathbf {x}}_{j}^{\\left(t\\right)}\\triangleq \\left[\\overline{x}_{j,1}^{\\left(t\\right)},\\overline{x}_{j,2}^{\\left(t\\right)},...,\\overline{x}_{j,N_{p}}^{\\left(t\\right)}\\right]^{T}$ : $\\overline{x}_{j,p}^{\\left(t\\right)}=\\left\\lbrace \\begin{array}{c}\\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2,\\quad \\qquad \\qquad \\widetilde{x}_{j,p}^{\\left(t\\right)}<\\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2,\\\\\\widetilde{x}_{j,p}^{\\left(t\\right)},\\quad \\quad \\hat{\\Lambda }_{j}-\\Delta \\hat{\\Lambda }_{j}/2\\le \\widetilde{x}_{j,p}^{\\left(t\\right)}\\le \\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2,\\\\\\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2,\\quad \\qquad \\qquad \\widetilde{x}_{j,p}^{\\left(t\\right)}>\\hat{\\Lambda }_{j}+\\Delta \\hat{\\Lambda }_{j}/2,\\end{array}\\right.$ $p=1,...,N_{p}$ , where $\\mathbf {\\widetilde{x}}_{j}^{\\left(t\\right)}\\triangleq \\left[\\widetilde{x}_{j,1}^{\\left(t\\right)},\\widetilde{x}_{j,2}^{\\left(t\\right)},...,\\widetilde{x}_{j,N_{p}}^{\\left(t\\right)}\\right]^{T}=\\mathbf {x}_{j}^{\\left(t\\right)}-\\frac{1}{2\\Gamma _{x_{j}}}\\mathbf {{\\rm f}}_{x_{j}}^{\\left(t\\right)}$ .", "Similarly, in the $t$ -th iteration, the $\\left(2j\\right)$ -th block $\\mathbf {y}_{j}$ is updated by solving the following subproblem: $\\begin{split}\\mathcal {P}_{y_{j}}:\\mathop {\\min }\\limits _{\\mathbf {y}_{j}} & L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t+1\\right)},\\mathbf {y}_{j}\\right)\\\\\\text{s.t.", "}\\quad & \\sum \\limits _{p=1}^{N_{p}}y_{j,p}=1,\\quad \\epsilon \\le y_{j,p}\\le 1,\\quad \\forall p.\\end{split}$ Instead of solving $\\mathcal {P}_{y_{j}}$ directly, we first construct a simple quadratic surrogate objective function $\\overline{f}_{y_{j}}^{\\left(t\\right)}\\left(\\mathbf {y}_{j}\\right)\\triangleq \\left(\\mathbf {{\\rm f}}_{y_{j}}^{\\left(t\\right)}\\right)^{T}\\left(\\mathbf {y}_{j}-\\mathbf {y}_{j}^{\\left(t\\right)}\\right)+\\Gamma _{y_{j}}\\left\\Vert \\mathbf {y}_{j}-\\mathbf {y}_{j}^{\\left(t\\right)}\\right\\Vert ^{2},$ and obtain an intermediate variable $\\overline{\\mathbf {y}}_{j}^{\\left(t\\right)}$ by minimizing the surrogate objective function as $\\overline{\\mathbf {y}}_{j}^{\\left(t\\right)} & =\\arg \\min \\overline{f}_{y_{j}}^{\\left(t\\right)}\\left(\\mathbf {y}_{j}\\right)\\\\s.t.", "& \\quad \\sum \\limits _{p=1}^{N_{p}}y_{j,p}=1,\\quad \\epsilon \\le y_{j,p}\\le 1,\\quad \\forall p,\\nonumber $ where $\\Gamma _{y_{j}}$ can be any positive number, $\\mathbf {{\\rm f}}_{y_{j}}^{\\left(t\\right)}$ is an unbiased estimator of the gradient $\\nabla _{\\mathbf {y}_{j}}L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t+1\\right)},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)$ , which is updated recursively as follows: $\\begin{split}\\mathbf {{\\rm f}}_{y_{j}}^{\\left(t\\right)} & =\\left(1-\\rho ^{\\left(t\\right)}\\right)\\mathbf {{\\rm f}}_{y_{j}}^{\\left(t-1\\right)}+\\frac{\\rho ^{\\left(t\\right)}}{B}\\sum _{b=1}^{B}\\nabla _{\\mathbf {y}_{j}}g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t+1\\right)},\\mathbf {y}_{j}^{\\left(t\\right)};\\Lambda _{\\sim j}^{\\left(b\\right)}\\right)\\end{split}$ and we set $\\mathbf {{\\rm f}}_{y_{j}}^{\\left(-1\\right)}=0$ .", "Finally, the updated $\\mathbf {y}_{j}$ is given by $\\mathbf {y}_{j}^{\\left(t+1\\right)}=\\left(1-\\gamma ^{\\left(t\\right)}\\right)\\mathbf {y}_{j}^{\\left(t\\right)}+\\gamma ^{\\left(t\\right)}\\overline{\\mathbf {y}}_{j}^{\\left(t\\right)}.$ To ensure the convergence of the algorithm, the step sizes $\\rho ^{\\left(t\\right)}$ and $\\gamma ^{\\left(t\\right)}$ must satisfy the following conditions.", "Assumption 1 (Assumptions on step sizes) $\\:$ $\\rho ^{t}\\rightarrow 0$ , $\\sum _{t}\\rho ^{t}=\\infty $ , $\\sum _{t}\\left(\\rho ^{t}\\right)^{2}<\\infty $ , $\\underset{t\\rightarrow \\infty }{\\lim }\\gamma ^{t}/\\rho ^{t}=0$ .", "A typical choice of $\\rho ^{t},\\gamma ^{t}$ that satisfies Assumption 1 is $\\rho ^{t}=\\mathcal {O}\\left(t^{-\\kappa _{1}}\\right)$ , $\\gamma ^{t}=\\mathcal {O}\\left(t^{-\\kappa _{2}}\\right)$ , where $0.5<\\kappa _{1}<\\kappa _{2}\\le 1$ .", "Figure: An illustration of the block SSCA.", "As can be seen, the surrogate optimization problems in (REF ) and (REF ) are quadratic programming problems with linear constraints, which are easy to solve.", "In addition, explicit expressions of the gradients $\\nabla _{\\mathbf {x}_{j}}g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j},\\Lambda _{\\sim j}^{b}\\right)$ and $\\nabla _{\\mathbf {y}_{j}}g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j},\\mathbf {y}_{j},\\Lambda _{\\sim j}^{b}\\right)$ in (REF ) and (REF ) are given in Appendix REF and REF , respectively.", "The above process is illustrated in Fig REF .", "Figure: An illustration of updating the approximate marginal posterior probability distribution.", "The proposed SPVBI is guaranteed to converge to stationary points of the original Problem $\\mathcal {P}$ , as will be proved in Section REF .", "After convergence, the corresponding discrete distribution of each variable $q\\left(\\Lambda _{j}\\right)$ composed of particles will approximate the marginal posterior probability distribution, as illustrated in Fig REF .", "As a result, we can take the particle position with the highest probability or the weighted sum of the particles as the final estimate, which are the approximate MAP estimate and MMSE estimate, respectively."
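To make the block update above concrete, the following Python sketch implements one position-block update of the SPVBI iteration: the mini-batch gradient of $g_{j}^{(t)}$ from the appendix expression, the recursive surrogate-gradient tracking, the box-constrained quadratic surrogate whose minimiser reduces to element-wise clipping, and the final convex-combination update. This is a minimal illustrative sketch rather than the authors' implementation: the function and argument names, the callables supplying the log-prior and log-likelihood gradients, the default $\Gamma _{x_{j}}=1$ , and the particular step-size exponents (one valid choice under Assumption 1) are all assumptions.

```python
import numpy as np

def update_position_block(x_j, y_j, f_prev, samples, t,
                          lam_hat, dlam_hat,
                          grad_logprior, grad_loglik, Gamma=1.0):
    """One SPVBI position-block update for variable j (illustrative sketch).

    x_j, y_j  : (Np,) float arrays, current particle positions and weights
    f_prev    : (Np,) previous surrogate-gradient estimate f_{x_j}^{(t-1)}
    samples   : list of B mini-batch draws of the other variables Lambda_{~j}
    lam_hat, dlam_hat : centre and width of the coarse-estimation search box
    grad_logprior(x)          -> d/dx ln p(x)           (assumed callable)
    grad_loglik(x, lam_other) -> d/dx ln p(r | ., x)    (assumed callable)
    """
    # One valid step-size choice under Assumption 1 (kappa1 = 0.9 < kappa2 = 1.0)
    rho = 1.0 / (1.0 + t) ** 0.9
    gamma = 1.0 / (1.0 + t) ** 1.0

    # Mini-batch estimate of the gradient of g_j w.r.t. each particle position
    B = len(samples)
    grad = np.zeros(len(x_j))
    for lam_other in samples:
        for p in range(len(x_j)):
            grad[p] += -y_j[p] * (grad_logprior(x_j[p])
                                  + grad_loglik(x_j[p], lam_other))
    grad /= B

    # Recursive gradient tracking: f^(t) = (1 - rho) f^(t-1) + rho * (batch mean)
    f_new = (1.0 - rho) * f_prev + rho * grad

    # Box-constrained quadratic surrogate: minimiser is a clipped gradient step
    x_tilde = x_j - f_new / (2.0 * Gamma)
    x_bar = np.clip(x_tilde, lam_hat - dlam_hat / 2.0, lam_hat + dlam_hat / 2.0)

    # Convex-combination update x^(t+1) = (1 - gamma) x^(t) + gamma x_bar
    x_next = (1.0 - gamma) * x_j + gamma * x_bar
    return x_next, f_new
```

The weight-block update follows the same pattern, except that the surrogate is minimised over the constraint set $\lbrace \sum _{p}y_{j,p}=1,\;\epsilon \le y_{j,p}\le 1\rbrace $ instead of a box, which requires a small constrained quadratic programme rather than a simple clip.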
], [ "Summary of the SPVBI Algorithm", "The overall SPVBI algorithm is summarized in Algorithm REF .", "The mini-batch size $B$ can be chosen to achieve a trade-off between the per-iteration complexity and the total number of iterations.", "Thanks to the idea of averaging over iterations as in (REF ), (REF ), (REF ) and (REF ), a constant value of the mini-batch size $B$ is usually sufficient to achieve a fast convergence, e.g., in the simulations, we set $B=10$ .", "Compared to the conventional PVBI algorithm which needs to calculate a summation of $N_{p}^{J-1}$ terms when updating one block in each iteration, the proposed SPVBI only requires to solve a simple quadratic programming problem which only involves the calculation of $B$ gradients, where $B$ can be much smaller than $N_{p}^{J-1}$ .", "Moreover, the addition of updating particle position allows the algorithm to converge more quickly and flexibly to stable solutions.", "As such, the proposed SPVBI algorithm has both lower per-iteration complexity and faster convergence speed than the conventional PVBI.", "The proposed SPVBI is an extension of the SSCA framework in [30].", "There are two key differences.", "First, the SSCA in [30] constructs a single surrogate function to update all the variables simultaneously in each iteration, while the SPVBI allows block-wise update, which is often used in VBI-type algorithms [36].", "Second, the distribution of the random state in the original SSCA framework is assumed to be independent of the optimization variable, i.e., is control independent, while the distribution of the random state in the SPVBI is control dependent.", "For example, the random state $\\Lambda _{{\\rm \\sim }j}$ in the $t$ -th iteration follows the distribution $\\prod _{j^{^{\\prime }\\ne j}}q\\left(\\Lambda _{j^{^{\\prime }}};\\mathbf {x}_{j^{^{\\prime }}},\\mathbf {y}_{j^{^{\\prime }}}\\right)$ , which depends on the value of the current optimization variables and is changing over iterations.", "Therefore, SPVBI can be viewed as an extension of the SSCA framework in [30] to enable block-wise update and control-dependent random state.", "As such, the convergence analysis is also more challenging.", "In the next subsection, we shall provide the convergence analysis for SPVBI." 
], [ "Convergence Analysis", "In this section, we will prove the convergence of the SPVBI algorithm.", "First, we present a key lemma which gives some important properties of the surrogate functions.", "Lemma 1 (Properties of the surrogate functions) For each iteration $t=1,2,\\ldots $ and each block $\\mathbf {x}_{j},\\mathbf {y}_{j},j=1,2,\\ldots J$ , we have $\\overline{f}_{x_{j}}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}\\right)$ and $\\overline{f}_{y_{j}}^{\\left(t\\right)}\\left(\\mathbf {y}_{j}\\right)$ are uniformly strongly convex in $\\mathbf {x}_{j}$ and $\\mathbf {y}_{j}$ , respectively.", "For any $\\mathbf {x}_{j}\\in \\mathcal {X}$ and $\\mathbf {y}_{j}\\in \\mathcal {Y}$ , the function $\\overline{f}_{x_{j}}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}\\right)$ and $\\overline{f}_{y_{j}}^{\\left(t\\right)}\\left(\\mathbf {y}_{j}\\right)$ , their derivative, and their second order derivative are uniformly bounded.", "$\\overline{f}_{x_{j}}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}\\right)$ and $\\overline{f}_{y_{j}}^{\\left(t\\right)}\\left(\\mathbf {y}_{j}\\right)$ are Lipschitz continuous function w.r.t.", "$\\mathbf {x}_{j}$ and $\\mathbf {y}_{j}$ , respectively.", "Moreover, $\\limsup _{t_{1},t_{2}\\rightarrow \\infty }\\left|\\overline{f}_{x_{j}}^{\\left(t_{1}\\right)}\\left(\\mathbf {x}_{j}\\right)-\\overline{f}_{x_{j}}^{\\left(t_{2}\\right)}\\left(\\mathbf {x}_{j}\\right)\\right|\\le B_{x}\\left\\Vert \\mathbf {x}^{\\left(t_{1}\\right)}-\\mathbf {x}^{\\left(t_{2}\\right)}\\right\\Vert ,$ $\\limsup _{t_{1},t_{2}\\rightarrow \\infty }\\left|\\overline{f}_{y_{j}}^{\\left(t_{1}\\right)}\\left(\\mathbf {y}_{j}\\right)-\\overline{f}_{y_{j}}^{\\left(t_{2}\\right)}\\left(\\mathbf {y}_{j}\\right)\\right|\\le B_{y}\\left\\Vert \\mathbf {y}^{\\left(t_{1}\\right)}-\\mathbf {y}^{\\left(t_{2}\\right)}\\right\\Vert ,$ $\\forall \\mathbf {x}_{j}\\in \\mathcal {X}$ , $\\forall \\mathbf {y}_{j}\\in \\mathcal {Y}$ , for some constants $B_{x}>0$ and $B_{y}>0$ .", "Consider a subsequence $\\left\\lbrace \\mathbf {x}^{\\left(t_{i}\\right)},\\mathbf {y}^{\\left(t_{i}\\right)}\\right\\rbrace _{i=1}^{\\infty }$ converging to a limit point $x^{*},\\mathbf {y}^{*}$ .", "There exist uniformly differentiable functions $\\hat{f}_{x_{j}}\\left(\\mathbf {x}_{j}\\right)$ and $\\hat{f}_{y_{j}}\\left(\\mathbf {y}_{j}\\right)$ such that $\\mathop {\\lim }\\limits _{i\\rightarrow \\infty }\\overline{f}_{x_{j}}^{\\left(t_{i}\\right)}\\left(\\mathbf {x}_{j}\\right)=\\hat{f}_{x_{j}}\\left(\\mathbf {x}_{j}\\right),\\forall \\mathbf {x}_{j}\\in \\mathcal {X},$ $\\mathop {\\lim }\\limits _{i\\rightarrow \\infty }\\overline{f}_{y_{j}}^{\\left(t_{i}\\right)}\\left(\\mathbf {y}_{j}\\right)=\\hat{f}_{y_{j}}\\left(\\mathbf {y}_{j}\\right),\\forall \\mathbf {y}_{j}\\in \\mathcal {Y}.$ Moreover, we have $\\left\\Vert \\nabla _{\\mathbf {x}_{j}}\\hat{f}_{x_{j}}\\left(\\mathbf {x}_{j}^{*}\\right)-\\nabla _{\\mathbf {x}_{j}}L\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)\\right\\Vert =0,$ $\\left\\Vert \\nabla _{\\mathbf {y}_{j}}\\hat{f}_{y_{j}}\\left(\\mathbf {y}_{j}^{*}\\right)-\\nabla _{\\mathbf {y}_{j}}L\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)\\right\\Vert & =0.$ Please refer to Appendix REF for the proof.", "With the Lemma REF , we are ready to prove the following main convergence result.", "Theorem 2 (Convergence of SPVBI) Starting from a feasible initial point $\\mathbf {x}^{\\left(0\\right)},\\mathbf {y}^{\\left(0\\right)}$ , let $\\left\\lbrace \\mathbf {x}^{\\left(t\\right)},\\mathbf 
{y}^{\\left(t\\right)}\\right\\rbrace _{t=1}^{\\infty }$ denote the iterates generated by Algorithm 2.", "Then every limiting point $x^{*},\\mathbf {y}^{*}$ of $\\left\\lbrace \\mathbf {x}^{\\left(t\\right)},\\mathbf {y}^{\\left(t\\right)}\\right\\rbrace _{t=1}^{\\infty }$ is a stationary point of original problem $\\mathcal {P}$ almost surely.", "Please refer to Appendix REF for the proof." ], [ "Complexity Analysis", "In this subsection, we analyze the complexity of SPVBI algorithm.", "In the coarse estimation stage, weighted root-MUSIC algorithm is adopted to narrow down the range, which mainly includes subspace decomposition and polynomial rooting, with the complexity of $\\mathcal {O}\\left(MN_{m}^{3}\\right)$ and $\\mathcal {O}\\left(M\\left(2N_{m}-1\\right)^{3}\\right)$ , respectively.", "In addition, the computational complexity of LS method is approximately $\\mathcal {O}\\left(N_{m}K^{2}+K^{3}\\right)$ .", "In the refined estimation stage, for PVBI algorithm, as the number of associated variables and particles increases, the computational complexity increases exponentially.", "Its per-iteration complexity order is $\\mathcal {O}\\left(J\\cdot \\left(N_{p}\\right)^{J-1}N_{LH}\\right)$ , where $N_{LH}=\\mathcal {O}\\left(KMN_{m}\\right)$ represents the average number of floating point operations (FLOPs) required to compute the dominant logarithmic likelihood value.", "Through mini-batch sampling and minimization of quadratic surrogate objective functions, the per-iteration complexity of SPVBI algorithm can be reduced to $\\mathcal {O}\\left(2J\\cdot \\left(N_{p}BN_{grad}+N_{p}^{3}\\right)\\right)$ , where $BN_{grad}$ and $N_{p}^{3}$ represent the complexity of computing a mini-batch of gradients and quadratic programming search, respectively, where $N_{grad}=\\mathcal {O}\\left(KMN_{m}\\right)$ represents the average number of FLOPs required to compute a gradient.", "It can be seen that SPVBI greatly reduces the computational complexity compared with conventional PVBI.", "In this section, simulations are conducted to demonstrate the performance of the proposed algorithm.", "We compare the proposed algorithm with the following baseline algorithms: Baseline 1 (root MUSIC, R-MUSIC) [3]: The conventional root MUSIC algorithm has relatively high accuracy in delay estimation, so it is adopted here for single-band data, mainly to show the gain brought by combining the results of multiple bands.", "Baseline 2 (Weighted root MUSIC, WR-MUSIC) [29]: For each individual band, root MUSIC algorithm is adopted, and the estimation results of each band are fused using maximum ratio combination to obtain a more accurate estimation.", "The WR-MUSIC algorithm is adopted in the coarse estimation stage.", "See Section REF for relevant procedures.", "Baseline 3 (Spectral estimation, SE) [3]: The parameters of the approximate all-pole model are estimated by spectral search methods such as root MUSIC algorithm, and the unknown frequency band data is interpolated according to the model to improve the resolution.", "Baseline 4 (SBL) [26]: The information required for coherent compensation can be extracted from the SBL solution.", "Then, by interpolating data between non-contiguous bands, more accurate estimate can be obtained, where the number of atoms in the dictionary is set to be 5000.", "In the proposed SPVBI, each variable is equipped with 10 particles, and the size of mini-batch $B$ is 10.", "The step size sequence is set as follows: $\\rho ^{\\left(t\\right)}=5/\\left(5+t\\right)^{0.9},\\rho 
^{\\left(0\\right)}=1$ ; $\\gamma ^{\\left(t\\right)}=15/\\left(15+t\\right)^{1},\\gamma ^{\\left(0\\right)}=1$ .", "A short numerical check that these step-size sequences are consistent with Assumption 1 is given below.", "Unless otherwise specified, the experiment was repeated 400 times.", "In the simulations, we consider both narrow-band and wide-band scenarios.", "We first describe the common setup for both scenarios.", "There are two scattering centers.", "The amplitudes of $\\alpha _{k}$ are 1 and $0.5$ , and $\\beta _{k}$ is $-0.5$ and $0.5$ , respectively.", "In addition, the phase of $\\alpha _{k}$ and the initial phase $\\phi _{m}$ are uniformly generated within $[0,2\\pi ]$ .", "In the narrow-band scenario, the received signals come from two non-adjacent frequency bands with a bandwidth of 40MHz, and the frequency step is $0.8$ MHz.", "The initial frequency is set to $2.4$ GHz and $2.54$ GHz, respectively.", "There are two scattering centers with delays of 50ns and 500ns.", "The timing synchronization error $\\delta _{m}$ is generated following a Gaussian distribution $\\mathcal {N}\\left(0,0.1\\text{ns}^{2}\\right)$ .", "In the wide-band scenario, the bandwidth of the two non-contiguous frequency bands is $0.5$ GHz, and the frequency step is 20MHz.", "The initial frequency is set to 15GHz and 16GHz, respectively.", "The reference delays of the two scattering centers are 10ns and 20ns, respectively.", "The timing synchronization error $\\delta _{m}$ is generated following a Gaussian distribution $\\mathcal {N}\\left(0,1\\text{ns}^{2}\\right)$ ." ], [ "Analysis of Convergence Performance", "In this section, we focus on the convergence performance.", "The conventional PVBI algorithm is too computationally demanding to simulate in the scenario of this paper (i.e., too many variables need to be estimated).", "Therefore, we focus on the benefits of updating particle positions by comparing the proposed SPVBI with and without particle position updates.", "Figure: Iteration error curves with different particle numbers.", "In Fig REF , we plot the convergence curves of different updating modes with different numbers of particles under the narrow-band setup, where 'w' indicates that only the particle weights are updated, 'wp' indicates that both weights and positions are updated, and '#' denotes the number of particles.", "As the number of particles decreases from 20 to 5, it can be seen that the convergence slows down when only the weights are updated, but the convergence speed and performance remain almost the same when the positions are also updated.", "This means that updating the particle positions can effectively increase the degrees of freedom of the optimization and ensure convergence and performance even with fewer particles." ], [ "Target Parameter Estimation Error", "In narrow-band positioning scenarios [40], we are mainly concerned with the distance/range of a certain target, so the average delay estimation error needs to be investigated.", "In this subsection, in order to clearly display the simulation results, only the CDFs of the errors of the SPVBI and WR-MUSIC algorithms are illustrated in the figure, since they achieve better performance.", "The mean and variance of the errors of all algorithms are presented in the table (the SE algorithm is designed to improve the resolution by reconstructing the full-band data based on the parameters estimated by the R-MUSIC algorithm, so its delay estimation accuracy is consistent with R-MUSIC).", "In practice, a small error variance is desirable because it means that better performance can be guaranteed in the worst-case scenario."
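As a quick numerical sanity check (not taken from the paper), the step-size sequences quoted in the setup above, $\rho ^{(t)}=5/(5+t)^{0.9}$ and $\gamma ^{(t)}=15/(15+t)^{1}$ , can be verified against Assumption 1: $\rho ^{(t)}\rightarrow 0$ , $\sum _{t}(\rho ^{(t)})^{2}<\infty $ , and $\gamma ^{(t)}/\rho ^{(t)}\rightarrow 0$ , although the ratio decays only like $3t^{-0.1}$ and therefore approaches zero very slowly.

```python
import numpy as np

# Illustrative check of the quoted schedules against Assumption 1.
for t in [1e2, 1e3, 1e4, 1e5, 1e6]:
    rho = 5.0 / (5.0 + t) ** 0.9
    gamma = 15.0 / (15.0 + t) ** 1.0
    print(f"t={t:.0e}  rho={rho:.2e}  gamma/rho={gamma / rho:.3f}")  # ratio decreases with t

# The partial sum of rho^2 stays bounded because (rho^(t))^2 ~ t^(-1.8)
t_all = np.arange(1, 1_000_001)
print("partial sum of rho^2:", np.sum((5.0 / (5.0 + t_all) ** 0.9) ** 2))
```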
], [ "Impact of the SNR", "As can be seen in Fig REF and Table REF , when the SNR increases from 12dB to 15dB, the performance of all algorithms is improved.", "At a certain SNR, the CDF curve of SPVBI is more inclined to the upper left than that of WR-MUSIC, indicating that the estimation error is smaller in most cases.", "The performance of the SBL algorithm for delay estimation is inferior to WR-MUSIC, even when the dictionary size is already quite large (i.e.", "5000 atoms).", "This subsection presents the impact of the band gap.", "We changed the initial frequency of the second band from $2.54$ GHz to $2.49$ GHz, so the band gap reduces to 50MHz.", "Figure: CDF curves of delay estimation errors for differentband gap.Table: ERROR COMPARISON FOR DIFFERENT BAND GAPIn Fig REF and Table REF , with the increase of frequency band gap, the performance of other algorithms remains almost unchanged except that of SPVBI algorithm is further improved, which indicates that the proposed algorithm does utilize the multi-band gap aperture gain.", "Next comes the impact of timing synchronization error $\\delta _{m}$ .", "In Fig REF and Table REF , as the variance of $\\delta _{m}$ increases, the performance of all algorithms degrades significantly." ], [ "Resolution Performance", "In the application scenarios of wide-band radar, such as radar imaging and target feature extraction, ultra-high resolution is required.", "In this case, it is often desirable to reconstruct the full-band data from the available non-adjacent band data to improve the resolution.", "However, whether high resolution can be achieved will depend on the accuracy of the data reconstruction.", "Therefore, for the SPVBI algorithm and Baseline 3 and 4, we compare the root-mean-square error (RMSE) of data reconstruction to indirectly show the resolution performanceThe (weighted) root MUSIC algorithm can achieve a good delay estimation accuracy, but it cannot obtain a good estimation of all parameters to reconstruct the full-band data.", "Therefore, we do not compare with Baseline 1 and 2 in the wide-band scenario..", "The RMSE between the estimated full-band data and the true full-band data can be calculated via the following equation: $RMSE=\\sqrt{\\frac{\\sum _{n=1}^{N}\\left|r^{\\left(n\\right)}-\\hat{r}^{\\left(n\\right)}\\right|^{2}}{N}},$ where $N$ indicates the number of frequency points in the full-band.", "Figure: RMSE of data reconstruction under different SNR.In Fig REF , we show the RMSE of full-band data reconstruction under different SNR.", "In the case of low SNR, SE algorithm is less affected by noise due to its simple model and few parameters to be estimated.", "However, with the increase of SNR, compared with other multi-band fusion algorithms, the full-band data reconstructed by SPVBI is more accurate, which implies that the proposed algorithm can obtain more accurate estimation of the signal parameters.", "The RMSE performance of SBL algorithm is worse than that of SPVBI and its complexity is higher.", "Figure: High resolution range profile (12dB).In addition, Fig REF shows the high resolution range profile (HRRP) reconstructed by full-band data of different algorithms.", "It can be found that the HRRP of the multi-band fusion algorithm (i.e.", "SE, SBL, SPVBI) is narrower than that of the single-band reconstruction, so the resolution is higher.", "Meanwhile, the RMSE of the full-band data reconstructed by SPVBI algorithm is smaller, so the peak point of SPVBI is closer to the true value as can be seen from the 
rectangular box." ], [ "Comparison of Computational Complexity", "In Table REF , we analyze the per-iteration complexity order of other mainstream multi-band radar sensing algorithms, and then compare them numerically with SPVBI algorithm under a typical setting.", "In the SE algorithm, there is a nonlinear least square fitting step, which is solved by Levenberg-Marquarelt (LM) algorithm with a complexity order of $\\mathcal {O}\\left(N_{m}^{3}\\right)$ .", "In the SBL algorithm, $M_{ob}$ is the number of atoms.", "Typical Settings are as follows: $J=8$ , $N_{p}=10$ , $M=2$ , $N_{m}=50$ , $B=10$ , $N_{LH}=3900$ , $N_{grad}=1500$ , $M_{ob}=5000$ .", "Table: COMPARISON OF THE COMPLEXITY ORDER FORDIFFERENT ALGORITHMSIt can be found that SPVBI algorithm can achieve good trade-off between performance and complexity.", "PVBI algorithm can only be applied in scenarios with few variables, and once the number of variables is large, the complexity will be unacceptable.", "Although the computational complexity of SPVBI is higher than the WR-MUSIC and SE algorithm, the estimation accuracy and resolution are greatly improved.", "Compared with the SBL algorithm, the complexity of SPVBI is greatly reduced and the performance is also improved." ], [ "Conclusions", "In this paper, we proposed a novel high-accuracy algorithm for multi-band radar sensing.", "To overcome the difficulty caused by the oscillation of likelihood function, two signal models transformed from the original multi-band signal model are adopted in a two-stage estimation framework.", "The coarse estimation stage helps to reduce the computational complexity by narrowing down the estimation range, and the refined estimation stage can take full advantage of the carrier phase information between different frequency bands (i.e., multi-band gap gain) to further improve estimation performance.", "Moreover, the SPVBI algorithm based on block SSCA can transform the computation of expectation with exponential complexity in the conventional PVBI into solving stochastic optimization problems, so that the convergence can be guaranteed theoretically and the computational complexity can be greatly reduced through mini-batch random sampling and averaging over iterations.", "Simulation results show that the proposed algorithm achieved good performance with acceptable complexity in different scenarios, and adding the particle position update can speed up convergence and reduce the number of particles required.", "It is worth mentioning that the proposed SPVBI framework has good generalization ability and is expected to be applied to more general parameter estimation scenarios." 
], [ "Proof of Lemma ", "We first introduce the following preliminary result.", "Lemma 3 Given subproblem $\\mathcal {P}_{x_{j}}$ and $\\mathcal {P}_{y_{j}}$ under Lemma 1, suppose that the step sizes $\\rho ^{t}$ and $\\gamma ^{t}$ are chosen according to Assumption 1.", "Let $\\left\\lbrace \\mathbf {x}^{\\left(t\\right)},\\mathbf {y}^{\\left(t\\right)}\\right\\rbrace $ be the sequence generated by Algorithm 2.", "Then, the following holds $\\mathop {\\lim }\\limits _{t\\rightarrow \\infty }\\left|\\mathbf {{\\rm f}}_{x_{j}}^{\\left(t\\right)}-\\nabla _{\\mathbf {x}_{j}}L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t\\right)},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)\\right|=0,w.p.1.$ $\\mathop {\\lim }\\limits _{t\\rightarrow \\infty }\\left|\\mathbf {{\\rm f}}_{y_{j}}^{\\left(t\\right)}-\\nabla _{\\mathbf {y}_{j}}L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t+1\\right)},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)\\right|=0.w.p.1.$ Lemma 3 is a consequence of ([41], Lemma 1).", "We only need to verify that all the technical conditions therein are satisfied.", "Specifically, Condition (a) of ([41], Lemma 1) is satisfied because $\\mathcal {X}$ and $\\mathcal {Y}$ are compact and bounded.", "Condition (b) of ([41], Lemma 1) follows from the boundedness of the instantaneous gradient $\\nabla g_{j}^{\\left(t\\right)}$ .", "Conditions (c)–(d) immediately come from the step-size rule (1) in Assumption 1.", "Although the control-dependent random states are not identically distributed over iterations, the distributions (i.e., positions and weights of particles) change slowly at the rate of order $\\mathcal {O}\\left(\\gamma ^{t}\\right)$ , so we have $\\left\\Vert \\nabla L_{j}^{\\left(t+1\\right)}-\\nabla L_{j}^{\\left(t\\right)}\\right\\Vert =\\mathcal {O}\\left(\\gamma ^{t}\\right)$ .", "Plusing the step-size rule 2) in Assumption 1, Condition (e) of ([41], Lemma 1) is also satisfied.", "Using this result, since $\\nabla L_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t\\right)},\\mathbf {y}_{j}^{\\left(t\\right)}\\right)$ is obviously bounded, then $\\mathbf {{\\rm f}}_{x_{j}}^{\\left(t\\right)}$ and $\\mathbf {{\\rm f}}_{y_{j}}^{\\left(t\\right)}$ are bounded.", "As can be seen, the surrogate function adopted is a convex quadratic function with box constraints.", "Therefore, 1)-3) in Lemma 2 follow directly from the expression of the surrogate function in (REF ) and (REF ).", "For 4) in Lemma 2, the proof is similar to that of ([30], Lemma 1).", "Due to 1)-3) in Lemma 2, the families of functions $\\left\\lbrace \\overline{f}_{x_{j}}^{\\left(t_{i}\\right)}\\left(\\mathbf {x}_{j}\\right)\\right\\rbrace $ and $\\left\\lbrace \\overline{f}_{y_{j}}^{\\left(t_{i}\\right)}\\left(\\mathbf {y}_{j}\\right)\\right\\rbrace $ are equicontinuous.", "Moreover, they are bounded and defined over a compact set $\\mathcal {X}$ and $\\mathcal {Y}$ .", "Hence the Arzela–Ascoli theorem [42] implies that, by restricting to a subsequence, there exists uniformly continuous functions $\\hat{f}_{x_{j}}\\left(\\mathbf {x}_{j}\\right)$ and $\\hat{f}_{y_{j}}\\left(\\mathbf {y}_{j}\\right)$ such that (REF ) and (REF ) in Lemma 2-4) are satisfied.", "Clearly, we have $\\nabla _{\\mathbf {x}_{j}/\\mathbf {y}_{j}}L\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)=\\nabla _{\\mathbf {x}_{j}/\\mathbf {y}_{j}}L_{j}\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)$ almost surely.", "And because of $\\mathop {\\lim }\\limits _{i\\rightarrow \\infty }\\mathbf {{\\rm 
f}}_{x_{j}}^{\\left(t_{i}\\right)}=\\nabla _{\\mathbf {x}_{j}}\\hat{f}_{x_{j}}\\left(\\mathbf {x}_{j}^{*}\\right)$ , $\\mathop {\\lim }\\limits _{i\\rightarrow \\infty }\\mathbf {{\\rm f}}_{y_{j}}^{\\left(t_{i}\\right)}=\\nabla _{\\mathbf {y}_{j}}\\hat{f}_{y_{j}}\\left(\\mathbf {y}_{j}^{*}\\right)$ with Lemma 3, we further have $\\left\\Vert \\nabla _{\\mathbf {x}_{j}}\\hat{f}_{x_{j}}\\left(\\mathbf {x}_{j}^{*}\\right)-\\nabla _{\\mathbf {x}_{j}}L\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)\\right\\Vert & =0,\\\\\\left\\Vert \\nabla _{\\mathbf {y}_{j}}\\hat{f}_{y_{j}}\\left(\\mathbf {y}_{j}^{*}\\right)-\\nabla _{\\mathbf {y}_{j}}L\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)\\right\\Vert & =0.$" ], [ "Proof of Theorem ", "It is easy to see that each iteration of Algorithm 2 is equivalent to optimizing the following surrogate function $\\overline{f}^{\\left(t\\right)}\\left(\\mathbf {x},\\mathbf {y}\\right)=\\sum \\limits _{j=1}^{J}\\left[\\overline{f}_{x_{j}}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}\\right)+\\overline{f}_{y_{j}}^{\\left(t\\right)}\\left(\\mathbf {y}_{j}\\right)\\right].$ Moreover, from Lemma REF , we have $\\mathop {\\lim }\\limits _{t\\rightarrow \\infty }\\overline{f}^{\\left(t\\right)}\\left(\\mathbf {x},\\mathbf {y}\\right)=\\hat{f}\\left(\\mathbf {x},\\mathbf {y}\\right)\\triangleq \\sum \\limits _{j=1}^{J}\\left[\\hat{f}_{x_{j}}\\left(\\mathbf {x}_{j}\\right)+\\hat{f}_{y_{j}}\\left(\\mathbf {y}_{j}\\right)\\right].$ Using Lemma REF and a similar analysis to that in the proof of ([30], Theorem 1), we have that $\\left\\lbrace \\mathbf {x}^{*},\\mathbf {y}^{*}\\right\\rbrace & =\\arg \\min \\hat{f}\\left(\\mathbf {x},\\mathbf {y}\\right)\\nonumber \\\\s.t.", "& \\quad h_{j}\\left(\\mathbf {x},\\mathbf {y}\\right)\\le 0,\\quad j=1,\\ldots ,J\\nonumber \\\\& \\quad H_{j}\\left(\\mathbf {x},\\mathbf {y}\\right)=0,\\quad j=1,\\ldots ,J$ where $h_{j}\\left(\\mathbf {x},\\mathbf {y}\\right)$ and $H_{j}\\left(\\mathbf {x},\\mathbf {y}\\right)$ represent the inequality and equality constraints in the original problem, respectively.", "The KKT condition of problem (REF ) implies that there exist $\\lambda _{1},\\ldots ,\\lambda _{J}$ and $\\mu _{1},\\ldots ,\\mu _{J}$ such that $\\nabla \\hat{f}\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)+\\sum \\limits _{j=1}^{J}\\mu _{j}\\nabla h_{j}\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)+\\sum \\limits _{j=1}^{J}\\lambda _{j}\\nabla H_{j}\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right) & =0\\nonumber \\\\h_{j}\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)\\le 0,\\quad \\quad H_{j}\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)=0,\\quad j=1,\\ldots ,J\\nonumber \\\\\\mu _{j}\\ge 0,\\quad j=1,\\ldots ,J\\nonumber \\\\\\mu _{j}h_{j}\\left(\\mathbf {x}^{*},\\mathbf {y}^{*}\\right)=0,\\quad j=1,\\ldots ,J.$ Finally, it follows from Lemma REF and (REF ) that $\\left\\lbrace \\mathbf {x}^{*},\\mathbf {y}^{*}\\right\\rbrace $ also satisfies the KKT condition of Problem $\\mathcal {P}$ .", "This completes the proof."
], [ "Derivation of Particle Position Gradient", "The general formula for the gradient of position is $& \\frac{\\partial g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t\\right)},\\mathbf {y}_{j}^{\\left(t\\right)},\\Lambda _{\\sim j}^{\\left(b\\right)}\\right)}{\\partial x_{j,p}^{\\left(t\\right)}}\\nonumber \\\\& =-y_{j,p}^{\\left(t\\right)}\\left[\\frac{\\partial \\ln p\\left(x_{j,p}^{\\left(t\\right)}\\right)}{\\partial x_{j,p}^{\\left(t\\right)}}+\\frac{\\partial \\ln p\\left(r\\left|\\Lambda _{\\sim j}^{\\left(b\\right)},x_{j,p}^{\\left(t\\right)}\\right.\\right)}{\\partial x_{j,p}^{\\left(t\\right)}}\\right],$ where the two terms in brackets are derived for different variables as follows." ], [ "Complex scalar $\\alpha _{k}^{^{\\prime }}$", " where the superscript $\\left(b\\right)$ indicates that its value is generated by the $b$ -th sample of a mini-batch." ], [ "Scattering coefficient $\\beta _{k}$", "$\\beta _{k}$ only takes some fixed discrete values of $\\left\\lbrace -1,-1/2,0,1/2,1\\right\\rbrace $ , which in physical sense correspond to the geometry of the scatterer as corner, edge, point, simple curved surface and flat plate [31], so there is no need to optimize the position of the particles here." ], [ "Derivation of Particle Weight Gradient", "The general formula for the gradient of weight is $& \\frac{\\partial g_{j}^{\\left(t\\right)}\\left(\\mathbf {x}_{j}^{\\left(t+1\\right)},\\mathbf {y}_{j}^{\\left(t\\right)},\\Lambda _{\\sim j}^{\\left(b\\right)}\\right)}{\\partial y_{j,p}^{\\left(t\\right)}}\\nonumber \\\\& =\\ln y_{j,p}^{\\left(t\\right)}-\\ln p\\left(x_{j,p}^{\\left(t+1\\right)}\\right)-\\ln p\\left(r\\left|\\Lambda _{{\\rm \\sim }j}^{\\left(b\\right)},x_{j,p}^{\\left(t+1\\right)}\\right.\\right)+1.$" ] ]
2207.10427
[ [ "An inner boundary condition for solar wind models based on coronal\n density" ], [ "Abstract Accurate forecasting of the solar wind has grown in importance as society becomes increasingly dependent on technology that is susceptible to space weather events.", "This work describes an inner boundary condition for ambient solar wind models based on tomography maps of the coronal plasma density gained from coronagraph observations, providing a novel alternative to magnetic extrapolations.", "The tomographical density maps provide a direct constraint of the coronal structure at heliocentric distances of 4 to 8Rs, thus avoiding the need to model the complex non-radial lower corona.", "An empirical inverse relationship converts densities to solar wind velocities which are used as an inner boundary condition by the Heliospheric Upwind Extrapolation (HUXt) model to give ambient solar wind velocity at Earth.", "The dynamic time warping (DTW) algorithm is used to quantify the agreement between tomography/HUXt output and in situ data.", "An exhaustive search method is then used to adjust the lower boundary velocity range in order to optimize the model.", "Early results show up to a 32% decrease in mean absolute error between the modelled and observed solar wind velocities compared to that of the coupled MAS/HUXt model.", "The use of density maps gained from tomography as an inner boundary constraint is thus a valid alternative to coronal magnetic models, and offers a significant advancement in the field given the availability of routine space-based coronagraph observations." ], [ "Introduction", "Rapid changes in solar wind conditions have a direct impact on Earth's magnetosphere [31], ionosphere [32], and both ground-based and space-based technology [21], [5], [12].", "Estimates of economic impact of space weather on European power grids alone range from €10s-100s billion [13].", "Risk can be mitigated considerably by early warnings of impending space weather, currently undertaken by organisations such as the Meteorological Office in the UK.", "We believe that improvements in space weather forecasting depend primarily on three related categories: (1) Better observations of the Sun, corona, and solar wind; (2) Greater understanding of the physical processes operating in the corona and solar wind; (3) Improvements in data analysis and forecasting methods.", "This work presents a novel boundary condition to solar wind models based on tomography maps created from coronagraph observations of the solar atmosphere, thus an advancement that belongs to the third category, and can contribute to the second.", "The use of tomography maps as an inner boundary condition to a solar wind model, combined with an ensemble approach, plays a key role in the Coronal Tomography (CorTom) module to the Space Weather Empirical Ensemble Package (SWEEP) project.", "SWEEP is funded by the UK government's Space Weather Instrumentation, Measurement, Modelling, and Risk (SWIMMR) scheme and will provide an operational space weather prediction package to the UK Meteorological Office by 2023.", "The SWEEP package operates a robust, complimentary framework of multiple models using different boundary conditions including the tomography described in this paper, and both simple and more sophisticated magnetic models [69], [68], [17].", "Approaches to space weather forecasting can be broadly placed in two groups: simulations that use observations of the Sun’s photosphere to drive a model of the solar wind (e.g., Wang-Sheeley-Arge 
(WSA)/ENLIL: [3], [67]) and a persistence based approach which extrapolates the future behavior of the solar wind based on its past behavior over various timescales [49].", "A persistence based approach assumes that the ambient solar wind does not evolve drastically over a solar rotation.", "This is shown in observational data by a weak recurrence in geomagnetic activity and solar wind conditions [11] over a $\\sim $ 27 day period.", "Hence a persistence approach can provide a baseline for comparison of solar wind forecasting models [50].", "The heliospheric simulations are primarily based on remote solar observations and depend on photospheric magnetic field observations to build a model of the corona (e.g, MAS: [29]; [58], AWSom: [10], etc.).", "It is the modelled conditions of the outer corona that forms the inner boundary for heliospheric solar wind models (e.g, ENLIL: [47], EUHFORIA: [57], HUXt: [51]).", "The Magnetohydrodynamic Algorithm outside a Sphere (MAS) coronal model uses photospheric magnetic field observational data in order to gain the magnetic field configuration at the solar wind base ([29]; [58]).", "MAS derives an inner boundary condition at 30$ R_{\\odot }$ that can be used in solar wind heliospheric models to predict solar wind conditions at 1AU [53].", "In this work, the MAS inner boundary condition is used to benchmark the results of the tomography derived inner boundary condition.", "MAS adopts a simplified coronal `polytropic' model which assumes the thermal pressure is greater or equal to the magnetic pressure at distances closer to the sun (i.e., $\\beta $ $\\ge $ 1).", "This is used to empirically convert the pressure and density, which are gained via simplified physical laws, into a solar wind solution [54].", "Magnetohydrodynamic (MHD) equations are used and integrated forward in time until the solar wind parameters reach a steady-state and give a full three dimensional state of the solar wind at heights greater than the solar wind Alfvén point ([29]; [58]).", "MHD heliospheric models use this information at 30$ R_{\\odot }$ as inner boundary conditions in order to propagate the solar wind conditions to 1AU ([47]; [51]; [59]).", "Three-dimensional MHD models offer a comprehensive physical model of the solar wind at large scales, but are computationally expensive.", "[59] proposed that the magnetic, gravitational, and pressure gradient forces of the solar wind plasma can be neglected at distances greater than 30$ R_{\\odot }$ , and that a purely radial flow of the ambient solar wind plasma can be assumed, thus vastly reducing the complexity of the MHD equations.", "Heliospheric models that use this reduced physical approach have a greatly increased computational efficiency at the expense of reduced physics whilst still yielding results comparable to full 3D MHD models [51].", "[51] has adapted this reduced physical model for the time domain, namely the `Time-dependent Heliospheric Upwind Extrapolation' (HUXt) model.", "The model complexity is reduced further when limited to only radial components of the equatorial plane, allowing the model to run on a standard desktop computer in seconds, even with a moderate angular ($\\sim $ 2.8$^{\\circ }$ ), radial (1.5$ R_{\\odot }$ ), and time ($\\sim $ 4 hour) resolution.", "Whilst space weather forecasting has developed enormously over the past few decades, there is still large room for further improvement.", "One of the main constraints on the accuracy of space weather forecasting is the lack of adequate observational 
data that constrains direct empirical models between the photosphere and Earth.", "According to the concluding sentences of [30]: `the pace of [physical model] development has outstripped the pace of improvements in the quality of the input data which they consume, and until this is remedied, these models will not achieve their full forecasting potential'.", "This statement is a strong argument for new instruments and missions that are focused on operational space weather to provide higher quality data.", "It is also an argument for full exploitation of current resources: new or improved constraints on coronal structure and density at the coronal-heliosphere boundary are therefore important.", "Recent efforts are focused on improving the diagnostics from photospheric observations through advanced magnetic modelling, or on using alternative observations such as radio scintillation [17].", "Our efforts are focused on using coronagraph observations through coronal rotation tomography to provide a direct constraint on solar wind models.", "A less direct approach developed by [56] is to use coronagraph observations to constrain magnetic models without resolving the line of sight (LOS).", "There are ample, routine observations made of the coronal-heliospheric boundary region by space-based coronagraphs (e.g., LASCO [8], COR2: [20]).", "These are largely neglected in the context of ambient solar wind modelling due mainly to the LOS problem: given the complex spatial distribution of high- and low-density streams along the LOS it is impossible, from a single observation, to estimate this distribution.", "Coronal rotational tomography techniques aim to find a distribution of electron density in a 3D corona which best satisfies a set of coronagraphic polarized brightness (pB) observations made over half a solar rotation (half a rotation since both east and west limbs are observed), subject to some reasonable assumptions such as the smoothness of the reconstruction, thus helping to rectify the LOS problem.", "A comprehensive review is given by [4].", "An iterative regularised least-squares fitting method was presented by [16], and developed and applied to other cases [9].", "A similar method has been applied to very low heights in the corona [28], and expanded to include a time-dependency [66].", "A novel method for creating qualitative maps of the distribution of coronal structure was introduced by [45], resulting in a comprehensive study of coronal structure over a solar cycle [43] and measurements of coronal rotation rates [33].", "Machine learning approaches are currently being developed [25].", "A new quantitative method based on spherical harmonics is presented by [37], utilising the calibration and processing methods of [36].", "Further refinements to the method, and initial results, are presented in [40], and a study of coronal rotation rates based on the tomography is made by [14].", "The method is based on a spherical harmonic model of the coronal electron density, and gives reconstructions at distances of 4 to 10$ R_{\\odot }$ .", "At these heights and above, the corona can be assumed to flow radially outward.", "The tomography results, when compared at different heights, confirm this radial structure [40].", "A desirable goal over the next decade would be a unified approach, where solar wind models are driven by coronal models based on as many empirical constraints as possible, including magnetic field extrapolations, coronal density estimations, and any other routine empirical constraints 
such as outflow velocity estimations.", "A crucial advancement would be the inclusion of a time-dependent inner boundary condition based on time-dependent magnetic models [70], [68], and time-dependent tomography [38], [66].", "[38] state that the time-dependent method requires further development for routine use; thus, our current work uses the static tomography method of [40].", "The aim of this work is to improve the accuracy of predictions of the ambient solar wind velocity at Earth, and to investigate the relationship between coronal electron density and solar wind velocity at a distance of 8$ R_{\\odot }$ .", "Although iterative tomography methods have been coupled with MHD models to forecast space weather before [22], these models and studies are primarily focused on Coronal Mass Ejections (CMEs) [23] and use an iterative integrated kinematic model in order to predict ambient solar wind conditions [24].", "Such heliospheric models incorporate CME models such as the cone model [48], which use observational constraints on CME characteristics in order to model CME propagation throughout the heliosphere.", "CMEs are not considered further in this work, although our method and results are relevant to CME propagation and arrival time predictions.", "This work proposes to use the tomography results as a new inner boundary condition for heliospheric solar wind models (HUXt), describes an initial implementation, and presents initial results compared to boundary conditions based on MAS as a proof of concept.", "The methodology used in this study, as well as the simple empirical relationship used to derive solar wind velocity from density at a distance of 8$ R_{\\odot }$ , is described in section .", "The iterative method used to improve the model's match to in situ data through adjustment of input parameters is given in section REF .", "MAS inner boundary conditions, when coupled with 3D MHD heliospheric models, have produced results that compare well with in situ (OMNI) bulk solar wind velocity data [53].", "The optimised model output is compared to results gained by using a MAS-derived inner boundary condition and in situ data obtained via the Operating Missions as Nodes on the Internet (OMNI) satellite network in section REF .", "The tomography-based model is then applied to dates at different stages of solar cycle 24 in section REF .", "Section REF explores the operational feasibility of the tomography inner boundary condition by adopting a persistence-based approach.", "Section presents an implementation of an ensemble model that demonstrates how uncertainties can be quantified by the system.", "Conclusions are given in section ." ], [ "Method", "This section gives a brief overview of the tomography method (section REF ), outlines the method to generate the inner boundary velocity condition from the tomography density maps (section REF ), gives an overview of the heliospheric solar wind model (section REF ), and gives an overview of the Dynamic Time Warping (DTW) algorithm and how this will be exploited in the context of this study (section REF )."
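The dynamic time warping algorithm mentioned above, used later in the paper to quantify the agreement between modelled and observed velocity time series, can be illustrated with a generic textbook implementation. The sketch below uses an absolute-difference cost and no windowing, and is not necessarily the exact DTW configuration adopted in this study; the example series values are arbitrary.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D series
    (absolute-difference cost, no window; generic textbook version)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of match, insertion and deletion steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example with arbitrary modelled and 'observed' solar wind speeds (km/s)
model = np.array([320.0, 350.0, 420.0, 500.0, 480.0, 400.0])
obs = np.array([310.0, 340.0, 360.0, 430.0, 510.0, 470.0, 410.0])
print(dtw_distance(model, obs))
```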
], [ "Tomography Maps of the Solar Corona", "The COR2 coronagraphs are part of the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI: [20]) suite of instruments aboard the twin Solar Terrestrial Relations Observatory (STEREO A & B: [26]).", "Density maps are calculated from COR2A data in 3 main steps: Calibration is applied to COR2 polarized brightness (pB) observations over a period of half a Carrington rotation ($\\pm $ 1 week from the required date) using the procedures of [36].", "The processing includes a method to reduce the signal from coronal mass ejections (CMEs) [39].", "Following calibration, the data is remapped in solar polar co-ordinates, and an annular slice at the required distance from sun center extracted over several hundred observations taken over the time period.", "Figure REF a shows an example of data used as input for the tomography.", "For this example, the central date is 2018/11/11 12:00, the data spans a period from 2018/11/04 00:00 to 2018/11/17 23:08, and the distance chosen is 8$ R_{\\odot }$ .", "This date range corresponds approximately to the mid-date of Carrington rotation (CR) 2210.", "Tomography is applied to the data using the regularized spherical-harmonic optimization approach of [37].", "The method is based on a spherical harmonic distribution of density at the height of reconstruction, with a $r^{-2.2}$ decrease in density above this inner height, thus a purely radial density structure is prescribed.", "Line-of-sight integrations are made of the spherical-harmonic based densities, thus a set of brightnesses are gained, one for each order, with the lines-of-sight corresponding to the COR2A observations.", "The problem is then reduced to finding the coefficients of each order based on a regularised least-squares fitting between the observed brightness and the spherical harmonic brightnesses.", "In this work, a 22th order spherical harmonic basis results in a density reconstruction with 540 longitude and 270 latitude bins for a chosen distance (restricted to between 4 and $\\approx $ 10$ R_{\\odot }$ , with this range limited by the instrument's useful field of view).", "In this work, we use only the reconstructions for a distance of 8$ R_{\\odot }$ .", "Note also that the tomography reconstruction is static - it has no time dependence.", "High-density streamers are narrowed, and a correction for `excess' density, possibly F-corona contamination [46], is applied according to the method of [40].", "The narrowing method is applied based on the gradients within the initial tomography density map, with the degree of narrowing controlled by a single parameter.", "This parameter is adjusted to find an optimal fitting to the observations.", "The `excess density' is estimated based on an analysis of densities within large low-density regions (coronal holes) between tomography maps made at a range of distances (4 to 8$ R_{\\odot }$ ), and a consideration of mass flux.", "Figure REF a shows the observed coronal polarized brightness.", "This distribution is very similar to the reconstructed polarized brightness shown in figure REF b.", "The reconstructed brightness is the result of synthetic observations that use the electron density distribution.", "The tomography density map resulting from the above steps, is shown in figure REF c. 
This visualises the distribution of electron density at a distance of 8$ R_{\odot }$ resulting from the steps described above.", "Note that this is a static reconstruction, since it gives a non-time-dependent density distribution that is spatially smooth and best satisfies the data.", "The distribution of the reconstruction shows a good agreement with the observations, and all the large-scale streamer features are well reconstructed.", "However, the reconstruction lacks the fine-scale detail of the observations, including a general `fuzziness' of the brightness and temporal changes over timescales of less than a day.", "To reconstruct fine-scale detail, it is necessary to apply the time-dependent tomography described by [38].", "There is considerable small-scale variation in the nascent slow solar wind [2], and the evolution of this variation in the heliospheric wind is an active field of research.", "For operational forecasting of the ambient solar wind, our immediate problem is to model the large spatial scale and longer-timescale variations, and the static tomography reconstruction is adequate for this purpose.", "Figure: (a) The observed coronal polarized brightness at a distance of 8$ R_{\odot }$ , plotted as a function of position angle (measured counter-clockwise from solar north), and time.", "This is used as input to the tomography method.", "(b) The reconstructed polarized brightness, or synthetic observations gained from the tomographical density distribution.", "(c) The electron density at a distance of 8$ R_{\odot }$ , mapped in Carrington longitude and latitude." ], [ "Generating the Inner Boundary Condition", "We wish to use the tomography density map as an inner boundary condition to the HUXt model, and to compare the resulting solar wind velocities near Earth with the results of using a MAS-based inner boundary.", "HUXt allows the user to set the heliocentric distance of the inner boundary.", "In this work, a distance of 8$ R_{\odot }$ is used for the tomography-based inner boundary, and 30$ R_{\odot }$ for the MAS-based inner boundary.", "For the purpose of solar wind modelling in the Sun-Earth equatorial plane, a slice of density is extracted from the tomography map at Earth's Carrington latitude at the initiation of the Carrington rotation (this latitude is 4.8$^{\circ }$ for the 2018 November example).", "The heliospheric model requires only 128 solar wind velocity values at equally spaced longitudes with increments of $\approx $ 2.8$^{\circ }$ .", "The extracted tomography longitudinal profile is rebinned to this resolution.", "Figure REF shows the density profile at this latitude as a blue curve.", "The solar wind model requires a set of radial outward velocities as an inner boundary condition.", "We adopt a simple linear inverse relationship between densities and velocities, approximately consistent with the general properties of the solar wind as revealed by both in situ and remote observations [1], [18], [63], given by: $ V = V_{max} - \left[ \left(\frac{n- n_{min}}{n_{max} - n_{min}} \right) \left( V_{max} - V_{min}\right) \right],$ where $V$ and $n$ are the solar wind velocity and electron density respectively, $n_{max}$ and $n_{min}$ are the maximum and minimum densities in the equatorial plane respectively, and $V_{max}$ and $V_{min}$ are model parameters specifying the maximum and minimum solar wind velocity at the height of the inner boundary (8$ R_{\odot }$ ).", "Whereas the density values ($n_{max}$ and $n_{min}$ ) are defined by the tomography
data at 8$ R_{\odot }$ , the range of the velocity values is unknown.", "This highlights the need for an optimisation process with the aim of finding optimal values of $V_{max}$ and $V_{min}$ .", "This optimisation process is described in section REF .", "Figure REF demonstrates the conversion of the electron density to the solar wind velocity at the inner boundary for the 2018 November (CR2210) example with $V_{max}$ and $V_{min}$ set to their optimal values of 480 and 220 km$\,s^{-1}$ respectively.", "Note that these optimal values are found in a following section.", "This simple inverse linear relationship of equation REF is likely overly simplistic compared to the true relationship between density and velocity in the corona, but serves the purpose of providing a proof of concept of a direct relationship for this study.", "Future efforts will experiment with optimising this empirical relationship; for example, an inverse sigmoid or an exponential relationship may better model the likely bimodal slow and fast wind patterns in the nascent solar wind, and may reveal a global relationship that provides an optimised model of the solar wind over several years, or a full solar cycle.", "We note that converting the density into an estimate of velocity is a similar approach to that used by magnetic models, where an empirical relationship is required to transform the magnetic field distribution to velocity [17].", "Figure: Coronal electron density (blue, right axis) taken from the tomographical map of figure c at Earth's latitude, and solar wind velocity at 8$ R_{\odot }$ (black, left axis) gained from the density using the simple inverse relationship of equation .", "For this example, $V_{max} = 480$ km$\,s^{-1}$ and $V_{min} = 220$ km$\,s^{-1}$ ."
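To make the construction of the inner boundary concrete, the following Python sketch (ours, for illustration only) implements the linear density-to-velocity mapping of the equation above and rebins an extracted equatorial density profile to the 128 equally spaced longitudes required by HUXt; the function and variable names are not taken from any released code.

```python
import numpy as np

def rebin_profile(profile, n_bins=128):
    """Rebin a longitudinal density profile (e.g. 540 bins) onto n_bins
    equally spaced Carrington longitudes by block averaging."""
    profile = np.asarray(profile, dtype=float)
    lon_in = np.linspace(0.0, 360.0, len(profile), endpoint=False)
    edges = np.linspace(0.0, 360.0, n_bins + 1)
    idx = np.digitize(lon_in, edges) - 1          # input bin -> output bin
    return np.array([profile[idx == i].mean() for i in range(n_bins)])

def density_to_velocity(n, v_min, v_max):
    """Linear inverse mapping of the equation above: the highest density maps
    to v_min and the lowest density to v_max (velocities in km/s)."""
    n = np.asarray(n, dtype=float)
    n_min, n_max = n.min(), n.max()
    return v_max - (n - n_min) / (n_max - n_min) * (v_max - v_min)

# Example: equatorial slice from the tomography map -> HUXt inner boundary
density_slice = np.random.rand(540) * 1.0e4       # placeholder densities
v_boundary = density_to_velocity(rebin_profile(density_slice), 220.0, 480.0)
```

In this sketch the values 220 and 480 km s$^{-1}$ are simply the optimal values quoted above for the CR2210 example; in practice they come from the search described in the following sections.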
], [ "Heliospheric Upwind Extrapolation Model", "HUXt is an incompressible solar wind model that solves a reduced Burgess equation along 1D vectors of velocities in the radial domain.", "The `upwind' conditions only allow outward flow and thus forbids any sunward motion of the solar wind.", "Therefore, considerations for stream interaction are included through adjustments in solar wind speed to uphold the upwind conditions (or at a greater distance in the radial co-ordinate) [59].", "For the purposes of forecasting, the results of this model compare well to full 3D MHD models for both predictions of the ambient solar wind and CME events ([19]; [51]).", "In this work we use the static tomographical reconstruction and therefore a time-constant boundary condition.", "The HUXt model includes a parameter to account for residual solar wind acceleration.", "[59] used an exponential function to impose a velocity profile that approached its final value asymptotically over a distance range between the lower boundary and $\\approx $ 50$ R_{\\odot }$ (an acceleration parameter of 0.15 which is built in to the HUXt model).", "In this work, we use this acceleration parameter for both slow and fast wind streams.", "Investigating this parameter is a central focus of our future work: it is well known that the slow wind reaches its final velocity at a greater distance from the sun compared with that of the fast [62].", "HUXt uses a five day `spin-up' time.", "All longitudinal and radial model points are initialised with a value of 400 km$\\,s^{-1}$ .", "The velocity at the inner boundary condition is rotated through$\\approx $ -66$^{\\circ }$ (equal to the solar rotation over 5 days), and the solar wind conditions are iteratively propagated outward whilst the boundary condition is rotated forward with time.", "Therefore, after the five day `spin-up' period, propagated solar wind velocity values have reached distances beyond the orbit of Earth.", "Following the spin-up period the solar wind velocity conditions are simulated forward in time by 27.27 days (or one rotation).", "Results that are generated during the spin-up period are discarded from the model output [51].", "For this study, the inner boundary condition is used as a static state which is not altered during the model run.", "The evolution of solar wind velocity can be plotted as a function of time at any point in space within the model.", "In this work, we limit the results to Earth, allowing a direct comparison with in situ measurements.", "We use the reduced 5-min resolution combined satellite network data provided by OMNI.", "Sporadic short periods of missing solar wind velocity values are linearly interpolated and then binned to form an hourly average with an associated standard deviation.", "This is then smoothed with a 10-hour moving window average." 
], [ "Dynamic Time Warping", "Dynamic Time Warping (DTW) is as an effective algorithm to quantify the agreement between two time series and is used here to compare the model and in situ solar wind velocities.", "The DTW algorithm was initially used to aid automated speech recognition and has recently been used in a variety of fields such as economics [15], biology [64], and space weather ([61]; [52]).", "DTW requires two vectors (1D arrays, $A$ and $B$ ) as input.", "Vectors $A$ and $B$ are not required to be the same length.", "The Euclidean distance is calculated between every point in set A to every point in set B and is thus assigned a cost function for every possible alignment between the two data sets.", "This DTW cost or `DTW distance' metric is then minimised to find the optimal alignment between data set A and B [7].", "Effectively, DTW is non-linearly stretching and compressing each data set by connecting like for like structures between the sets.", "More efficient versions of this algorithm have been generated for reduced computational expense such as the Python FastDTW algorithm package used in this study [60].", "The requirements of the DTW algorithm is as follows: The two sets are ordered in time.", "Both sets start and end at the same time, or $A_0$ is anchored to $B_0$ , and $A_{n-1}$ is anchored to $B_{m-1}$ , where $n$ and $m$ are the number of elements in $A$ and $B$ respectively.", "Every element in data set $A$ will be matched with at least one corresponding data point from data set $B$ , and vice versa.", "The optimum path must not `cross' between elements (for example, if $A_{i}$ is paired to $B_{j}$ , then $A_{i+1}$ or later cannot be paired with $B_{j-1}$ or earlier).", "A metric used in this study in order to quantify the DTW distance is the Sequence Similarity Factor (SSF).", "SSF, as defined by [61], is described in eq REF : $ SSF=\\frac{DTW_{Dist}(O,M)}{DTW_{Dist}(O,\\bar{O})}$ Where $M$ represents the modelled solar wind velocity and $\\bar{O}$ represents the average magnitude of the observed in situ data ($O$ ).", "SSF allows a direct comparison of the modelled data to that of the averaged in situ data and provides a surrogate score of the model.", "For context, if SSF is > 1, the predicted solar wind velocities are worse than that of a constant mean observed solar wind value across the full time period of the prediction.", "If SSF = 0, a perfect prediction has been made and the modelled data matches the in situ data exactly." 
], [ "Velocity Optimisation using Dynamic Time Warping", "This section describes and implements a simple method to derive optimised values of both the $V_{min}$ and $V_{max}$ terms, and shows how this approach yields a far improved agreement between the model and measured solar wind velocities at Earth, as well as time of arrival of fast solar wind streams.", "Figure REF demonstrates the effect of using arbitrary (and inaccurate) $V_{max}$ and $V_{min}$ velocity terms on the solar wind model values at Earth.", "$V_{max}$ and $V_{min}$ parameters are set at 600 km$\\,s^{-1}$ and 250 km$\\,s^{-1}$ (shown in figure REF ) and 380 km$\\,s^{-1}$ and 110 km$\\,s^{-1}$ (shown in figure REF ).", "This figure also visualises the DTW connections between the model and OMNI measurements along the optimum DTW path (shown in red) with the overall DTW distance being visualised by the sum of the length of the red lines.", "Both overestimation and underestimation of the velocity parameters will cause inaccurate predictions of the solar wind at Earth and thus give a larger DTW path distance compared to a model run with optimised or `best fit' velocity parameters.", "For example, the magnitude of the DTW path distance metrics for the model runs seen in figure REF a) and REF b) are $6.03\\times 10^4$ (SSF value of 1.34) and $7.66 \\times 10^4$ (SSF value of 1.71) respectively.", "Therefore, both model runs shown in figure REF provide a greater DTW distance compared to a constant average observed velocity across the full Carrington rotation.", "In order to obtain velocity values that give an optimal fit (or lowest DTW distance) between the model and in situ data, the efficiency of the HUXt model is exploited in an exhaustive search method.", "The tomography/HUXt model is run repeatedly with incrementally changing $V_{max}$ and $V_{min}$ terms, and the total DTW distance is recorded for each run.", "Figure: DTW optimal path (red) between in situ data and two Tomography/HUXt model runs for CR2210 with a) overestimated V max V_{max} and V min V_{min} terms of 600 and 250 kms -1 \\,s^{-1} respectively, and b) underestimated V max V_{max} and V min V_{min} terms of 380 and 110 kms -1 \\,s^{-1} respectively.Figure REF shows the relationship of optimal DTW path distance between the in situ and Tomography/HUXt model solar wind velocity at 1AU with incrementally changing velocity terms in equation REF .", "The velocity ranges used in this study are 50 - 350 km$\\,s^{-1}$ and 350 - 650 km$\\,s^{-1}$ for $V_{min}$ and $V_{max}$ respectively in order to account for a wide velocity range for each parameter while also constraining each parameter to values that are consistent with physical solar wind velocities.", "We use 30 increments between these extremes (so $30 \\times 30=900$ model evaluations).", "Figure REF shows a minimum optimal DTW path distance for the tomography derived inner boundary condition corresponding to velocity magnitudes of 220 and 480 km$\\,s^{-1}$ for $V_{min}$ and $V_{max}$ respectively.", "This is represented by the white cross in figure REF .", "These values are lower than observed at Earth due to the HUXt heliospheric model incorporating an acceleration parameter.", "A comparison between smoothed OMNI satellite data and the HUXt output with the optimal $V_{min}$ and $V_{max}$ is shown in figure REF , and demonstrates a very strong correlation between the OMNI data and the output of the tomography/HUXt model (for statistical analysis see Table REF ).", "The time of arrival of the faster solar wind 
streams agree to within $\pm $ 1 day, with the velocities agreeing to within $\pm $ 20 km$\,s^{-1}$ .", "In order to allow a fair comparison between the tomography and MAS based inner boundary conditions (see section REF ), a similar exhaustive optimisation process is applied to the MAS inner boundary condition.", "This optimisation process effectively scales the initial MAS inner boundary condition between two values ($VM_{min}$ and $VM_{max}$ ), with the aim of minimising the DTW path distance between the modelled and in situ data.", "During the optimisation process and for following comparisons, the MAS inner boundary height was set at 30$ R_{\odot }$ .", "The results of this optimisation can be seen in figure REF , with the minimum DTW path distance corresponding to $VM_{min}$ and $VM_{max}$ values of 290 and 580 km$\,s^{-1}$ respectively.", "Figure: (a) Contour plot of DTW path lengths with varying magnitude of the $V_{max}$ and $V_{min}$ terms for the tomography derived inner boundary condition.", "The minimum DTW path distance is marked by a white cross, and this point gives the optimal values of $V_{max}$ and $V_{min}$ of 480 and 220 km$\,s^{-1}$ respectively.", "(b) DTW path lengths as a function of varying scale parameters for the MAS-derived inner boundary condition with the white cross showing the optimum $VM_{min}$ and $VM_{max}$ parameters of 280 and 580 km$\,s^{-1}$ respectively.", "Further details of the agreement of model and observation in the context of a DTW analysis are shown in figure REF .", "Figure REF visualises the DTW optimal alignment via the red lines.", "Figure REF shows that the DTW path of the optimal alignment rarely deviates by more than two days from the `ideal' path (shown in grey), which represents a near-perfect agreement between model and in situ data.", "The largest disagreement is seen at 18-24 days, which is evident both in the longer red lines between data points in figure REF and in the largest dispersion between the DTW optimum path and the ideal path seen in figure REF .", "This is due to a model overestimation of speed during this period.", "The histograms demonstrate the differences in time of arrival and the magnitude of solar wind velocity between the modelled and observational data along the optimum path.", "Figure REF shows that there is a bias towards negative values.", "This suggests the model is predicting a later time of arrival, with a mean $\Delta T_{(in situ-model)}$ of -0.45 days, and a standard deviation of 1.16 days.", "Figure REF is also biased towards negative values, suggesting velocity overestimation by the model.", "The mean $\Delta V_{(in situ-model)}$ is -3.64 km$\,s^{-1}$ with a standard deviation of 19.79 km$\,s^{-1}$ .", "Figure: Further analysis of the optimal DTW path.", "(a) Comparison of modelled solar wind velocities at Earth (grey) with in situ data (black) and DTW optimised path (red) (b) Alignment of the modelled and in situ data along the optimised DTW path with respect to time.", "(c) Difference in time between observation and modelled data of aligned points along the optimal DTW path ($\Delta T_{(in situ-model)}$ ) (d) Velocity difference between observation and modelled data of aligned points along the optimal DTW path ($\Delta V_{(in situ-model)}$ )."
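A compact sketch of the exhaustive search described above is given below, re-using the `density_to_velocity`, `rebin_profile` and `dtw_distance` helpers from the earlier sketches; `run_huxt_to_earth` is a hypothetical stand-in for the call that runs HUXt with a given boundary profile and returns the modelled velocity time series at Earth, and is not part of the HUXt API.

```python
import numpy as np

def optimise_velocity_range(density_profile, observed, run_huxt_to_earth,
                            vmin_range=(50.0, 350.0), vmax_range=(350.0, 650.0),
                            n_steps=30):
    """Exhaustive 30 x 30 search for the (v_min, v_max) pair that minimises
    the DTW distance between modelled and observed velocities at Earth."""
    boundary_density = rebin_profile(density_profile)
    best_vmin, best_vmax, best_cost = None, None, np.inf
    for v_min in np.linspace(*vmin_range, n_steps):
        for v_max in np.linspace(*vmax_range, n_steps):
            if v_max <= v_min:                        # skip unphysical combinations
                continue
            boundary = density_to_velocity(boundary_density, v_min, v_max)
            modelled = run_huxt_to_earth(boundary)    # hypothetical model call
            cost = dtw_distance(observed, modelled)
            if cost < best_cost:
                best_vmin, best_vmax, best_cost = v_min, v_max, cost
    return best_vmin, best_vmax, best_cost
```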
], [ "Validation of Model Output", "Figure REF shows a comparison of the CR2210 tomography/HUXt velocity output (Blue) with OMNI in situ measurements (Black) at 1AU and MAS/HUXt velocity output (Orange).", "The tomography based inner boundary parameters $V_{min}$ and $V_{max}$ set at 220 and 480 km$\\,s^{-1}$ respectively.", "The MAS inner boundary condition has scale parameters of $VM_{min}$ and $VM_{max}$ of 290 and 580  km$\\,s^{-1}$ as described in section REF .", "Figure: Comparison of optimised tomography/HUXt (blue) and optimised MAS/HUXt (orange) model predictions of the solar wind velocity at Earth with insitu\\emph {in situ} data (Black) for CR2210.From figure REF , both models have similar profiles, and the main changes between slow and fast wind tend to agree.", "The first small peak (shown in the tomography data 2018 October 29-31) is not present in the MAS data.", "The solar wind velocity between the second (2018 November 2-5) and third peak (2018 November 10-14) drops to intermediate velocities ( $\\approx $ 440 km$\\,s^{-1}$ ) for the tomography-driven model, whilst the MAS model drops to slower velocity ($\\approx $ 350 km$\\,s^{-1}$ ).", "This is significant as the in situ data, as shown in black in figure REF , shows an intermediate velocity ($\\approx $ 440 km$\\,s^{-1}$ ) of the more undisturbed solar wind in between the two fast peaks (2018 November 7-10).", "The magnitude of solar wind velocity during this time is better represented by the tomography inner boundary condition model.", "Table REF presents statistical details of the comparison between models.", "The Pearson correlation coefficient between in situ measurements and models is considerably higher for the tomography/HUXt model.", "The mean absolute error (MAE) of velocities between measurement and MAS/HUXt is approximately 11% higher than that for the tomography/HUXt, while also showing a higher SSF.", "These results show that the general profile and magnitudes of solar wind velocity is closer to the data using the tomography boundary condition compared to the optimised MAS model.", "This demonstrates both the potential value of tomographical density maps as an inner boundary condition, and the benefit of searching for an optimised velocity range at the inner boundary.", "Note that such an optimisation would not be possible without an efficient solar wind model such as HUXt.", "Table: Statistical analysis of the comparison between the tomography/HUXt model and the MAS/HUXt model, with in situ data.Both models fail to give the exact time of arrival of various features.", "For example, the rapid rise from slow to fast wind observed on November 4 arrives approximately one day early in both models.", "The start of the decrease from fast to slow wind on November 13 comes late in the tomography/HUXt model, and the decrease is less rapid than the observed.", "The main reasons for these differences are listed and discussed in section .", "Despite the differences, the comparison of tomography/HUXt to in situ data is promising - the large-scale features of the measured velocity are present in the predicted velocities, and the timings are reasonable given the initial simple implementation of the inner boundary condition.", "The model fails to replicate smaller-scale structures on timescales of a day or less.", "One reason for this are that the tomography densities are inherently smooth - this is an unavoidable result of finding a static tomographical solution which is discussed further in section .", "Other reasons for 
differences between model and measurements include the reduced physics approach of HUXt and the limited resolution of computational modelling." ], [ "Application to other dates", "Here we apply the tomography/HUXt method to two different periods during solar cycle 24.", "The model is applied to 2014 May (CR2150, near solar maximum) and 2018 March (CR2202, at the start of the current solar minimum).", "The tomography maps for these two dates are shown in figure REF .", "Figures REF and REF show the comparison of modelled solar wind velocity at Earth and in situ data for CR2202 and CR2150 respectively, with optimised velocity parameters which are gained from the contour plots seen in figures REF and REF .", "Figure: Electron density as estimated using coronal tomography at a distance of 8$ R_{\odot }$ , for Carrington rotations (a) 2150.5, and (b) 2202.5.", "The red boxes labelled A and B in (a), and the white box in (b), are relevant for the ensemble results of section .", "Figure: (a) Comparison of model and OMNI solar wind velocities for CR 2202 for the optimal fit in terms of minimum DTW path distance (marked by white cross in figure ), with $V_{min}$ and $V_{max}$ set to 230 and 440 km$\,s^{-1}$ respectively.", "b) Contour plot showing the DTW path distance as a function of $V_{min}$ and $V_{max}$ for CR 2202. c) Same as (a), but for CR 2150, with $V_{min}$ and $V_{max}$ set to 240 and 400 km$\,s^{-1}$ respectively (represented by white cross in figure ).", "d) Same as (b), but for CR2150.", "For CR2202, figure REF demonstrates a minimum optimal DTW path distance at a $V_{max}$ value of 440 km$\,s^{-1}$ and a $V_{min}$ value of 230 km$\,s^{-1}$ .", "For this period the model and observational data agree well.", "However, one disparity is present between 2018 March 29 - 2018 April 4, where the in situ data shows a region of higher solar wind velocity which is not present in the modelled data.", "The mean velocity difference between in situ and modelled data along the optimal DTW warped path is 0.49 km$\,s^{-1}$ .", "The time domain also shows an acceptable agreement, with a mean time difference of 1.12 days.", "CR2150 spans an active period close to the height of solar maximum.", "Figure REF shows a significant fast solar wind stream at 2014 May 22-25 which is seen in both model and measurement.", "However, there is a large discrepancy between a peak seen in the model at 2014 May 14-16 and one seen in measurement 3 days earlier.", "Figure REF demonstrates a minimum optimal DTW path distance at 400 and 240 km$\,s^{-1}$ for $V_{max}$ and $V_{min}$ respectively.", "The velocity difference has a mean value of -0.9 km$\,s^{-1}$ .", "These values are close to the previous case studies.", "However, the time difference is much greater in comparison to previous Carrington rotations, with a mean of -1.6 days and a standard deviation of 2.3 days.", "Such a deteriorated agreement is likely due to the increased solar activity during this time, which disrupts the tomography process, makes the time-independent static tomography approach less valid, and increases the chance of CMEs appearing in the in situ measurements.", "Table REF shows a statistical comparison between both optimised tomography/HUXt and MAS/HUXt models with in situ data for CR2150 and CR2202.", "For CR2150, the tomography/HUXt model combination yields a lower MAE (38.44  km$\,s^{-1}$ ) compared to the MAS/HUXt model (40.12  km$\,s^{-1}$ ).", "The MAE for CR2202 offers a significant 32% reduction for the
tomography/HUXt model (39.31  km$\,s^{-1}$ ), compared to that of the MAS/HUXt model (57.93  km$\,s^{-1}$ ).", "For both periods, the tomography/HUXt model shows a smaller DTW path distance and a smaller SSF than the MAS/HUXt model.", "Table: Statistics of the HUXt model run with both tomography and MAS inner boundaries, with the type of inner boundary indicated by the IB column.", "The CR column gives the Carrington rotation number, the $V_{max}$ column gives the tomography $V_{max}$ and the MAS $VM_{max}$ parameters, and the $V_{min}$ column gives the tomography $V_{min}$ and the MAS $VM_{min}$ parameters.", "The model solar wind speeds for Carrington rotations 2150 and 2202 show a more gradual transition between high and low solar wind velocities than is seen in the observational data.", "This is likely due to the smoothness of the density given by the tomography, and the upwind dependence of HUXt.", "The significant velocity overestimation of $\approx $ 75-100 km$\,s^{-1}$ seen between 2014 May 27-29 (figure REF ) could be due to the upwind dependence of HUXt, or due to slow wind from equatorial coronal holes.", "Parker Solar Probe (PSP) has detected low density - low velocity solar wind structures at distances close to the sun that are thought to originate from equatorial coronal holes [6], [27].", "This specific type of solar wind structure is in direct contradiction with the simple inverse relationship of equation REF , which assumes a consistent low density - high velocity relationship.", "This will result in an overestimation of solar wind velocities at the inner boundary and therefore at Earth.", "A low density - low velocity structure could well be the explanation of the velocity overestimation of 2014 May 27-29, as this region maps back to approximately 150$^{\circ }$ longitude, which is a section in between two coronal holes (see figure REF a).", "However, this is an area with clear implications for solar wind forecasting that demands further research."
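For reference, the summary metrics used in these comparisons (MAE, Pearson correlation and SSF) can be collected with a small helper such as the illustrative sketch below, which re-uses the `sequence_similarity_factor` function from the earlier sketch and assumes both series share the same hourly time grid.

```python
import numpy as np
from scipy.stats import pearsonr

def summary_statistics(observed, modelled):
    """Return MAE (km/s), Pearson correlation and SSF for one model run."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    mae = float(np.mean(np.abs(observed - modelled)))
    r, _p_value = pearsonr(observed, modelled)
    ssf = sequence_similarity_factor(observed, modelled)
    return {"MAE": mae, "Pearson": float(r), "SSF": float(ssf)}
```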
], [ "A Persistence Approach", "In this section a persistence based approach is adopted in order to attempt to predict solar wind velocities in a realistic operational context.", "Unlike a traditional persistence model which predicts a near-exact repetition of the observed ambient solar wind conditions for the solar rotation prior [50], in this case the inner boundary condition is updated.", "The coronal densities (and therefore $n_{max}$ and $n_{min}$ terms in eq.", "REF ) are extracted as described in section REF .", "However, this inner boundary condition will not undergo the exhaustive optimisation process described in section REF , but instead use the optimised $V_{max}$ and $V_{min}$ terms unchanged from Carrington rotation prior.", "For example, figure REF shows the comparison between in situ data and the forecast data for CR2203 with $V_{max}$ and $V_{min}$ values of 440 and 230 km$\\,s^{-1}$ which are the optimised parameters for CR2202.", "Likewise, figure REF compares in situ data with forecast data for CR2151 with $V_{max}$ and $V_{min}$ of 400 and 240  km$\\,s^{-1}$ (optimised values for CR2150).", "Figure: Comparison of predicted solar wind conditions of a) CR2203 (V max V_{max} and V min V_{min} of 440 and 230 kms -1 \\,s^{-1} respectively) and b)CR2151 (V max V_{max} and V min V_{min} of 400 and 240  kms -1 \\,s^{-1} respectively).", "The V max V_{max} and V min V_{min} values are that of the optimised velocity terms of the Carrington rotation prior (obtained in section .", ")Figure REF shows a relatively good agreement with the in situ data in terms of time of arrival of the fast solar wind streams (see table REF for statistical analysis).", "However, there are disparities in the magnitude of the peaks.", "For example, in the first peak (seen in the in situ data at 2018 April 21-22) the in situ data predicts a velocity of $\\approx $ 600 km$\\,s^{-1}$ whereas, the model predicts a solar wind velocity of $\\approx $ 500 km$\\,s^{-1}$ .", "The second peak (2018 May 6-11) shows a similar difference.", "This suggests that the $V_{max}$ term is underestimated.", "Likewise, the slower more settled solar wind present in between these two peaks (2018 April 25 - 2018 May 5) is underestimated by the model, suggesting that the $V_{min}$ term is also underestimated.", "Table REF shows an SSF value of 0.65 for CR2203.", "This shows that a persistence approach to the tomography based model can yield more accurate results compared to a single mean value.", "Figure REF shows a weaker agreement between the modelled and in situ data.", "The profiles are different, with the in situ data demonstrating a double peak between the dates of 2018 June 8-13, whereas the model predicts a single peak around this time.", "The magnitude of this peak also disagree by $\\approx $ 150 km$\\,s^{-1}$ .", "This again suggests that the $V_{max}$ term is underestimated.", "Table REF shows generally weaker statistics for CR2151 compared to that of CR2203.", "A SSF value of 1.20 infers that a mean solar wind velocity across the full time period would yield a smaller DTW distance, and potentially better predictions of solar wind velocity compared to a persistence-based approach during this time period.", "During periods near solar maximum (such as CR2151) we would expect a persistence-based approach to yield worse results.", "This is due both to the coronal state changing at a significantly faster rate than during solar minimum, and the tomography reconstruction failing to accurately map the true coronal 
density.", "Therefore, the inner boundary condition will differ significantly to the physical state of the solar corona.", "This highlights the need for a time-dependent tomography approach in an operational context.", "Table: Statistical analysis of the comparison between the tomography/HUXt model with in situ data using optimal velocity parameters gained for the Carrington rotation prior.Overall, here we show that a persistence model could potentially be used in an operational context as a worst case scenario, without updating the $V_{min}$ and $V_{max}$ parameters.", "However, the model certainly loses accuracy when deployed in this fashion especially during periods near solar maximum.", "This stresses the need of either a time-dependent boundary condition, or a more complex, global relationship between coronal electron density and solar wind velocity at a height of 8$ R_{\\odot }$ .", "Both of these issues are the focus of our current efforts.", "The high efficiency of the HUXt model allows an ensemble approach to estimate the uncertainty in model outputs based on selected uncertainties at the inner boundary.", "The tomography density distribution, as shown in figures REF and REF show thin elongated structures that tend to lie longitudinally, and can be very narrow in latitude.", "Therefore, a small error in the distribution given by the tomography method, or small latitudinal deviance of the solar wind during propagation to Earth, can significantly alter the inner boundary condition and the predicted solar wind conditions at Earth.", "An ensemble approach is a straightforward way of investigating and quantifying the effect small variations in latitude of the extracted tomographical data at the inner boundary can have on the resulting solar wind velocities at Earth.", "Another uncertainty to which we can apply the ensemble approach is the choice of $V_{max}$ and $V_{min}$ .", "Here, we use the map of DTW cost function arising from the exhaustive search of $V_{max}$ and $V_{min}$ values to define a range of velocity terms at the inner boundary for 2014 May (CR2150) and 2018 March (CR2202)." 
], [ "Latitudinal dependence", "Figure REF shows two density maps that demonstrate two extremes of solar activity.", "CR2150, as shown in figure REF a, demonstrates a complicated density distribution with many high density streamers positioned at a wide range of latitudes, and which span across the equator.", "Figure REF b shows a quiet solar corona where the streamer belt is longitudinally aligned near the equator, and a more uniform low electron density at higher latitudes.", "The main density enhancements are found exclusively in the equatorial region.", "A model run was conducted as described in section 3.3 with $V_{max}$ and $V_{min}$ remaining fixed at the optimised values of 400 and 240 km$\\,s^{-1}$ for CR2150 and 440 and 230 km$\\,s^{-1}$ for CR2202.", "We adjust the inner boundary velocity profile by extracting densities from the tomography map with $\\pm $ 3$^{\\circ }$ of the latitude of Earth with 1$^{\\circ }$ increments.", "This range of latitudes were chosen with the aim of comfortably covering the latitudinal movement of Earth during one full Carrington rotation, and, more importantly, the unknown drift of the solar wind in latitude between the Sun and Earth.", "Comparisons between the resulting model and measured solar wind conditions at Earth are shown in figure REF .", "The largest variations in the ensemble velocities during CR2150 (see Figure REF ) are present before 2014 May 13.", "During this time we find that the relatively small latitude variation of $\\pm $ 3$^{\\circ }$ leads to a wide variation in velocity.", "For example, the largest velocity range is $\\approx $ 50 km$\\,s^{-1}$ observed on 2014 May 10.", "For the remainder of the period, varying the latitude leads to only a small variation in velocity ($\\approx $ 20 km$\\,s^{-1}$ ).", "This can be explained by examining the density map of figure REF a.", "The solar wind streams at Earth, corresponding to the early part of the period, map back approximately to the region bounded by the red box labelled A in the tomography map.", "This location was calculated by considering the solar wind speed, the distance from Earth to the tomography map location, and the solar rotation rate.", "This region spans the boundary between a high density streamer to the north, and a low density coronal hole to the south.", "Therefore, the northern latitudes in figure REF , yield considerably slower solar wind velocities compared to that of southern latitudes during this time.", "Thus varying the latitude by small values can lead to large changes in the density profile, and therefore the inner boundary velocity.", "Figure: Comparison of in situ data with the results of multiple tomography/HUXt model runs for (a) CR2150 and (b) CR2202, with the latitude of the inner boundary condition varied in one degree increments between ±\\pm 3 ∘ ^{\\circ } from the latitude of Earth.All ensemble model runs fail to match the steep decrease in velocity during 2014 May 25.", "The coordinates of this decrease near Earth map back to the region bounded by the red box labelled B in the tomography map.", "This is a narrow region of low density lying between longitudinally extended regions of higher density to the north and south.", "Either the tomography map is incorrect in this small area, and that this region should contain a higher density and thus map to slow velocities, or the simplistic inverse linear relationship mapping densities to velocities, as given by equation REF fails in this region.", "If the latter, then the relationship should not 
be linear, and should give the highest velocities only for the very lowest density features in the tomography map.", "Current efforts are focused on the investigation of an improved relationship between coronal electron density and solar wind velocity at 8$ R_{\odot }$ .", "All of the ensemble members consistently predict a high-velocity peak during 2014 May 14-16.", "This feature is not seen on these dates in the in situ data.", "This peak maps back to the small low-density feature at Carrington longitude 270$^{\circ }$ seen in the equatorial region of the tomography map in figure REF a.", "We note that the density of this feature is low, but not at the minimum densities estimated for coronal holes seen in this map and others.", "Conversely, the peak at intermediate velocities seen in the data around 2014 May 12 is not present on this date in any of the model runs.", "This feature maps back to a longitude of around 305$^{\circ }$ , close to the boundary of the low-density region at 270$^{\circ }$ .", "It is likely that the velocity peak seen in all ensemble members (2014 May 14-16) is the same peak seen in the in situ data during 2014 May 12.", "The amplitudes of the two peaks are comparable.", "There are several reasons why the model predicted a later time of arrival at Earth for this specific structure, including the modelling limitations of HUXt (e.g.", "the acceleration parameter), or a defect in the tomography map.", "Figure REF shows how the density of the corona changes over the course of just three Carrington rotations.", "During CR2149, shown in figure REF a, there is a large and very low density coronal hole spanning the equator and reaching to high latitudes.", "There is a clearly defined western boundary to this coronal hole at longitude 280$^{\circ }$ near the equator.", "By CR2150, as shown in figure REF b, the coronal hole has greatly reduced in size and does not reach the low densities of the previous rotation.", "Consequently, the western boundary is not clearly defined.", "The following rotation in figure REF c shows the western streamer encroaching into the region previously occupied by the coronal hole, and the coronal hole limited to southern regions.", "This rapid change in structure is the best explanation for the differences between the model and in situ measurements between 2014 May 12 and 18, and shows that a time-dependent inner boundary becomes critical during solar maximum.", "Figure: The rapidly changing density of the corona between (a) CR2149, (b) CR2150, and (c) CR2151.", "The results for CR2202 in figure REF show that the variability in solar wind prediction is largest between 2018 April 1 and 8, where the model velocities vary by more than 100 km$\,s^{-1}$ .", "This region maps back to the white boxed region in figure REF b.", "This region spans the northern boundary of a high-density streamer; thus, small variations in latitude lead to large variations in density and velocity.", "This result shows that even during quiet periods, large uncertainties can arise from small deviations in latitude.", "This is a significant problem for solar wind forecasting, particularly considering that the high-density streamer belt may actually be narrower than that reconstructed using tomography.", "The measured fast wind peak during 2018 April 11 reaches speeds of almost 600 km$\,s^{-1}$ .", "The models over all latitudes consistently underestimate this peak.", "This feature maps back to the low density equatorial region near Carrington longitude
145$^{\circ }$ .", "This is likely due to the tomography map overestimating the density at this point, or to a flaw in the overly simplistic relationship between density and velocity, or to a combination of both." ], [ "Velocity dependence", "Figure REF shows the DTW path distance as a function of $V_{max}$ and $V_{min}$ , used to select the optimal parameters.", "In order to create a velocity-based ensemble, we wish to identify a region within this parameter space where the DTW distance is lower than a set threshold.", "Figure REF shows that there is a selection of velocity values that give an acceptable fit to the in situ data, defined as SSF $\le $ 0.95.", "This range of velocity magnitudes defines an uncertainty which is incorporated into the ensemble.", "The velocity terms for each ensemble member are randomly generated from a normal distribution, with the mean set at the optimal velocity parameters, and a range within three standard deviations of the values that yield an SSF of 0.95 or lower.", "These regions are highlighted by the white contour in figure REF .", "For example, CR2150 has mean (optimal) values of 400 and 240 km$\,s^{-1}$ and a standard deviation of 20 km$\,s^{-1}$ for $V_{max}$ and $V_{min}$ respectively.", "For CR2202, the mean velocity values are 440 and 230 km$\,s^{-1}$ with a standard deviation of 30 km$\,s^{-1}$ .", "Care was taken to ensure that the $V_{max}$ term was greater than the $V_{min}$ term for every ensemble member.", "Figure: Contour plots of the DTW path distance as a function of minimum and maximum inner boundary velocity for Carrington rotations (a) 2150, and (b) 2202.", "The white contour bounds the area that yields a DTW path distance under the chosen threshold (SSF $\le $ 0.95) and the white pixels in the lower right corner represent invalid model parameters where $V_{min} > V_{max}$ .", "Note that the $V_{max}$ range is altered to 300-600 km$\,s^{-1}$ in this plot.", "Figure REF compares in situ data with seven tomography/HUXt outputs, each with a different combination of $V_{max}$ and $V_{min}$ magnitudes, for CR2150 and CR2202.", "This highlights the sensitivity of the output to the velocity range.", "An underestimated $V_{max}$ value will affect the magnitude of the solar wind peaks predicted at Earth, but will also cause the fast solar wind peak to arrive later.", "For example, in figure REF a, the model with $V_{max}=350$  km$\,s^{-1}$ (red) shows a 2-3 day delay in time of arrival of fast solar wind at 2018 May 26, compared with a model run with $V_{max}=460$  km$\,s^{-1}$ (green).", "Both these terms dictate how rapidly the transition between fast and slow solar wind occurs.", "Figure: Comparison of in situ data with the results of multiple tomography/HUXt model runs for (a) CR2150 and (b) CR2202, with a range of $V_{max}$ and $V_{min}$ combinations."
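A minimal sketch of this velocity-term sampling is given below (our illustration, using the standard deviations quoted above); draws are simply rejected until the physical ordering $V_{max} > V_{min}$ is satisfied.

```python
import numpy as np

def sample_velocity_pairs(v_max_opt, v_min_opt, sigma_max, sigma_min,
                          n_members, rng=None):
    """Draw (v_max, v_min) pairs from normal distributions centred on the
    optimal values, rejecting any draw with v_max <= v_min."""
    rng = np.random.default_rng() if rng is None else rng
    pairs = []
    while len(pairs) < n_members:
        v_max = rng.normal(v_max_opt, sigma_max)
        v_min = rng.normal(v_min_opt, sigma_min)
        if v_max > v_min:                      # enforce physical ordering
            pairs.append((v_max, v_min))
    return np.array(pairs)

# e.g. CR2150: optimal 400/240 km/s with sigma = 20 km/s for both terms
pairs_cr2150 = sample_velocity_pairs(400.0, 240.0, 20.0, 20.0, n_members=10000)
```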
], [ "Ensemble Results", "We create an ensemble with 10,000 unique inner boundary conditions generated with randomly selected pairs of $V_{max}$ and $V_{min}$ taken from a normal distribution around the optimal velocities, and a line of latitude randomly selected between +/- 3 degrees from Earth's latitude at the initiation of the Carrington rotation.", "Once generated, the 10,000 inner boundary conditions were used for the HUXt model and the results of solar wind velocities at Earth recorded.", "The results of the ensemble are shown in figure REF .", "A choice of 10,000 ensemble runs was a compromise between quantifying the sensitivity of the results to the parameter uncertainties with a sensible computation time.", "Figure: Results of the ensemble for a) CR2150 and b) CR2202 with the mean predicted solar wind velocity at each time step shown as white, the light grey area representing all predicted solar wind velocities, and the darker grey representing one standard deviation from the mean.", "The non-ensemble model results with the optimal velocity values at the Carrington latitude of Earth is the blue line.Figure REF shows a strong correlation between the mean of the ensemble runs (white line) and the optimal HUXt model for the true latitude of Earth (blue line), with a maximum velocity difference of 12 km$\\,s^{-1}$ .", "The standard deviation of the ensemble around the mean is shown as the dark grey region, and the average standard deviation across the full time series is 20.06 km$\\,s^{-1}$ .", "The most significant deviations between the in situ data and the model data lie at three intervals centered on 2014 May 12, 2014 May 15 (as discussed in section REF ) and 2014 May 27.", "The final disparity shows a steep transition between the fast and slow solar wind (seen around 2014 May 28 in the in situ data) that is not predicted by any of the 10,000 ensemble model runs.", "CR2202, as seen in figure REF , also demonstrates a good agreement with the in situ data with the maximum velocity difference between the mean and the in situ data being 79 km$\\,s^{-1}$ .", "CR2202 shows a greater variability of the ensemble model runs (as shown by the height of both dark grey and light grey regions) compared that of CR2150, with the average standard deviation across the full time-series being 41.7 km$\\,s^{-1}$ .", "This is approximately double that of CR2150.", "It would be expected that value is larger than that for CR2150 due to the larger standard deviation entered into the velocity terms (20  km$\\,s^{-1}$ and 30 km$\\,s^{-1}$ for CR2150 and CR2202 respectively)." 
], [ "Discussion and conclusions", "A new inner boundary condition for solar wind heliospheric models is derived from coronal electron density tomography maps.", "The tomography/HUXt model results give general good agreement with in situ data provided by the OMNI satellite network, with a significant increase in the accuracy of solar wind velocity prediction compared to a HUXt model run with a traditional MAS derived inner boundary condition for the time periods used in this study.", "The time periods investigated in this study were intentionally chosen to demonstrate the model at different stages of the solar activity cycle.", "Given further development, a future study will explore a larger dataset.", "We can identify several aspects of the inner boundary condition and modelling that lead to inaccuracies, and are possible to address with some further work.", "They are: The use of a static tomography reconstruction to represent a dynamic corona.", "We have recently developed a framework for providing time-dependent tomographical densities, that will help address this, although further development is needed [38].", "The overly smooth reconstruction given by tomography compared to the true density.", "The coronal streamer belt likely consists of very narrow high-density structures [44], [42], [41] that are highly variable on small temporal [2] and spatial scales [65], [55].", "Again, developments in tomographical methods and observations can lead to reconstructions that bring us closer to these scales.", "The use of a single, fixed acceleration profile for both fast and slow wind.", "We are developing our own upwind model that includes both velocities and densities, and includes the acceleration profile as a search parameter.", "This method uses iteration to improve the fit to in situ measurements.", "Early results are promising, and may provide a constraint on acceleration.", "Another approach is to use a set of tomographical maps over a range of distances (4 to 10$ R_{\\odot }$ ) to constrain the early acceleration profile of the slow wind, similar to that shown by [40].", "The simplistic inverse relationship of equation REF .", "We are investigating the replacement of the simple inverse relationship of equation REF with a Sigmoid or exponential function, which gives a simple relationship, based on a small number of parameters, to model the transition from slow to fast wind as a function of the density.", "Use of a standard, possibly incorrect, coronal rotation rate.", "Long time series of tomographical maps can give improved estimates of the variable coronal rotation rate [14], [34], [35].", "Near the equator, the coronal rotation rate may vary by a degree per day or more from the Carrington rate, leading to systematic longitudinal errors in solar wind model results.", "The omission of coronal mass ejections (CME) that may be present in the in situ data.", "This can be addressed to an extent using the current approach for operational forecasting: simple parameters describing a cone model of a CME can be input to HUXt, and the CME carried with the solar wind, giving an estimate of time of arrival at Earth.", "A persistence-based approach was made in order to test the operational feasibility of this model.", "The persistence results showed a large difference in accuracy between periods near solar minimum and maximum.", "The rate of evolution of the physical corona during solar maximum is such that a persistence approach will break down over timescales of a Carrington rotation.", "However, 
during solar minimum, results were more acceptable, yet less accurate than a non-persistence approach.", "This highlights the need for a time-dependent inner boundary condition, and a more advanced relationship between density and velocity than that given by equation REF .", "The efficiency of the HUXt model allows an ensemble framework, which can quantify the uncertainties of the predicted solar wind velocities.", "The ensembles were based on sampling an appropriate range of latitudes and velocities at the model inner boundary.", "The ensemble results confirm that both these uncertainties have a significant effect on the model output velocities at Earth.", "Latitudinally narrow, longitudinally-aligned streamer belt structures can lead to high uncertainties in the output arising from small latitudinal uncertainties at the inner boundary.", "The choice of velocity range at the inner boundary also has a large impact on the model results.", "In the context of operational space weather forecasting, this paper shows that the use of a tomography-based inner boundary condition to drive the HUXt model as part of the CORTOM module of the SWEEP project for the UK Met Office is an approach that can offer certain improvements on current systems.", "Use of multiple models over extended periods will enable an extensive analysis of their relative performance, and the improvements described in this paper will be implemented within SWEEP over the coming years.", "We acknowledge (1) STFC grants ST/S000518/1 and ST/V00235X/1, (2) Leverhulme grant RPG-2019-361, (3) STFC PhD studentship ST/S505225/1, (4) the excellent facilities and support of SuperComputing Wales, (5) NASA/GSFC's Space Physics Data Facility's OMNIWeb service and OMNI data, (6) Predictive Science Inc. for the MHDWeb service and MAS model outputs, and (7) the University of Reading (UK) for the HUXt model.", "HUXt is available for download at https://github.com/University-of-ReadingSpace-Science/HUXt.", "The STEREO/SECCHI project is an international consortium of the Naval Research Laboratory (USA), Lockheed Martin Solar and Astrophysics Lab (USA), NASA Goddard Space Flight Center (USA), Rutherford Appleton Laboratory (UK), University of Birmingham (UK), Max-Planck-Institut für Sonnensystemforschung (Germany), Centre Spatial de Liège (Belgium), Institut d'Optique Théorique et Appliquée (France), and Institut d'Astrophysique Spatiale (France)." ] ]
2207.10460
[ [ "Towards Confident Detection of Prostate Cancer using High Resolution\n Micro-ultrasound" ], [ "Abstract MOTIVATION: Detection of prostate cancer during transrectal ultrasound-guided biopsy is challenging.", "The highly heterogeneous appearance of cancer, presence of ultrasound artefacts, and noise all contribute to these difficulties.", "Recent advancements in high-frequency ultrasound imaging - micro-ultrasound - have drastically increased the capability of tissue imaging at high resolution.", "Our aim is to investigate the development of a robust deep learning model specifically for micro-ultrasound-guided prostate cancer biopsy.", "For the model to be clinically adopted, a key challenge is to design a solution that can confidently identify the cancer, while learning from coarse histopathology measurements of biopsy samples that introduce weak labels.", "METHODS: We use a dataset of micro-ultrasound images acquired from 194 patients, who underwent prostate biopsy.", "We train a deep model using a co-teaching paradigm to handle noise in labels, together with an evidential deep learning method for uncertainty estimation.", "We evaluate the performance of our model using the clinically relevant metric of accuracy vs. confidence.", "RESULTS: Our model achieves a well-calibrated estimation of predictive uncertainty with area under the curve of 88$\\%$.", "The use of co-teaching and evidential deep learning in combination yields significantly better uncertainty estimation than either alone.", "We also provide a detailed comparison against state-of-the-art in uncertainty estimation." ], [ "Introduction", "Prostate cancer (PCa) is the second most common cancer in men worldwide [18].", "The standard of care for diagnosing PCa is histopathological analysis of tissue samples obtained via systematic prostate biopsy under trans-rectal ultrasound (TRUS) guidance.", "TRUS is used for anatomical navigation rather than cancer targeting.", "The appearance of cancer on ultrasound is highly heterogeneous and is further affected by imaging artifacts and noise, resulting in low sensitivity and specificity in PCa detection based on ultrasound alone.", "Substantial previous literature and large multi-center trials report low sensitivity of systematic TRUS biopsy.", "In [3], authors compare diagnostic accuracy of TRUS biopsy and multi-parametric MRI (mp-MRI).", "They report sensitivity of systematic TRUS biopsy as low as 42-55% compared to 88-96% for mp-MRI.", "However, they report low specificity of 36-46% for mp-MRI compared to 94-98% for TRUS.", "Fusion of mp-MRI imaging with ultrasound can enable targeted biopsy by identifying cancerous lesions in the prostate [13], [17].", "Fusion biopsy involves either manual or semi-automated registration of lesions identified in mp-MRI with real-time TRUS.", "This process can be time-consuming and inaccurate due to registration errors and patient motion.", "It is therefore highly desirable to improve the capability of biopsy targeting using ultrasound imaging alone at the point of care.", "The recent development of high frequency “micro-ultrasound\" technology allows for the visualization of tissue at higher resolution than conventional ultrasound.", "A qualitative scoring system based on visual analysis of micro-ultrasound images called the PRI-MUS (prostate risk identification using micro-ultrasound) protocol [6] has been proposed to estimate PCa likelihood.", "Several studies have shown that micro-ultrasound can detect PCa with sensitivity comparable to that of 
mp-MRI using this grading system [2], [4].", "A recent systematic review and meta-analysis of 13 published studies with 1,125 total participants found that micro-ultrasound guided prostate biopsy and mp-MRI imaging targeted prostate biopsy resulted in comparable detection rates for PCa [19].", "Research on this technology is in its early stages and relatively few quantitative methods are reported.", "Rohrbach et al.", "[14] use a combination of manual feature selection with machine learning as the first quantitative approach to this problem.", "Shao et al.", "[16] use a deep learning strategy with a three-player minimax game to tackle data source heterogeneity.", "While these studies show significant potential of micro-ultrasound as a diagnostic tool for PCa, methods to date primarily focus on improving accuracy for cancer prediction.", "We argue that, in addition, confidence in detection of cancer can play a significant role in the adoption of this technology to ensure that predictions can be clinically trusted.", "Towards this end, we propose to address several key challenges.", "Machine learning models built from ultrasound data rely on ground truth labels from histopathology that are coarse and only approximately describe the spatial distribution of cancer in a biopsy core [11], [14], [16].", "The lack of finer labels causes two challenges: first, labels assigned to patches of ultrasound images in a biopsy core may not match the ground truth tissue, resulting in weak labels; second, biopsies include other types of tissue such as fibromuscular cells, benign prostatic hyperplasia and precancerous changes.", "Many of these tissues are unlabeled in a histopathology report, which will result in out-of-distribution (OOD) data.", "Therefore, effective learning models for micro-ultrasound data should be robust to label noise and OOD samples.", "Several solutions have been presented to address the above issues, mainly by quantifying the uncertainty of predictions [1], [12], [5].", "Predictive uncertainty can be used as a tool to discard unreliable and OOD samples.", "Evidential deep learning (EDL) [15] and ensemble methods [12] are amongst such approaches.", "In particular, evidential learning is computationally light, run-time efficient and theoretically grounded, and hence fits our clinical purpose here.", "Learning from noise in labels (i.e.", "weak labels) has also been addressed before using methods that 1) estimate the noise, 2) modify the learning objective function, or 3) use alternative optimization [8].", "Among these, co-teaching [9] has been shown to be a successful baseline that can be easily integrated with any uncertainty quantification method.", "In this paper, for the first time, we propose a learning model for PCa detection using micro-ultrasound that can provide an estimate of its predictive confidence and is robust to weak labels and OOD data.", "We address label noise using co-teaching and utilize evidential learning to estimate uncertainty for OOD rejection, resulting in confident detection of PCa.", "We assess our approach by examining the classification accuracy and uncertainty calibration (i.e.", "the tendency of the model to have high levels of certainty on correct predictions).", "We compare our methodology to a variety of uncertainty methods with and without co-teaching and demonstrate significant improvements over the baseline.", "We show that applying an adjustable threshold to discard uncertain predictions yields great improvements in accuracy.", "By allowing correct and
confident predictions, our approach could provide clinicians with a powerful tool for computer-assisted cancer detection from ultrasound.", "Data are obtained from 2,335 biopsy cores of 198 patients who underwent transrectal ultrasound-guided prostate biopsy as part of a clinical trial, following institutional ethics approval.", "A 29 MHz micro-ultrasound system and transducer (ExactVu, Markham) was used for data acquisition.", "A single sagittal ultrasound image composed of 512 lateral radio frequency (RF) lines was obtained prior to the firing of the biopsy gun for each core.", "Primary and secondary Gleason grades, together with an estimate of the fraction of cancer relative to the total core area (the so-called “involvement of cancer\") are also provided for each patient.", "We under-sampled benign cores in order to obtain an equal proportion of cancerous and benign cores during training and evaluation, resulting in 300 benign and 300 cancerous cores.", "As in [14], we exclude cores with involvement less than 40% to learn from data that better represents PCa.", "We hold out the data from 27 patients as a test set, with the remaining 161 used for training and cross-validation.", "For each RF ultrasound image, a rectangular region of interest (ROI) corresponding to the approximate needle trace area is determined by using the angle and location of the probe-mounted needle relative to the imaging plane (Fig.", "REF , yellow region).", "This ROI is intersected with a manually drawn prostate segmentation mask to exclude non-prostatic tissue.", "Overlapping patches are extracted corresponding to 5 mm $\\times $ 5 mm tissue regions with an overlap of 90% covering the ROI.", "These patches are up-sampled in the lateral direction and down-sampled in the axial direction by factors of 5 to obtain a uniform physical spacing of pixels in both directions.", "This results in a patch of 256 by 256 pixels.", "Ultrasound data in each patch are normalized to a mean of 0 and standard deviation of 1.", "Patches are assigned a binary label of 0 (benign) or 1 (cancerous) depending on the pathology of the core.", "The patches and their associated labels are inputs to our learning algorithms." ], [ "Methodology", "We propose a micro-ultrasound PCa detection learning model that is robust to challenges associated with weak labels and OOD samples.", "In this section, we first define the problem, followed by a description of co-teaching as a strategy for dealing with weak labels.", "Next, we incorporate evidential deep learning for quantifying prediction uncertainty and excluding suspected OOD data.", "Finally, we present evaluation metrics to assess our methods." ], [ "Weak Labels and OOD:", "Let $X_i = \\lbrace x_1, x_2, ..., x_{n_i}\\rbrace $ refer to a biopsy core, where $n_i$ is the number of patches extracted from the needle region (Figure REF ).", "For each biopsy core $X_i$ , pathology reports a label $Y_i$ and the length of cancer $L_i$ in the core, which is a rough estimate between zero and the biopsy sample length.", "Following previous work in PCa detection [14], [11], we assign coarse pathology labels $Y_i$ to all extracted patches $\\lbrace x_1, x_2, ..., x_{n_i}\\rbrace $ due to the lack of finer patch-level labels.", "Therefore, many of the labels assigned to patches may not match the ground truth, and they are inherently weak.", "Additionally, tissue other than cancer that is present in the core does not have any gold standard labels.", "Therefore, there is also OOD data." 
], [ "Co-teaching:", "We propose to use a state-of-the-art method, co-teaching, to address label noise for micro-ultrasound data [9].", "For weak label methods, we rely on the findings of [11] showing the success of co-teaching method, and [20], which found that co-teaching significantly out-performed other methods such as robust loss functions.", "This approach simultaneously trains two similar neural networks with different weight initializations.", "According to the theory of co-teaching, neural networks initially learn simpler and cleaner samples then overfit to noisy input.", "Therefore, during each iteration, each network picks a subset of samples with lower loss values as potentially clean data and trains the other network with those samples.", "In a batch of data with size $N$ , only $R(e)*N$ number of samples are selected by each network as clean samples, where $R(e)$ is the ratio of selection starting from 1 and gradually decreasing to a fixed value $1-\\gamma $ .", "Formally we have $R(e)=1-\\min (\\frac{e}{e_{\\max }}, \\gamma )$ , where $\\gamma \\in [0, 1]$ is a hyper-parameter, and $e$ and $e_{\\max }$ are the current and maximum number of epochs, respectively.", "Using two networks prevents confirmation bias from arising." ], [ "Evidential Deep Learning:", "Evidential deep learning (EDL) [15] uses the concepts of belief and evidence to formalize the notion of uncertainty in deep learning.", "A neural network is used to learn the parameters of a prior distribution for the class likelihoods instead of point estimates of these likelihoods.", "Given a binary classification problem where $P(y = 1 | x_i) = p_i$ , instead of estimating $p_i$ , the network estimates parameters $e_0, e_1$ such that $p_i \\sim \\text{Beta}(e_0 + 1, e_1 + 1)$ .", "These parameters are then referred to as evidence scores for the classes, and used to generate a belief mass and uncertainty assignment, via $b_0 = \\frac{e_0}{S}, b_1 = \\frac{e_1}{S}, U = \\frac{2}{S} $ , where $S = \\sum _{i=0}^{1} e_i + 1$ .", "Note that $b_0 + b_1 + U = 1$ .", "$U$ ranges between 0 and 1 and is inversely proportional to our overall level of belief or evidence for each class.", "It is worth mentioning that term confidence is also used often instead of uncertainty with confidence being $1-U$ .", "The network is trained to minimize an objective function based on its Bayes Risk as an estimator of the likelihoods ${p_i}$ .", "If the network produces evidences $e_0, e_1$ for sample $i$ , the loss and predicted uncertainty for this sample are $\\mathcal {L}_i = \\sum _{i=1}^{n} E_{p_i \\sim \\text{Beta}(e_0 + e_1)}\\big (|p_i - y_i|^2\\big ), U_i = \\frac{2}{e_0+e_1+2} ,$ where $e_0$ and $e_1$ are the network outputs.", "The loss also incorporates a KL divergence term, which encourages higher uncertainty on predictions that do not contribute to data fit.", "The method offers a combination of speed (requiring only a single forward pass for inference) and well-calibrated uncertainty estimation with a solid theoretical foundation." 
], [ "Clinical Evaluation Metrics:", "The goal of our model is to provide the operator with clinically relevant information, such as real-time identification of potential biopsy targets.", "It should also state the degree of confidence in its predictions such that the operator can decide when to accept the model's suggestions or defer to their own experience.", "To measure these success criteria, we propose several evaluation metrics.", "Accuracy reported at the level of patches (the basic input to the model) can be misleading due to weak labeling (some correct predictions are recorded as incorrect because of incorrect labels).", "Therefore, we propose accuracy reported at the level of biopsy cores as a more relevant alternative.", "We determine core-based accuracy using core-wise predictions aggregated from patch-wise predictions for the core.", "Specifically, the average of patch predicted labels is used as a probability score that cancer exists in the core [20], [21].", "To model uncertainty at the core level, patch-wise predictions that do not meet a specified confidence threshold are ignored when calculating this score, and if more than 40$\\%$ of the patch predictions for a core fall below this threshold, the entire core prediction is considered “uncertain\".", "We also use “uncertainty calibration\", a metric that assesses how accurate and representative the predicted uncertainty or confidence is (in terms of true likelihood).", "To compute calibration, we compute Expected Calibration Error (ECE) [7], which measures the correspondence between predictive confidence and empirical accuracy.", "ECE is calculated by grouping the predictions so that each prediction falls into one of the $S$ equal bins produced between zero and one based on its confidence score: $\\begin{split}\\text{ECE} & = \\sum _{s=1}^{S}{\\frac{n_s}{N}|\\text{acc}(s)-\\text{conf}(s)|} ,\\end{split}$ where $S$ denotes the number of bins (10 used in this paper), $n_s$ the number of predictions in bin $s$ , $N$ the total number of predictions, and acc$(s)$ and conf$(s)$ the relative accuracy and average confidence of bin $s$ , respectively." ], [ "Experiments and Results", "From all data, 161 patients (392 cores, 12664 patches) are used for training and a further 40 patients are used as a validation set for model selection and tuning.", "We hold out a set of randomly selected, mutually exclusive, patients as test set (27 patients, 80 cores, 2808 patches).", "All experiments, except for the ensemble method, are repeated nine times with three different validation sets, each with three different initializations; the average of all runs is reported.", "For the ensemble method, as suggested in [12], five different models with different initialization are used for estimating true prediction probabilities, $p(y_i|x_i)$ .", "This process is done with five different validation sets, resulting in a total of 25 runs.", "As a backbone network, we modify ResNet18 [10] by using only half of the layers in each residual block.", "We found this reduction in layers to improve model performance, likely by reducing overfitting.", "Two copies of modified ResNet with different initializations are used for the co-teaching framework.", "For our choice of $\\gamma $ , we emprically found 0.4 to be the best.", "We employ the NovoGrad optimizer with learning rate of $1\\text{e-}4$ ." 
], [ "Effect of Co-teaching", "To determine the effects of weak labels and the added value of co-teaching, we design an experiment comparing EDL with co-teaching to EDL alone.", "Table REF shows a promising improvement in both ECE score and patch-based balanced accuracy (Patch B-accuracy) when the co-teaching is employed.", "We report sensitivity, specificity and area under the curve (AUC) metrics for cores.", "Counter-intuitively, we observe that gains in patch-wise accuracy with co-teaching are not reflected in these metrics.", "We hypothesize that the averaging from patch-wise to core-wise predictions may sufficiently smooth the effects of noisy labels at this level.", "We emphasize that the AUC for both methods is at least $10\\%$ higher than AUC achieved using conventional ultrasound machines [11], underlining the strong capabilities of high-frequency ultrasound.", "Figure: Left: accuracy vs. confidence plot.", "As we increase the confidence threshold τ\\tau and retain only confident predictions, the balanced accuracy increases accordingly.", "Middle: The number of remaining cores following exclusion based on the confidence threshold.", "Right: the Expected Calibration Error (ECE) error bar plot for all presented uncertainty quantification methods (lower is better)." ], [ "Comparison of Uncertainty Methods", "Quantification of predictive uncertainty could help clinical decision making during the biopsy procedure by only relying on highly confident predictions and discarding OOD and suspect samples.", "We examine EDL predictive uncertainty using accuracy vs. confidence plots in this section, and illustrate how it may be utilised to eliminate uncertain predictions while achieving high accuracy on the confident ones.", "Then, we compare EDL predictive uncertainty with MC Dropout [5] and deep ensemble [12] methods.", "In our accuracy vs. confidence plot, Figure REF (a), we plot core-based balanced accuracy as a function of the confidence threshold $\\tau \\in [0,1]$ used to filter out underconfident patch-level predictions.", "Patches with predicted confidence less than $\\tau $ , i.e.", "predictive uncertainty more than $1-\\tau $ , are discarded.", "If at least 60% of extracted patches for a biopsy core remain, the average of the remaining patch predictions is used as core-based prediction.", "We observe the increase in core-based accuracy as the threshold increases, showing that confident predictions tend to be correct.", "As shown in Figure REF (b), there is a natural trade-off, with increased threshold values also resulting in increased numbers of rejected cores, yet with well-calibrated uncertainty methods it is not necessary to discard a high fraction of cores in order for uncertainty thresholding to result in meaningful accuracy gains.", "In Figure REF (c), we compare the quality of predictive uncertainty of all methods via ECE score.", "Our experiments show that EDL achieves the best calibration error while providing the best balance between high accuracy and core retention at different threshold levels." 
], [ "Model Demonstration", "As a proof-of-concept for the clinical utility of our method, we applied our model as a sliding window over entire RF images and generated a heatmap, where red corresponds to a prediction of cancer and blue to a prediction of benign.", "Uncertainty thresholds at various levels were applied to discard uncertain predictions - discarded predictions had their opacity decreased to 0.", "These maps were overlaid over the corresponding B-mode images to visualize the spread of cancer.", "An example of heatmaps for a cancerous and benign core are shown in Figure REF .", "The cancerous image shows a large amount of red which focuses on two main regions as the confidence threshold increases.", "By the results of Figure REF , we can say that these loci are very likely to be cancerous lesions and good biopsy targets.", "The benign image, on the other hand, shows a dominance of blue, with two small red areas that disappear as the threshold increases.", "These are most likely areas of OOD features on which the model correctly reported high levels of uncertainty.", "These images show the subjective quality of our model's performance and the utility of an adjustable uncertainty threshold." ], [ "Conclusion", "We proposed a model for confident PCa detection using micro-ultrasound.", "We employed co-teaching to improve robustness to label noise, and used evidential deep learning to model the predictive uncertainty of the model.", "We find these strategies to yield a significant improvement over baseline in the clinically relevant metrics of accuracy vs. confidence.", "Our model provides crucial confidence information to interventionists weighing the recommendations of the model against their own expertise, which can be critical for the adoption of precision biopsy targeting using TRUS." ], [ "Acknowledgement.", "This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canadian Institutes of Health Research (CIHR)." ] ]
2207.10485
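The evidential head and Bayes-risk loss described in the same record (evidences $e_0,e_1$, belief masses $b_k=e_k/S$, uncertainty $U=2/S$) also admit a compact sketch, following the standard EDL formulation the paper cites. The softplus evidence activation is an assumption, and the KL regularisation term mentioned in the text is omitted here.

```python
# Illustrative EDL head for binary classification (sketch, with assumptions).
import torch
import torch.nn.functional as F

def edl_outputs(logits):
    """Map two raw network outputs to evidences, belief masses and uncertainty U."""
    evidence = F.softplus(logits)            # e_0, e_1 >= 0 (activation is an assumption)
    alpha = evidence + 1.0                   # Beta(e_0 + 1, e_1 + 1) parameters
    S = alpha.sum(dim=-1, keepdim=True)      # S = e_0 + e_1 + 2
    belief = evidence / S                    # b_k = e_k / S
    uncertainty = 2.0 / S                    # U = 2 / S, so b_0 + b_1 + U = 1
    return alpha, belief, uncertainty

def edl_mse_loss(alpha, y_onehot):
    """Bayes risk of the squared error under p ~ Beta(alpha):
    E|p - y|^2 = (y - E[p])^2 + Var[p], summed over the two classes.
    (The KL regulariser of the full EDL objective is omitted.)"""
    S = alpha.sum(dim=-1, keepdim=True)
    p_mean = alpha / S
    err = (y_onehot - p_mean) ** 2
    var = p_mean * (1.0 - p_mean) / (S + 1.0)
    return (err + var).sum(dim=-1).mean()
```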
[ [ "COBRA: Cpu-Only aBdominal oRgan segmentAtion" ], [ "Abstract Abdominal organ segmentation is a difficult and time-consuming task.", "To reduce the burden on clinical experts, fully-automated methods are highly desirable.", "Current approaches are dominated by Convolutional Neural Networks (CNNs) however the computational requirements and the need for large data sets limit their application in practice.", "By implementing a small and efficient custom 3D CNN, compiling the trained model and optimizing the computational graph: our approach produces high accuracy segmentations (Dice Similarity Coefficient (%): Liver: 97.3$\\pm$1.3, Kidneys: 94.8$\\pm$3.6, Spleen: 96.4$\\pm$3.0, Pancreas: 80.9$\\pm$10.1) at a rate of 1.6 seconds per image.", "Crucially, we are able to perform segmentation inference solely on CPU (no GPU required), thereby facilitating easy and widespread deployment of the model without specialist hardware." ], [ "Introduction", "Volumetric image segmentation is a time-consuming and often complicated task that requires medical expertise to produce high-quality, reliable delineations.", "A number of automated approaches have been proposed to free experts from this tedious task and consequently decrease annotation cost.", "However, variability in acquisition protocols, patient anatomy and the presence of pathologies make this a difficult task to automate.", "Current state-of-the-art approaches require large and diverse data sets to generalise to unseen examples.", "The complexity of the resulting models and the size of the CT volumes incur large computational costs and limit applications to centres with adequate computational resources.", "We aim to improve accessibility to these methods by removing the need for expensive, specialist hardware (GPUs) and instead produce models that can perform quick inference entirely on CPU.", "To reduce the computational cost of image processing, high-resolution 3D CT scans are all downsampled to the same size.", "Our model architecture is inspired by the 3D U-Net presented in [1] with a few notable modifications to reduce model size and the number of FLOPs (discussed in section ).", "Finally, we use the Open Neural Network Exchange (ONNX) [2] to compile the trained model, at which point we apply further optimisation to the computational graph.", "As a result, we are able to deploy a small (1.7 Mb) and fast model (1.6 seconds/image on CPU) that produces high-quality 3D segmentations of abdominal organs." ], [ "Method", "Our solution for this challenge is based on a single CNN model which performs segmentation inference on an entire downsampled CT scan.", "We developed a custom 3D CNN for this challenge which is illustrated in Figure REF .", "Figure: A diagram of our custom 3D CNN architecture.", "Our network is influenced by a standard 3D UNet with added ResNet-like residual connections .", "In order to reduce the model size and computational load we introduce three components: a YOLO-inspired 7x7x7 input convolution with stride 2 to quickly reduce the size of the input image whilst preserving a large field of view (see A); bottleneck structures to reduce the number of kernels required (B); and asymmetric factorisation of convolution layers (C)." 
], [ "Preprocessing", "When training, we downsample all CT scans and gold standard segmentations to a standardised resolution of $96\\times 192\\times 192$ .", "The CT scans are downsampled using 3rd order spline interpolation, whereas nearest-neighbour downsampling is used for the corresponding gold standard segmentations.", "No cropping is applied prior to training.", "These decisions were made to promote segmentation scale invariance within our model.", "The liver, kidneys, spleen and pancreas are all soft tissue structures.", "Therefore, we chose to normalise the CT scans prior to training using windowing (grey-level mapping).", "By using windowing, we can enhance the contrast of these soft tissue structures whilst also mapping the voxel intensities onto the range $[0,1]$ to improve learning stability.", "The CT scans are normalised in two separate contrast channels which are concatenated (see the input in Figure REF ).", "This improved our segmentation performance.", "In channel one, we use a contrast setting which is used to view general soft tissue structures in the abdominal area (W400, L50).", "In the second channel we apply a tighter window (W100, L60) in an attempt to increase the contrast of the pancreas.", "To aid in the determination of organ boundaries, we split the background label (0) in two: “air” and “body”; this is done by thresholding at -200 HU and applying binary closing and hole filling operations to the resulting mask.", "Air retains the original background label 0, and body becomes label 1, with all other labels shifted accordingly.", "Table: Data splits of FLARE2021." ], [ "Proposed Method", "In Figure REF we show our custom 3D CNN designed for this challenge.", "The architecture is inspired by the classic 3D UNet [1] with added residual connections [3].", "Residual connections are now a standard addition to most modern 3D UNet CNNs and improve gradient propagation in the training process.", "Three key features of our model, which reduce the model size (number of parameters) and computational complexity (number of FLOPs), are introduced below.", "First, a YOLO-inspired [4] input layer is added, consisting of a 7x7x7 convolution with stride of 2.", "In Figure REF we mark this particular layer `A'.", "This input layer quickly reduces the size of the volume being processed by the network, while preserving a wide field-of-view.", "By reducing the size of the CT scan being processed by the network, segmentation inference is accelerated.", "To compress our model, we implemented bottleneck structures as used by He et al.", "for their deeper ResNet architectures [3].", "Bottlenecks are introduced to reduce and then restore the number of kernels in convolutional layers in certain portions of the model.", "By compressing the number kernels of convolutional layers deep in the UNet architecture, we force our model to learn better feature representations whilst also reducing its size and computational complexity.", "Bottlenecks are constructed by sandwiching existing convolutional layers with 1x1x1 convolutions which first reduce and then restore the number of kernels.", "An example bottleneck structure is labelled `B' Figure REF .", "In our model we implement bottlenecks around all the traditional encoder and decoder double convolution layers (shown in red in Figure REF ).", "In addition, all 3D transpose convolutions are similarly bottlenecked.", "For each bottleneck, we define a “bottleneck factor\" which determines the multiplicative reduction in kernels required.", "By 
default, we use a bottleneck factor = 2, except in blocks marked with a small blue dot in Figure REF , where we use a bottleneck factor = 4.", "This higher level of kernel compression is performed in the widest layers of the CNN to further reduce model size and accelerate inference.", "To further compress the size of the model and reduce the computational complexity we applied 3D asymmetric factorisation to the convolutional layers of our model.", "Factorisation of convolutional kernels was introduced by Szegedy et al.", "for their Inception v2 architecture [5] and has been more recently applied to 3D data by Yang et al.", "for action recognition in consecutive video frames [6].", "Asymmetric factorisation simulates a large 3D kernel using a series of 1D kernels.", "For example, we factorise all 3x3x3 convolution kernels into a series of layers with 3x1x1, 1x3x1 and 1x1x3 kernels.", "A factorised 3x3x3 kernel reduces the number of parameters to roughly a third of those of a standard layer.", "Additionally, the number of FLOPs required to apply factorised layers is significantly lower.", "For our final model, we applied asymmetric factorisation to all convolutional layers with 3x3x3 kernels and the YOLO-inspired 7x7x7 convolutional layer.", "Please refer to the section marked `C' in Figure REF which shows the layers resulting from asymmetric factorisation.", "As a result, our custom 3D CNN contains just 436,982 parameters and requires 48 GFLOPs to segment a $96\\times 192\\times 192$ CT volume." ], [ "Post-processing", "Our CNN model outputs raw segmentation predictions at a resolution of $96\\times 192\\times 192$ .", "We apply argmax to obtain the hard predictions before upsampling the segmentations to the original CT scan dimensions using nearest-neighbour upsampling." ], [ "Dataset", " The dataset used for FLARE2021 is adapted from MSD [7] (Liver [8], Spleen, Pancreas), NIH Pancreas [9], [10], [11], KiTS [12], [13], and Nanjing University under license permission.", "For more detailed information on the dataset, please refer to the challenge website and [14].", "Details of training / validation / testing splits: The total number of cases is 511.", "An approximate 70%/10%/20% train/validation/testing split is employed, resulting in 361 training cases, 50 validation cases, and 100 testing cases.", "Detailed information is presented in Table REF .", "At this stage we only have access to the training subset of images and their corresponding gold standard segmentations.", "We present a 5-fold cross validation of our model for this subset in anticipation of the evaluation of our model on the validation and testing subsets." ], [ "Evaluation Metrics", " Dice Similarity Coefficient (DSC); Normalized Surface Distance (NSD) with a tolerance of 1 mm; running time: $\\sim 1.6$ seconds per CT scan; maximum GPU memory used: 0 Mb.", "Table: Environments and requirements.", "Table: Training protocols." ], [ "Environments and requirements", "The environments and requirements of our method are shown in Table REF .", "Table: Quantitative results of 5-fold cross validation in terms of DSC and NSD.", "We show the median and standard deviation of both measures for every fold individually and for all training folds combined (361 images)." 
], [ "Training protocols", "The training protocols we used to train our custom 3D CNN are shown in Table REF .", "We conducted a five-fold cross validation using the “Training\" subset outlined in section REF containing 361 CT scans.", "For the cross-validation, each set of training, validation and test folds contained 252, 36 and 73 images respectively.", "Figure: Boxplots of the organ segmentation results (DSC and NSD) of the 5-fold cross validation.", "The white line shoes the median and red star the mean." ], [ "Testing protocols", "In the testing phase, inference is performed on the entire CT volume by downsampling the input image to a size of $96\\times 192\\times 192$ with 3rd order spline interpolation.", "A Gaussian kernel was applied in-plane to prevent aliasing artefacts when downsampling.", "As in the training phase, we do not crop images prior to inference.", "We use ONNX [2] to compile the trained model thereby decreasing model size - enabling inference on CPU - and increasing inference speed.", "At this point, a number of graph level transformations are applied to improve model performance further.", "These include: constant folding, redundant node elimination and node fusion.", "This led to a small but noticeable improvement in inference speed.", "We experimented with dynamic and static quantization.", "Though both reduced model size (1.7Mb $\\rightarrow $ 614 Kb), the former increased inference time by an order of magnitude while the latter dramatically reduced model performance.", "Therefore, we chose not to apply weight quantization.", "Quantization-aware training may be an alternative for future work.", "Following inference, argmax is applied to model predictions to convert them to multi-class masks then, they are upsampled to the original scan dimensions with nearest neighbour interpolation." ], [ "Quantitative results for 5-fold cross validation.", "The provided results are based on the 5-fold cross-validation results and validation cases.", "Table REF illustrates the results of 5-fold cross-validation.", "Figure REF is the corresponding boxplot of organ segmentation performance.", "While high DSC and NSD scores are obtained for the liver, kidney and spleen, accuracy is lower for the pancreas - highlighting the difficulty of segmenting this organ.", "Fold 2 was our highest performing model in terms of DSC and NSD across the four segmented organs, so we selected this model to submit for evaluation with the validation and test sets." ], [ "Quantitative results on validation set.", "Table REF illustrates the results on validation cases.", "Comparison between Table REF and Table REF illustrates better DSC and NSD performance is obtained for the 5-fold cross validation than the validation set.", "Table: Quantitative results on validation set." 
], [ "Qualitative results", "In Figure REF we present two examples of segmentations from the training subset predicted by our model.", "In the top row we show an axial slice of a patient where our model performs well.", "From left to right we show the CT scan alone, the gold-standard segmentations and the organ segmentation predictions made by our model.", "In this example, our model segments the liver (blue), kidneys (green), pancreas (pink) and spleen (yellow) with high accuracy compared to the gold standard.", "Our model struggles in resolving the internal structure of the right kidney, where the gold standard segmentation has avoided the calyx.", "In the bottom row, we show an example where our model struggles to produce an accurate segmentation for the kidneys (green).", "In this example, a large tumour (circled in red) has infiltrated the right kidney.", "In the gold standard this tumour is included in the kidney segmentation, however, our model fails to classify this structure as part of the kidney.", "In figure REF we show three examples from the validation set where our model does a great job at segmenting the organs.", "These examples show the degree of anatomical variation commonly observed for these abdominal organs.", "The segmentation predictions made by our model (right column) closely reflect the gold standard (central column).", "In figure REF we show three challenging examples from the validation subset.", "In each of these cases, disease is present which abnormally alters the shape, size or appearance of one or more of the organs.", "In row a), the left kidney (green) is abnormally large, possibly infiltrated by a tumour, and is has been failed to be segmented by our model.", "In row b), the pancreas (pink) is atypically enlarged and shaped.", "Our CNN is unable to segment the pancreas in this case.", "Finally, in row c) the liver (blue) is darker than usual.", "This is often observed in fatty liver cases.", "Our model is unable to segment this liver, even though the liver is still easily visible.", "These three cases are uncommon and it is likely that our model was not exposed to many similar examples in the training phase.", "Figure: Two sets of axial slices of segmentation outputs from our model.", "Both patients shown here are from the training subset of images.", "In the first column is the CT image alone, the second column shows the gold standard segmentations and the third shows our models prediction.", "In the top row we show an example where our model performs well, accurately segmenting the liver (blue), kidneys (green), pancreas (pink) and spleen (yellow) with good accuracy.", "In the bottom row we show an example where our model struggles to segment the right kidney.", "In this example a tumour is present in the right kidney which our model is unable to recognise.Figure: Three examples from the validation set where our model performs very well.", "As in figure , the first column shows the CT, the second the golden standard and the third shows our model's prediction.", "Our model can segment the abdominal organs in a diverse range of shapes and sizes reflecting common anatomical variation.Figure: Three challenging examples drawn from the validation set where there is significant anatomical deformation caused by disease.", "As in figure , the first column shows the CT, the second the golden standard and the third shows our model's prediction.", "Our CNN is unable to segment the left kidney in case a) which is much larger than normal, likely infiltrated by a 
tumour.", "Similarly, our model is unable to segment the pancreas of case b) which is larger than standard and oddly shaped.", "The liver of case c) is darker than normal, often indicative of fatty liver disease.", "Due to it's darker appearance, our model is unable to segment this liver." ], [ "Discussion and Conclusion", "We have developed a method for 3D segmentation of organs in abdominal CTs which is capable of producing high quality liver, kidney, spleen and pancreas segmentations in 1.6 seconds using only the CPU.", "Medical image segmentation with CNNs is now very common and a well-studied field.", "However, many methods which produce accurate segmentations very quickly on a GPU will suffer significant slowdown when operating on a CPU.", "For example, Panda et al.", "[16] developed a 3D auto-segmentation method which produces high-quality segmentations of the pancreas (DSC = 0.91), but takes 4 minutes to perform inference for a single image on CPU.", "We have performed a 5-fold cross-validation using the training subset of images (361 images).", "Our model performance was consistent across all five folds.", "However, when evaluated on the validation set, our models segmentation performance was substantially lower in all organs.", "This phenomenon may caused by over-fitting the model on the training set.", "However, we believe the significant deterioration in segmentation performance is due to the distribution of the contrast phases of images between the data splits.", "Our model was trained only with portal venous phase images, whereas at least half of the validation set is made up of images from an earlier contrast stage (late arterial).", "Since we are unable to use external data for this challenge, in future we plan to include additional data augmentation to simulate different contrast phases in the training stage.", "In addition, we would like to ensure that any future training datasets contains examples of fatty livers and other diseases to ensure the model is robust to cases similar to the ones shown in figure REF .", "One of the CTs in the training subset (train_270_0000.nii.gz) was missing many slices of the image, including large portions containing the liver and spleen.", "As a result, we excluded this image from the calculation of DSC and NSD.", "Automated segmentation of medical images with CNNs is now the state-of-the-art [17], and will increasingly release clinicians from the time-intensive task of manually annotated structures.", "Our model performs fast and accurate 3D segmentation on the CPU, which enables much wider deployment of such models within clinics and research groups since it does not require the use of specialist hardware (GPUs)." ], [ "Acknowledgement", "The authors of this paper declare that the segmentation method they implemented for participation in the FLARE challenge has not used any pre-trained models nor additional datasets other than those provided by the organizers." ] ]
2207.10446
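The two-channel windowing normalisation from the preprocessing section of the same record (a W400/L50 soft-tissue window concatenated with a tighter W100/L60 pancreas window, each rescaled to [0,1]) reduces to a few lines of NumPy. Function names and the array layout are illustrative, not taken from the authors' code.

```python
# Illustrative two-channel CT windowing (sketch, with assumptions).
import numpy as np

def window(ct_hu, width, level):
    """Clip a CT volume (in Hounsfield units) to [level - W/2, level + W/2]
    and rescale linearly to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(ct_hu, lo, hi) - lo) / (hi - lo)

def two_channel_input(ct_hu):
    """Stack the general soft-tissue window and the tighter pancreas window."""
    ch1 = window(ct_hu, width=400, level=50)   # general abdominal soft tissue
    ch2 = window(ct_hu, width=100, level=60)   # enhances pancreas contrast
    return np.stack([ch1, ch2], axis=0)        # shape: (2, D, H, W)
```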
[ [ "Weakly non-planar dimers" ], [ "Abstract We study a model of fully-packed dimer configurations (or perfect matchings) on a bipartite periodic graph that is two-dimensional but not planar.", "The graph is obtained from $\\mathbb Z^2$ via the addition of an extensive number of extra edges that break planarity (but not bipartiteness).", "We prove that, if the weight $\\lambda$ of the non-planar edges is small enough, the height function scales on large distances to the Gaussian Free Field with a $\\lambda$-dependent amplitude, that coincides with the anomalous exponent of dimer-dimer correlations.", "Because of non-planarity, Kasteleyn's theory does not apply: the model is non-determinantal.", "Rather, we map the model to a system of interacting lattice fermions in the Luttinger universality class, that we then analyze via fermionic Renormalization Group methods." ], [ "Introduction", "The understanding of the rough phase of two-dimensional random interfaces is important in connection with the macroscopic fluctuation properties of equilibrium crystal shapes and of the separation surface between coexisting thermodynamic phases, among other relevant applications.", "A classical instance of the problem arises when studying the low temperature properties of the three-dimensional (3D) Ising model in the presence of Dobrushin Boundary Conditions (DBC).", "If DBC are fixed so to induce a horizontal interface between the $+$ and $-$ phases, it is well known [13] that at low enough temperatures the interface is rigid.", "It is conjectured that between the so-called roughening temperature and the Curie temperature, the interface displays fluctuations with unbounded variance (the variance diverges logarithmically with the system size), and the height profile supposedly has a massless Gaussian Free Field (GFF) behavior at large scales.", "This conjecture is completely open, in fact not even the existence of the roughening temperature has been proved.", "A connected result [18] is logarithmic divergence of fluctuations of the 2D SOS interface at large enough temperature; however, the result comes with no control of the scaling limit.", "If DBC are chosen so to induce a `tilted' interface, say orthogonal to the $(1,1,1)$ direction, then things are different: fluctuations of the interface are logarithmic already at zero temperature; an exact mapping of the height profile and of its distribution into the standard dimer model on the hexagonal lattice, endowed with the uniform measure over allowed dimer configurations, allows one to get a full control on the large scale properties of the interface fluctuations, which are now proved to behave like a GFF (see [34] for the covariance structure, and [33], as well as [24] for the full Gaussian limit).", "It is very likely that the GFF behavior survives the presence of a small but positive temperature; however, the techniques underlying the proof at zero temperature, based on the exact solvability of the planar dimer problem, break down.", "First of all, at positive temperatures, the very notion of height of the interface is not in general well defined, because of overhangs; these will have a low but non-zero density at low temperature.", "Moreover, even in the regions where overhangs are absent and the configuration can be mapped into a dimer configurations, the probability distribution induced by the finite temperature Ising Gibbs measure onto the dimer model does not correspond to an exactly solvable situation: temperature induces an effective `interaction' 
among dimers, which destroys the determinant structure that is at the root of Kasteleyn's solution.", "Correlation functions could at best be written in terms of series of determinants, whose behavior at large scales is in general very hard to understand.", "In a previous series of works [24], [25], [26], [27], in collaboration with V. Mastropietro, we started developing methods for the treatment of interacting, non-solvable, dimer models via constructive, fermionic, Renormalization Group (RG) techniques.", "We exhibited an explicit class of models, which include the 6-vertex model close to its free Fermi point as well as several non-integrable versions thereof, for which we proved the GFF nature of the scaling limit of the height fluctuations, as well as the validity of a `Kadanoff' or `Haldane' scaling relation connecting the critical exponent of the so-called electric correlator with the one of the dimer-dimer correlation.", "Such a scaling relation is the counterpart, away from the free Fermi point, of the universality of the stiffness coefficient of the GFF first observed in [34], in connection with the fact that the spectral curve of a planar bipartite dimer model is a Harnack curve.", "In this paper we generalize our analysis to a new setting, inspired by a problem proposed by S. Sheffield a few years ago (open problem session at the workshop “Dimers, Ising Model, and their Interactions”, BIRS, 2019): we study the large scale properties of a suitably defined height function, for a dimer model that is two-dimensional but non-planar.", "Note that, when planarity is lost, the very notion of height is not a priori well defined.", "Such a generalization is interesting, in particular in view of the poorly understood fluctuation properties of the interface in the 3D Ising model with DBC, where one of the effects of temperature is to make the height not well defined a priori, but possibly well-defined in a coarse grained sense.", "In short, we introduce a `weakly non-planar' dimer model, by adding non-planar edges with small weights to a reference planar square lattice.", "We do so in a periodic fashion, and in such a way that non-planar edges are restricted to belong to cells, separated among each other by corridors of width one, which are crossed by none of the non-planar edges.", "The fact that non-planar edges avoid these corridors allows us to define a notion of height function on the faces belonging to the corridors themselves.", "We prove that this height function scales at large distances to a GFF with a stiffness coefficient that is equal to the anomalous critical exponent of the two-point dimer-dimer correlation.", "As in [24], [27], the proof is based on an exact representation of the dimer model as a system of interacting lattice fermions and on a rigorous multiscale analysis of the effective fermionic model, which has the structure of a lattice regularization of a Luttinger-type model.", "With respect to the previous works [24], [27], obtaining a fermionic representation turns out to be much less trivial, due to the loss of planarity.", "The infrared (i.e., large-scale) analysis of the lattice fermionic model is performed thanks to a comparison with a solvable reference continuum fermionic model, which has been studied and constructed in a series of works by G. Benfatto and V. Mastropietro [4], [5], [7], [8], [9], [10], [11], partly in collaboration also with P. 
Falco [4], [5], [6].", "The GFF behavior and the Kadanoff-Haldane scaling relation of the dimer model follow from a careful comparison of the emergent chiral Ward Identities of the reference model with exact lattice Ward Identities of the dimer model.", "The first novelty of the present work, as compared to [24], [25], [27], is related to the fermionic representation of the weakly non-planar model.", "The presence of non-planar edges requires a quite non-trivial adaptation of Kasteleyn's theory, which is needed for the very formulation of the finite-volume model in terms of a non-Gaussian Grassmann integral.", "In fact, our non-planar model can in general be embedded on a surface of minimal genus $g\\approx L^2 $ (of the order of the number of non-planar edges) and Kasteleyn's theory for the dimer model on general surfaces [20], [40] would express its partition function as the sum of $4^g$ determinants, i.e.", "of $4^g$ Gaussian Grassmann integrals, a rewriting that is not very useful for extracting thermodynamic properties.", "In this respect, the remarkable aspect of Proposition REF below is that it expresses the partition function in terms of just four Grassmann integrals, which are, however, non-Gaussian.", "The second novel ingredient of our construction is the identification of massive modes associated with the Grassmann field which enters the fermionic representation of the model.", "The fact that the elementary cell of our model consists of $m^2$ sites, with $m$ an even integer larger than or equal to 4, implies that the basic Grassmann field of our effective model has a minimum of 16 components.", "It is well known [22], [41] that multi-component Luttinger models, such as the 1D Hubbard model [37], to cite the simplest possible example, do not necessarily display the same qualitative large distance features as the single-component one: new phenomena and quantum instabilities, such as spin-charge separation and metal-insulator transitions accompanied by the opening of a Mott gap, may be present and may drastically change the resulting picture.", "Therefore, it is a priori unclear whether the height function of our model should still display a GFF behavior at large scales.", "Remarkably, however, the fact that the characteristic polynomial of the reference model has at most two simple zeros [34] implies that all but two of the components of the effective Grassmann field are massive, and they can be preliminarily integrated out.", "In this way, one can then re-express the effective massless model in terms of just two massless fields (quasi-particle fields), in a way suitable for the application of the multi-scale analysis developed in [24], [27].", "At this point, a large part of the multi-scale analysis is based on the tools developed in our previous works, which we will refer to for many technical aspects, without repeating the analysis in the present slightly different setting." 
], [ "The broader context: height delocalization for discrete interface models", "Recently, remarkable progress has been made on (logarithmic) delocalization of discrete, two-dimensional random interfaces.", "We start with the result which is maybe the closest in spirit to our work, that is [2], [3].", "These works prove, by means of bosonic, constructive RG methods, that the height function of the discrete Gaussian interface model (that is the lattice GFF conditioned to be integer-valued) has, at sufficiently high temperature, admits the continuum GFF as scaling limit.", "In a way, this result is quite complementary to ours, since the model considered there is a perturbation of a free bosonic model (the lattice GFF), while in our case we perturb around a free fermionic one (the non-interacting dimer model).", "For closely related results on the 2D lattice Coulomb gas, see also [16], [17].", "In a broader perspective, there has been a number of recent results (e.g.", "[1], [12], [36]) that prove delocalization of discrete, two-dimensional interface models at high temperature, even though they fall short of proving convergence to the GFF.", "Let us mention in particular the recent [36], which proves with a rather soft argument a (non-quantitative) delocalization statement for rather general height models, under the restriction, however, that the underlying graph has maximal degree three.", "For the particular case of the 6-vertex model, delocalization of the height function is known to hold in several regions of parameters [14], [15], [38], [42] but full scaling to the GFF has been proven only in a neighborhood of the free fermion point [27]." ], [ "Organization of the article", "The rest of the paper is organized as follows: in Section we define the model and state our main results precisely.", "In Section we review some useful aspects of Kasteleyn's theory on toroidal graphs and derive the Grassmann representation of the weakly non-planar dimer model.", "In Section we prove one of the main results of our work, concerning the logarithmic behavior of the height covariance at large distances and the Kadanoff-Haldane scaling relation, assuming temporarily a sharp asymptotic result on the correlation functions of the dimer model.", "The proof of the latter is based on a generalization of the analysis of [27], described in Section .", "As mentioned above, the novel aspect of this part consists in the identification and integration of the massive degrees of freedom (Sections REF -REF ), while the integration of the massless ones (Section REF ) is completely analogous to the one described in [27].", "Finally, in Section , we complete the proof of the convergence of the height function to the GFF." 
], [ "The “weakly non-planar” dimer model", "To construct the graph $G_L$ on which our dimer model is defined, we let $L,m$ be two positive integers with $m$ even, and we start with $G^{0}_L=(\\mathbb {Z}/(L m\\,\\mathbb {Z}))^2$ , which is just the toroidal graph obtained by a periodization of $\\mathbb {Z}^2$ with period $L\\, m$ in both horizontal and vertical directions.", "We can partition $G^0_L$ into $L^2$ square cells $B_{x}$ , $x=(x_1,x_2)\\in \\Lambda :=(-L/2,L/2]^2\\cap \\mathbb {Z}^2$ , of side $m$ .", "The graph $G^0_L$ is plainly bipartite and we color vertices of the two sub-lattices black and white (each cell contains $m^2/2$ vertices of each color).", "Black (resp.", "white) vertices are denoted $b$ (resp.", "$w$ ).", "We let $ {\\bf e_1}$ (resp.", "${\\bf e_2}$ ) denote the horizontal (resp.", "vertical) vectors of length $m$ and we note that translation by ${\\bf e_{1}}$ (resp.", "by ${\\bf e_2}$ ) maps the cell $B_x$ into $B_{((x_1+1) \\mod {L},x_2)}$ (resp.", "$B_{( x_1, (x_2+1)\\mod {L})}$ ).", "A natural choice of coordinates for vertices is the following one: a vertex is identified by its color (black or white) and by a pair of coordinates $(x,\\ell )$ where $x$ identifies the label of the cell the vertex belongs to, and the “type” $\\ell \\in \\mathcal {I}:=\\lbrace 1,\\dots ,m^2/2\\rbrace $ identifies the vertex within the cell.", "It does not matter how we label vertices within a cell, but we make the natural choice that if two vertices are related by a translation by a multiple of ${\\bf e_{1},e_2}$ , then they have the same type index $\\ell $ .", "The graph $G_L$ is obtained from $G^{0}_L$ by adding in each cell $B_x$ a finite number of edges among vertices of opposite color (so that $G_L$ is still bipartite), with the constraint that $G_L$ is invariant under translations by multiples of ${\\bf e_{1},e_2}$ (i.e., vertex $w$ of coordinates $(x,\\ell )$ is joined to vertex $b$ of coordinates $(x,\\ell ^{\\prime })$ if and only if the same holds for any other $x^{\\prime }\\in \\Lambda $ ).", "See Fig.", "REF for an example.", "Remark 1 It is easy to see that we need that $m\\ge 4$ for this construction to work: if $m=2$ , the two black edges in the cell are already connected to the two black vertices and there are no non-planar edges that can be added.", "Note that $G_L$ is in general non-planar, even in the full-plane limit $L\\rightarrow \\infty $ .", "Figure: An example of graph G L G_L with L=2L=2, m=4m=4 and twonon-planar edges (in red) per cell.", "The height function is definedon the set FF of dashed faces outside of cells.", "Faces colored grayare those in F ¯\\bar{F} (they share a vertex with four differentcells).Let $E_L$ denote the set of edges of $G_L$ : we write $E_L$ as the disjoint union $E_L=E^0_L\\cup N_L$ where $E^0_L$ are the edges of $G^{0}_L$ (we call these “planar edges”) and $N_L$ (we call these “non-planar edges”) are the extra ones.", "Each edge $e\\in E_L$ is assigned a positive weight: since we are interested in the situation where the weights of non-planar edges are small compared to those of planar edges, we first take a collection of weights $\\lbrace \\tilde{t}_e\\rbrace _{e\\in E_L}, \\tilde{t}_e>0$ that is invariant under translations by multiples of ${\\bf e_{1},e_2}$ , and then we establish that the weight of a planar edge $e$ is $t_e=\\tilde{t}_e$ while that of a non-planar edge is $t_e=\\lambda \\tilde{t}_e$ , where $\\lambda $ is a real parameter, that will be taken small later.", "To simplify expressions that follow, we 
will sometimes write ${\\bf x}\\in {\\bf \\Lambda }$ instead of $(x, \\ell ) \\in \\Lambda \\times \\mathcal {I}$ for the coordinate of a vertex of $G_L$ .", "Also, we label the collection of edges in $E_L$ whose black vertex has type $\\ell $ with a label $j\\in \\mathcal {J}_{\\ell }=\\lbrace 1,\\dots ,|\\mathcal {J}_{\\ell }|\\rbrace $ .", "The labeling is done in such a way that two edges that are obtained one from the other via a translation by a multiple of ${\\bf e_1,e_2}$ have the same label.", "Note that $|\\mathcal {J}_{\\ell }|\\ge 4$ , and it is strictly larger than four if there are non-planar edges incident to the black vertex of type $\\ell $ .", "By convention, we label $j=1,\\dots ,4$ the four edges of $G^0_L$ belonging to $\\mathcal {J}_{\\ell }$ , starting from the horizontal one whose left endpoint is black, and moving anti-clockwise.", "The set of perfect matchings (or dimer configurations) of $G_L$ is denoted $\\Omega _L$ .", "Each $M\\in \\Omega _L$ is a subset of $E_L$ and the set of perfect matchings that contain only planar edges is denoted $\\Omega ^{0}_L$ .", "Our main object of study is the probability measure on $\\Omega _L$ given by $\\mathbb {P}_{L,\\lambda }(M)=\\frac{w(M) }{Z_{L,{\\bf \\lambda }}}{\\mathbb {1}}_{ M\\in \\Omega _L}, \\;w(M)=\\prod _{e \\in M} t_e,\\; Z_{L,\\lambda }=\\sum _{M\\in \\Omega _L} w(M).$ We are interested in the limit where $L$ tends to infinity while $m$ (the cell size) is fixed.", "In this limit, the graph $G^{0}_L$ becomes the (planar) graph $G^{0}_\\infty =\\mathbb {Z}^2$ while $G_L$ becomes a periodic, bipartite but in general non-planar graph $G_\\infty $ .", "Cells $B_x$ of the infinite graphs are labelled by $x\\in \\mathbb {Z}^2$ .", "We let $\\Omega $ (resp.", "$\\Omega ^{0}$ ) denote the set of perfect matchings of $G_\\infty $ (resp.", "of $\\mathbb {Z}^2$ ).", "In the case $\\lambda =0$ , the measure $\\mathbb {P}_{L,0}$ is supported on $\\Omega ^{0}_L$ : in fact, $\\mathbb {P}_{L,0}$ is just the Boltzmann-Gibbs measure of the dimer model on the (periodized) square grid, with edge weights of periodicity $m$ (we will refer to this as the “non-interacting dimer model”).", "The non-interacting model is well understood via Kasteleyn's [29], [30], [32] (and Temperley-Fisher's [39]) theory, which allows one to write its partition and correlation functions in determinantal form.", "Depending on the choice of the edge weights $\\lbrace t_e\\rbrace $ , the non-interacting model can be either in a liquid (massless), gaseous (massive) or frozen phase, see [34].", "In particular, in the liquid phase correlations decay like the squared inverse distance (see for instance (REF )-(REF ) for a more precise statement).", "In this work, we assume that the edge weights are such that for $\\lambda =0$ , the model is in a massless phase.", "The essential facts from Kasteleyn's theory that are needed for the present work are recalled in Section REF .", "In particular, we emphasize that all the statistical properties of the non-interacting model are encoded in the so-called characteristic polynomial $\\mu $ (see (REF )), that is nothing else but the determinant of the Fourier transform of the so-called Kasteleyn matrix.", "Then, the assumption that the $\\lambda =0$ model is in the massless phase can be stated more precisely as follows.", "For specific choices of the edge weights, the characteristic polynomial can have a real node rather than two zeros, and still the non-interacting model can be in the massless phase [34].", "In this work, we will
not consider this somewhat degenerate situation.", "Assumption 1 The edge weights $\\lbrace t_e\\rbrace $ are such that the characteristic polynomial $\\mu $ (see formula (REF ) below) of the non-interacting dimer model has two simple zeros.", "We recall from [34] that this is a non-empty condition on the edge weights (in fact, this set of edge weights is a non-trivial open set)." ], [ "Results", "Our main goal is to understand the large-scale properties of the height function under the limit measure $\\mathbb {P}_\\lambda $ , which is the weak limit as $L\\rightarrow \\infty $ of $\\mathbb {P}_{L,\\lambda }$ .", "The fact that this limit exists, provided that $|\\lambda |\\le \\lambda _0$ for a sufficiently small $\\lambda _0$ , is a byproduct of the proof.", "Our first main result concerns the large distance asymptotics of the truncated dimer-dimer correlations.", "We use the notation $1_e$ for the indicator function that the edge $e$ is occupied by a dimer, and $\\mathbb {E}_\\lambda (f;g)$ for $\\mathbb {E}_\\lambda (f g)-\\mathbb {E}_\\lambda (f)\\mathbb {E}_\\lambda (g)$ .", "Theorem 1 Choose the dimer weights on the planar edges as in Assumption REF .", "There exist $\\lambda _0>0$ and analytic functions $\\nu :[-\\lambda _0,\\lambda _0]\\mapsto \\mathbb {R}^+$ , $\\alpha _\\omega ,\\beta _\\omega :[-\\lambda _0,\\lambda _0]\\mapsto \\mathbb {C}\\setminus \\lbrace 0\\rbrace $ (labelled by $\\omega \\in \\lbrace +,-\\rbrace $ and satisfying $\\overline{\\alpha _+}=- \\alpha _{-}, \\overline{\\beta _+}=- \\beta _{-}$ and $\\alpha _\\omega (\\lambda )/\\beta _\\omega (\\lambda )\\notin \\mathbb {R}$ ), $K_{\\omega ,j,\\ell }, H_{\\omega ,j,\\ell }:[-\\lambda _0,\\lambda _0]\\mapsto \\mathbb {C}$ (labelled by $\\omega \\in \\lbrace +,-\\rbrace $ , $\\ell \\in \\mathcal {I}$ , $j\\in \\mathcal {J}_{\\ell }$ and satisfying $K_{\\omega ,j,\\ell }=\\overline{K_{-\\omega ,j,\\ell }}$ , $H_{\\omega ,j,\\ell }=\\overline{H_{-\\omega ,j,\\ell }}$ ) and $p^\\omega :[-\\lambda _0,\\lambda _0]\\mapsto [-\\pi ,\\pi ]^2$ (labelled by $\\omega \\in \\lbrace -1,+1\\rbrace $ and satisfying $p^+=-p^-$ ) such that, for any two edges $e,e^{\\prime }$ with black vertices $(x,\\ell ), (x^{\\prime },\\ell ^{\\prime })\\in \\mathbb {Z}^2\\times \\mathcal {I}$ such that $x\\ne x^{\\prime }$ and labels $j\\in \\mathcal {J}_{\\ell }$ , $j^{\\prime }\\in \\mathcal {J}_{\\ell ^{\\prime }}$ , $\\mathbb {E}_{\\lambda }(1_{e};1_{e^{\\prime }})=\\sum _{\\omega }\\Biggl [\\frac{K_{\\omega ,j,\\ell }K_{\\omega ,j^{\\prime },\\ell ^{\\prime }}}{(\\phi _\\omega (x-x^{\\prime }))^2}+\\frac{H_{\\omega ,j,\\ell }H_{-\\omega ,j^{\\prime },\\ell ^{\\prime }}}{|\\phi _\\omega (x-x^{\\prime })|^{2\\nu }}e^{2ip^\\omega \\cdot (x-x^{\\prime })}\\Biggr ]+\\text{Err}(e,e^{\\prime }),$ where, letting $x=(x_1,x_2)$ , $\\phi _\\omega (x):=\\omega (\\beta _\\omega (\\lambda )x_1-\\alpha _\\omega (\\lambda )x_2)$ and $|\\text{Err}(e,e^{\\prime })|\\le C|x-x^{\\prime }|^{-3+O(\\lambda )}$ for some $C>0$ and $O(\\lambda )$ independent of $x,x^{\\prime }$ .", "Moreover, $\\nu (\\lambda )=1+O(\\lambda )$ .", "Even if not indicated explicitly, the functions $\\nu ,\\alpha _\\omega ,\\beta _\\omega , K_{\\omega ,j,\\ell }, H_{\\omega ,j,\\ell },p^\\omega $ all depend non-trivially on the edge weights $\\lbrace t_e\\rbrace $ .", "In particular, generically, $\\nu =1+c_1\\lambda +O(\\lambda ^2)$ , with $c_1$ a nonzero coefficient, which depends upon the edge weights (this was already observed in [26] for interacting dimers on planar graphs); 
therefore, generically, $\\nu $ is larger or smaller than 1, depending on the sign of $\\lambda $ .", "At $\\lambda =0$ , (REF ) reduces to the known asymptotic formula for the truncated dimer-dimer correlation of the standard planar dimer model, which is reviewed in the next section.", "The most striking difference between the case $\\lambda \\ne 0$ and $\\lambda =0$ is the presence of the critical exponent $\\nu $ in the second term in square brackets in the right hand side of (REF ).", "It shows that the presence of non-planar edges in the model qualitatively changes the large distance decay properties of the dimer-dimer correlations.", "Therefore, naive universality, in the strong sense that all critical exponents of the perturbed model are the same as those of the reference unperturbed one, fails.", "In the present context the correct notion to be used is that of `weak universality', due to Kadanoff, on the basis of which we expect that the perturbed model is characterized by a number of exact scaling relations; these should allow us to reduce all the non-trivial critical exponents of the model (i.e., those depending continuously on the strength of the perturbation) to just one of them, for instance $\\nu $ itself.", "A rigorous instance of such a scaling is discussed in Remark REF below.", "The weak universality picture is formally predicted by bosonization methods (see the introduction of [25] for a brief overview), which allow one to express the large distance asymptotics of all correlation functions in terms of a single, underlying, massless GFF.", "Such a GFF is nothing but the scaling limit of the height function of the model, as discussed in the following.", "Given a perfect matching $M\\in \\Omega ^{0}$ of the infinite graph $G^{0}_\\infty =\\mathbb {Z}^2$ , there is a standard definition of height function on the dual graph: given two faces $\\xi ,\\eta $ of $\\mathbb {Z}^2$ , one defines $ h(\\eta )-h(\\xi )=\\sum _{e \\in C_{\\xi \\rightarrow \\eta }}\\sigma _e\\Big {(}\\mathbb {1}_{e \\in M}-\\frac{1}{4} \\Big {)}$ together with $h(\\xi _0)=0$ at some reference face $\\xi _0$ .", "Here, $C_{\\xi \\rightarrow \\eta }$ is a nearest-neighbor path from $\\xi $ to $\\eta $ and $\\sigma _e$ is a sign which equals $+1$ if the edge $e$ is crossed with the white vertex on right and $-1$ otherwise).", "The definition is well-posed since it is independent of the choice of the path.", "We recall that under $\\mathbb {P}_0$ , the height function is known to admit a GFF scaling limit [33], [24].", "A priori, on a non-planar graph such as $G$ , there is no canonical bijection between perfect matchings and height functions.", "However, since the non-planarity is “local” (non-planar edges do not connect different cells), there is an easy way out.", "Namely, let $F$ denote the set of faces of $\\mathbb {Z}^2$ that do not belong to any of the cells $B_x$ (see Figure REF ).", "Given a perfect matching $M\\in \\Omega $ , define an integer-valued height function $h$ on faces $\\xi \\in F$ by setting it to zero at some reference face $\\xi _0\\in F$ and by imposing (REF ) for any path $C_{\\xi \\rightarrow \\eta }$ that uses only faces in $F$ .", "It is easy to check that $h$ is then independent of the choice of path.", "Our second main result implies in particular that the variance of the height difference between faraway faces in $F$ grows logarithmically with the distance.", "For simplicity, let us restrict our attention to the subset $\\bar{F}\\subset F$ of faces that share a vertex with 
four cells (see Fig. REF ): if a face in $ \bar{F}$ shares a vertex with $B_x,B_{x-(0,1)}, B_{x-(1,0)},B_{x-(1,1)}$ , then we denote it by $\eta _x$ .", "Theorem 2 Under the same assumptions as Theorem REF , for $x^{(1)},\dots , x^{(4)}\in \mathbb {Z}^2$ , $\mathbb {E}_\lambda \left[(h(\eta _{x^{(1)}})-h(\eta _{x^{(2)}}));(h(\eta _{x^{(3)}})-h(\eta _{x^{(4)}}))\right]\\=\frac{\nu (\lambda )}{2\pi ^2}\Re \left[\log \frac{( \phi _+({x^{(4)}})- \phi _+({x^{(1)}}))( \phi _+({x^{(3)}}) - \phi _+({x^{(2)}}))}{(\phi _+({x^{(4)}})- \phi _+({x^{(2)}}))(\phi _+({x^{(3)}})-\phi _+({x^{(1)}}))}\right]\\+O\left(\frac{1}{\min _{i\ne j\le 4}|x^{(i)}-x^{(j)}|^{1/2}+1}\right)$ where $\nu $ and $\phi _+$ are the same as in Theorem REF .", "Note that, in particular, taking $x^{(1)}=x^{(3)}=x,x^{(2)}=x^{(4)}=y$ we have ${\rm Var}_{\mathbb {P}_\lambda }(h(\eta _{x})-h(\eta _{y}))= \frac{\nu (\lambda )}{\pi ^2}\Re \log (\phi _+(x)-\phi _+(y))+O(1)=\frac{\nu (\lambda )}{\pi ^2}\log |x-y|+O(1)$ as $|x-y|\rightarrow \infty $ .", "Remark 2 The remarkable feature of this result is that the `stiffness' coefficient $\nu (\lambda )/\pi ^2$ of the GFF is the same, up to the $1/\pi ^2$ factor, as the critical exponent of the oscillating part of the dimer-dimer correlation.", "There is no a priori reason why the two quantities should be the same, and it is actually a deep implication of our proof that this is the case.", "Such an identity is precisely one of the scaling relations predicted by Kadanoff and Haldane in the context of vertex, Ashkin-Teller, XXZ, and Luttinger liquid models, which are different models in the same universality class as our non-planar dimers (see [25] for additional discussion and references).", "Building upon the proof of Theorem REF , we also obtain bounds on the higher-point cumulants of the height; these, in turn, imply convergence of the height profile to a massless GFF: Theorem 3 Assume that $|\lambda |\le \lambda _0$ , with $\lambda _0>0$ as in Theorem REF .", "For every $C^\infty $ , compactly supported test function $f:\mathbb {R}^2\mapsto \mathbb {R}$ of zero average and $\epsilon >0$ , define $h^\epsilon (f):=\epsilon ^2 \sum _{x\in \mathbb {Z}^2} \big (h(\eta _x)-\mathbb {E}_\lambda (h(\eta _x))\big )f(\epsilon x).$ Then, one has the convergence in distribution $h^\epsilon (f)\stackrel{\epsilon \rightarrow 0}{\Longrightarrow }\mathcal {N}(0,\sigma ^2(f))$ where $\mathcal {N}(0,\sigma ^2(f))$ denotes a centered Gaussian distribution of variance $\sigma ^2(f):=\frac{\nu (\lambda )}{2\pi ^2}\int _{\mathbb {R}^2}dx\int _{\mathbb {R}^2}dy f(x)\Re [\log \phi _+(x-y)]f(y).$
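"For concreteness, the following short Python sketch (ours, not part of the proofs) shows how the right-hand side of Theorem REF and the logarithmic growth of Remark REF can be evaluated numerically; the values of $\nu ,\alpha _+,\beta _+$ used below are placeholders (at $\lambda =0$ one would insert $\nu =1$ and the non-interacting $\alpha ^0_+,\beta ^0_+$ recalled in the next section), so the numbers it prints are purely illustrative."

```python
import numpy as np

# Illustrative only: placeholder values, NOT the functions constructed in the proof.
nu = 1.0                     # critical exponent; nu = 1 at lambda = 0
alpha_p = -1.0 - 1.0j        # placeholder for alpha_+(lambda)
beta_p = 1.0 - 1.0j          # placeholder for beta_+(lambda)

def phi_plus(x):
    """phi_+(x) = beta_+ x_1 - alpha_+ x_2, as in Theorem 1."""
    return beta_p * x[0] - alpha_p * x[1]

def height_covariance(x1, x2, x3, x4):
    """Leading term of E_lambda[(h(eta_x1)-h(eta_x2)); (h(eta_x3)-h(eta_x4))]."""
    num = (phi_plus(x4) - phi_plus(x1)) * (phi_plus(x3) - phi_plus(x2))
    den = (phi_plus(x4) - phi_plus(x2)) * (phi_plus(x3) - phi_plus(x1))
    return nu / (2 * np.pi ** 2) * np.log(num / den).real

# Scale invariance of the leading term: the cross-ratio is unchanged when the
# four points are dilated, because phi_+ is linear.
for R in (1, 10, 100):
    print(R, height_covariance((R, 0), (0, 0), (3 * R, 0), (2 * R, 0)))

# Logarithmic growth as in Remark 2: increments between nearby starting points
# and far-apart endpoints have covariance ~ (nu/pi^2) log r + O(1).
for r in (10, 100, 1000):
    cov = height_covariance((r, 0), (0, 0), (r, 1), (0, 1))
    print(r, cov, nu / np.pi ** 2 * np.log(r))
```

"The first loop checks that the leading term is invariant under dilations of the four points (the cross-ratio is unchanged, since $\phi _+$ is linear), while the second illustrates the logarithmic growth of the variance stated in Remark REF ."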
], [ "Grassmann representation of the generating function", "In this section we rewrite the partition function $Z_{L,\lambda }$ of (REF ) in terms of Grassmann integrals (see Sect.REF ).", "As a byproduct of our construction, we obtain a similar Grassmann representation for the generating function of correlations of the dimer model.", "We also observe that the Grassmann integral for the generating function is invariant under a lattice gauge symmetry, whose origin has to be traced back to the local conservation of the number of incident dimers at lattice sites, and which implies exact lattice Ward Identities for the dimer correlations (see Sect.REF ).", "Before diving into the proof of the Grassmann representation, it is convenient to recall some preliminaries about the planar dimer model, its Gaussian Grassmann representation and the structure of its correlation functions in the thermodynamic limit.", "This will be done in the next two subsections, Sect. REF and REF." ], [ "A brief reminder of Kasteleyn theory", "Here we recall a few basic facts of Kasteleyn theory for the dimer model on a bipartite graph $G=(V,E)$ embedded on the torus, with edge weights $\lbrace t_e>0\rbrace _{e\in E}$ .", "For later purposes, we need this for more general such graphs than just $G^0_L$ .", "For details we refer to [20], which considers the more general case where the graph is not bipartite and is embedded on an orientable surface of genus $g\ge 1$ .", "For the considerations of this section, we do not need the edge weights to display any periodicity, so here we will work with generic, not necessarily periodic, edge weights.", "As in [20], we assume that $G$ can be represented as a planar connected graph $G_0=(V,E_0)$ (we call this the “basis graph of $G$ ”), embedded on a square, with additional edges that connect the two vertical sides of the square (edges $E_1$ ) or the two horizontal sides (edges $E_2$ ).", "Note that $E=E_0\cup E_1\cup E_2$ .", "See Figure REF .", "We always assume that the basis graph $G_0$ is connected and that it is 2-connected (i.e., the removal of any single vertex, together with the edges attached to it, does not disconnect $G_0$ ).", "Actually, [20] develops Kasteleyn's theory without assuming that $G_0$ is 2-connected; we avoid dealing with non-2-connected graphs below, since this would entail several useless complications.", "We also assume that $G_0$ admits at least one perfect matching and we fix a reference one, which we call $M_0$ .", "Following the terminology of [20], we introduce the following definition.", "Definition 1 (Basic orientation) We call an orientation $D_0$ of the edges $E_0$ a “basic orientation of $G_0$ ” if all the internal faces of the basis graph $G_0$ are clockwise odd, i.e. if, running clockwise along the boundary of the face, the number of co-oriented edges is odd (since $G_0$ is 2-connected, the boundary of each face is a cycle).", "A basic orientation always exists [20], but in general it is not unique.", "Next, one defines 4 orientations of the full graph $G$ as follows (these are called “relevant orientations” in [20]).", "First, one draws the planar graphs $G_j,j=1,2$ whose edge sets are $E_0\cup E_j$ , as in Fig. REF .", "Figure: The basis graph $G_0$ is schematically represented by the gray square (the vertices and edges inside the square are not shown).", "The left (resp. right) drawing pictures the planar graph $G_1$ (resp. $G_2$ ).", "Note that there is a unique orientation $D_j$ of the edges in $E_0\cup E_j$ that coincides with $D_0$ on $E_0$ and such that all the internal faces of $G_j$ are clockwise odd.", "Then, we define the relevant orientation $D_\theta $ of type $\theta =(\theta _1,\theta _2)\in \lbrace -1,+1\rbrace ^2$ of $G$ as the unique orientation of the edges $E$ that coincides with $D_0$ on $G_0$ and with $\theta _j D_j$ on the edges in $E_j, j=1,2$ .", "Given one of the four relevant orientations $D_\theta $ of $G$ , we define a $|V|\times |V|$ antisymmetric matrix $A_{D_\theta }$ by establishing that for $v,v^{\prime }\in V$ , $A_{D_\theta }(v,v^{\prime })=0$ if $(v,v^{\prime })\notin E$ , while $A_{D_\theta }(v,v^{\prime })=t_e$ if $v,v^{\prime }$ are the endpoints of the edge $e$ oriented from $v $ to $v^{\prime }$ , and $A_{D_\theta }(v,v^{\prime })=-t_e$ if $e$ is
oriented from $v^{\\prime }$ to $v$ .", "Then, [20] says that $Z_G=\\sum _{M\\in \\Omega _G}w(M)= \\sum _{\\theta \\in \\lbrace -1,+1\\rbrace ^2}\\frac{c_\\theta }{2}\\frac{{\\rm Pf}(A_{D_\\theta })}{s(M_0)}$ where $\\Omega _G$ is the set of the perfect matchings of $G$ , $w(M)=\\prod _{e\\in M}t_e$ , and $c_{(-1,-1)}=-1 \\text{ and } c_\\theta =1 \\text{ otherwise.", "}$ In (REF ), ${\\rm Pf}(A)$ denotes the Pfaffian of an anti-symmetric matrix $A$ and $s(M_0)$ denotes the sign of the term corresponding to the reference matching $M_0$ in the expansion of the Pfaffian ${\\rm Pf}(A_{D_\\theta })$ .", "Since by assumption $M_0$ contains only edges from $E_0$ whose orientation does not depend on $\\theta $ , $s(M_0)$ is indeed independent of $\\theta $ .", "In our case, in contrast with the general case considered in [20], the graph $G$ is bipartite.", "By labeling the vertices so that the first $|V|/2$ are black and the last $|V|/2$ are white, the matrices $A_{D_\\theta }(v,v^{\\prime })$ have then a block structure of the type $A_{D_\\theta }= \\begin{pmatrix}0 & \\hspace*{0.0pt}\\hspace*{0.0pt}& +K_\\theta \\\\\\hline -K_\\theta & \\hspace*{0.0pt}\\hspace*{0.0pt}&0\\end{pmatrix}$ We view the $|V|/2\\times |V|/2$ “Kasteleyn matrices” $K_\\theta $ as having rows indexed by black vertices and columns by white vertices.", "By using the relation [19] between Pfaffians and determinants, one can then rewrite the above formula as $Z_G= \\sum _{\\theta \\in \\lbrace -1,+1\\rbrace ^2}\\frac{\\tilde{c}_\\theta }{2}\\det (K_\\theta ), \\quad \\tilde{c}_\\theta =c_\\theta \\frac{(-1)^{(|V|/2-1)|V|/4}}{s(M_0)}.$ Remark 3 Note that changing the order in the labeling of the vertices changes the sign $s(M_0)$ .", "We suppose henceforth that the choice is done so that the ratio in the definition of $\\tilde{c}_\\theta $ equals 1, so that $\\tilde{c}_\\theta =c_\\theta $ ." 
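"As a concrete, self-contained illustration of the determinant representation (a planar warm-up only, not the full torus construction, which requires the four relevant orientations $D_\theta $ and the signs $c_\theta $ above), the following Python sketch checks on a small square grid with open boundary that, with a standard Kasteleyn phase choice, the modulus of a single determinant reproduces the brute-force weighted count of perfect matchings; the grid size, the unit weights and the $i$-phase gauge are our own illustrative choices."

```python
import itertools
import numpy as np

# Planar warm-up for Kasteleyn's formula (open boundary, so one determinant
# suffices; the torus case needs the four orientations D_theta and signs c_theta).
# All edge weights are t_e = 1 here.
m, n = 4, 4
sites = [(i, j) for i in range(m) for j in range(n)]
black = [s for s in sites if (s[0] + s[1]) % 2 == 0]
white = [s for s in sites if (s[0] + s[1]) % 2 == 1]

def neighbors(s):
    i, j = s
    return [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

# Kasteleyn-phased biadjacency: weight 1 on edges in one lattice direction,
# weight i in the other, so that the alternating product of the phases around
# every square face equals -1 (Kasteleyn's condition for bipartite planar graphs).
B = np.zeros((len(black), len(white)))            # plain weighted biadjacency
K = np.zeros((len(black), len(white)), dtype=complex)
for a, b in enumerate(black):
    for c, w in enumerate(white):
        if w in neighbors(b):
            B[a, c] = 1.0
            K[a, c] = 1.0 if w[1] == b[1] else 1.0j

# Brute-force partition function: Z = permanent of B = weighted number of
# perfect matchings (feasible only because the graph is tiny).
Z = sum(np.prod([B[a, s[a]] for a in range(len(black))])
        for s in itertools.permutations(range(len(white))))

print("brute-force Z =", Z)
print("|det K|       =", abs(np.linalg.det(K)))   # should coincide with Z
```

"On the torus one would instead combine the four determinants with the signs $c_\theta $ , as in (REF )."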
], [ "Thermodynamic limit of the planar dimer model", "In the previous section, Kasteleyn's theory for rather general toroidal bipartite graphs was recalled, without assuming any type of translation invariance.", "In this subsection, instead, we specialize to $G=G^0_L$ (the periodized version of $\\mathbb {Z}^2$ introduced in Section REF ) and, as was the case there, we assume that the edge weights are invariant under translations by multiples of ${\\bf e_1,\\bf e_2}$ .", "With Kasteleyn's theory at hand, one can compute the thermodynamic and large-scale properties of the dimer model on $G^0_L$ as $L\\rightarrow \\infty $ .", "We refer to [34], [33], [28] for details.", "In the case where $G=G^0_L$ , the basis graph $G_0$ is a square grid with $Lm$ vertices per side and we choose its basic orientation $D_0$ so that horizontal edges are oriented from left to right, while vertical edges are oriented from bottom to top on every second column and from top to bottom on the remaining columns.", "With this choice, the orientations $D_1,D_2$ of $G_1,G_2$ are like in Fig.", "REF .", "Note that, if $e=(b,w)\\in E_L^0$ is an edge of $G_L^0$ , then for $\\theta =(\\theta _1,\\theta _2)\\in \\lbrace -1,+1\\rbrace ^2$ , $K_{\\theta }(b,w)$ equals $K_{(+1,+1)}(b,w)$ multiplied by $(-1)^{(\\theta _1-1)/2}$ if $e$ belongs to $E_1$ (see Fig.", "REF ) and by $(-1)^{(\\theta _2-1)/2}$ if $e$ belongs to $E_2$ .", "Figure: The graphs G 1 ,G 2 G_1,G_2 corresponding to the basis graph of G L 0 G_L^0 (for Lm=4L\\,m=4), together with their orientations D 1 ,D 2 D_1,D_2.Observe also that the matrix $K_{(-1,-1)}$ is invariant under translations by multiples of ${\\bf e_1,e_2}$ .", "Define $\\mathcal {P}(\\theta ):=\\left\\lbrace k=(k_1,k_2): k_j=\\frac{2\\pi }{L}\\left(n_j+\\frac{\\theta _j+1}{4}\\right), -L/2<n_j\\le L/2\\right\\rbrace .$ Let $P_\\theta $ be the orthogonal $(Lm)^2/2\\times (Lm)^2/2$ matrix whose columns are indexed by $(k,\\ell ), k\\in \\mathcal {P}(\\theta ), \\ell \\in \\mathcal {I}=\\lbrace 1,\\dots ,m^2/2\\rbrace $ , whose rows are indexed by $(x,\\ell ), x\\in \\Lambda , \\ell \\in \\mathcal {I}$ , and such that the column indexed $(k,\\ell )$ is the vector $f_{\\ell ,k}:((x,\\ell ^{\\prime })\\in \\Lambda \\times \\mathcal {I})\\mapsto f_{\\ell ,k}(x,\\ell ^{\\prime })=\\frac{1}{L} e^{-i k x}{\\bf 1}_{\\ell ^{\\prime }=\\ell }.$ Then, $P_\\theta ^{-1}K_\\theta P_\\theta $ is block-diagonal with blocks of size $|\\mathcal {I}|$ labelled by $k\\in \\mathcal {P}(\\theta )$ .", "The block corresponding to the value $k$ is a $|\\mathcal {I}|\\times |\\mathcal {I}|$ matrix $M(k)$ of elements $[M(k)]_{\\ell ,\\ell ^{\\prime }}$ with $\\ell ,\\ell ^{\\prime }\\in \\mathcal {I}$ and $ [M(k)]_{\\ell ,\\ell ^{\\prime }}=\\sum _{e:\\ell \\stackrel{e}{\\sim }\\ell ^{\\prime }} K_{(-1,-1)}(b,w) e^{-i kx_{e}}.$ In this formula, the sum runs over all edges $e$ joining the black vertex $b$ of type $\\ell $ in the cell of coordinates $x=(0,0)$ to some white vertex $w$ of type $\\ell ^{\\prime }$ ($w$ can be either in the same fundamental cell or in another one); $x_{e}\\in \\mathbb {Z}^2$ is the coordinate of the cell to which $w$ belongs.", "The thermodynamic and large-scale properties of the measure $\\mathbb {P}_{L,0}$ are encoded in the matrix $M$ : for instance the infinite volume free energy exists and it is given by [34] $ F=:\\lim _{L \\rightarrow \\infty }\\frac{1}{L^2}\\log Z_{L,0}=\\frac{1}{(2\\pi )^2} \\int _{[-\\pi ,\\pi ]^2} \\log | \\mu (k) | dk$ where $\\mu $ (the “characteristic polynomial”) 
is $\\mu (k):=\\det M(k),$ which is a polynomial in $e^{i k_1},e^{i k_2}$ .", "Kasteleyn's theory allows one to write multi-point dimer correlations (in the $L\\rightarrow \\infty $ limit) in terms of the so-called “infinite-volume inverse Kasteleyn matrix” $K^{-1}$ : if $w$ (resp.", "$b$ ) is a white (resp.", "black) vertex of type $\\ell $ in cell $x=(x_1,x_2)\\in \\mathbb {Z}^2$ (resp.", "of type $\\ell ^{\\prime }$ and in cell 0), then one has $K^{-1}(w,b):=\\frac{1}{(2\\pi )^2}\\int _{[-\\pi ,\\pi ]^2} [(M(k))^{-1}]_{\\ell ,\\ell ^{\\prime }}e^{-i k x} dk.$ As can be guessed from (REF ), the long-distance behavior of $K^{-1}$ is related to the zeros of the determinant of $M(k)$ , that is, to the zeros of $ \\mu $ on $[-\\pi ,\\pi ]^2$ .", "It is a well known fact [34] that, for any choice of the edge weights, $\\mu $ can have at most two zeros.", "Our Assumption REF means that we restrict to a choice of edge weights such that $\\mu $ has exactly two distinct zeros, named $p_0^+,p_0^-$ .", "We also define the complex numbers $\\alpha ^0_\\omega :=\\partial _{k_1}\\mu (p_0^\\omega ), \\quad \\beta ^0_\\omega :=\\partial _{k_2}\\mu (p_0^\\omega ), \\quad \\omega =\\pm .$ Note that, since the Kasteleyn matrix elements $K_\\theta (b,w)$ are realIn [27], [25] etc, a different choice of Kasteleyn matrix was done, with complex entries.", "As a consequence, in that case one had $p_0^++p_0^-=(\\pi ,\\pi )$ instead., from (REF ) we have the symmetry $ [M(-k)]_{\\ell ,\\ell ^{\\prime }}=\\overline{[M(k)]_{\\ell \\ell ^{\\prime }}}$ and in particular $& p_0^++p_0^-=0 \\\\& \\alpha ^0_-=-\\overline{\\alpha ^0_+}, \\quad \\beta ^0_-=-\\overline{\\beta ^0_+}.$ It is also known [34] that $\\alpha ^0_\\omega ,\\beta ^0_\\omega $ are not collinear as elements of the complex plane: $\\alpha ^0_\\omega /\\beta ^0_\\omega \\notin \\mathbb {R}.$ Note that from () it follows that $\\text{Im}(\\beta _+^0/\\alpha _+^0)=-\\text{Im}(\\beta _-^0/\\alpha _-^0)$ .", "From now on, with no loss of generality, we assume that $ \\text{Im}(\\beta _+^0/\\alpha _+^0)>0, $ which amounts to choosing appropriately the labels $+,-$ associated with the two zeros of $\\mu (k)$ .", "If we denote by $\\operatorname{adj}(A)$ the adjugate of the matrix $A$ , so that $A^{-1}=\\operatorname{adj}(A)/\\det A$ , the long-distance behavior of the inverse Kasteleyn matrix is given [34] as $K^{-1}(w,b)\\stackrel{|x|\\rightarrow \\infty }{=}\\frac{1}{2\\pi }\\sum _{\\omega =\\pm }[\\operatorname{adj}(M(p^\\omega ))]_{\\ell ,\\ell ^{\\prime }}\\frac{e^{-i p_0^\\omega x}}{\\phi ^0_\\omega (x)}+O(|x|^{-2})$ where $\\phi ^0_\\omega (x)=\\omega (\\beta ^0_\\omega x_1-\\alpha ^0_\\omega x_2).$ Note that since the zeros $p_0^\\omega $ of $\\mu (k)$ are simple, the matrix $\\operatorname{adj}M(p_0^\\omega )$ has rank 1.", "This means that we can write $\\operatorname{adj}M(p_0^\\omega )=U^\\omega \\otimes V^\\omega $ for vectors $U^\\omega ,V^\\omega \\in \\mathbb {C}^{|\\mathcal {I}|}$ , where $\\otimes $ is the Kronecker product.", "Let $e=(b,w),e^{\\prime }=(b^{\\prime },w^{\\prime })$ be two fixed edges of $G^0_L$ : we assume that the black endpoint of $e$ (resp.", "of $e^{\\prime }$ ) has coordinates ${\\bf x}=(x,\\ell )$ (resp.", "${\\bf x^{\\prime }}=(x^{\\prime },\\ell ^{\\prime })$ ) and that the white endpoint of $e$ (resp.", "$e^{\\prime }$ ) has coordinates $(x+v(e),m)$ with $m\\in \\mathcal {I}$ (resp.", "coordinates $(x^{\\prime }+v(e^{\\prime }),m^{\\prime })$ ).", "Of course, $v(e)$ is either $(0,0)$ or $(0,\\pm 1)$ or $(\\pm 1,0)$ 
"Let $e=(b,w),e^{\prime }=(b^{\prime },w^{\prime })$ be two fixed edges of $G^0_L$ : we assume that the black endpoint of $e$ (resp. of $e^{\prime }$ ) has coordinates ${\bf x}=(x,\ell )$ (resp. ${\bf x^{\prime }}=(x^{\prime },\ell ^{\prime })$ ) and that the white endpoint of $e$ (resp. $e^{\prime }$ ) has coordinates $(x+v(e),m)$ with $m\in \mathcal {I}$ (resp. coordinates $(x^{\prime }+v(e^{\prime }),m^{\prime })$ ).", "Of course, $v(e)$ is either $(0,0)$ or $(0,\pm 1)$ or $(\pm 1,0)$ , and similarly for $v(e^{\prime })$ .", "Note that the coordinates of the white endpoint of $e$ are uniquely determined by the coordinates of the black endpoint and by the orientation label $j \in \lbrace 1,\dots ,4\rbrace $ of $e$ (recall the conventions on labeling the type of edges in Section REF ): in this case we will write $v(e)=:v_{j,\ell }$ , $K(b,w)=:K_{j,\ell }$ and, in (REF ), $U_m=:U_{j,\ell }$ .", "The (infinite-volume) truncated dimer-dimer correlation under the measure $\mathbb {P}_{L,0}$ is given as $\mathbb {E}_{0}({\mathbb {1}}_{e};{\mathbb {1}}_{e^{\prime }})=\lim _{L\rightarrow \infty } \mathbb {E}_{L,0}({\mathbb {1}}_{e};{\mathbb {1}}_{e^{\prime }})=-K(b,w)K(b^{\prime },w^{\prime })K^{-1}(w^{\prime },b)K^{-1}(w,b^{\prime })$ (here the index $\theta \in \lbrace -1,+1\rbrace ^2$ in $K_\theta (b,w)$ is dropped, since the dependence on $\theta $ is present only for edges at the boundary of the basis graph $G_0$ , see Figure REF , so that for fixed $(b,w)$ and $L$ large, $K_\theta (b,w)$ is independent of $\theta $ ).", "As a consequence of the asymptotic expression (REF ), we have that, as $|x^{\prime }-x| \rightarrow \infty $ , $\mathbb {E}_0[{\mathbb {1}}_e;{\mathbb {1}}_{e^{\prime }}]=A_{j,\ell ,j^{\prime },\ell ^{\prime }}(x,x^{\prime })+B_{j,\ell ,j^{\prime },\ell ^{\prime }}(x,x^{\prime })+R^0_{j,\ell ,j^{\prime },\ell ^{\prime }}(x,x^{\prime })$ with $\begin{split} A_{j,\ell ,j^{\prime },\ell ^{\prime }}(x,x^{\prime })&= \sum _{\omega =\pm } \frac{ K^0_{\omega ,j,\ell } K^0_{\omega ,j^{\prime },\ell ^{\prime }}}{ (\phi ^0_\omega (x-x^{\prime }))^2} \\B_{j,\ell ,j^{\prime },\ell ^{\prime }}(x,x^{\prime })&= \sum _{\omega =\pm } \frac{H^0_{\omega ,j,\ell }H^0_{-\omega ,j^{\prime },\ell ^{\prime }}}{|\phi ^0_\omega (x-x^{\prime })|^2}e^{2ip_0^\omega (x-x^{\prime })} \\|R^0_{j,\ell ,j^{\prime },\ell ^{\prime }}(x,x^{\prime })|&\le C |x-x^{\prime }|^{-3},\end{split}$ where $\begin{split}& K^0_{\omega ,j,\ell }:=\frac{1}{2\pi }K_{j,\ell }e^{-ip_0^\omega v_{j,\ell }}U^\omega _{j,\ell }V^\omega _\ell \\& H^0_{\omega ,j,\ell }:=\frac{1}{2\pi }K_{j,\ell }e^{ip_0^\omega v_{j,\ell }} U^{-\omega }_{j,\ell }V^{\omega }_{\ell }.\end{split}$"
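"The following sketch (again ours, and only illustrative) probes the $|x-x^{\prime }|^{-2}$ decay encoded in the formulas above in the simplest possible way: it evaluates the same algebraic expression for the truncated dimer-dimer correlation on a finite square grid with open boundary, where it is the standard finite-volume determinantal identity, using the $i$-phase Kasteleyn gauge of the earlier sketch; the grid size, the chosen edges and the gauge are our assumptions, and boundary effects are present, so only the rough scaling should be expected."

```python
import numpy as np

# Exact truncated dimer-dimer correlations on a finite open-boundary grid,
# as a numerical stand-in for the infinite-volume asymptotics above.
m = n = 24
sites = [(i, j) for i in range(m) for j in range(n)]
black = {s: a for a, s in enumerate(s for s in sites if sum(s) % 2 == 0)}
white = {s: c for c, s in enumerate(s for s in sites if sum(s) % 2 == 1)}

K = np.zeros((len(black), len(white)), dtype=complex)
for (i, j), a in black.items():
    for w in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
        if w in white:
            K[a, white[w]] = 1.0 if w[1] == j else 1.0j   # i-phase Kasteleyn gauge
Kinv = np.linalg.inv(K)

def hedge(i, j):
    """Horizontal edge with black endpoint (i, j) and white endpoint (i+1, j)."""
    return (i, j), (i + 1, j)

def dimer_dimer(e, ep):
    """Truncated correlation -K(e) K(e') K^{-1}(w',b) K^{-1}(w,b')."""
    (b, w), (bp, wp) = e, ep
    val = (-K[black[b], white[w]] * K[black[bp], white[wp]]
           * Kinv[white[wp], black[b]] * Kinv[white[w], black[bp]])
    return val.real            # the imaginary part vanishes up to roundoff

e0 = hedge(8, 12)              # an edge well inside the bulk
for d in (2, 4, 6, 8):
    c = dimer_dimer(e0, hedge(8 + d, 12))
    print(d, c, c * d ** 2)    # c*d^2 roughly constant: |x-x'|^{-2} decay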
], [ "Determinants and Grassmann integrals", "We refer for instance to [21] for an introduction to Grassmann variables and Grassmann integration; here we just recall a few basic facts.", "To each vertex $v$ of $G_L$ we associate a Grassmann variable.", "Recall that vertices are distinguished by coordinates ${\\bf x}=(x,\\ell )\\in {\\bf \\Lambda }= \\Lambda \\times \\mathcal {I}$ .", "We denote the Grassmann variable of the black (resp.", "white) vertex of coordinate ${\\bf x}$ as $\\psi ^+_{\\bf x}$ (resp.", "$\\psi ^-_{\\bf x}$ ).", "We denote by $\\int D\\psi f(\\psi )$ the Grassmann integral of a function $f$ and since the variables $\\psi ^\\pm _{\\bf x}$ anti-commute among themselves and there is a finite number of them, we need to define the integral only for polynomials $f$ .", "The Grassmann integration is a linear operation that is fully defined by the following conventions: $\\int D\\psi \\,\\prod _{{\\bf x}\\in {\\bf \\Lambda }}\\psi ^-_{\\bf x}\\psi ^+_{\\bf x}=1 ,$ the sign of the integral changes whenever the positions of two variables are interchanged (in particular, the integral of a monomial where a variable appears twice is zero) and the integral is zero if any of the $2|{\\bf \\Lambda }|$ variables is missing.", "We also consider Grassmann integrals of functions of the type $f(\\psi )=\\exp (Q(\\psi ))$ , with $Q$ a sum of monomials of even degree.", "By this, we simply mean that one replaces the exponential by its finite Taylor series containing only the terms where no Grassmann variable is repeated.", "For the partition function $Z_{L,0}=Z_{G_L^0}$ of the dimer model on $G_L^0$ we have formula (REF ) of previous subsection where the Kasteleyn matrices $K_\\theta $ are fixed as in Section REF , recall also Remark REF .", "Using the standard rewriting of determinants as Gaussian Grassmann integrals (i.e.", "Grassmann integrals where the integrand is the exponential of the corresponding quadratic form), one immediately obtains $Z_{L,0}=\\frac{1}{2}\\sum _{\\theta \\in \\lbrace -1,+1\\rbrace ^2}c_\\theta \\int D\\psi \\, e^{-\\psi ^+K _\\theta \\psi ^-}.$" ], [ "The partition function as a non-Gaussian, Grassmann integral", "The reason why the r.h.s.", "of (REF ) is the sum of four determinants (and $Z_{L,0}$ is the sum of four Gaussian Grassmann integrals) is that $G_L^0$ is embedded on the torus, which has genus 1: for a dimer model embedded on a surface of genus $g$ , the analogous formula would involve the sum of $4^g$ such determinants [20], [40].", "This is clearly problematic for the graph $G_L$ with non-planar edges, since in general it can be embedded only on surfaces of genus $g$ of order $L^2$ (i.e.", "of the order of the number of non-planar edges) and the resulting formula would be practically useless for the analysis of the thermodynamic limit.", "Our first crucial result is that, even when the weights of the non-planar edges $N_L$ are non-zero, the partition function can again be written as the sum of just four Grassmann integrals, but these are non Gaussian (that is, the integrand is the exponential is a polynomial of order higher than 2).", "To emphasize that the following identity holds for generic edge weights, we will write $Z_{L,\\underline{t}}$ for the partition function.", "Proposition 1 One has the identity $Z_{L,\\underline{t}}=\\sum _{M\\in \\Omega _L}\\prod _{e\\in M}t_e=\\frac{1}{2}\\sum _{\\theta \\in \\lbrace -1,+1\\rbrace ^2} c_\\theta \\int D\\psi e^{-\\psi ^+ K_{\\theta } \\psi ^-+V_{\\underline{t}}(\\psi )}$ where $c_\\theta $ are given 
in (REF ), $\\Lambda =(-L/2,L/2]^2\\cap \\mathbb {Z}^2$ as above, $V_{\\underline{t}}(\\psi )=\\sum _{x \\in \\Lambda } V^{(x)}(\\psi |_{B_x})$ and $V^{(x)}$ is a polynomial with coefficients depending on the weights of the edges incident to the cell $B_x$ , $\\psi |_{B_x}$ denotes the collection of the variables $\\psi ^\\pm $ associated with the vertices of cell $B_x$ (as a consequence, the order of the polynomial is at most $m^2$ ).", "When the edge weights $\\lbrace t_e\\rbrace $ are invariant by translations by ${\\bf e_1,e_2}$ , then $V^{(x)}$ is independent of $x$ .", "The form of the polynomial $V^{(x)}$ is given in formula (REF ) below; the expression in the r.h.s.", "can be computed easily when either the cell size $m$ is small, or each cell contains a small number of non-planar edges.", "For an explicit example, see Appendix .", "We need some notation.", "If $(b,w)$ is a pair of black/white vertices joined by the edge $e$ of weight $t_e$ , let us set $\\psi _\\theta (e):=\\left\\lbrace \\begin{array}{lll}-t_e \\psi ^+_b\\psi ^-_w &\\text{ if }& e\\in N_L\\\\-K_\\theta (e) \\psi ^+_b\\psi ^-_w &\\text{ if } &e\\in E^0_L\\end{array}\\right.$ with $K_\\theta (e)=K_\\theta (b,w)$ the Kasteleyn matrix element corresponding to the pair $(b,w)$ , which are the endpoints of $e$ .", "We fix a reference dimer configuration $M_0\\in \\Omega ^0_L$ , say the one where all horizontal edges of every second column are occupied, see Fig.", "REF .", "Then, we draw the non-planar edges on the two-dimensional torus on which $G^0_L$ is embedded, in such a way that they do not intersect any edge in $M_0$ (the non-planar edges will in general intersect each other and will intersect some edges in $E^0_L$ that are not in $M_0$ ).", "Given $ J\\subset N_L$ , we let $P_{J}$ be the set of edges in $E^0_L$ that are intersected by edges in $J$ .", "The drawing of the non-planar edges can be done in such a way that resulting picture is still invariant by translations of ${\\bf e_1,e_2}$ , the non-planar edges do not exit the corresponding cell and the graph obtained by removing the edges in $N_L\\cup P_{N_L}$ (i.e.", "all the non-planar edges and the planar edges crossed by them) is $2-$ connected.", "See Figure REF .", "Figure: A single cell B x B_x, with the reference configuration M 0 M_0(thick, blue edges).", "The non-planar edges (red) are drawn in a way thatthey do not intersect the edges of M 0 M_0 and do not exit thecell.", "Note that non-planar edges can cross each other.", "The dottededges, crossed by the planar edges, belong to P N L P_{N_L}.", "If thenon-planar edges cross only horizontaledges in the same column (shaded) of the cell and vertical edges from every second row (shaded), the graph obtained by removing red edges anddotted edges is 2-connected.We start by rewriting $Z_{L,\\underline{t}}=\\sum _{J\\subset N_L}\\sum _{S\\subset P_{J}}\\sum _{M\\in \\Omega _{J,S}}w(M)$ where $\\Omega _{J,S}$ is the set of dimer configurations $M$ such that a non-planar edge belongs to $M$ iff it belongs to $J$ , and an edge in $P_J$ belongs to $M$ iff it belongs to $S$ .", "Given $M\\in \\Omega _{J,S}$ , we write $M$ as the disjoint union $M=J\\cup S\\cup M^{\\prime }$ and, with obvious notations, $w(M)=w(M^{\\prime })w(S)w(J)$ so that (REF ) becomes $Z_{L,\\underline{t}}=\\sum _{J\\subset N_L}w(J)\\sum _{S\\subset P_{J}}w(S)\\sum _{M^{\\prime }\\sim {J,S}}w(M^{\\prime })$ where $M^{\\prime }\\sim S,J$ means that $M^{\\prime }\\cup S\\cup J$ is a dimer configuration in $\\Omega _{J,S}$ .", "To proceed, we 
use the following Lemma 1 There exists $\\epsilon _S^J=\\pm 1$ such that $\\sum _{M^{\\prime }\\sim J,S}w(J)w(S)w(M^{\\prime })= \\epsilon _S^J\\sum _{\\theta \\in \\lbrace -1,+1\\rbrace ^2}\\frac{c_\\theta }{2}\\int D\\psi \\,e^{-\\psi ^+ K_\\theta \\psi ^-}\\prod _{e\\in J\\cup S}\\psi (e) .$ Here, $\\psi (e), e\\in J\\cup S$ is the same as $\\psi _\\theta (e)$ : we have removed the index $r$ because, since the endpoints $b,w$ of $e$ belong to the same cell, the right hand side of (REF ) is independent of $\\theta $ .", "If $J=S=\\emptyset $ , the product of $\\psi (e)$ in the right hand side of (REF ) should be interpreted as being equal to 1.", "Moreover, $\\epsilon _{\\emptyset }^{\\emptyset }=1$ and, letting $J_x$ (resp.", "$S_x$ ) denote the collection of edges in $J$ (resp.", "$S$ ) belonging to the cell $B_x, x\\in \\Lambda $ , one has $\\epsilon ^J_S=\\prod _{x\\in \\Lambda } \\epsilon ^{J_x}_{S_x}.$ Let us assume for the moment the validity of Lemma REF and conclude the proof of Proposition REF .", "Going back to (REF ), we deduce that $Z_{L,\\underline{t}}=\\sum _\\theta \\frac{c_\\theta }{2}\\int D\\psi \\,e^{-\\psi ^+K_\\theta \\psi ^-}\\prod _{x\\in \\Lambda }\\left[\\sum _{J_x}\\sum _{S_x\\subset P_{J_x}}\\epsilon ^{J_x}_{S_x}\\prod _{e\\in J_x\\cup S_x}\\psi (e)\\right].$ The expression in brackets in (REF ) can be written as $1+F_x(\\psi )=e^{V^{(x)}(\\psi |_{B_x})}$ where $F_x(\\psi )$ is a polynomial in the Grassmann fields of the box $B_x$ , such that $F_x(0)=0$ and containing only monomials of even degree, and $V^{(x)}(\\psi |_{B_x})=\\sum _{n\\ge 1}\\frac{(-1)^{n-1}}{n} \\left(F_x(\\psi )\\right)^n.$ First of all, let us define a $2-$ connected graph $G_{J,S}$ , embedded on the torus, obtained from $G_L$ as follows: the edges belonging to $N_L\\cup P_J$ are removed.", "At this point, every cell $B_x$ contains a certain number (possibly zero) of faces that are not elementary squares, and the graph is still 2-connected, recall the discussion in the caption of Figure REF .", "the boundary of every such non-elementary face $\\eta $ contains an even number of vertices that are endpoints of edges in $J\\cup S$ .", "We connect these vertices pairwise via new edges that do not cross each other, stay within $\\eta $ and have endpoints of opposite color.", "See Figure REF for a description of a possible procedure.", "We let $E_{J,S}$ denote the collection of the added edges.", "Figure: Left drawing: a cell with a collection JJ of non-diagonaledges (red) and of edges S⊂P J S\\subset P_J (thick blue edges).", "Thedotted edges are those in P J ∖SP_J\\setminus S. Center drawing: thenon-elementary face η\\eta obtained when the edges in N L ∪P J N_L\\cup P_Jare removed.", "Only the endpoints of edges in J∪SJ\\cup S are drawn.", "Right drawing: a planar,bipartite pairing of the endpoints of J∪SJ\\cup S. 
Theedges in E J,S E_{J,S} are drawn in orange.", "A possiblealgorithm for the choice of the pairing is as follows: choose arbitrarily a pair (w 1 ,b 1 )(w_1,b_1) ofwhite/black vertices that are adjacent along the boundary of η\\eta and pair them.", "At step n>1n>1, choose arbitrarily a pair (w n ,b n )(w_n,b_n)that is adjacent once the vertices w i ,b i ,i<nw_i,b_i,i<n areremoved.", "Note that some of the edges in E J,S E_{J,S} may form double edges with the edges of G L 0 G^0_L on the boundary of η\\eta (this is the case for (b 1 ,w 1 )(b_1,w_1) and (b 3 ,w 3 )(b_3,w_3) in the example in the figure).The first observation is that the l.h.s.", "of (REF ) can be written as $\\Big (\\prod _{e\\in J\\cup S} t_e \\Big )\\Big (\\sum _{\\begin{array}{c}M\\in \\Omega _{G_{J,S}}:\\\\M\\supset E_{J,S}\\end{array}} w(M)\\Big )\\Big |_{t_{e}= 1, e\\in E_{J,S}}$ where $\\Omega _{G_{J,S}}$ is the set of perfect matchings of the graph $G_{J,S}$ and as usual $ w(M)$ is the product of the edge weights in $M$ .", "The new edges $E_{J,S}$ are assigned a priori arbitrary weights $\\lbrace t_e\\rbrace _{e\\in E_{J,S}}$ , to be eventually replaced by 1.", "Let $K^{J,S}_\\theta , \\theta \\in \\lbrace -1,+1\\rbrace ^2$ denote the Kasteleyn matrices corresponding to the four relevant orientations $D_\\theta $ of $G_{J,S}$ , for some choice of the basic orientation on $G_{J,S}$ (recall Definition REF ).", "Since $G_{J,S}$ is embedded on the torus and is 2-connected, Eq.", "(REF ) guarantees that the sum in the second parentheses in (REF ) can be rewritten (before setting $t_e=1$ for all $e\\in E_{J,S}$ ) as $\\sum _{\\begin{array}{c}M\\in \\Omega _{G_{J,S}}\\\\M\\supset E_{J,S}\\end{array}} w(M)=\\frac{1}{2}\\sum _{\\theta }c_\\theta \\left(\\prod _{e\\in E_{J,S}}t_e\\partial _{t_e}\\right)\\det K^{J,S}_\\theta .$ In fact, the suitable choice of ordering of vertices mentioned in Remark REF is independent of $J,S$ , because the reference configuration $M_0$ is independent of $J,S$ .", "Using the basic properties of Grassmann variables, the r.h.s.", "of (REF ) equals $\\frac{1}{2}\\sum _{\\theta }c_\\theta \\left(\\prod _{e\\in E_{J,S}}t_e\\partial _{t_e}\\right)\\int D\\psi \\,e^{-\\psi ^+ K^{J,S}_\\theta \\psi ^-}\\\\= \\frac{1}{2}\\sum _{\\theta }c_\\theta \\int D\\psi \\,e^{-\\psi ^+ K^{J,S}_\\theta \\psi ^-}\\left(\\prod _{e\\in E_{J,S}}\\psi ^{J,S}_\\theta (e)\\right)$ where, in analogy with (REF ), $\\psi ^{J,S}_\\theta (e)=-K^{J,S}_\\theta (b,w)\\psi ^+_b\\psi ^-_w$ .", "We claim: Lemma 2 The choice of the basic orientation of $G_{J,S}$ can be made so that the Kasteleyn matrices $K_\\theta ^{J,S}$ satisfy: if $e=(b,w)\\in G_{J,S}\\setminus E_{J,S}$ , then $K^{J,S}_\\theta (b,w)= K_\\theta (b,w)$ , with $K_\\theta $ the Kasteleyn matrices of the graph $G_L^{0}$ , fixed by the choices explained in Section REF .", "if instead $e=(b,w)\\in E_{J,S}$ and is contained in cell $B_x$ , then $K^{J,S}_\\theta (b,w)=t_e\\sigma _e^{J_x,S_x}$ with $ \\sigma _e^{J_x,S_x}=\\pm 1$ a sign that depends only on $J_x,S_x$ .", "Assuming Lemma REF , and letting $E_{J_x,S_x}$ denote the subset of edges in $E_{J,S}$ that belong to cell $B_x$ , we rewrite (REF ) as $ \\frac{1}{2}\\sum _{\\theta }c_\\theta \\int D\\psi \\,e^{-\\psi ^+ K_\\theta \\psi ^-}\\prod _x\\prod _{e=(b,w)\\in E_{J_x,S_x}}(-t_e\\sigma _e^{J_x,S_x}\\psi ^+_b\\psi ^-_w),$ where we could replace $K^{J,S}_\\theta $ by $K_\\theta $ at exponent, because $\\begin{split} & e^{-\\psi ^+ K^{J,S}_\\theta \\psi ^-}\\left(\\prod _{e\\in E_{J,S}}\\psi ^{J,S}_\\theta 
(e)\\right)=\\Big (\\prod _{e=(b,w)\\in G_{J,S}\\setminus E_{J,S}}e^{-\\psi ^+_b K^{J,S}_\\theta (b,w) \\psi ^-_w}\\Big )\\left(\\prod _{e\\in E_{J,S}}\\psi ^{J,S}_\\theta (e)\\right)\\\\& =\\Big (\\prod _{e=(b,w)\\in G_{J,S}\\setminus E_{J,S}}e^{-\\psi ^+_b K_\\theta (b,w) \\psi ^-_w}\\Big )\\left(\\prod _{e\\in E_{J,S}}\\psi ^{J,S}_\\theta (e)\\right)=e^{-\\psi ^+ K_\\theta \\psi ^-}\\left(\\prod _{e\\in E_{J,S}}\\psi ^{J,S}_\\theta (e)\\right),\\end{split}$ thanks to the Grassmann anti-commutation properties and the fact that $K^{J,S}_\\theta (b,w)=K_\\theta (b,w)$ for any $(b,w)\\in G_{J,S}\\setminus E_{J,S}$ .", "Eq.", "(REF ) can be further rewritten as $\\frac{\\prod _{e\\in E_{J,S}}t_e}{\\prod _{e\\in J\\cup S}t_e}\\sum _{\\theta }\\frac{c_\\theta }{2} \\int D\\psi \\,e^{-\\psi ^+ K_\\theta \\psi ^-}\\prod _x \\Big (\\epsilon ^{J_x}_{S_x}\\prod _{e\\in J_x\\cup S_x}\\psi (e)\\Big ),$ where $\\epsilon ^{J_x}_{S_x}$ is a sign, equal to $\\pi (J_x,S_x)\\Big (\\prod _{e\\in E_{J_x,S_x}}\\sigma _e^{J_x,S_x}\\Big )\\Big (\\prod _{e\\in S_x}{\\rm sign}(K_\\theta (e))\\Big ),$ and $\\pi (J_x,S_x)$ is the sign of the permutation needed to recast $\\prod _{(b,w)\\in E_{J_x,S_x}}\\psi ^+_b\\psi ^-_w$ into the form $\\prod _{(b,w)\\in J_x\\cup S_x}\\psi ^+_b\\psi ^-_w$ ; note also that, for $e\\in S_x$ , $K_\\theta (e)$ is independent of $\\theta $ .", "Putting things together, the statement of Lemma REF follows.", "Recall that $G_{J,S}$ is a 2-connected graph, with the same vertex set as $G^0_L$ , and edge set obtained, starting from $E_L$ , by removing the edges in $N_L\\cup P_J$ and by adding those in $E_{J,S}$ .", "We introduce a sequence of 2-connected graphs $G^{(n)},n=0,\\dots , z= |E_{J,S}|$ embedded on the torus, all with the same vertex set.", "Label the edges in $E_{J,S}$ as $e_1,\\dots ,e_z$ (in an arbitrary order).", "Then, $G^{(0)}$ is the graph $G_L^0$ with the edges in $N_L\\cup P_J$ removed and $G^{(n)},1\\le n\\le z$ is obtained from $G^{(0)}$ by adding edges $e_1,\\dots ,e_n$ .", "Note that $G^{(z)}=G_{J,S}$ .", "We will recursively define the basic orientation $D^{(n)}$ of $G^{(n)}$ , in such a way that for $n=z$ the properties stated in the Lemma hold for the Kasteleyn matrices $K_\\theta ^{(z)}=K^{J,S}_\\theta $ .", "The construction of the basic orientation is such that for $n>m$ , $D^{(n)}$ restricted to the edges of $G^{(m)}$ is just $D^{(m)}$ .", "That is, at each step $n>1$ we just need to define the orientation of $e_n$ .", "For $n=0$ , $G^{(0)}$ is a sub-graph of $G_L^0$ and we simply define $D^{(0)}$ to be the restriction of $D$ (the basic orientation of $G^0_L$ ) to the edges of the basis graph of $G^{(0)}$ .", "Since the orientation of these edges will not be modified in the iterative procedure, point (i) of the Lemma is automatically satisfied.", "We need to show that $D^{(0)}$ is indeed a basic orientation for $G^{(0)}$ , in the sense of Definition REF .", "In fact, an inner face $\\eta $ of the basis graph of $G^{(0)}$ is either an elementary square face (which belongs also to the basis graph of $G_L^0$ ), or it is a non-elementary face as in the middle drawing of Fig.", "REF .", "In the former case, the fact that the boundary of $\\eta $ is clockwise odd is trivial, since its orientation is the same as in the basic orientation of $G_L^0$ .", "In the latter case, the boundary of $\\eta $ is a cycle $\\Gamma $ of $\\mathbb {Z}^2$ that contains no vertices in its interior.", "The fact that $\\Gamma $ is clockwise odd for $D$ then is well-known [31].", "Assume 
now that the basic orientation $D^{(n)}$ of $G^{(n)}$ has been defined for $n\\ge 0$ and that the choice of orientation of each $e=(b,w)\\in E_{J,S}$ that is an edge of $G^{(n)}$ contained in the cell $B_x$ , has been done in a way that depends on $J,S$ only through $J_x,S_x$ .", "If $n=z$ , recalling how Kasteleyn matrices $K_\\theta $ are defined in terms of the orientations, claim (ii) of the Lemma is proven.", "Otherwise, we proceed to step $n+1$ , that is we define the orientation of $e_{n+1}$ as explained in Figure REF .", "This choice is unique and, again, depends on $J,S$ only through $J_x,S_x$ .", "The proof of the Lemma is then concluded.", "Figure: An inner face η\\eta of G (n) G^{(n)} and the edge e n+1 e_{n+1}.", "After adding e n+1 e_{n+1}, η\\eta split into two inner faces η 1 ,η 2 \\eta _1,\\eta _2 of G (n+1) G^{(n+1)}.", "By assumption, the boundary Γ\\Gamma of η\\eta isclockwise-odd for the orientation D (n) D^{(n)}.", "Therefore, exactly one of the two paths Γ 1 ,Γ 2 \\Gamma _1,\\Gamma _2 contains an odd number of anti-clockwise oriented edges and there is a unique orientation of e n+1 e_{n+1} such that the boundaries of both η 1 ,η 2 \\eta _1,\\eta _2 are clockwise odd.", "Since, by induction, the orientation of Γ\\Gamma depends on J,SJ,S only through J x ,S x J_x,S_x, with xx the label of the cell the face belongs to, the same is true also for the orientation of e n+1 e_{n+1}." ], [ "Generating function and Ward Identities", "In this subsection we consider again dimer weights that are periodic under translations by integer multiples of ${\\bf e_1,\\bf e_2}$ .", "In view of Proposition REF , the generating function $W_L(A)$ of dimer correlations, defined, for $A: E_L \\rightarrow \\mathbb {R}$ , by $ e^{W_L(A)}:=\\sum _{M\\in \\Omega _L} w(M) \\prod _{e\\in E_L} e^{A_e1_e(M)}, $ can be equivalently rewritten as $e^{W_L(A)}=\\frac{1}{2}\\sum _{\\theta \\in \\lbrace 1,-1\\rbrace ^2}c_\\theta e^{\\mathcal {W}^{(\\theta )}_L(A)}$ , where $e^{\\mathcal {W}^{(\\theta )}_L(A)}=\\int D\\psi e^{S_\\theta (\\psi )+V(\\psi ,A)}, $ where $S_\\theta (\\psi )=-\\psi ^+ K_\\theta \\psi ^-$ and $V(\\psi ,A):=-\\psi ^+K_\\theta ^A\\psi _--S_\\theta (\\psi )+V_{\\underline{t}(A)}(\\psi )$ .", "Here, $K_\\theta ^A$ (resp.", "$V_{\\underline{t}(A)}(\\psi )$ ) is the Kasteleyn matrix as in Section REF (resp.", "the potential as in (REF )) with edge weights $\\underline{t}(A)=\\lbrace t_ee^{A_e}\\rbrace _{e\\in E_L}$ .", "As in [27], it is convenient to introduce a generalization of the generating function, in the presence of an external Grassmann field coupled with $\\psi $ .", "Namely, letting $\\phi =\\lbrace \\phi ^\\pm _{\\bf x}\\rbrace _{{\\bf x} \\in {\\bf \\Lambda }}$ a new set of Grassmann variables, we define $\\begin{split}& e^{W_L(A,\\phi )}:=\\frac{1}{2}\\sum _{\\theta \\in \\lbrace 1,-1\\rbrace ^2}c_\\theta e^{\\mathcal {W}^{(\\theta )}_L(A,\\phi )},\\\\\\text{with}\\qquad &e^{\\mathcal {W}^{(\\theta )}_L(A,\\phi )}:=\\int D\\psi \\, e^{S_\\theta (\\psi )+V(\\psi ,A)+(\\psi ,\\phi )}\\end{split}$ and $(\\psi ,\\phi ):=\\sum _{{\\bf x} \\in {\\bf \\Lambda }}(\\psi ^+_{{\\bf x}}\\phi ^-_{\\bf x}+\\phi ^+_{\\bf x}\\psi ^-_{\\bf x})$ .", "The generating function is invariant under a local gauge symmetry, which is associated with the local conservation law of the number of incident dimers at each vertex of $\\Lambda $ : Proposition 2 (Chiral gauge symmetry) Given two functions $\\alpha ^+ : {\\bf \\Lambda } \\rightarrow \\mathbb {R}$ and $\\alpha ^- : {\\bf \\Lambda } \\rightarrow 
\\mathbb {R}$ , we have ${W}_{L}(A,\\phi )=-i\\sum _{{\\bf x} \\in {\\bf \\Lambda }} (\\alpha _{{\\bf x}}^++\\alpha _{\\bf x}^-)+{W}_{L}(A+i\\alpha ,\\phi e^{i\\alpha })$ where, if $e=(b,w) \\in E_L$ with ${\\bf x}$ and ${\\bf y}$ the coordinates of $b$ and $w$ , respectively, $(A+i\\alpha )_e:=A_e+i(\\alpha ^+_{\\bf x}+\\alpha ^-_{\\bf y})$ , while $(\\phi e^{i\\alpha })^\\pm _{\\bf x}:=\\phi ^\\pm _{\\bf x} e^{i\\alpha ^\\mp _{\\bf x}}$ .", "The proof simply consists in performing a change of variables in the Grassmann integral, see [25].", "The gauge symmetry (REF ), in turn, implies exact identities among correlation functions, known as Ward Identities.", "Given edges $e_1,\\ldots ,e_k$ and a collection of coordinates ${\\bf x_1},\\ldots ,{\\bf x_n},{\\bf y_1},\\ldots ,{\\bf y_n}$ , defineWe refer e.g.", "to [25] for the meaning of the derivative with respect to Grassmann variables the truncated multi-point correlation associated with the generating function $\\mathcal {W}_L(A,\\phi )$ : $\\begin{split}& g_L(e_1,\\ldots ,e_k;{\\bf x_1},\\ldots ,{\\bf x_n};{\\bf y_1},\\ldots ,{\\bf y_n})\\\\& \\quad :=\\partial _{A_{e_1}} \\cdots \\partial _{A_{e_k}}\\partial _{\\phi ^-_{\\bf y_1}} \\cdots \\partial _{\\phi ^-_{\\bf y_n}} \\partial _{\\phi ^+_{\\bf x_1}} \\cdots \\partial _{\\phi ^+_{\\bf x_n}}{W}_L(A,\\phi ) \\big |_{A \\equiv 0, \\phi \\equiv 0}.", "\\end{split}$ Three cases will play a central role in the following: the interacting propagator $G^{(2)}$ , the interacting vertex function $G^{(2,1)}$ and the interacting dimer-dimer correlation $G^{(0,2)}$ , which deserve a distinguished notation: letting ${\\bf x}=(x,\\ell ), {\\bf y}=(y,\\ell ^{\\prime }), {\\bf z}=(z,\\ell ^{\\prime \\prime })$ , and denoting by $e$ (resp.", "$e^{\\prime }$ ) the edge with black vertex ${\\bf x}=(x,\\ell )$ (resp.", "${\\bf y}=(y,\\ell ^{\\prime })$ ) and label $j\\in \\mathcal {J}_\\ell $ (resp.", "$j^{\\prime }\\in \\mathcal {J}_{\\ell ^{\\prime }}$ ), we define $\\begin{split}& G^{(2)}_{\\ell ,\\ell ^{\\prime };L}(x, y):=g_L(\\emptyset ;{\\bf x};{\\bf y}) \\\\& G^{(2,1)}_{j,\\ell ,\\ell ^{\\prime },\\ell ^{\\prime \\prime };L}({ x},{y},{ z}):= g_L(e;{\\bf y};{\\bf z})\\\\& G^{(0,2)}_{j,j^{\\prime },\\ell ,\\ell ^{\\prime };L}({x,y}):=g_L(e,e^{\\prime };\\emptyset ;\\emptyset ).\\end{split}$ As a byproduct of the analysis of Section , the $L \\rightarrow \\infty $ of all multi-point correlations $g_L(e_1,\\ldots ,e_k,{\\bf x_1},\\ldots ,{\\bf x_n},{\\bf y_1},\\ldots ,{\\bf y_n})$ exist; we denote the limit simply by dropping the index $L$ .", "Let us define the Fourier transforms of the interacting propagator and interacting vertex function via the following conventions: for $\\ell ,\\ell ^{\\prime },\\ell ^{\\prime \\prime }\\in \\mathcal {I}$ and $j\\in \\mathcal {J}_{\\ell }$ , we let $\\begin{split}& \\hat{G}^{(2)}_{\\ell , \\ell ^{\\prime }}(p):=\\sum _{x\\in \\mathbb {Z}^2} e^{ip x} G^{(2)}_{\\ell ,\\ell ^{\\prime }}(x, 0) \\\\& \\hat{G}^{(2,1)}_{j,\\ell ,\\ell ^{\\prime }, \\ell ^{\\prime \\prime }}(k,p):=\\sum _{x,y\\in \\mathbb {Z}^2} e^{-ip x-ik \\cdot y}G^{(2,1)}_{j,\\ell ,\\ell ^{\\prime },\\ell ^{\\prime \\prime }}(x,0,y).\\end{split} $ Proposition 3 (Ward identity) Given $\\ell ^{\\prime }, \\ell ^{\\prime \\prime } \\in \\mathcal {I}$ , we have $\\sum _{\\begin{array}{c}e \\in \\mathcal {E}\\end{array}}\\hat{G}^{(2,1)}_{j(e),\\ell (e),\\ell ^{\\prime },\\ell ^{\\prime \\prime }}(k,p)(e^{-ip \\cdot v(e)}-1)=\\hat{G} ^{(2)}_{\\ell ^{\\prime },\\ell ^{\\prime \\prime 
}}(k+p)-\\hat{G}^{(2)}_{\\ell ^{\\prime },\\ell ^{\\prime \\prime }}(k)$ where $\\mathcal {E}$ is the set of edges $e=(b(e),w(e))$ having an endpoint in the cell $B_{(0,0)}$ and the other in $B_{(0,-1)} \\cup B_{(-1,0)}$ .", "Also, $\\ell (e)\\in \\mathcal {I}$ is the type of $b(e)$ , $j(e)\\in \\mathcal {J}_{\\ell (e)}$ is the label associated with the edge $e$ , while $v(e)\\in \\lbrace (0,\\pm 1),(\\pm 1,0)\\rbrace $ is the difference of cell labels of $w(e)$ and $b(e)$ , see discussion after (REF ).", "We start by differentiating both sides of the gauge invariance equation (REF ): fix $ {\\bf x}=(x,\\ell ) \\in {\\bf \\Lambda } $ , differentiate first with respect to $\\alpha ^+_{\\bf x}$ and set $\\alpha \\equiv 0$ : $1=\\sum _{\\begin{array}{c}e =(b,w) \\in E_L \\\\ {\\bf x}(b)={\\bf x}\\end{array}} \\partial _{A_e} {W}_L(A,\\phi )+\\phi ^-_{\\bf x}\\partial _{\\phi ^-_{\\bf x}}{W}_L(A,\\phi )$ where ${\\bf x}(b)=(x(b),\\ell (b))$ is the coordinate of the black endpoint $b$ of the edge $e$ .", "The above sum thus contains as many terms as the number of edges incident to the black site of coordinate ${\\bf x}$ , i.e.", "as the number of elements in $\\mathcal {J}_{\\ell (b)}$ .", "Then, differentiate with respect to $\\phi ^-_{\\bf z}$ and $\\phi ^+_{\\bf y}$ and set $A \\equiv \\phi \\equiv 0$ : $\\sum _{\\begin{array}{c}e=(b,w) \\in E_L \\\\ {\\bf x}(b)={\\bf x}\\end{array}} g_L(e;{\\bf y};{\\bf z})+\\delta _{{\\bf x, z}}g_L(\\emptyset ;{\\bf y};{\\bf z})=0.$ Repeating the same procedure but differentiating first with respect to $\\alpha ^-_{\\bf x}$ rather than $\\alpha ^+_{\\bf x}$ , we obtain $\\sum _{\\begin{array}{c}e=(b,w) \\in E_L \\\\ {\\bf x}(w)={\\bf x}\\end{array}} g_{L}(e;{\\bf y};{\\bf z})+\\delta _{{\\bf x, y}} g_L(\\emptyset ;{\\bf y};{\\bf z})=0 $ where ${\\bf x}(w)$ is the coordinate of the white vertex of $e$ .", "Now we sum both (REF ) and (REF ) over $\\ell \\in \\mathcal {I}$ (the type of the vertex ${\\bf x}$ ) with the cell index $x$ fixed; then we take the difference of the two expressions thus obtained and we send $L \\rightarrow \\infty $ .", "When taking the difference, the contribution from edges whose endpoints both belong to cell $B_x$ cancel and we are left with $\\sum _{e=(x^{\\prime },j,\\ell )\\in E_{\\partial B_x}} (-1)^{\\delta _{x,x^{\\prime }}} G^{(2,1)}_{j,\\ell ,\\ell ^{\\prime },\\ell ^{\\prime }}(x^{\\prime },y,z)=(\\delta _{x,z}-\\delta _{x,y})G^{(2)}_{\\ell ^{\\prime },\\ell ^{\\prime \\prime }}(y,z),$ where we used the notation in (REF ), and we denoted by $E_{\\partial B_x}$ the set of edges of $E^0$ having exactly one endpoint in the cell $B_x$ .", "Note that in the first sum, in writing $e=(x^{\\prime },j,\\ell )$ , we used the usual labeling of the edge $e$ in terms of the coordinates $(x^{\\prime },\\ell )$ of its black site and of the label $j\\in \\mathcal {J}_\\ell $ .", "Note also that, if $e=(x^{\\prime },j,\\ell )\\in E_{\\partial B_x}$ , then $x^{\\prime }$ is either $x$ or $x\\pm (0,1),x\\pm (1,0)$ .", "See Figure REF .", "Figure: The cell B x B_x (only vertices on its boundary are drawn) together with the edges in E ∂B x =ℰ 1,x ∪ℰ 2,x ∪ℰ 1,x ' ∪ℰ 2,x ' E_{\\partial B_x}=\\mathcal {E}_{1,x}\\cup \\mathcal {E}_{2,x}\\cup \\mathcal {E}^{\\prime }_{1,x}\\cup \\mathcal {E}^{\\prime }_{2,x}.", "To each edge ee in ℰ 1,x \\mathcal {E}_{1,x} (resp.", "in ℰ 2,x \\mathcal {E}_{2,x}) there corresponds a unique edge e ' e^{\\prime } in ℰ 1,x ' \\mathcal {E}_{1,x}^{\\prime } (resp.", "ℰ 2,x ' \\mathcal {E}_{2,x}^{\\prime }) whose 
endpoints are of the same type.Using the last remark in the caption of Fig.REF , we can rewrite the sum in the left hand side of (REF ) as a sum over edges in $\\mathcal {E}_{1,x}\\cup \\mathcal {E}_{2,x}$ , each term containing the difference of two vertex functions $G^{(2,1)}_{j,\\ell ,\\ell ^{\\prime },\\ell ^{\\prime \\prime }}$ .", "Passing to Fourier space via (REF ), we obtain (REF ), as desired.", "Remark 4 For later reference, note that, if $e$ crosses the path $C_1$ (resp.", "$C_2$ ) of Figure REF , i.e., if $e\\in \\mathcal {E}_{1,x}$ (resp.", "$e\\in \\mathcal {E}_{2,x}$ ), then, for any $p=(p_1,p_2)\\in \\mathbb {R}^2$ and $v(e)$ defined as in the statement of Proposition REF , $p \\cdot v(e)={\\left\\lbrace \\begin{array}{ll}-p_2 \\sigma _e& \\text{ if $e$ crosses $C_1$}\\\\+p_1 \\sigma _e& \\text{ if $e$ crosses $C_2$},\\end{array}\\right.", "}$ with $\\sigma _e=\\pm 1$ the same sign appearing in the definition (REF ) of height function." ], [ "Proof of Theorem ", "One important conclusion of the previous section is Proposition REF , which states the validity of exact identities among the (thermodynamic limit of) correlation functions of the dimer model.", "In this section we combine these exact identities with a result on the large-distance asymptotics of the correlation functions, which includes the statement of Theorem REF , and use them to prove Theorem REF .", "The required fine asymptotics of the correlation functions is summarized in the following proposition, whose proof is discussed in Section .", "Proposition 4 There exists $\\lambda _0>0$ such that, for $|\\lambda |\\le \\lambda _0$ , the interacting dimer-dimer correlation for $x\\ne y$ can be represented in the following form: $&&G^{(0,2)}_{j,j^{\\prime },\\ell ,\\ell ^{\\prime }}(x,y) = \\frac{1}{4\\pi ^2 Z^2(1-\\tau ^2)}\\sum _{\\omega =\\pm } \\frac{K^{(1)}_{\\omega ,j,\\ell } K^{(1)}_{\\omega ,j^{\\prime },\\ell ^{\\prime }}}{(\\phi _\\omega (x-y))^2}\\\\&&\\qquad + \\frac{B}{4\\pi ^2} \\sum _{\\omega =\\pm } \\frac{K^{(2)}_{\\omega ,j,\\ell } K^{(2)}_{-\\omega ,j^{\\prime },\\ell ^{\\prime }}}{| \\phi _\\omega (x-y)|^{2(1-\\tau )/(1+\\tau )}} e^{2i\\, p^\\omega \\cdot (x-y)}+R_{j,j^{\\prime },\\ell ,\\ell ^{\\prime }}(x,y)\\nonumber \\;,$ where: $\\lambda \\mapsto Z$ , $\\lambda \\mapsto \\tau $ and $\\lambda \\mapsto B$ are real-valued analytic functions satisfying $Z=1+O(\\lambda )$ , $\\tau =O(\\lambda )$ and $B=1+O(\\lambda )$ ; $\\phi _\\omega (x):=\\omega (\\beta _\\omega x_1-\\alpha _\\omega x_2)$ where $\\lambda \\mapsto \\alpha _\\omega ,\\lambda \\mapsto \\beta _\\omega $ are complex-valued analytic functions satisfying $\\overline{\\alpha _+}=- \\alpha _{-}, \\overline{\\beta _+}=- \\beta _{-}$ ; $\\lambda \\mapsto K^{(i)}_{\\omega ,j,\\ell }$ with $i\\in \\lbrace 1,2\\rbrace $ are complex-valued analytic functions of $\\lambda $ satisfying $ K^{(i)}_{+,j,\\ell }=\\overline{K^{(i)}_{-,j,\\ell }}$ ; $\\lambda \\mapsto p^\\omega $ are analytic functions with values in $[-\\pi ,\\pi ]^2$ for $\\lambda $ real, satisfying $p^+=-p^-$ and $2p^+\\ne 0$ mod $(2\\pi ,2\\pi )$ ; the correction term $R_{j,j^{\\prime },\\ell ,\\ell ^{\\prime }}(x,y)$ is translational invariant and satisfies $|R_{j,j^{\\prime },\\ell ,\\ell ^{\\prime }}(x,0)|\\le C |x|^{-1+C|\\lambda |}$ for some $C>0$ .", "Moreover, there exists an additional set of complex-valued analytic function $\\lambda \\mapsto I_{\\omega ,\\ell ,\\ell ^{\\prime }},\\omega =\\pm 1,\\ell ,\\ell ^{\\prime }\\in \\mathcal {I},$ such that the Fourier 
transforms of the interacting propagator and of the interacting vertex function satisfy: $ \\hat{G}^{(2)}_{\\ell ,\\ell ^{\\prime }}(k+ p^\\omega ) \\stackrel{ k \\rightarrow 0}{=} I_{\\omega ,\\ell ,\\ell ^{\\prime }} \\hat{G}^{(2)}_{R,\\omega }(k)[1+O(|k|^{1/2})],$ and, if $0<\\mathfrak {c}\\le |p|,|k|,|k+p|\\le 2\\mathfrak {c}$ , $\\hat{G}^{(2,1)}_{j,\\ell ,\\ell ^{\\prime },\\ell ^{\\prime \\prime }}(k+ p^\\omega , p)\\stackrel{\\mathfrak {c}\\rightarrow 0}{=} -\\sum _{\\omega ^{\\prime }=\\pm } K^{(1)}_{\\omega ^{\\prime },j,\\ell } I_{\\omega ,\\ell ^{\\prime },\\ell ^{\\prime \\prime }} \\hat{G}^{(2,1)}_{R,\\omega ^{\\prime },\\omega }(k,p)[1+O(\\mathfrak {c}^{1/2})]\\;,$ where $K^{(1)}_{\\omega ,j,\\ell }$ is the same as in (REF ) and $\\hat{G}^{(2)}_{R,\\omega }(k)$ ,$\\hat{G}^{(2,1)}_{R,\\omega ,\\omega ^{\\prime }}(k,p)$ are two functions satisfying, for $D_\\omega (p)=\\alpha _\\omega p_1+\\beta _\\omega p_2$ , $\\sum _{\\omega ^{\\prime }=\\pm } D_{\\omega ^{\\prime }}(p)\\hat{G}^{(2,1)}_{R,\\omega ^{\\prime },\\omega }(k,p)=\\frac{1}{Z(1-\\tau )} \\Big [\\hat{G}^{(2)}_{R,\\omega }(k) - \\hat{G}^{(2)}_{R,\\omega }(k+p)\\Big ]\\Big (1+O(\\lambda |p|)\\Big )\\;,$ with $Z, \\tau $ the same as in (REF ), and $\\hat{G}^{(2,1)}_{R,-\\omega ,\\omega }(k,p)=\\tau \\frac{D_{\\omega }(p)}{D_{-\\omega }(p)}\\hat{G}^{(2,1)}_{R,\\omega ,\\omega }(k,p)\\Big (1+O(|p|)\\Big ).$ Finally, $\\hat{G}^{(2)}_{R,\\omega }(k) \\sim c_1|k|^{-1+O(\\lambda ^2)}$ as $k\\rightarrow 0$ , and, if $0<\\mathfrak {c}\\le |p|,|k|,|k+p|\\le 2\\mathfrak {c}$ , $\\hat{G}^{(2,1)}_{R,\\omega ,\\omega ^{\\prime }}(k,p) \\sim c_2\\mathfrak {c}^{-2+O(\\lambda ^2)}$ as $\\mathfrak {c} \\rightarrow 0$ , for two suitable non-zero constants $c_1,c_2$ .", "A few comments are in order.", "First of all, the statement of Theorem REF , (REF ), follows from (REF ), which is just a way to rewrite it: it is enough to identify $K_{\\omega ,j,\\ell }$ with $(2\\pi Z\\sqrt{1-\\tau ^2})^{-1}K^{(1)}_{\\omega ,j,\\ell }$ , $H_{\\omega ,j,\\ell }$ with $(\\sqrt{B}/2\\pi )K^{(2)}_{\\omega ,j,\\ell }$ , and $\\nu $ with $(1-\\tau )/(1+\\tau )$ .", "Moreover, we emphasize that Proposition REF is the analogue of [27] and its proof, discussed in the next section, is a generalization of the corresponding one.", "The main ideas behind the proof remain the same: in order to evaluate the correlation functions of the non-planar dimer model we start from the Grassmann representation of the generating function, (REF ), and we compute it via an iterative integration procedure, in which we first integrate out the degrees of freedom associated with a length scale 1, i.e., the scale of the lattice, then those on length scales $2, 4, \\ldots , 2^{-h}, \\ldots $ , with $h<0$ .", "The output of the integration of the first $|h|$ steps of this iterative procedure can be written as a Grassmann integral similar to the original one, with the important difference that the `bare potential' $V(\\psi ,A)+(\\psi ,\\phi )$ is replaced by an effective one, $V^{(h)}(\\psi ,A,\\phi )$ , that, after appropriate rescaling, converges to a non-trivial infrared fixed point as $h\\rightarrow -\\infty $ .", "The large-distance asymptotics of the correlation functions of the dimer model can thus be computed in terms of those of such an infrared fixed-point theory, or of those of any other model with the same fixed point (i.e., of any other model in the same universality class, the Luttinger universality class).", "The reference model we choose for this asymptotic comparison is 
described in [27], which we refer the reader to for additional details.", "It is very similar to the Luttinger model, and differs from it only in the choice of the quartic interaction: it describes a system of Euclidean chiral fermions in $\\mathbb {R}^2$ (modeled by Grassmann fields denoted $\\psi ^\\pm _{x,\\omega }$ , with $x\\in \\mathbb {R}^2$ the space label and $\\omega \\in \\lbrace +,-\\rbrace $ the chirality label), with relativistic propagator and a non-local (in both space dimensions, contrary to the case of the Luttinger model) density-density interaction.", "(By `density' of fermions with chirality $\\omega $ we mean the quadratic monomial $\\psi ^+_{x,\\omega }\\psi ^-_{x,\\omega }$ ; the reference model we consider has an interaction coupling the density of fermions with chirality $+$ with that of fermions with opposite chirality, see [27].", "For later reference, we also introduce the notion of fermionic `mass' of chirality $\\omega $ , associated with the off-diagonal (in the chirality index) quadratic monomial $\\psi ^+_{x,\\omega }\\psi ^-_{x,-\\omega }$ .)", "The bare parameters of the reference model, in particular the strength of its density-density interaction, are chosen in such a way that its infrared fixed point coincides with the one of our dimer model of interest.", "The remarkable feature of the reference model is that, contrary to our dimer model, it is exactly solvable in a very strong sense: its correlation functions can all be computed in closed form.", "For our purposes, the relevant correlations are those denoted as follows (the label $R$ stands for `reference' or `relativistic').", "$G^{(2,1)}_{R,\\omega ,\\omega ^{\\prime }}$ (the vertex function of the reference model, corresponding to the correlation of the density of chirality $\\omega $ with a pair of Grassmann fields of chirality $\\omega ^{\\prime }$ ), $G^{(2)}_{R,\\omega }$ (the interacting propagator, corresponding to the correlation between two Grassmann fields of chirality $\\omega $ ), $S^{(1,1)}_{R,\\omega ,\\omega }$ (the density-density correlation between two densities with the same chirality $\\omega $ ) and $S^{(2,2)}_{R,\\omega ,-\\omega }$ (the mass-mass correlation between two masses of opposite chiralities, see the definition of `mass' above): these are the correlations in terms of which the asymptotics of the vertex function, interacting propagator and dimer-dimer correlation of our dimer model can be expressed.", "Remark 5 The connection between the interacting propagator of the dimer model and that of the reference model can be read from (REF ); similarly, the one between the vertex functions of the two models can be read from (REF ).", "Moreover, in view of the asymptotics of $S^{(1,1)}_{R,\\omega ,\\omega }$ and of $S^{(2,2)}_{R,\\omega ,-\\omega }$ , see [27], (REF ) can be rewritten as $\\sum _{\\omega =\\pm } \\big [K^{(1)}_{\\omega ,j,\\ell }K^{(1)}_{\\omega ,j^{\\prime },\\ell ^{\\prime }}S^{(1,1)}_{R,\\omega ,\\omega }(x,y)+K^{(2)}_{\\omega ,j,\\ell }K^{(2)}_{-\\omega ,j^{\\prime },\\ell ^{\\prime }}S^{(2,2)}_{R,\\omega ,-\\omega }(x,y)e^{2ip^\\omega \\cdot (x-y)} \\big ]$ plus a faster decaying remainder, which explains the connection between the dimer-dimer correlation and the density-density and mass-mass correlations of the reference model.", "The fact that the infrared behavior of the dimer model discussed in this paper can be described via the same reference model used for the dimer model in [27] is highly non-trivial and a priori non-obvious.", "In fact, the Grassmann representation of our
non-planar dimer model involves Grassmann fields labelled by $x\\in \\Lambda $ and $\\ell \\in \\mathcal {I}=\\lbrace 1,\\ldots ,m^2/2\\rbrace $ : therefore, one could expect that the infrared behavior of the system is described in terms of a reference model involving fields labelled by an index $\\ell \\in \\mathcal {I}$ .", "This, a priori, could completely change at a qualitative level the nature of the infrared behavior of the system, which crucially depends on the number of mutually interacting massless fermionic fields.", "For instance, it is well known that 2D chiral fermions with an additional spin degree of freedom (which is the case of interest for describing the infrared behavior of the 1D Hubbard model) behave differently depending on the sign of the density-density interaction: for repulsive interactions the model behaves qualitatively in the same way as the Luttinger model [6], while for attractive interactions the model dynamically generates a mass and enters a `Mott-insulator' phase [37].", "In our setting, remarkably, despite the fact that the number of Grassmann fields used to effectively describe the model is large for a large elementary cell (and, in particular, is always larger than 1), the number of massless fields is the same as in the case of [27]: in fact, we will show that, out of the $m^2/2$ fields $\\psi ^\\pm _{(x,\\ell )}$ with $\\ell \\in \\lbrace 1,\\ldots ,m^2/2\\rbrace $ , all but one of them are massive, i.e., their correlations decay exponentially to zero at large distances, with rate proportional to the inverse lattice spacing.", "Therefore, for the purpose of computing the generating function, we can integrate them out in one single step of the iterative integration procedure, after which we are left with an effective theory of a single massless Grassmann field with chirality index $\\omega $ associated with the two zeros of $\\mu $ , see (REF ), completely analogous to the one studied in [27].", "See the next section for details.", "While the proof of the fine asymptotic result summarized in Proposition REF is hard, and based on the sophisticated procedure just described, the proof of Theorem REF given Proposition REF is relatively easy, and close to the analogous proof discussed in [27].", "We provide it here.", "Let us start with one definition.", "Given the face $\\eta _0\\in \\bar{F}$ ($\\bar{F}$ and $\\eta _x,x=(x_1,x_2)$ were defined in Section REF , just before Theorem REF ), let $\\mathcal {E}_{1,0}$ (resp.", "$\\mathcal {E}_{2,0}$ ) be the set of vertical (resp.", "horizontal) edges crossed by the horizontal (resp.", "vertical) path $C_{\\eta _0\\rightarrow \\eta ^{\\prime }}$ connecting $\\eta _0$ to the face $\\eta ^{\\prime }\\in \\bar{F}$ given by $\\eta ^{\\prime }=\\eta _{(1,0)}$ (resp.", "$\\eta ^{\\prime }=\\eta _{(0,1)}$ ).", "See Fig.REF , where the same paths and edge sets around the cell $B_x$ rather than $B_0$ are shown.", "For $e\\in \\mathcal {E}_{q,0}$ , $q=1,2$ , we let $(x(e),\\ell (e))$ denote the coordinates of its black vertex and $j(e)\\in \\mathcal {J}_{\\ell (e)}$ the type of the edge.", "We also recall from Section REF that $\\sigma _e=\\pm 1$ is defined in (REF ).", "Proposition 5 For $q=1,2$ and $\\omega =\\pm $ , one has $\\sum _{e \\in \\mathcal {E}_{q,0}}\\sigma _e \\frac{K^{(1)}_{\\omega ,j(e),\\ell (e)}}{Z\\sqrt{1-\\tau ^2}}=-i\\omega \\sqrt{\\nu }\\, {\\mathrm {d}}_q {\\phi }_\\omega $ where $\\nu =(1-\\tau )/(1+\\tau )$ , and $\\begin{split}& {\\mathrm {d}}_1 \\phi _\\omega := \\phi _\\omega (x + (1,0))-\\phi
_\\omega (x)=\\omega \\beta _\\omega ,\\\\& {\\mathrm {d}}_2 \\phi _\\omega := \\phi _\\omega (x + (0,1)))-\\phi _\\omega (x)=-\\omega \\alpha _\\omega .", "\\end{split}$ Start with the Ward Identity in Fourier space (REF ) evaluated for $k$ replaced by $k+ p^\\omega $ and substitute (REF ) and (REF ) in it for $\\mathfrak {c} \\rightarrow 0$ .", "Recalling that $0<\\mathfrak {c}<|k|,|p|,|k+p|<2\\mathfrak {c}$ we obtain for $\\mathfrak {c}$ small $\\sum _{\\omega ^{\\prime }=\\pm } \\mathcal {D}_{\\omega ^{\\prime }}(p)G^{(2,1)}_{R,\\omega ^{\\prime },\\omega }(k,p)=(G^{(2)}_{R,\\omega }(k)-G^{(2)}_{R,\\omega }(k+p))(1+O(\\mathfrak {c}^{1/2}))$ where $\\mathcal {D}_{\\omega }(p):=-i\\sum _{e \\in \\mathcal {E}}K^{(1)}_{\\omega ,j(e),\\ell (e)}p \\cdot v(e)$ , with $\\mathcal {E}=\\mathcal {E}_{1,0}\\cup \\mathcal {E}_{2,0}$ the set of edges defined in Proposition REF .", "Now comparing the above relation with the identity (REF ), by using (REF ) and by identifying terms at dominant order for $|p|$ small we obtain (recall the definition of $D_\\omega (p)$ right before (REF )): $\\mathcal {D}_\\omega (p) D_{-\\omega }(p)+\\tau \\mathcal {D}_{-\\omega }(p) D_{\\omega }(p)=Z(1-\\tau ^2)D_{\\omega }(p) D_{-\\omega }(p).$ Letting $p=(p_1,p_2), v(e)=(v_1(e),v_2(e))$ , imposing $p_2=0,p_1 \\ne 0$ first and $p_1=0,p_2 \\ne 0$ then, we find a linear system for the coefficients $-i\\sum _{e \\in \\mathcal {E}} K_{\\omega ,\\ell (e),j(e)} v_q(e)$ , for $q=1,2$ and $\\omega =\\pm $ whose solution is $\\begin{split}& \\sum _{e \\in \\mathcal {E}} K^{(1)}_{\\omega ,j(e),\\ell (e)} v_1(e)=iZ(1-\\tau ) \\alpha _\\omega ,\\\\& \\sum _{e \\in \\mathcal {E}} K^{(1)}_{\\omega ,j(e),\\ell (e)} v_2(e)=iZ(1-\\tau )\\beta _\\omega .\\end{split}$ Note that, by the very definition of $\\mathcal {E}=\\mathcal {E}_{1,0}\\cup \\mathcal {E}_{2,0}$ , if $e\\in \\mathcal {E}$ , then $v_1(e)\\ne 0$ iff $e\\in \\mathcal {E}_{2,0}$ , while $v_2(e)\\ne 0 $ iff $e\\in \\mathcal {E}_{1,0}$ .", "Recall also the relation between $v(e)$ and $\\sigma _e$ outlined in Remark REF : in view of this, (REF ) is equivalent to $\\sum _{e \\in \\mathcal {E}_{1,0}} \\frac{K^{(1)}_{\\omega ,j(e),\\ell (e)}}{Z\\sqrt{1-\\tau ^2}} \\sigma _e=-i\\sqrt{\\frac{1-\\tau }{1+\\tau }}\\,\\beta _\\omega = -i\\omega \\sqrt{\\nu }\\, {\\mathrm {d}}_1 \\phi _\\omega \\\\\\sum _{e \\in \\mathcal {E}_{2,0}} \\frac{K_{\\omega ,j(e),\\ell (e)}}{Z\\sqrt{1-\\tau ^2}} \\sigma _e=i\\sqrt{\\frac{1-\\tau }{1+\\tau }}\\, \\alpha _\\omega = -i\\omega \\sqrt{\\nu }\\,{\\mathrm {d}}_2 \\phi _\\omega ,$ where we used $\\nu =(1-\\tau )/(1+\\tau )$ and the definition (REF ).", "Given Theorem REF , the proof of Theorem REF is essentially identical to that of [27] and of [24].", "Here we give only a sketch and we emphasize only the role played by the relation (REF ) that we have just proven.", "First of all, we choose a path $C_{\\eta _{x^{(1)}}\\rightarrow \\eta _{x^{(2)}}}$ from face $\\eta _{x^{(1)}}$ to $\\eta _{x^{(2)}}$ that crosses only edges that join different cells.", "Since $\\eta _{x^{(1)}},\\eta _{x^{(2)}}\\in \\bar{F}$ , the path $C_{\\eta _{x^{(1)}}\\rightarrow \\eta _{x^{(2)}}}$ visits a sequence of faces $\\eta _{y^{(1)}},\\dots ,\\eta _{y^{(k)}}\\in \\bar{F}$ , with $y^{(1)}={x^{(1)}}, y^{(k)}={x^{(2)}}$ and $|y^{(a)}-y^{(a+1)}|=1$ .", "The set of edges crossed by the path between $\\eta _{y^{(a)}}$ and $\\eta _{y^{(a+1)}}$ , denoted $\\mathcal {E}_{(a)}$ , is a translation of either $\\mathcal {E}_{1,0}$ (if $y^{(a+1)}-y^{(a)}$ is horizontal) or $\\mathcal 
{E}_{2,0}$ (if $y^{(a+1)}-y^{(a)}$ is vertical).", "Similarly, one defines a path $C_{\\eta _{x^{(3)}}\\rightarrow \\eta _{x^{(4)}}}$ and correspondingly a sequence of faces $\\eta _{z^{(1)}},\\dots ,\\eta _{z^{(k^{\\prime })}}\\in \\bar{F}$ and $\\mathcal {E}^{\\prime }_{(a)}$ the set of edges crossed by the path between $\\eta _{z^{(a)}}$ and $\\eta _{z^{(a+1)}}$ .", "The two paths can be chosen so that $C_{\\eta _{x^{(1)}}\\rightarrow \\eta _{x^{(2)}}}$ is of length $O(|x^{(1)}-x^{(2)}|)$ and $C_{\\eta _{x^{(3)}}\\rightarrow \\eta _{x^{(4)}}}$ is of length $O(|x^{(3)}-x^{(4)}|)$ , while they are at mutual distance at least of order $\\min (|x^{(i)}-x^{(j)}|,i\\ne j)$ .", "See [24] for more details.", "From the definition (REF ) of height function, we see that $\\mathbb {E}_\\lambda \\left[(h(\\eta _{x^{(1)}})-h(\\eta _{x^{(2)}}));(h(\\eta _{x^{(3)}})-h(\\eta _{x^{(4)}}))\\right]=\\sum _{\\begin{array}{c}1\\le a< k,\\\\ 1\\le a^{\\prime }< k^{\\prime }\\end{array}}\\sum _{\\begin{array}{c}e\\in \\mathcal {E}_{(a)}, \\\\ e^{\\prime }\\in \\mathcal {E}^{\\prime }_{(a^{\\prime })}\\end{array}} \\sigma _e \\sigma _{e^{\\prime }} \\mathbb {E}_\\lambda [{\\mathbb {1}}_e;{\\mathbb {1}}_{e^{\\prime }}].$ As a consequence of Proposition REF , for edges $e,e^{\\prime }$ with black sites of coordinates $(x,\\ell ),(x^{\\prime },\\ell ^{\\prime })$ and with orientations $j,j^{\\prime }$ , respectively, we have that $\\begin{split}\\mathbb {E}_\\lambda [{\\mathbb {1}}_e;{\\mathbb {1}}_{e^{\\prime }}]&= \\sum _{\\omega =\\pm } \\frac{ K^{(1)}_{\\omega ,j,\\ell }}{Z\\sqrt{1-\\tau ^2}}\\frac{ K^{(1)}_{\\omega ,j^{\\prime },\\ell ^{\\prime }}}{Z\\sqrt{1-\\tau ^2}}\\frac{1}{4\\pi ^2 ( \\phi _\\omega (x-x^{\\prime }))^2}\\\\&+ \\frac{B}{4\\pi ^2}\\sum _{\\omega =\\pm } \\frac{K^{(2)}_{\\omega ,j,\\ell }K^{(2)}_{-\\omega ,j^{\\prime },\\ell ^{\\prime }}}{ | \\phi _\\omega (x-x^{\\prime })|^{2(1-\\tau )/(1+\\tau )}}e^{2ip^\\omega (x-x^{\\prime })}+R_{j,j^{\\prime },\\ell ,\\ell ^{\\prime }}(x,x^{\\prime }).\\end{split}$ At this point we plug this expression into (REF ).", "The oscillating term in (REF ), proportional to $B$ , and the error term $R_{j,\\ell ,j^{\\prime },\\ell }(x,x^{\\prime })$ , once summed over $e,e^{\\prime }$ , altogether end up in the error term in (REF ) (see the analogous argument in [24]).", "As for the main term involving $K^{(1)}_{\\omega ,j,\\ell }$ , we observe that if we fix $a,a^{\\prime }$ , then for $e \\in \\mathcal {E}_{(a)},e^{\\prime }\\in \\mathcal {E}^{\\prime }_{(a^{\\prime })}$ we can replace in (REF ) $\\phi _\\omega (x-x^{\\prime })$ by $\\phi _\\omega (y^{(a)}-z^{(a^{\\prime })})$ , up to an error term of the same order as $R_{j,j^{\\prime },\\ell ,\\ell ^{\\prime }}(x,x^{\\prime })$ , which again contributes to the error term in (REF ).", "We are thus left with $\\begin{split}& \\sum _{\\omega =\\pm }\\sum _{\\begin{array}{c}1\\le a< k,\\\\ 1\\le a^{\\prime }< k^{\\prime }\\end{array}}\\frac{1}{4\\pi ^2 ( \\phi _\\omega (y^{(a)}-z^{(a^{\\prime })}))^2} \\sum _{e\\in \\mathcal {E}_{(a)}} \\sigma _e \\frac{ K_{\\omega ,j(e),\\ell (e)}}{Z\\sqrt{1-\\tau ^2}}\\sum _{e^{\\prime }\\in \\mathcal {E}^{\\prime }_{(a^{\\prime })}}\\sigma _{e^{\\prime }} \\frac{ K_{\\omega ,j(e^{\\prime }),\\ell (e^{\\prime })}}{Z\\sqrt{1-\\tau ^2}}\\\\& =- \\nu \\sum _{\\omega =\\pm }\\sum _{\\begin{array}{c}1\\le a< k,\\\\ 1\\le a^{\\prime }< k^{\\prime }\\end{array}}\\frac{(y^{(a+1)}-y^{(a)})\\cdot {\\rm d}\\phi _\\omega \\, (z^{(a^{\\prime }+1)}-z^{(a^{\\prime })})\\cdot {\\rm d}\\phi 
_\\omega }{4\\pi ^2 ( \\phi _\\omega (y^{(a)}-z^{(a^{\\prime })}))^2}\\\\& =- \\frac{\\nu }{2\\pi ^2}\\Re \\Big [\\sum _{\\begin{array}{c}1\\le a< k,\\\\ 1\\le a^{\\prime }< k^{\\prime }\\end{array}}\\frac{(y^{(a+1)}-y^{(a)})\\cdot {\\rm d}\\phi _+\\, (z^{(a^{\\prime }+1)}-z^{(a^{\\prime })})\\cdot {\\rm d}\\phi _+}{ ( \\phi _+(y^{(a)}-z^{(a^{\\prime })}))^2}\\Big ]\\end{split}$ where in the first step we used Proposition REF and defined ${\\rm d}\\phi _\\omega :=({\\rm d}_1\\phi _\\omega , {\\rm d}_2\\phi _\\omega )$ .", "As explained in [27] and [24] (see also [34] in the non-interacting case), this sum equals the integral in the complex plane $- \\frac{\\nu }{2\\pi ^2}\\Re \\int _{\\phi _+(x^{(1)})}^{\\phi _+(x^{(2)})}dz\\int _{\\phi _+(x^{(3)})}^{\\phi _+(x^{(4)})}dz^{\\prime }\\frac{1}{(z-z^{\\prime })^2}$ (which equals the main term in the r.h.s.", "of (REF )), plus an error term (coming from the Riemann approximation) estimated as in the r.h.s.", "of (REF )." ], [ "Proof of Proposition ", "In this section we give the proof of Proposition REF (which immediately implies Theorem REF , as already commented above), via the strategy sketched after its statement.", "As explained there, the novelty compared to the proof in [27] is the reduction to an effective model involving a single Grassmann critical field $\\varphi $ , of the same form as the one analyzed in [27].", "Therefore, most of this section will be devoted to the proof of such reduction, which consists of the following steps.", "Our starting point is the generating function of correlations in its Grassmann form, see (REF ).", "In (REF ), we first integrate out the `ultraviolet' degrees of freedom at the lattice scale, see Section REF below; the resulting effective theory can be conveniently formulated in terms of a collection of chiral fields $\\lbrace \\psi ^\\pm _{x,\\omega }\\rbrace _{x\\in \\Lambda }^{\\omega \\in \\lbrace +,-\\rbrace }$ , where $\\psi ^\\pm _{x,\\omega }$ are Grassmann vectors with $|\\mathcal {I}|$ components, which represent fluctuation fields supported in momentum space close to the unperturbed Fermi points $p_0^\\omega $ .", "Next, we perform a `rigid rotation' of these Grassmann vectors via a matrix $B$ that is independent of $x$ but may depend on the chirality index $\\omega $ ; the rotation is chosen so to block-diagonalize the reference quadratic part of the effective action, in such a way that the corresponding covariance is the direct sum of two terms, a one-dimensional one, which is singular at $p^\\omega _0$ , and a non-singular one, of dimension $|\\mathcal {I}|-1$ ; the components associated with this non-singular $(|\\mathcal {I}|-1)\\times (|\\mathcal {I}|-1)$ block are referred to as the `massive components', which can be easily integrated out in one step, see Section REF below (this is the main novel contribution of this section, compared with the multiscale analysis in [27]).", "In Section REF below we reduce essentially to the setting of [27], that is, to an effective theory that involves one single-component “quasi-particle” chiral massless field, which can be analyzed along the same lines as [27].", "Finally, in Section REF we conclude the proof of Proposition REF ." 
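Before entering the details, it may help to visualize the linear-algebra mechanism behind the second and third steps (the rigid rotation $B_\\omega $ and the integration of the massive block via a Schur complement) on a toy example. The following minimal numerical sketch is purely illustrative and is not part of the proof: the matrix `M(k)' below is a random toy substitute for the actual $M(k)$ of the dimer model (with a one-dimensional parameter $k$ playing the role of the distance from the singularity), the Grassmann nature of the fields plays no role here, and all names (`M0', `M1', `schur_T', ...) are hypothetical.

\begin{verbatim}
# Purely illustrative sketch: rotate a matrix with a simple zero eigenvalue so
# that the singular (massless) direction occupies the (1,1) entry, then isolate
# it via the Schur complement of the remaining (massive) block.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n = 4                                           # toy analogue of |I|

# Toy M(k) = M0 + k*M1 with det M(0) = 0 and (generically) a simple zero.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
v = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
M0 = A - (A @ v) @ np.linalg.pinv(v)            # forces M0 @ v = 0
M1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
M = lambda k: M0 + k * M1

# "Rigid rotation" B: first row = left null vector l of M0 (l^T M0 = 0),
# normalized so that l^T v = 1; the remaining rows annihilate v.
# Then B M0 B^{-1} has vanishing first row and first column.
l = null_space(M0.T)[:, [0]]
l = l / (l.T @ v)
rows_perp = null_space(v.T).T                   # n-1 rows r with r @ v = 0
B = np.vstack([l.T, rows_perp])
Binv = np.linalg.inv(B)
bM0 = B @ M0 @ Binv
print(np.max(np.abs(bM0[0, :])), np.max(np.abs(bM0[:, 0])))  # both ~ 1e-15

# Block decomposition of B M(k) B^{-1} and Schur complement of the block W.
def schur_T(k):
    bM = B @ M(k) @ Binv
    T, U, V, W = bM[0, 0], bM[0:1, 1:], bM[1:, 0:1], bM[1:, 1:]
    return T - (U @ np.linalg.solve(W, V))[0, 0], np.linalg.det(W)

for k in [1e-1, 1e-2, 1e-3]:
    bT, detW = schur_T(k)
    # exact identity det M(k) = det W(k) * bT(k); bT vanishes linearly in k
    print(k, abs(np.linalg.det(M(k)) - detW * bT), abs(bT))
\end{verbatim}

The printed identity $\\det M(k)=\\det W(k)\\, {\\bf T}(k)$ , with ${\\bf T}(k)$ the Schur complement of the invertible block $W(k)$ , is the toy counterpart of the relation between $\\mu (k)$ , $\\det W_\\omega (k)$ and ${\\bf T}_\\omega (k)$ used in Section REF below, and the linear vanishing of ${\\bf T}(k)$ at the singular point mimics the behavior of ${\\bf T}_\\omega $ near $p^\\omega _0$ .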
], [ "Integration of the ultraviolet degrees of freedom", "We intend to compute the generating function (REF ) with $\\theta $ boundary conditions.", "We introduce Grassmann variables in Fourier space via the following transformation: $\\hat{\\psi }^\\pm _{k}:= \\sum _{x \\in \\Lambda } e^{\\mp ik x} \\psi ^\\pm _{x}, \\qquad \\psi ^\\pm _{x}=\\frac{1}{L^2}\\sum _{k \\in \\mathcal {P}(\\theta )} e^{\\pm ik x} \\hat{\\psi }^\\pm _{k},$ where we recall that each $\\psi _x^\\pm $ and each $\\hat{\\psi }_k^\\pm $ has $|\\mathcal {I}|$ components and indeed we assume that $\\psi _x^+=(\\psi ^+_{x,1},\\dots ,\\psi ^+_{x,|\\mathcal {I}|})$ is a row vector while similarly $\\psi ^-_x$ is a column vector (whenever unnecessary, we shall drop the `color' index $\\ell \\in \\mathcal {I}$ ); in this way the transformation above is performed component-wise.", "For each $\\theta \\in \\lbrace -1,+1\\rbrace ^2$ , we let $p^\\omega _\\theta $ , $\\omega =\\pm 1$ denote the element of $\\mathcal {P}(\\theta )$ that is closest to $p^\\omega _0$In the case of more than one momentum at minimum distance, any choice of $p_\\theta ^\\pm $ will work.", "The dependence on $L$ of $p_\\theta ^\\pm $ is understood, we rewrite $\\psi ^\\pm _x=\\psi ^{\\prime \\pm }_{x}+ \\Psi ^{\\pm }_{x}$ with $ \\psi ^{\\prime \\pm }_{x}=\\frac{1}{L^2}\\sum _{k \\notin \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace } e^{\\pm ik x} \\hat{\\psi }^\\pm _{k},\\quad \\Psi ^\\pm _{x}={\\frac{1}{L^2}}\\sum _{k\\in \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace }e^{\\pm ik x} \\hat{\\psi }^\\pm _{k}.$ Noting that $S_\\theta (\\psi )=S_\\theta (\\Psi )+S_\\theta (\\psi ^{\\prime }):=-\\frac{1}{L^{2}}\\sum _{k\\in \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace }\\hat{\\psi }^+_kM(k)\\hat{\\psi }_k^-- \\frac{1}{L^{2}}\\sum _{k\\notin \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace }\\hat{\\psi }^{\\prime +}_k M(k)\\hat{\\psi }_k^{\\prime -},$ we rewrite $ e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )}=\\Big (\\prod _{k\\notin \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace }\\mu (k)\\Big )\\int D\\Psi \\, e^{S_\\theta (\\Psi )}\\int P(D\\psi ^{\\prime })\\, e^{V(\\psi ,A)+(\\psi ,\\phi )},$ where $D\\Psi =\\prod _{k\\in \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace }\\big (L^{2|\\mathcal {I}|}D\\hat{\\Psi }_k\\big )$ and the Grassmann “measure” $D\\hat{\\Psi }_k$ is defined, as usual, so that $\\int \\Big (\\prod _{k\\in \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace }D\\hat{\\Psi }_k\\Big )\\, \\Big (\\prod _{k\\in \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace }\\prod _{\\ell \\in \\mathcal {I}}\\hat{\\Psi }^-_{k,\\ell }\\hat{\\Psi }^+_{k,\\ell }\\Big )=1,$ while we have $\\int \\Big (\\prod _{k\\in I}D\\hat{\\Psi }_k\\Big ) Q(\\Psi )=0$ whenever $Q(\\Psi )$ is a monomial in $\\lbrace \\hat{\\Psi }^\\pm _{k,\\ell }\\rbrace _{k\\in \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace ,\\ell \\in \\mathcal {I}}$ of degree strictly lower or strictly larger than $4|\\mathcal {I}|$ .", "Moreover, $P(D\\psi ^{\\prime })$ is the Grassmann Gaussian integration, normalized so that $\\int P(D\\psi ^{\\prime })=1$ , associated with the propagator $g^{\\prime }(x,y)=\\int P(D \\psi ^{\\prime }) \\psi ^{\\prime -}_{x} \\psi ^{\\prime +}_{y}=L^{-2}\\sum _{k\\notin \\lbrace p^{+}_\\theta ,p^-_\\theta \\rbrace } e^{-ik(x-y)} (M(k))^{-1}.$ Note that, since $\\psi ^{\\prime \\pm }_{x}$ is a vector with $|\\mathcal {I}|$ components, $g^{\\prime }(x,y)$ is an $|\\mathcal {I}|\\times |\\mathcal {I}|$ matrix, for fixed $x,y$ .", "Remark 6 We emphasize also that, since the zeros of $\\mu $ 
are simple, $\\mu (k)\\ne 0$ for every $k\\notin \\lbrace p^+_\\theta ,p^-_\\theta \\rbrace $ (this is the reason why we singled out the two momenta $p^\\omega _\\theta $ where $\\mu $ possibly vanishes and $M$ is not invertible).", "Next we introduce the following definition.", "Definition 2 We let $\\chi _\\omega : \\mathbb {R}^2\\longrightarrow [0,1],\\omega =\\pm 1$ be two $C^\\infty $ functions in the Gevrey class of order 2, see [24], with the properties that: $\\chi _\\omega (k)=\\chi _{-\\omega }(-k)$ , $\\chi _\\omega (k)=1$ if $|k-p^\\omega _0|\\le c_0/2$ , and $\\chi _\\omega (k)=0$ if $|k-p^\\omega _0|> c_0$ , with $c_0$ a small enough positive constant, such that in particular the support of $\\chi _+$ is disjoint from the support of $\\chi _{-}$ .", "We will specify later a more explicit definition of $\\chi _\\omega $ .", "We rewrite $g^{\\prime }=g^{(0)}+g^{(1)}$ , with $\\begin{split}& g^{(0)}(x,y)=L^{-2} \\sum _{\\omega =\\pm } \\sum _{k \\notin \\lbrace p^+_\\theta ,p^-_\\theta \\rbrace } e^{-ik(x-y)}\\chi _\\omega (k) (M(k))^{-1},\\\\& g^{(1)}(x,y)=L^{-2} \\sum _{k \\in \\mathcal {P}(\\theta )} e^{-ik(x-y)}(1-\\chi _+(k)-\\chi _-(k)) (M(k))^{-1}.\\end{split}$ Since the cutoff functions $\\chi _\\omega $ are Gevrey functions of order 2, the propagator $g^{(1)}$ has stretched-exponential decay at large distances $\\Vert g^{(1)}(x,y)\\Vert \\le C e^{-\\kappa \\sqrt{|x-y|}},$ for suitable $L$ -independent constants $C,\\kappa >0$ , cf.", "with [27] (recall that the propagators are $|\\mathcal {I}|\\times |\\mathcal {I}|$ matrices; the norm in the l.h.s.", "is any matrix norm).", "In (REF ), $|x-y|$ denotes the graph distance between $x$ and $y$ on $G_L$ .", "Using the addition principle for Grassmann Gaussian integrations [24], we rewrite (REF ) as $e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )}= \\Big (\\prod _{k\\notin \\lbrace p^+_\\theta ,p^-_\\theta \\rbrace }\\mu (k)\\Big )\\int D\\Psi \\, e^{S_\\theta (\\Psi )}\\int P_{(0)}(D\\psi ^{(0)}) \\\\ \\times \\int P_{(1)}(D\\psi ^{(1)})\\, e^{V(\\Psi +\\psi ^{(0)}+\\psi ^{(1)},A)+(\\Psi +\\psi ^{(0)}+\\psi ^{(1)},\\phi )}\\\\= \\Big (\\prod _{k \\notin \\lbrace p^+_\\theta ,p^-_\\theta \\rbrace } \\mu (k) \\Big ) e^{L^2 E^{(0)}+S^{(0)}(J,\\phi )} \\int D \\Psi \\, e^{S_\\theta (\\Psi )}\\int P_{(0)} (D\\psi ^{(0)})\\,e^{V^{(0)}(\\Psi +\\psi ^{(0)},J,\\phi )}$ where: $P_{(0)}$ and $P_{(1)}$ are the Grassmann Gaussian integrations with propagators $g^{(0)}$ and $g^{(1)}$ , respectively, i.e., letting $O_\\omega =\\lbrace k \\in \\mathcal {P}(\\theta )\\setminus \\lbrace p^+_\\theta ,p^-_\\theta \\rbrace : \\chi _\\omega (k) \\ne 0\\rbrace $ , $ P_{(0)}(D\\psi )=\\prod _\\omega \\frac{\\Big (L^{2|\\mathcal {I}||O_\\omega |}\\prod _{k\\in O_\\omega }D\\hat{\\psi }_k\\Big )\\exp \\Big (-L^{-2} \\sum _{ k \\in O_\\omega } (\\chi _\\omega (k))^{-1} \\hat{\\psi }^{+}_k M(k)\\hat{\\psi }^-_k \\Big )}{\\Big (\\prod _{k \\in O_\\omega }\\mu (k) (\\chi _\\omega (k))^{-|\\mathcal {I}|}\\Big )},$ and a similar explicit expression for $P_{(1)}$ holds; $J=\\lbrace J_e\\rbrace _{e\\in E_L}$ with $J_e=e^{A_e}-1$ ; $E^{(0)}$ , $S^{(0)}$ and $V^{(0)}$ are defined via $L^2 E^{(0)}+S^{(0)}(J,\\phi )+V^{(0)}(\\psi ,J,\\phi )=\\log \\int P_{(1)}(D\\psi ^{(1)}) e^{V(\\psi +\\psi ^{(1)},A)+(\\psi +\\psi ^{(1)},\\phi )},$ with $E^{(0)},S^{(0)}$ fixed uniquely by the condition that $V^{(0)}(0,J,\\phi )=S^{(0)}(0,0)=0$ .", "Proceeding as in the proof of [27], one finds that the effective potential $V^{(0)}$ can be represented as
follows: $V^{(0)}(\\psi ,J,\\phi )=\\sum _{\\begin{array}{c}n>0 \\\\ m,q \\ge 0 \\\\ n+q \\in 2\\mathbb {N}\\end{array}} \\sum ^*_{\\begin{array}{c}\\underline{x},\\underline{y},\\underline{z} \\\\ \\underline{\\ell },\\underline{\\ell }^{\\prime },\\underline{\\ell }^{\\prime },\\\\ \\underline{s}, \\underline{\\sigma },\\underline{\\sigma }^{\\prime } \\end{array}} \\psi ^{\\underline{\\sigma }}_{\\underline{x},\\underline{\\ell }} J_{\\underline{y},\\underline{\\ell }^{\\prime }, \\underline{s}} \\phi ^{\\underline{\\sigma }^{\\prime }}_{\\underline{z},\\underline{\\ell }^{\\prime \\prime }} W_{n,m,q;a}(\\underline{x},\\underline{y},\\underline{z})$ where the second sum runs over $\\underline{x} \\in \\Lambda ^n, \\underline{y} \\in \\Lambda ^m,\\underline{z} \\in \\Lambda ^q$ , $\\underline{\\ell } \\in \\mathcal {I}^n,\\underline{\\ell }^{\\prime } \\in \\mathcal {I}^m,\\underline{\\ell }^{\\prime \\prime } \\in \\mathcal {I}^q$ , $ \\underline{s} \\in \\mathcal {J}_{\\ell _1}\\times \\dots \\times \\mathcal {J}_{\\ell _m}$ , $\\underline{\\sigma }\\in \\lbrace +,-\\rbrace ^n,\\underline{\\sigma }^{\\prime } \\in \\lbrace +,-\\rbrace ^q$ (the $*$ on the sum indicates the constraint that $\\sum _{i=1}^n \\sigma _i+\\sum _{i=1}^q\\sigma _i^{\\prime }=0$ ), and we defined $J_{\\underline{y},\\underline{\\ell ^{\\prime }},\\underline{s}}:=\\prod _{i=1}^m J_{y_i,\\ell ^{\\prime }_i,s_i}$ (here $J_{y,\\ell ,s}$ stands for $J_e$ when the edge $e \\in E_L$ has black site of coordinates $(y,\\ell )$ and orientation $s \\in \\mathcal {J}_{\\ell }$ ), $\\psi ^{\\underline{\\sigma }}_{\\underline{x},\\underline{\\ell }}:=\\prod _{i=1}^n \\psi ^{\\sigma _i}_{x_i,\\ell _i}$ , and similarly for $\\phi ^{\\underline{\\sigma }^{\\prime }}_{\\underline{z},\\underline{\\ell }^{\\prime \\prime }}$ ; finally, ${a}:=(\\underline{\\ell },\\underline{\\sigma },\\underline{\\ell }^{\\prime },\\underline{s},\\underline{\\ell }^{\\prime \\prime },\\underline{\\sigma }^{\\prime })$ .", "Without loss of generality, we can assume that the kernels $W_{n,m,q;(\\underline{\\ell },\\underline{\\sigma },\\underline{\\ell }^{\\prime },\\underline{s},\\underline{\\ell }^{\\prime \\prime },\\underline{\\sigma }^{\\prime })}$ are symmetric under permutations of the indices $(\\underline{y},\\underline{\\ell }^{\\prime }, \\underline{s})$ and antisymmetric both under permutations of $(\\underline{x},\\underline{\\ell },\\underline{\\sigma })$ and of $(\\underline{z}, \\underline{\\ell }^{\\prime \\prime },\\underline{\\sigma }^{\\prime })$ .", "A representation similar to (REF ) holds also for $S^{(0)}(J,\\varphi )$ with kernels $W^{0,m,q}_{a}(\\underline{y}, \\underline{z})$ , where ${a}=(\\underline{\\ell }^{\\prime },\\underline{s},\\underline{\\ell }^{\\prime \\prime },\\underline{\\sigma }^{\\prime })$ .", "As discussed after [27], using the Battle-Brydges-Federbush-Kennedy determinant formula and the Gram-Hadamard bound [21] one finds that $E^{(0)}$ and the values of the kernels $W_{n,m,q;a}(\\underline{x},\\underline{y},\\underline{z})$ at fixed positions $\\underline{x},\\underline{y},\\underline{z}$ are real analytic functions of the parameter $\\lambda $ , for $|\\lambda |\\le \\lambda _0$ and $\\lambda _0$ sufficiently small but independent of $L$ .", "Moreover, in the analyticity domain, $|E^{(0)}| \\le C |\\lambda |$ , and $\\Vert W_{n,m,q}\\Vert _{\\kappa ,0} \\le C^{n+m+q} |\\lambda |^{\\mathbb {1}_{n+q>2} \\max \\lbrace 1,c(n+q)\\rbrace }$ for suitable positive constants $C,c$ independent of $L$ .", "Here 
the weighted norm $\\Vert \\cdot \\Vert _{\\kappa ,0}$ is defined as $\\Vert W_{n,m,q}\\Vert _{\\kappa ,0}:=L^{-2} \\sup _{a} \\sum _{\\underline{x}, \\underline{y}, \\underline{z}} |W_{n,m,q;a}(\\underline{x},\\underline{y},\\underline{z})| e^{\\frac{\\kappa }{2}\\sqrt{\\delta (\\underline{x},\\underline{y}, \\underline{z})}},$ where $\\kappa >0$ is the same as in (REF ), and $\\delta (\\cdot )$ denotes the tree distance, that is the length of the shortest tree on the torus connecting points with the given coordinates.", "Remark 7 The kernels of the effective potential $V^{(0)}$ , of $S^{(0)}$ , as well as the constant $E^{(0)}$ , depend on $\\theta $ , because both the interaction $V(\\psi ,A)$ in (REF ) and the propagator $g^{(1)}$ involved in the integration do.", "Both these effects can be thought of as being associated with boundary conditions assigned to the Grassmann fields, periodic in both coordinate directions for $\\theta =(-,-)$ , anti-periodic in both coordinate directions for $\\theta =(+,+)$ , and mixed (periodic in one direction and anti-periodic in the other) in the remaining two cases.", "Therefore, using Poisson summation formula (see e.g.", "[24], where notations are different), both $g^{(1)}$ and the kernels of $V^{(0)}$ and $S^{(0)}$ can be expressed via an `image rule', analogous to the summation over images in electrostatics, of the following form: $ g^{(1)}(x,y)=\\sum _{n=(n_1,n_2)\\in \\mathbb {Z}^2}(-1)^{\\frac{\\theta _1+1}{2} n_1+\\frac{\\theta _2+1}{2} n_2}g^{(1),\\infty }(x-y+nL),$ where $g^{(1),\\infty }(x)=\\lim _{L\\rightarrow \\infty }g^{(1)}(x,0)$ (an analogous sum rule holds for the kernels of $V^{(0)}$ and $S^{(0)}$ ).", "From this representation, together with the decay bounds mentioned above on $g^{(1)}$ and on the kernels of the effective potential, it readily follows that the dependence upon $\\theta $ of these functions is a finite-size effect that is stretched-exponentially small in $L$ .", "Similarly, the dependence upon $\\theta $ of $E^{(0)}$ corresponds to a stretched-exponentially small correction as $L\\rightarrow \\infty $ (see also [24]).", "Therefore, all these corrections are irrelevant for the purpose of computing the thermodynamic limit of thermodynamic functions and correlations.", "For this reason and for ease of notation, here and below we will not indicate the dependence upon $\\theta $ explicitly in most of the functions and constants involved in the multiscale construction." 
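To make the last statement of Remark 7 quantitative, let us record the elementary estimate behind it (a sketch, which uses only the stretched-exponential bound (REF ), inherited by $g^{(1),\\infty }$ in the limit $L\\rightarrow \\infty $ , and assumes for definiteness that the representative of $x-y$ is chosen with $|x-y|\\le L/2$ ; constants are not optimized): since $|x-y+nL|\\ge (|n|-\\tfrac{1}{2})L\\ge |n|L/2$ for every $n\\ne 0$ , one gets $\\Vert g^{(1)}(x,y)-g^{(1),\\infty }(x-y)\\Vert \\le \\sum _{n\\ne 0}\\Vert g^{(1),\\infty }(x-y+nL)\\Vert \\le C\\sum _{n\\ne 0}e^{-\\kappa \\sqrt{|n|L/2}}\\le C^{\\prime }e^{-\\kappa \\sqrt{L/2}},$ with $C^{\\prime }$ depending only on $C,\\kappa $ , because $\\sum _{n\\ne 0}e^{-\\kappa (\\sqrt{|n|L/2}-\\sqrt{L/2})}$ is bounded uniformly in $L\\ge 2$ . In particular, the difference between the propagators $g^{(1)}$ corresponding to two different choices of $\\theta $ is bounded by $2C^{\\prime }e^{-\\kappa \\sqrt{L/2}}$ , and the analogous sum rule for the kernels of $V^{(0)}$ and $S^{(0)}$ yields the same kind of bound for their $\\theta $ -dependence.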
], [ "Integration of the massive degrees of freedom", "Using (REF ) in (REF ) and renaming $\\Psi +\\psi ^{(0)}\\equiv \\psi $ , we get $e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )}=e^{L^2(t^{(0)}+ E^{(0)})+S^{(0)}(J,\\phi )} \\\\\\times \\int D\\psi \\, e^{-L^{-2} \\sum _\\omega \\sum _{k \\in \\mathcal {B}_\\omega } (\\chi _\\omega (k))^{-1}\\hat{\\psi }^+_{k} M(k) \\hat{\\psi }^-_{k}}e^{V^{(0)}(\\psi ,J,\\phi ) }$ where, recalling that $O_\\omega $ was defined right before (REF ), $\\mathcal {B}_\\omega :=O_\\omega \\cup \\lbrace p^\\omega _\\theta \\rbrace =\\lbrace k \\in \\mathcal {P}(\\theta ): \\chi _\\omega (k)\\ne 0 \\rbrace ,$ $D\\psi :=\\prod _{\\omega =\\pm }\\Big (L^{2|\\mathcal {I}||\\mathcal {B}_\\omega |}\\prod _{k\\in \\mathcal {B}_\\omega }D\\hat{\\psi }_k\\Big )$ andwe have set $t^{(0)}:=\\frac{1}{L^2}\\sum _{k \\in (\\cup _\\omega \\mathcal {B}_\\omega )^c} \\log \\mu (k)+\\frac{|\\mathcal {I}|}{L^2}\\sum _{\\omega }\\sum _{k \\in O_\\omega } \\log \\chi _\\omega (k).$ Since $p^+_0$ is a simple zero of $\\mu (k)$ , there exists an invertible complex matrix $B_+$ such that $B_+ M(p^+_0) B_+^{-1}=\\begin{pmatrix}0 & 0 \\\\0 & A_+\\end{pmatrix} $ for an invertible $(|\\mathcal {I}|-1) \\times (|\\mathcal {I}|-1) $ matrix $A_+$ .", "Clearly, $B_+$ (and, therefore, $A_+$ ) is not defined uniquely; we choose it arbitrarily, in such a way that (REF ) holds, and fix it once and for all.", "Taking the complex conjugate in the above equation and using the symmetry of $M$ , see (REF ), one finds that the same relation holds at $p^-_0$ with matrices $B_-:=\\overline{B_+}$ , $A_-:=\\overline{A_+}$ .", "Let ${\\bf M}_\\omega (k):=B_\\omega M(k)B_\\omega ^{-1}$ , and define the matrices $T_\\omega (k), W_\\omega (k), U_\\omega (k)$ and $V_\\omega (k)$ of sizes $1\\times 1$ , $(|\\mathcal {I}|-1) \\times (|\\mathcal {I}|-1)$ , $1 \\times (|\\mathcal {I}|-1)$ and $(|\\mathcal {I}|-1) \\times 1$ , respectively, via $\\begin{pmatrix}T_\\omega (k) & U_\\omega (k) \\\\ V_\\omega (k) & W_\\omega (k)\\end{pmatrix}:={\\bf M}_\\omega (k).$ Analyticity of $M(k)$ in $k$ implies, in particular, that $T_\\omega (k+p^\\omega _0)$ , $U_\\omega (k+p_0)$ and $V_\\omega (k+p^\\omega _0)$ are all $O(k)$ as $k\\rightarrow 0$ , while $W_\\omega (k+p^\\omega _0)=A_\\omega +O(k)$ .", "Let $\\mathcal {B}^{(2)}_\\omega \\supset \\mathcal {B}_\\omega $ be the ball centered at $p_0^\\omega $ with radius $2c_0$ , and assume that $c_0$ is so small that $\\inf _{k\\in \\mathcal {B}^{(2)}_\\omega }|\\det W_\\omega (k)|$ is positive.", "Taking the determinant at both sides of (REF ), letting $\\rho _\\omega :=\\det A_\\omega $ , we find that $\\mu (k)\\stackrel{k\\rightarrow p^\\omega _0}{=}\\rho _\\omega T_\\omega (k)+O((k-p^\\omega _0)^2)$ so that, recalling (REF ), $T_\\omega (k+p^\\omega _0)\\stackrel{k\\rightarrow 0}{=}\\frac{\\alpha ^0_\\omega k_1+ \\beta ^0_\\omega k_2}{\\rho _\\omega }+O(k^2).$ Since $W_\\omega (k)$ is non singular on $\\mathcal {B}_\\omega $ , for $k\\in \\mathcal {B}_\\omega $ we can block diagonalize ${\\bf M}_\\omega $ as ${\\bf M}_\\omega (k)=\\begin{pmatrix}1 & U_\\omega (k)W^{-1}_\\omega (k) \\\\0 & \\mathbb {1}\\end{pmatrix}\\begin{pmatrix}{\\bf T}_\\omega (k) & 0 \\\\0 & W_\\omega (k)\\end{pmatrix}\\begin{pmatrix}1 & 0 \\\\W^{-1}_\\omega (k) V_\\omega (k) & \\mathbb {1}\\end{pmatrix}$ where ${\\bf T}_\\omega (k):=T_\\omega (k)-U_\\omega (k) W^{-1}_\\omega (k)V_\\omega (k)$ is the Schur complement of the block $W_\\omega $ .", "Note that from the properties of $U_\\omega ,V_\\omega 
,W_\\omega $ , the function ${\\bf T}_\\omega $ satisfies ${\\bf T}_\\omega (k+p^\\omega _0)\\stackrel{k\\rightarrow 0}{=}\\frac{\\alpha ^0_\\omega k_1+\\beta ^0_\\omega k_2}{\\rho _\\omega }+O(k^2),$ like $T_\\omega $ .", "In view of this decomposition, we perform the following change of Grassmann variables: for $k \\in \\mathcal {B}_\\omega $ we define $\\begin{split}& (\\hat{\\varphi }^+_{k},\\hat{\\xi }^+_{k,1}, \\dots ,\\hat{\\xi }^+_{k,|\\mathcal {I}|-1}):=\\hat{\\psi }^+_{k} B_\\omega ^{-1} \\begin{pmatrix}1 & U_\\omega (k) W^{-1}_\\omega (k) \\\\ 0 & \\mathbb {1}\\end{pmatrix} \\\\& (\\hat{\\varphi }^-_{k}, \\hat{\\xi }^-_{k,1}, \\dots , \\hat{\\xi }^-_{k,|\\mathcal {I}|-1})^T:= \\begin{pmatrix}1 & 0 \\\\ W^{-1}_\\omega (k) V_\\omega (k) & \\mathbb {1}\\end{pmatrix} B_\\omega \\hat{\\psi }^-_{k}.\\end{split}$ For later convenience, we give the following lemma.", "Lemma 3 Define $\\psi ^\\pm _x:=L^{-2}\\sum _\\omega \\sum _{k \\in \\mathcal {B}_\\omega } e^{\\pm ikx} \\hat{\\psi }^\\pm _k$ and $\\xi ^\\pm _x(\\omega ):=L^{-2}\\sum _{k \\in \\mathcal {B}_\\omega }e^{\\pm ikx} \\hat{\\xi }^\\pm _{k},\\quad \\varphi ^\\pm _x(\\omega ):=L^{-2}\\sum _{k \\in \\mathcal {B}_\\omega }e^{\\pm ikx} \\hat{\\varphi }^\\pm _k.$ Then, the inverse of the transformation (REF ) in $x$ space is $\\begin{split}& \\psi ^+_{x,\\ell }=\\sum _{\\omega } \\Big ( \\varphi ^+_{x}(\\omega )(B_\\omega )_{1 \\ell } +(\\varphi ^+(\\omega ) \\ast \\tau ^+_{\\omega ,\\ell })_x+\\sum _{j=2}^{|\\mathcal {I}|} \\xi ^+_{x,j-1}(\\omega ) (B_\\omega )_{j \\ell } \\Big )\\\\& \\psi ^-_{x,\\ell }=\\sum _{\\omega } \\Big ( (B_\\omega ^{-1})_{\\ell 1}\\varphi ^-_{x}(\\omega ) +( \\tau ^-_{\\omega ,\\ell }\\ast \\varphi ^-(\\omega ))_x+\\sum _{j=2}^{|\\mathcal {I}|}(B_\\omega ^{-1})_{\\ell j} \\xi ^-_{x,j-1}(\\omega ) \\Big )\\end{split}$ where $\\begin{split} &{\\tau }_{\\omega ,\\ell }^+(x):=-L^{-2}\\sum _{k \\in \\mathcal {P}(\\theta )} \\sum _{j=2}^{|\\mathcal {I}|} e^{ikx} \\chi _\\omega \\big (\\tfrac{k+p_0^\\omega }{2}\\big )(U_\\omega (k)\\cdot W^{-1}_\\omega (k))_j (B_\\omega )_{j\\ell }\\\\&{\\tau }_{\\omega ,\\ell }^-(x):=-L^{-2}\\sum _{k \\in \\mathcal {P}(\\theta )} \\sum _{j=2}^{|\\mathcal {I}|}e^{-ikx}\\chi _\\omega \\big (\\tfrac{k+p_0^\\omega }{2}\\big )(B^{-1}_\\omega )_{\\ell j}(W_\\omega ^{-1}(k) \\cdot V_\\omega (k))_j .\\end{split}$ The proof is essentially an elementary computation (one inverts the linear relation (REF ) for given $k$ and then takes the Fourier transform to obtain the expression in real space) but there is a slightly delicate point, namely to see where the cut-off function $\\chi _\\omega \\big (\\tfrac{k+p_0^\\omega }{2}\\big )$ comes from.", "After a few elementary linear algebra manipulations, one finds that $\\psi ^+_{x,\\ell }$ equals an expression like in the r.h.s.", "of (REF ), where the term $(\\varphi ^+(\\omega ) \\ast \\tau ^+_{\\omega ,\\ell })_x$ is replaced by $\\frac{1}{L^2}\\sum _\\omega \\sum _{k\\in \\mathcal {B}_\\omega }\\hat{\\varphi }^+_k f^\\omega _\\ell (k)e^{i k x},\\quad f^\\omega _\\ell (k):=-\\sum _{j=2}^{|\\mathcal {I}|}\\big (U_\\omega (k)W^{-1}_\\omega (k)\\big )_j(B_\\omega )_{j\\ell }.$ Since the sum is restricted to $k\\in \\mathcal {B}_\\omega $ , we can freely multiply the summand by $\\chi _\\omega \\big (\\tfrac{k+p_0^\\omega }{2}\\big )$ , which is identically equal to 1 there, since the argument is at distance at most $c_0/2$ from $p^\\omega _0$ .", "At that point, we use the fact that $\\hat{\\varphi }^+_k{\\bf 1}_{k\\in \\mathcal {B}_\\omega }=\\sum _x\\varphi
^+_x(\\omega )e^{-i k x}$ and we immediately obtain that (REF ) coincides with $(\\varphi ^+(\\omega ) \\ast \\tau ^+_{\\omega ,\\ell })_x$ , with $\\tau ^+_{\\omega ,\\ell }$ as in (REF ).", "At this point we go back to (REF ), that we rewrite as $e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )}=e^{L^2 ({\\bf t}^{(0)}+ E^{(0)})+S^{(0)}(J,\\phi )}\\\\\\times \\int D\\varphi \\,e^{-L^{-2} \\sum _{\\omega }\\sum _{k \\in \\mathcal {B}_\\omega } (\\chi _\\omega (k))^{-1}\\hat{\\varphi }^+_{k} {\\bf T}_\\omega (k) \\hat{\\varphi }^-_{k}} \\int P_W(D\\xi )\\, e^{\\widetilde{V}^{(0)}(\\varphi ,\\xi ,J,\\phi )}$ where ${\\bf t}^{(0)}:= L^{-2}\\sum _{k \\in O^{\\prime }}\\log \\mu (k)+L^{-2}\\sum _{\\omega =\\pm }\\sum _{k\\in \\mathcal {B}_\\omega }\\big (\\log \\det W_\\omega (k)+\\log \\chi _\\omega (k)\\big ),$ $D\\varphi :=\\prod _{\\omega =\\pm }\\Big (L^{2|\\mathcal {B}_\\omega |}\\prod _{k\\in \\mathcal {B}_\\omega }D\\hat{\\varphi }_k\\Big )$ , $P_{W}(D\\xi )$ is the normalized Gaussian Grassmann integration with propagator (which is a $(|\\mathcal {I}|-1)\\times (|\\mathcal {I}|-1)$ matrix) $g^W_{\\omega ,\\omega ^{\\prime }}(x,y):=\\int P(D\\xi )\\, \\xi ^-_x(\\omega ) \\xi ^+_y(\\omega ^{\\prime })= \\frac{\\delta _{\\omega ,\\omega ^{\\prime }}}{L^2}\\sum _{k \\in P(\\theta )} e^{-ik(x-y)} \\chi _\\omega (k) (W_\\omega (k))^{-1},$ and $\\widetilde{V}^{(0)}(\\varphi ,\\xi ,J,\\phi )$ is the same as $V^{(0)}(\\psi ,J,\\phi )$ , once $\\psi $ is re-expressed in terms of the new variables $(\\varphi ,\\xi )$ , as in Lemma REF .", "Remark 8 Note that, because of $\\chi (\\cdot )$ , the sums (REF ) defining $\\tau ^\\pm _{\\omega ,\\ell }(x)$ are restricted to momenta $k\\in \\mathcal {B}_\\omega ^{(2)}$ where $W_\\omega $ is indeed invertible.", "Note also that, from the smoothness of $\\hat{\\tau }^\\pm _{\\omega ,\\ell }(k)$ it follows that $\\tau ^\\pm _{\\omega ,\\ell }(x)$ decays to zero in a stretched-exponential way, similar to (REF ).", "That is, $\\psi $ is essentially a local function of $\\varphi ,\\xi $ .", "As a consequence, the kernels of $\\widetilde{V}^{(0)}$ satisfy qualitatively the same bounds as those of $V^{(0)}$ .", "Since $W_\\omega (\\cdot )$ is smooth and invertible in the support of $\\chi _\\omega $ , we see from (REF ) that the propagator of the variables $\\lbrace \\xi _{x}(\\omega )\\rbrace $ decays as $\\Vert g^W(x,y)\\Vert \\le C e^{-\\kappa \\sqrt{|x-y|}}$ uniformly in $L$ , a behavior analogous to (REF ).", "For this reason, we call the variables $\\lbrace \\xi _{x}(\\omega )\\rbrace $ massive.", "On the other hand, we call critical the remaining $\\lbrace \\varphi _{x}(\\omega )\\rbrace $ variables.", "The integration of the massive fields $\\xi $ , which is performed in a way completely analogous to the one of $\\psi ^{(1)}$ in (REF ), produces an expression for the generating functional in terms of a Grassmann integral involving only the critical fields $\\varphi $ : $e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )}=e^{L^2 E^{(-1)}+S^{(-1)}(J,\\phi )} \\int D\\varphi e^{-L^{-2} \\sum _{\\omega }\\sum _{k \\in \\mathcal {B}_\\omega } \\hat{\\varphi }^+_{k} (\\chi _\\omega (k))^{-1}{\\bf T_\\omega }(k) \\hat{\\varphi }^-_{k}} e^{V^{(-1)}(\\varphi ,J,\\phi )}$ where $L^2(E^{(-1)}-{\\bf t}^{(0)}-E^{(0)})+S^{(-1)}(J,\\phi )-S^{(0)}(J,\\phi )+V^{(-1)}(\\varphi ,J,\\phi )\\\\=\\log \\int P_W(D\\xi )e^{\\widetilde{V}^{(0)}(\\varphi ,\\xi ,J,\\phi )}$ and $V^{(-1)}, S^{(-1)}$ are fixed in such a way that $V^{(-1)}(0,J,\\phi )=S^{(-1)}(0,0)=0$ .", "The effective 
potential $V^{(-1)}$ can be represented in a way similar to (REF ), namely $V^{(-1)}(\\varphi ,J,\\phi )=\\sum _{\\begin{array}{c}n>0 \\\\ m,q \\ge 0 \\\\ n+q \\in 2\\mathbb {N}\\end{array}} \\sum ^*_{\\begin{array}{c}\\underline{x},\\underline{y},\\underline{z} \\\\ \\underline{\\ell }^{\\prime },\\underline{\\ell }^{\\prime \\prime }, \\underline{\\omega }\\\\ \\underline{s}, \\underline{\\sigma },\\underline{\\sigma }^{\\prime }\\end{array}} \\varphi ^{\\underline{\\sigma }}_{\\underline{x}}(\\underline{\\omega }) J_{\\underline{y},\\underline{\\ell }^{\\prime }, \\underline{s}} \\phi ^{\\underline{\\sigma }^{\\prime }}_{\\underline{z},\\underline{\\ell }^{\\prime \\prime }} W^{(-1)}_{n,m,q;a}(\\underline{x},\\underline{y},\\underline{z};\\underline{\\omega })$ where $\\underline{\\omega }\\in \\lbrace -1,+1\\rbrace ^n$ and $\\varphi ^{\\underline{\\sigma }}_{\\underline{x}}(\\underline{\\omega }):=\\prod _{i=1}^n \\varphi ^{\\sigma _i}_{x_i}(\\omega _i)$ , while the other symbols and labels have the same meaning as in (REF ).", "By virtue of the decay properties of the propagator $g_{\\omega ,\\omega ^{\\prime }}^W$ , the kernels $W^{(-1)}_{n,m,q;a}(\\underline{x},\\underline{y},\\underline{z};\\underline{\\omega })$ of $V^{(-1)}(\\varphi ,J,\\phi )$ satisfy the same bounds as (REF )." ], [ "Reduction to the setting of [27]", "We are left with the integral of the critical variables, which we want to perform in a way analogous to that discussed in [27].", "In order to get to a point where we can literally apply the results of [27], a couple of extra steps are needed.", "First, in order to take into account the fact that, in general, the interaction has the effect of changing the location of the singularity in momentum space of the propagator of $\\varphi $ , as well as the value of the residues at the singularity, we find it convenient to rewrite the `Grassmann action' in (REF ), $-L^{-2} \\sum _{\\omega }\\sum _{k \\in \\mathcal {B}_\\omega } \\hat{\\varphi }^+_{k} (\\chi _\\omega (k))^{-1}{\\bf T_\\omega }(k) \\hat{\\varphi }^-_{k}+V^{(-1)}(\\varphi ,J,\\phi ),$ in the form of a reference quadratic part, with the `right' singularity structure, plus a remainder, whose specific value will be fixed a posteriori via a fixed-point argument.", "More precisely, we proceed as described in [27]: we introduce $N(\\varphi )=L^{-2} \\sum _{\\omega } \\sum _{k \\in \\mathcal {B}_\\omega } \\hat{\\varphi }^+_k(-{\\bf T_\\omega }(p^\\omega )+a_\\omega (k_1-p^\\omega _1)+b_\\omega (k_2-p^\\omega _2))\\hat{\\varphi }^-_k$ where $p^\\omega ,a_\\omega ,b_\\omega $ will be fixed a posteriori, and are assumed to satisfy $|p^\\omega -p^\\omega _0| \\ll 1$ for $\\lambda $ small, $p^+=-p^-$ and $\\overline{a_+}=-a_-, \\overline{b_+}=-b_-$ .", "Define also $ C_\\omega (k):={\\bf T_\\omega }(k)-\\chi _\\omega (k)\\Big ({\\bf T_\\omega }(p^\\omega )-a_\\omega (k_1-p^\\omega _1)-b_\\omega (k_2-p^\\omega _2)\\Big )$ and note that it satisfies $C_\\omega (p^\\omega )=0$ , $\\partial _{k_1}C_\\omega (p^\\omega )=\\partial _{k_1}{\\bf T_\\omega }(p^\\omega )+a_\\omega =:\\alpha _\\omega , \\qquad \\partial _{k_2}C_\\omega (p^\\omega )=\\partial _{k_2}{\\bf T_\\omega }(p^\\omega )+b_\\omega =:\\beta _\\omega ,$ as well as the symmetry $C_{-\\omega }(-k)=\\overline{C_\\omega (k)}$ .", "Let us introduce the matrix $\\mathcal {M}$ (the same as in [27]) given by $\\mathcal {M}=\\frac{1}{\\sqrt{\\Delta }}\\begin{pmatrix}\\beta ^1 & \\beta ^2 \\\\-\\alpha ^1 & -\\alpha
^2\\end{pmatrix}$ where $\\alpha ^1$ and $\\alpha ^2$ (resp.", "$\\beta ^1$ and $\\beta ^2$ ) are, respectively, the real and imaginary part of $\\alpha _+$ (resp.", "$\\beta _+$ ), see (REF ), and $\\Delta :=\\alpha ^1\\beta ^2-\\alpha ^2\\beta ^1$ is a positive real number, in agreement with (REF ): note, in fact, that at $\\lambda =0$ the sign of $\\Delta $ is the same as the sign of $\\text{Im}(\\beta _+/\\alpha _+)$ .", "At this point, we can finally fix the cut-off functions $\\chi _\\omega $ of Definition REF as follows: $\\chi _\\omega (k):=\\chi (|\\mathcal {M}^{-1}(k-p^\\omega )|)$ where $\\chi :\\mathbb {R}\\mapsto [0,1]$ is a compactly supported function in the Gevrey class of order 2.", "It is immediate to verify that $\\chi $ can be chosen so that properties (i)-(ii) of Definition REF are satisfied.", "Given this, we rewrite (REF ) as $e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )}=e^{L^2 E^{(-1)}+S^{(-1)}(J,\\phi )} \\int D\\varphi e^{-L^{-2} \\sum _{\\omega }\\sum _{k \\in \\mathcal {B}_\\omega } \\hat{\\varphi }^+_{k} (\\chi _\\omega (k))^{-1} C_\\omega (k) \\hat{\\varphi }^-_{k}} e^{N(\\varphi )+V^{(-1)}(\\varphi ,J,\\phi )}.$ In the above integration, the momenta closest to the zeros of $C_\\omega $ (i.e., close to $p^\\omega $ ) play a special role and have to be treated at the end of the multiscale procedure, as discussed in [27].", "For a given $\\theta \\in \\lbrace -1,+1\\rbrace ^2$ , denote by $k^\\pm _\\theta \\in \\mathcal {B}_\\pm $ the momenta closest to $p^\\pm $ , respectively (with the same remark as in footnote REF in case of several possible choices), and note that they satisfy $k_\\theta ^+=-k_\\theta ^-$ .", "Next we define $\\hat{\\Phi }_\\omega ^\\pm :=\\varphi _{k_\\theta ^\\omega }^\\pm $ , $\\Phi _{\\omega ,x}^\\pm :=L^{-2}e^{\\pm i k_\\theta ^\\omega x}\\hat{\\Phi }_\\omega ^\\pm $ and $\\mathcal {P}^{\\prime }(\\theta ):=\\mathcal {P}(\\theta )\\setminus \\lbrace k_\\theta ^\\pm \\rbrace $ .", "Since $C_\\omega $ does not vanish on $\\mathcal {P}^{\\prime }(\\theta )$ , we can rewrite (REF ) as $e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )}=e^{L^2 {\\bf E}^{(-1)}+S^{(-1)}(J,\\phi )}\\\\\\times \\int D\\Phi e^{-L^{-2}\\sum _{\\omega }\\Phi ^+_\\omega C_\\omega (k_\\theta ^\\omega )\\Phi ^-_\\omega } \\int \\tilde{P}_{(\\le -1)}(D\\varphi ) e^{N(\\varphi ,\\Phi )+V^{(-1)}(\\varphi ,\\Phi ,J,\\phi )}$ where, letting $\\mathcal {B}_\\omega ^{\\prime }:=\\mathcal {B}_\\omega \\cap \\mathcal {P}^{\\prime }(\\theta )$ , ${\\bf E}^{(-1)}=E^{(-1)}+L^{-2}\\sum _{\\omega }\\sum _{k \\in \\mathcal {B}_\\omega ^{\\prime } }(\\log C_\\omega (k)-\\log \\chi _\\omega (k)).$ Moreover, $D\\Phi :=L^4D\\hat{\\Phi }_+\\, D\\hat{\\Phi }_-$ and $\\tilde{P}_{(\\le -1)}(D\\varphi )$ is the normalized Grassmann Gaussian integration with propagator $\\int \\tilde{P}_{(\\le -1)}(D\\varphi ) \\varphi _{\\omega ,x}^-\\varphi ^+_{\\omega ^{\\prime },y}=\\delta _{\\omega ,\\omega ^{\\prime }}\\frac{1}{L^2}\\sum _{k \\in \\mathcal {B}_\\omega ^{\\prime }}e^{-ik(x-y)}\\chi _\\omega (k)(C_\\omega (k))^{-1}.$ Finally, we remark that since the momenta $k$ in (REF ) are close to $p^\\omega $ , the propagator (REF ) has an oscillating prefactor $e^{-ip^\\omega (x-y)}$ that it is convenient to extract.", "To this end, we define quasi-particle fields $\\varphi _{x,\\omega }^{\\pm ,(\\le -1)}$ via $\\varphi ^\\pm _x(\\omega )=:e^{\\pm i p^\\omega x} \\varphi ^{\\pm ,(\\le -1)}_{x,\\omega }.$ Note that the propagator of the quasi-particle fields equals $\\int P_{(\\le -1)}(d\\varphi
^{\\le -1}) \\varphi ^{-,(\\le -1)}_{x,\\omega }\\varphi ^{+,(\\le -1)}_{y,\\omega ^{\\prime }}=\\delta _{\\omega ,\\omega ^{\\prime }}g^{(\\le -1)}_\\omega (x,y)\\\\ g^{(\\le -1)}_\\omega (x,y):= \\frac{1}{L^2}\\sum _{k\\in \\mathcal {P}^{\\prime }_\\omega (\\theta )}\\frac{e^{-ik(x-y)}\\chi (k+p^\\omega -p^\\omega _0)\\chi (|\\mathcal {M}^{-1}k|)}{C_\\omega (k+p^\\omega )},$ where $\\mathcal {P}_\\omega ^{\\prime }(\\theta )=\\lbrace k: k+p^\\omega \\in \\mathcal {P}^{\\prime }(\\theta )\\rbrace $ .", "Of course, the r.h.s.", "of (REF ) is just the r.h.s.", "of (REF ) multiplied by $e^{i p^\\omega (x-y)}$ .", "We now rewrite (REF ) as $e^{\\mathcal {W}_{L}^{(\\theta )}(A,\\phi )} \\\\ = e^{L^2 {\\bf E}^{(-1)}+S^{(-1)}(J,\\phi )} \\int D\\Phi e^{-L^{-2}\\sum _{\\omega }\\Phi ^+_\\omega C_\\omega (k_\\theta ^\\omega )\\Phi ^-_\\omega } \\int P_{(\\le -1)}(D\\varphi ^{(\\le -1)}) e^{\\mathcal {V}^{(-1)}(\\varphi ^{(\\le -1)},\\Phi ,J,\\phi )},$ where $\\mathcal {V}^{(-1)}(\\varphi ,\\Phi ,J,\\phi ):=N(\\Phi ,\\varphi )+V^{(-1)}(\\Phi ,\\varphi ,J,\\phi ),$ and in the r.h.s.", "it is meant that the $\\varphi $ variables are expressed in terms of the quasi-particle fields as in (REF ).", "That is, we have simply re-expressed $V^{(-1)}$ in terms of the quasi-particle fields and we included the counter-terms in the definition of effective potential.", "After this rewriting, we find that the following representation holds for $\\mathcal {V}^{(-1)}$ : $\\mathcal {V}^{(-1)}(\\varphi ,J,\\phi )=\\sum _{\\begin{array}{c}n>0 \\\\ m,q \\ge 0 \\\\ n+q \\in 2\\mathbb {N}\\end{array}} \\sum ^*_{\\begin{array}{c}\\underline{x},\\underline{y},\\underline{z} \\\\ \\underline{\\ell }^{\\prime },\\underline{\\ell }^{\\prime \\prime }, \\underline{\\omega }\\\\ \\underline{s}, \\underline{\\sigma },\\underline{\\sigma }^{\\prime }\\end{array}} \\varphi ^{\\underline{\\sigma }}_{\\underline{x}, \\underline{\\omega }} J_{\\underline{y},\\underline{\\ell }^{\\prime }, \\underline{s}} \\phi ^{\\underline{\\sigma }^{\\prime }}_{\\underline{z},\\underline{\\ell }^{\\prime \\prime }} \\mathcal {W}^{(-1)}_{n,m,q;\\underline{\\omega },a}(\\underline{x},\\underline{y},\\underline{z}),$ with kernels $\\mathcal {W}^{(-1)}_{n,m,q;\\underline{\\omega },a}$ satisfying the same estimates as in (REF ).", "The kernels $\\mathcal {W}^{(-1)}_{n,m,q;\\underline{\\omega },a}$ are the analogues of $W^{(-1)}_{n,m;\\underline{\\omega },\\underline{r}}$ in [27] and satisfy the same properties spelled in [27] and following lines.", "Here the labels $a$ denote the collection of labels $(\\underline{\\sigma },\\underline{\\ell ^{\\prime }},\\underline{s},\\underline{\\ell ^{\\prime \\prime }},\\underline{\\sigma }^{\\prime })$ .", "At this point, we have reduced precisely to the fermionic model studied in [27]." 
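Before moving on, let us also record explicitly what the cut-off functions introduced in (REF ) do; this is a short worked computation, which uses only the definition of $\\mathcal {M}$ and $D_\\omega (p)=\\alpha _\\omega p_1+\\beta _\\omega p_2$ , with $\\alpha _\\omega ,\\beta _\\omega $ as in (REF ). Since $\\det \\mathcal {M}=(\\alpha ^1\\beta ^2-\\alpha ^2\\beta ^1)/\\Delta =1$ , one has $\\mathcal {M}^{-1}=\\frac{1}{\\sqrt{\\Delta }}\\begin{pmatrix}-\\alpha ^2 & -\\beta ^2 \\\\ \\alpha ^1 & \\beta ^1\\end{pmatrix},$ so that, for $k\\in \\mathbb {R}^2$ , $\\mathcal {M}^{-1}k=\\frac{1}{\\sqrt{\\Delta }}\\big (-\\text{Im}\\,D_+(k),\\,\\text{Re}\\,D_+(k)\\big )\\qquad \\Rightarrow \\qquad |\\mathcal {M}^{-1}k|=\\frac{|D_+(k)|}{\\sqrt{\\Delta }}.$ Moreover $|D_-(k)|=|D_+(k)|$ for real $k$ , since the symmetry $C_{-\\omega }(-k)=\\overline{C_\\omega (k)}$ implies $\\alpha _-=-\\overline{\\alpha _+}$ and $\\beta _-=-\\overline{\\beta _+}$ . Therefore $\\chi _\\omega (k)=\\chi (|\\mathcal {M}^{-1}(k-p^\\omega )|)$ is simply a smooth cut-off on the size of $|D_\\omega (k-p^\\omega )|$ , i.e., on the distance of $k$ from the singularity of $C_\\omega $ , measured in the natural anisotropic metric of the problem.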
], [ "Infrared integration and conclusion of the proof of Proposition ", "Once the partition function is re-expressed as in (REF ), we are in the position of applying the multiscale analysis of [27]: note in fact that (REF ) has exactly the same form as [27] with its second line written as in [27].", "Therefore, at this point, we can integrate out the massless fluctuation field $\\varphi $ via the same iterative procedure described in [27] and following sections.", "Such a procedure allows us to express the thermodynamic and correlation functions of the theory in terms of an appropriate sequence of effective potentials $\\mathcal {V}^{(h)}$ , $h<0$ .", "The discussion in [27] implies that we can fix $p^\\omega , a_\\omega , b_\\omega $ uniquely as appropriate analytic functions of $\\lambda $ , for $\\lambda $ sufficiently small (so that, in particular, (REF ) is satisfied), in such a way that the whole sequence of the effective potentials is well defined for $\\lambda $ sufficiently small, their kernels are analytic in $\\lambda $ uniformly in the system size, and they admit a limit as $L\\rightarrow \\infty $ .", "In particular, the running coupling constants characterizing the local part of the effective potentials are analytic functions of $\\lambda $ and the associated critical exponents are analytic functions of $\\lambda $ , see [27].", "The existence of the thermodynamic limit of correlation functions follows from [27].", "The proofs of (REF ), (REF ) and (REF ) in Proposition REF follow from the discussion in [27] (they are the analogues of [27]) and this, together with the fact that (REF ) and (REF ) are just restatements of [27] and [27], respectively, concludes the proof of Proposition REF .", "A noticeable, even though mostly aesthetic, difference between the statements of Proposition REF and [27] is in the labeling of the constants $K^{(1)}_{\\omega ,j,\\ell }$ and $K^{(2)}_{\\omega ,j,\\ell }$ in (REF ), as compared to those in [27], which are called there $\\hat{K}_{\\omega ,r}$ and $\\hat{H}_{\\omega ,r}$ , and in the presence of the constants $I_{\\omega ,\\ell ,\\ell ^{\\prime }}$ in (REF )-(REF ), which are absent in their analogues in [27].", "This must be traced back to the different labeling of the sites and edges and, correspondingly, of the external fields $\\phi $ and $A$ , used in this paper, as compared to [27].", "First of all, in this paper the edges and the external fields of type $A$ are labelled $(x,j,\\ell )$ , with $(j,\\ell )$ playing the same role as the index $r$ in [27]; correspondingly, the analogues of the running coupling constants $Y_{h,r,(\\omega _1,\\omega _2)}$ defined in [27] should now be labelled $Y_{h,(j,\\ell ),(\\omega _1,\\omega _2)}$ ; by repeating the discussion in [27] leading to [27], it is apparent that the analogues of the constants $\\hat{K}_{\\omega ,r}, \\hat{H}_{\\omega ,r}$ should now be labelled $(\\omega ,j,\\ell )$ , as anticipated.", "Concerning the constants $I_{\\omega ,\\ell ,\\ell ^{\\prime }}$ , they come from the local part of the effective potentials in the presence of the external fields $\\phi $ .", "After having integrated out the massive degrees of freedom, the infrared integration procedure involves at each step a splitting of the effective potential into a sum of its local part $\\mathcal {L}\\mathcal {V}^{(h)}$ and of its `renormalized', or `irrelevant', part $\\mathcal {R} \\mathcal {V}^{(h)}$ , as discussed in [27].", "In [27], for simplicity, we discussed the infrared integration only in the absence of 
external $\\phi $ fields.", "In their presence, the definition of localization must be adapted accordingly.", "When acting on the $\\phi $ -dependent part of the effective potential, using a notation similar to [27], we let $\\begin{split}\\mathcal {L} \\Big (\\mathcal {V}^{(h)}(\\varphi ,J,\\phi )-\\mathcal {V}^{(h)}(\\varphi ,J,0)\\Big )&=\\sum _{x\\in \\Lambda }\\sum _{\\omega ,\\ell } \\Big (\\varphi ^+_{x,\\omega }\\phi ^-_{x,\\ell }e^{ip^\\omega \\cdot x}\\hat{\\mathcal {W}}^{(h),\\infty }_{1,0,1;\\omega ,(+,\\ell ,-)}(0)\\\\ & +\\varphi ^-_{x,\\omega } \\phi ^+_{x,\\ell } e^{-ip^\\omega \\cdot x}\\hat{\\mathcal {W}}^{(h),\\infty }_{1,0,1;\\omega ,(-,\\ell ,+)}(0)\\Big ).\\end{split}$ Next, in analogy with [27], we let $I^\\pm _{h,\\omega ,\\ell }:=\\frac{1}{\\sqrt{Z_{h-1}}}\\hat{W}^{(h),\\infty }_{1,0,1;\\omega ,\\ell ,(\\pm ,\\mp )}(0),$ where $Z_h$ is a real, scalar, function of $\\lambda $ , called the `wave function renormalization', recursively defined as in [27].", "Eq.", "(REF ) defines the running coupling constant (r.c.c.)", "associated with the external field $\\phi $ .", "Note that such r.c.c.", "naturally inherit the label $\\ell $ from the corresponding label of the external field $\\phi $ .", "A straightforward generalization of the discussion in [27] shows that $I^\\pm _{h,\\omega ,\\ell }$ are analytic in $\\lambda $ and converge as $h\\rightarrow -\\infty $ to finite constants $I^\\pm _{-\\infty ,\\omega ,\\ell }$ , which are, again, analytic functions of $\\lambda $ .", "Therefore, by repeating the discussion in [27] for $\\hat{G}^{(2)}_{\\ell ,\\ell ^{\\prime }}(k+p^\\omega )$ and $\\hat{G}^{(2,1)}_{j,\\ell _0,\\ell ,\\ell ^{\\prime }}(k+p^\\omega ,p)$ , we find that the dominant asymptotic behavior of these correlations as $k,p\\rightarrow 0$ is proportional to $I^+_{-\\infty ,\\omega ,\\ell }I^-_{-\\infty ,\\omega ,\\ell ^{\\prime }}$ , times a function that is independent of $\\ell ,\\ell ^{\\prime }$ .", "Building upon this, we obtain (REF ) and (REF ), with $I_{\\omega ,\\ell ,\\ell ^{\\prime }}$ proportional to $I^+_{-\\infty ,\\omega ,\\ell }I^-_{-\\infty ,\\omega ,\\ell ^{\\prime }}$ .", "Additional details are left to the reader." 
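To see concretely how the labels $\\ell ,\\ell ^{\\prime }$ enter the constants $I_{\\omega ,\\ell ,\\ell ^{\\prime }}$ , it may help to keep in mind the non-interacting case, where no multiscale analysis is needed; the following is only a heuristic consistency check, not part of the proof. At $\\lambda =0$ the Fourier transform of the two-point function reduces to $(M(k))^{-1}$ and, inverting the block decomposition (REF ) of Section REF and using that $U_\\omega ,V_\\omega $ vanish at $p^\\omega _0$ , one finds, as $k\\rightarrow p^\\omega _0$ , $(M(k))^{-1}_{\\ell \\ell ^{\\prime }}=\\frac{(B_\\omega ^{-1})_{\\ell 1}(B_\\omega )_{1\\ell ^{\\prime }}}{{\\bf T}_\\omega (k)}+O(1):$ the dominant singular part factorizes into an $\\ell $ -dependent constant, times an $\\ell ^{\\prime }$ -dependent constant, times a function of $k$ alone. This is the $\\lambda =0$ counterpart of (REF ), with $I_{\\omega ,\\ell ,\\ell ^{\\prime }}$ proportional (up to the ordering conventions for the external indices) to $(B_\\omega ^{-1})_{\\ell 1}(B_\\omega )_{1\\ell ^{\\prime }}$ , and it is consistent with the factorized structure $I_{\\omega ,\\ell ,\\ell ^{\\prime }}\\propto I^+_{-\\infty ,\\omega ,\\ell }I^-_{-\\infty ,\\omega ,\\ell ^{\\prime }}$ derived above.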
], [ "Proof of Theorem ", "In order to prove Theorem REF we proceed as in [24]: using the fact that convergence of the moments of a random variable $\\xi _n$ to those of a Gaussian random variable $\\xi $ implies convergence in law of $\\xi _n$ to $\\xi $ , we reduce the proof of (REF ) to that of the following identities: $ \\begin{split}& \\lim _{\\epsilon \\rightarrow 0}\\mathbb {E}_\\lambda (h^\\epsilon (f);h^{\\epsilon }(f))=\\frac{\\nu (\\lambda )}{2\\pi ^2}\\int \\, dx\\int \\, dy\\, f(x)\\, f(y)\\, \\mathfrak {R}[\\log \\phi _+(x-y)].\\\\& \\lim _{\\epsilon \\rightarrow 0}\\mathbb {E}_\\lambda (\\underbrace{h^\\epsilon (f);\\cdots ;h^{\\epsilon }(f)}_{n\\ \\text{times}})=0, \\qquad n>2\\end{split}$ where the l.h.s.", "of the second line denotes the $n^{th}$ cumulant of $h^\\epsilon (f)$ .", "The first equation is a straightforward corollary of Theorem REF , for additional details see [24].", "For the proof of the second equation we need to show that, for any $2n$ -ple of distinct points $x_1,\\ldots ,x_{2n}$ , $ \\mathbb {E}_\\lambda (h(\\eta _{x_1})-h(\\eta _{x_2});\\cdots ;h(\\eta _{x_{2n-1}})-h(\\eta _{x_{2n}}))=O((\\min _{1\\le i<j\\le 2n}|x_i-x_j|)^{-\\theta }),$ for some constant $\\theta >0$ .", "In fact, by proceeding as in [24], Eq.", "(REF ) readily implies the second line of (REF ).", "In order to prove (REF ), we first expand each difference within the expectation in the left side as in (REF ), thus getting $ \\text{LHS of (\\ref {higher})}=\\sum _{e_1\\in C_{\\eta _{x_{1}}\\rightarrow \\eta _{x_{2}}}}\\cdots \\sum _{e_n\\in C_{\\eta _{x_{2n-1}}\\rightarrow \\eta _{x_{2n}}}}\\sigma _{e_1}\\cdots \\sigma _{e_n}\\mathbb {E}_\\lambda (1_{e_1};\\cdots ;1_{e_n}).$ At a dimensional level, the truncated $n$ -point correlation in the right hand side decays like $d^{-n(1+O(\\lambda ))}$ , where $d$ is the minimal pairwise distance among the edges $e_1,\\ldots ,e_n$ ; therefore, the result of the $n$ -fold summation in (REF ) is potentially unbounded as $\\max _{i<j}|x_i-x_j|\\rightarrow \\infty $ .", "In order to show that this is not the case, and actually the result of the $n$ -fold summation is bounded as in the right hand side of (REF ), we need to exhibit appropriate cancellations.", "Once more, we use the comparison of the dimer lattice model with the infrared reference model, which allows us to re-express the multi-point truncated dimer correlation $\\mathbb {E}_\\lambda (1_{e_1};\\cdots ;1_{e_n})$ as a dominant term, which is the multi-point analogue of (REF ), plus a remainder, which decays faster at large distances.", "More precisely, by using a decomposition analogous to [24] and using the analogue of [24], if $e_i$ has labels $(x_i,j_i,\\ell _i)$ , we rewrite $ \\begin{split}\\mathbb {E}_\\lambda (1_{e_1};\\cdots ;1_{e_n})&=\\sum _{\\begin{array}{c}\\omega _1,\\ldots ,\\omega _n=\\pm \\\\ s_1,\\ldots ,s_n=1,2\\end{array}} \\Big ({\\prod _{r=1}^nK^{(s_r)}_{\\omega _r,j_r,\\ell _r}\\big (\\prod _{r:s_r=2}e^{2ip^{\\omega _r}\\cdot x_r}}\\big )\\Big ) S^{(s_1,\\ldots ,s_n)}_{R;\\omega _1,\\ldots ,\\omega _n}(x_1,\\ldots ,x_n)\\\\&+\\text{Err}(e_1,\\ldots ,e_n),\\end{split}$ where $S^{(s_1,\\ldots ,s_n)}_{R;\\omega _1,\\ldots ,\\omega _n}$ are the multi-point density-mass correlations of the reference model (defined as in [24] or as the multi-point analogue of [27]).", "Moreover, if $D_{\\underline{x}}$ is the diameter of $\\underline{x}=(x_1,\\ldots ,x_n)$ and if the minimal separation among the elements of $\\underline{x}$ is larger than $c_0 D_{\\underline{x}}$ for 
some positive constant $c_0$ , then, for $\\theta $ equal to, say, $1/2$ (in general, $\\theta $ can be any positive constant smaller than $1-O(\\lambda )$ ) the remainder term is bounded as $|\\text{Err}(e_1,\\ldots ,e_n)|\\le C_{n,\\theta }(c_0)D_{\\underline{x}}^{-n-\\theta }$ .", "The latter bound is the analogue of [24].", "Moreover (see the discussion after [24] for references about the properties of $S^{(s_1,\\dots ,s_n)}_{R;\\omega _1,\\dots ,\\omega _n}$ discussed in this paragraph), the functions $S^{(s_1,\\ldots ,s_n)}_{R;\\omega _1,\\ldots ,\\omega _n}$ are non-zero only if the quasi-particle indices satisfy the constraint $\\sum _{i:s_i=2}\\omega _i=0$ (this is the multi-point generalization of [24]).", "Finally, and most importantly, if $s_1=\\cdots =s_n=1$ , then $S^{(1,\\ldots ,1)}_{R;\\omega _1,\\ldots ,\\omega _n}(x_1,\\ldots , x_n)\\equiv 0, \\qquad n>2,$ which is the analogue of [24] and is an instance of `bosonization' for the reference model: in fact, (REF ) can be interpreted by saying that the $n$ -point (with $n>2$ ) truncated density correlations of the reference model (recall Footnote REF for the definition of `density' and `mass' observables) are all identically equal to zero.", "In conclusion, in the right hand side of (REF ) we can replace $\\mathbb {E}_\\lambda (1_{e_1};\\cdots ;1_{e_n})$ by the right hand side of (REF ), where the term with $s_1=\\cdots =s_n=1$ vanishes.", "Therefore, all the terms we are left with either involve oscillating factors $\\prod _{r:s_r=2}e^{2ip^{\\omega _r}\\cdot x_r}$ or the remainder term $\\text{Err}(e_1,\\ldots ,e_n)$ .", "In both cases, exactly as in the case $n=2$ , the contribution of these terms to the $n$ -fold summation over $e_1,\\dots ,e_n$ in (REF ) admits a better bound than the naive dimensional estimate, and we are led to the bound in (REF ).", "For a detailed discussion of how the estimate of the summation is performed, we refer the reader to [24]."
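, "As a toy illustration of the mechanism behind the vanishing of $S^{(1,\\ldots ,1)}_{R;\\omega _1,\\ldots ,\\omega _n}$ for $n>2$ (a sketch, not the actual reference-model computation): for a jointly Gaussian family the logarithm of the moment generating function is quadratic, so all truncated (connected) correlations of order larger than two vanish identically; this is precisely what the bosonization identity expresses for the density observables of the reference model.", "
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
s11, s12, s13, s22, s23, s33 = sp.symbols('s11 s12 s13 s22 s23 s33')

t = sp.Matrix([t1, t2, t3])
Sigma = sp.Matrix([[s11, s12, s13], [s12, s22, s23], [s13, s23, s33]])

# log of the moment generating function of a centered Gaussian vector:
log_mgf = sp.expand((t.T * Sigma * t)[0] / 2)

# a truncated n-point function is an n-th mixed derivative of log_mgf at t = 0;
# for n = 3 (and any n > 2) it vanishes identically, since log_mgf is quadratic:
print(sp.diff(log_mgf, t1, t2, t3))   # -> 0
"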
], [ "An explicit example of non-planar dimer model", "Here, we work out the Grassmann potential $V$ for the easiest but non-trivial example of non-planar dimer model.", "Choose $m=4$ for the cell size and let the edge weights be invariant by translations by multiples of $m$ , so that $V_x$ in Proposition REF does not depend on $x$ .", "In this example we add just one non planar edge per cell, denoted by $e_\\lambda $ , connecting the leftmost black site in the second row to the rightmost white in the same row; it crosses two vertical edges, denoted by $e_1,e_2$ , see Figure REF .", "Figure: A 4×44\\times 4 cell with the edges e λ ,e 1 ,e 2 e_\\lambda ,e_1,e_2 colored in red, blue, green, respectively.Let $\\psi (e_\\lambda ),\\psi (e_1),\\psi (e_2)$ be the Grassmann monomials defined in (REF ) (we drop the index $\\theta $ ).", "From the definition (REF ), one can check that the potential satisfies $V(\\psi )=F(\\psi )$ and that it is given by $V(\\psi )=\\varepsilon _\\emptyset ^{\\lbrace e_\\lambda \\rbrace }\\psi (e_\\lambda )+\\varepsilon ^{\\lbrace e_\\lambda \\rbrace }_{\\lbrace e_1\\rbrace } \\psi (e_\\lambda )\\psi (e_1)+\\varepsilon ^{\\lbrace e_\\lambda \\rbrace }_{\\lbrace e_2\\rbrace } \\psi (e_\\lambda )\\psi (e_2)+\\varepsilon ^{\\lbrace e_\\lambda \\rbrace }_{\\lbrace e_1,e_2\\rbrace } \\psi (e_\\lambda ) \\psi (e_1) \\psi (e_2).$ The computation of the signs $\\varepsilon _S^{J}$ can be easily done starting from (REF ) and with the help of Fig.", "REF ; details are left to the reader.", "The final result is that $\\varepsilon _\\emptyset ^{\\lbrace e_\\lambda \\rbrace }=\\varepsilon ^{\\lbrace e_\\lambda \\rbrace }_{\\lbrace e_1,e_2\\rbrace }=1, \\varepsilon ^{\\lbrace e_\\lambda \\rbrace }_{\\lbrace e_1\\rbrace }=\\varepsilon ^{\\lbrace e_\\lambda \\rbrace }_{\\lbrace e_2\\rbrace }=-1.$ Figure: The set of edges E J,S E_{J,S} with J={e λ },S=∅J=\\lbrace e_\\lambda \\rbrace ,S=\\emptyset (drawing (a)), J={e λ },S={e 1 }J={\\lbrace e_\\lambda \\rbrace ,S=\\lbrace e_1\\rbrace } (drawing (b)), J={e λ },S={e 2 }J={\\lbrace e_\\lambda \\rbrace ,S=\\lbrace e_2\\rbrace } (drawing (c)), J={e λ },S={e 1 ,e 2 }J={\\lbrace e_\\lambda \\rbrace ,S=\\lbrace e_1,e_2\\rbrace } (drawing (d)), colored in orange.", "Here the orientation of black edges coincides with that on G L 0 G^0_L (see Fig.", "), while that of orange edges is the one described in Lemma and in the caption of Figure ." ], [ "Acknowledgments", "We wish to thank Benoît Laslier for contributing with ideas and discussions in the early stages of this project.", "A. G. gratefully acknowledges financial support of the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ERC CoG UniCoSM, grant agreement n. 724939), and of MIUR, PRIN 2017 project MaQuMA, PRIN201719VMAST01.", "F. T. gratefully acknowledges financial support of the Austria Science Fund (FWF), Project Number P 35428-N." ] ]
2207.10428
[ [ "Nonrelativistic Transport Theory from Vorticity Dependent Quantum\n Kinetic Equation" ], [ "Abstract We study the three-dimensional transport theory of massive spin-1/2 fermions resulting from the vorticity dependent quantum kinetic equation.", "This quantum kinetic equation has been introduced to take account of noninertial properties of rotating coordinate frames.", "We show that it is the appropriate relativistic kinetic equation which provides the vorticity dependent semiclassical transport equations of the three-dimensional Wigner function components.", "We establish the semiclassical kinetic equations of a linearly independent set of components.", "By means of them, kinetic equations of the chiral scalar distribution functions are derived.", "They furnish the 3D kinetic theory which permits us to study the vector and axial vector current densities by focusing on the mass corrections to the chiral vortical and separation effects." ], [ "Introduction", "Transport of Dirac fermions in the presence of external electromagnetic fields can be studied by means of the covariant Wigner function which obeys the quantum kinetic equation (QKE) [1], [2].", "Wigner function can be decomposed into some covariant fields whose equations of motion follow from the QKE.", "Founded on these field equations, one derives relativistic transport theories of Dirac particles.", "A brief overview of the covariant Wigner function approach was given in [3] and recently it has been reviewed in details in [4].", "Relativistic formalism has the advantage of being manifestly Lorenz invariant.", "Nevertheless, nonrelativistic transport equations are necessary for being able to start with initial distribution functions and construct solutions of transport equations [5], [6], [7].", "There exist some different methods of formulating nonrelativistic kinetic theories of Dirac particles.", "One of these methods is to construct the four-dimensional (4D) transport equations of a set of covariant fields and then integrate them over the zeroth-component of four-momentum, so that the three-dimensional (3D) transport equations which are correlative to the 4D ones are extracted.", "Another method is to integrate all of the quantum kinetic equations of the covariant fields over the zeroth-component of four-momentum at the beginning and then derive nonrelativistic transport theory from these 3D quantum kinetic equations [8], [7].", "This is also called the equal-time formalism.", "There also exists a strictly 3D approach of acquiring transport equation of Dirac particles which does not refer to the Wigner function [9].", "Quarks of the quark-gluon plasma formed in heavy-ion collisions are treated as massless [10], [11].", "Thus, chiral kinetic theory is useful to inspect their dynamical features [12], [13], [14], [15], [16], [17], [18], [19], [20].", "The QKE of the relativistic Wigner function generates the anomalous magnetic effects as well as the vorticity effects correctly [20].", "It is worth noting that the vorticity of fluid matches the angular velocity of the fluid in the comoving frame.", "The QKE possesses an explicit dependence on the electromagnetic fields but not on the vorticity of fluid.", "When the Wigner function is expressed in the Clifford algebra basis, the QKE gives a set of equations for the chiral vector fields.", "In solving some of these equations one introduces the frame four-vector [21], [22], [23].", "It can be identified with the comoving frame velocity which also appears in equilibrium distribution 
functions.", "Derivatives of the comoving frame four-velocity generate terms depending on the vorticity.", "These are the sources of vorticity dependence in the relativistic QKE formulations of the massless fermions.", "One cannot generate noninertial forces like the Coriolis force within this formalism.", "However, in [24], vortical effects were derived by using the similarity between the Lorentz and the Coriolis forces.", "This formulation has been shown to result in a rotating coordinate frame from the first principles [25].", "To build in this similarity, a modification of QKE by means of enthalpy current was introduced in [26].", "The discrepancy in treating magnetic and vortical effects reflects itself drastically, especially in 3D CKT when both electromagnetic fields and vorticity are taken into account.", "The vorticity dependent quantum kinetic equation (VQKE) was shown to yield a 3D CKT which does not depend on the spatial coordinates [26].", "It is consistent with the chiral anomaly and generates the chiral magnetic and vortical effects and the Coriolis force.", "The underlying Lagrangian formalism which yields VQKE was presented in [27].", "Constituent quarks of the quark-gluon plasma created in heavy-ion collisions are approximately massless.", "Thus, to get a better understanding of their dynamical properties, one needs to uncover the mass corrections to chiral theories.", "Covariant kinetic theories of massive spin-1/2 particles have been studied in terms of QKE within two different approaches in [28], [29].", "In principle, nonrelativistic transport equations can be provided by integrating the 4D kinetic equations.", "But, for massive fermions, extracting the 3D kinetic equations which are correlative to the 4D kinetic equations can only be done under some simplifying approximations as they have been shown within the VQKE approach in [30], [27].", "We have already mentioned that there also exists another nonrelativistic approach which is the so-called equal-time formalism.", "By integrating the equations of the relativistic Wigner function components over the zeroth-component of momentum, one sets up the equations of the components of the 3D Wigner function.", "Then, one employs these 3D equations to derive nonrelativistic kinetic equations of Dirac particles in the presence of the external electromagnetic fields.", "This has been studied in [31] by developing the original formulation of [8].", "In contrary to the 4D Wigner function approaches, this 3D formalism does not generate vortical effects.", "Because, without solving some of the equations of the 4D Wigner function components one cannot generate vorticity dependent terms.", "Therefore, to obtain a similar 3D approach by taking account of vorticity of the fluid, the QKE of the Wigner function should possess an explicit dependence on the fluid vorticity.", "The VQKE is the unique covariant formalism which has this property An attempt to study nonrelativistic kinetic theory of massive fermions in the presence of rotational field within the 3D Wigner function method is presented in [40].", "However, they start with a wave equation which is not covariant.", "For us it obscure how one can justify use of the covariant Wigner function when it is constructed by a wave function which obeys a non-covariant equation of motion..", "In this work, we study the 3D formulation of the VQKE by extending the method of [8], [31], by taking care of the vortical effects only.", "We briefly review the VQKE in the absence of electromagnetic 
fields and present the equations of the components of covariant Wigner function in the next section.", "Their integration over the zeroth-component of momentum in a frame adequate to study nonrelativistic dynamics, lead to the 3D constraint and transport equations as reported in Sec.", ".", "These equations which the components of the 3D Wigner function obey are studied and their semiclassical solutions are acquired in terms of a set of independent functions.", "In Sec.", ", kinetic equations of this set of fields are established up to the first order in the Planck constant, $\\hbar .$ Mass corrections to the chiral vortical and separation effects are studied in Sec.", ".", "Conclusions and discussions of possible future directions are given in Sec.", "." ], [ "Vorticity Dependent Quantum Kinetic Equation", "The quantum kinetic equation for a fluid in the comoving frame with the four-velocity $u_\\mu ;$ $u_\\mu u^\\mu =1,$ whose linear acceleration vanishes, $ u_\\nu \\partial ^\\nu u_\\mu =0,$ is $\\left[\\gamma _\\mu \\left(\\pi ^\\mu + \\frac{i\\hbar }{2} {\\mathcal {D}}^\\mu \\right)-m \\right] W(x,p) = 0.$ Here, ${\\mathcal {D}}^{\\mu } &\\equiv & \\partial ^{\\mu }-j_{0}(\\Delta ) w^{\\mu \\nu } \\partial _{p \\nu } , \\\\\\pi ^{\\mu } &\\equiv &p^{\\mu }-\\frac{\\hbar }{2} j_{1}(\\Delta )w^{\\mu \\nu } \\partial _{p \\nu } ,$ where $\\partial ^\\mu \\equiv \\partial / \\partial x_\\mu ,$ $\\partial _p^\\mu \\equiv \\partial / \\partial p_\\mu ,$ and $j_{0},j_{1}$ are spherical Bessel functions in $\\Delta \\equiv \\frac{\\hbar }{2} \\partial _{p} \\cdot \\partial _{x}.$ The 4D space-time derivative, $\\partial _{\\mu },$ contained in $\\Delta $ acts on $w^{\\mu \\nu } ,$ but not on the Wigner function, $W(x,p).$ On the contrary, $ \\partial _{p \\nu }$ acts on the Wigner function, but not on $ w^{\\mu \\nu } .$ The action which generates (REF ) has been presented in [27].", "There, it was shown that when the equations of motion of the fields presenting fluid are satisfied, $w^{\\mu \\nu } $ is given with an arbitrary constant $\\kappa $ as $w_{\\mu \\nu } =(\\partial _\\mu h)u_\\nu - (\\partial _\\nu h)u_\\mu + \\kappa h\\Omega _{\\mu \\nu },$ where $h=u\\cdot p$ and $\\Omega _{\\mu \\nu } =\\frac{1}{2}( \\partial _\\mu u_\\nu -\\partial _\\nu u_\\mu ).$ The fluid four-vorticity is defined as $\\omega _\\mu =(1/2)\\epsilon _{\\mu \\nu \\alpha \\beta }u^\\nu \\Omega ^{\\alpha \\beta } .$ The Wigner function can be written through the 16 generators of the Clifford algebra as $W=\\frac{1}{4}\\left(\\mathcal {F}+i \\gamma ^{5} \\mathcal {P}+\\gamma ^{\\mu } \\mathcal {V}_{\\mu }+\\gamma ^{5} \\gamma ^{\\mu } \\mathcal {A}_{\\mu }+\\frac{1}{2} \\sigma ^{\\mu \\nu } \\mathcal {S}_{\\mu \\nu }\\right),$ where $\\mathcal {C}_a \\equiv \\left\\lbrace \\mathcal {F},\\mathcal {P},\\mathcal {V}_{\\mu },\\mathcal {A}_{\\mu },\\mathcal {S}_{\\mu \\nu }\\right\\rbrace ,$ respectively, are the scalar, pseudoscalar, vector, axial-vector, and antisymmetric tensor components of the 4D Wigner function.", "These covariant fields can be expanded in powers of Planck constant: $\\mathcal {C}_a =\\sum _n\\hbar ^n\\mathcal {C}^{(n)}_a.$ We deal with the semiclassical approximation where only the zeroth- and first-order fields in $\\hbar $ are considered.", "Thus, to derive the equations which they satisfy, instead of (REF ), (), we only need to deal with $D^{\\mu } &\\equiv & \\partial _{x}^{\\mu }-w^{\\mu \\nu } \\partial _{p \\nu } , $ and $p^{\\mu } .", "$ By plugging the decomposed Wigner function, (REF 
), into the VQKE, (REF ), one derives the equations satisfied by the fields $\\mathcal {C}_a , $ whose real parts are $p\\cdot \\mathcal {V}-m \\mathcal {F} =0, \\\\{p_{\\mu } \\mathcal {F}-\\frac{\\hbar }{2} D^{\\nu } \\mathcal {S}_{\\nu \\mu }-m \\mathcal {V}_{\\mu }=0}, \\\\{-\\frac{\\hbar }{2} D_{\\mu } \\mathcal {P}+\\frac{1}{2} \\epsilon _{\\mu \\nu \\alpha \\beta } p^{\\nu } S^{\\alpha \\beta }+m \\mathcal {A}_{\\mu }=0}, \\\\{\\frac{\\hbar }{2} D_{[\\mu } \\mathcal {V}_{\\nu ]}-\\epsilon _{\\mu \\nu \\alpha \\beta } p^{\\alpha } \\mathcal {A}^{\\beta }-m \\mathcal {S}_{\\mu \\nu }=0}, \\\\\\frac{\\hbar }{2} D \\cdot \\mathcal {A}+m \\mathcal {P} =0,$ and the imaginary parts are ${\\hbar D \\cdot \\mathcal {V}=0}, \\\\{p \\cdot \\mathcal {A}=0}, \\\\{\\frac{\\hbar }{2} D_{\\mu } \\mathcal {F}+p^{\\nu } \\mathcal {S}_{\\nu \\mu }=0}, \\\\{p_{\\mu } \\mathcal {P}+\\frac{\\hbar }{4} \\epsilon _{\\mu \\nu \\alpha \\beta } D^{\\nu } \\mathcal {S}^{\\alpha \\beta }=0}, \\\\{p_{[\\mu } \\mathcal {V}_{\\nu ]}+\\frac{\\hbar }{2} \\epsilon _{\\mu \\nu \\alpha \\beta } D^{\\alpha } \\mathcal {A}^{\\beta }=0}.$ It can be observed that not all of the fields ${\\cal C}_a$ are relevant to formulate a 4D kinetic theory.", "Depending on the choice of independent set of fields, one establishes different relativistic kinetic theories [29], [28].", "In the subsequent sections we will refer to the 4D formulation given in [30], which was acquired following the approach of [28]." ], [ "3D semiclassical transport and constraint equations ", "The equal-time transport theory of Dirac particles in the presence of external electromagnetic fields which has been proposed in [5] was incomplete.", "In [8], it was shown that to have a complete nonrelativistic transport theory of spinor electrodynamics, one should start with the covariant QKE of [1], [2].", "We mainly adopt the approach of [8].", "However, there is a subtle difference: Electromagnetic field strength is independent of $p_0$ in contrary to $w_{\\mu \\nu } $ which explicitly depends on it.", "We will show how to surmount this difficulty.", "Nonrelativistic (3D) Wigner function is defined as the integral of 4D Wigner function over the zeroth-component of momentum: $W_3(x,\\mathbf {p})=\\int dp_0 W(x,p)\\gamma _0.$ Let us define the 3D components in the Clifford algebra basis as $W_3(x,\\mathbf {p})=\\frac{1}{4}[f_0+\\gamma _5 f_1-i\\gamma _0 \\gamma _5 f_2 +\\gamma _0 f_3 +\\gamma _5\\gamma _0 \\mathbf {\\gamma }\\cdot \\mathbf {g}_0+\\gamma _0\\mathbf {\\gamma }\\cdot \\mathbf {g}_1-i\\mathbf {\\gamma }\\cdot \\mathbf {g}_2-\\gamma _5\\mathbf {\\gamma }\\cdot \\mathbf {g}_3].$ The 4D and 3D components are related as $\\begin{array}{lclclcl}f_0(x,\\mathbf {p}) &= & \\int dp_0 {\\cal V}_0(x,p), & &f_1(x,\\mathbf {p}) &= & \\int dp_0 {\\cal A}_0(x,p), \\\\&&&&&\\\\f_2(x,\\mathbf {p}) &= & \\int dp_0 {\\cal P}(x,p), & & f_3(x,\\mathbf {p}) &= & \\int dp_0 {\\cal F}(x,p), \\\\&&&&&\\\\\\mathbf {g}_0(x,\\mathbf {p}) &= & \\int dp_0 \\mathbf {{\\cal A}}(x,p), & & \\mathbf {g}_1(x,\\mathbf {p}) &= & \\int dp_0 \\mathbf {{\\cal V}}(x,p),\\\\&&&&&\\\\\\mathrm {g}_2^i(x,\\mathbf {p}) &= & -\\int dp_0 {\\cal S}^{0i}(x,p), & & \\mathrm {g}_3^i(x,\\mathbf {p}) &= &\\frac{1}{2}\\epsilon ^{ijk}\\int dp_0 {\\cal S}_{jk}(x,p).\\end{array}\\nonumber $ To express $w_{\\mu \\nu }$ in terms of the 3D vorticity, $\\mathbf {\\omega },$ which is uniform, we choose the frame $\\begin{aligned}u^\\mu =(1,\\mathbf {0}), & &\\omega ^\\mu =(0,\\mathbf {\\omega }).\\end{aligned}$ 
Thus, (REF ) yields $w^{0i} = -\\epsilon ^{ijk}p_j \\omega _k, \\ w^{ij} = \\kappa p_0 \\epsilon ^{ijk}\\omega _k.$ Contrary to electromagnetic field strength, $w_{ij}$ is $p_0$ dependent.", "In this frame, the components of $D_\\mu =(D_t,\\mathbf {D}),$ (REF ), are $D_t &=& \\partial _t + (\\mathbf {p}\\times \\mathbf {\\omega })\\cdot {\\mathbf {\\nabla }}_p, \\\\{\\mathbf {D}} &=& \\mathbf {\\nabla }+\\kappa p_0 \\mathbf {\\omega }\\times \\mathbf {\\nabla }_p .", "$ While obtaining the 3D formalism by integrating the 4D transport equations over $p_0,$ the dependence of $\\mathbf {D}$ on $p_0$ should be handled carefully.", "To establish the 3D formalism, we integrate the relativistic equations (REF )-() over $p_0.$ They will be separated into two groups [8] by inspecting their dependence on the time derivative $\\partial _t.$ The equations containing $\\partial _t$ yield the transport equations: $& \\hbar \\left(D_t f_0 +\\int dp_0 {\\mathbf {D}}\\cdot \\mathbf {V}\\right) = 0,\\\\& \\hbar \\left(D_t f_1 + \\int dp_0 {\\mathbf {D}}\\cdot \\mathbf {A}\\right) +2mf_2 = 0,\\\\& \\hbar D_t f_2+2\\mathbf {p} \\cdot \\mathbf {g}_3-2mf_1=0,\\\\& \\hbar D_t f_3-2\\mathbf {p} \\cdot \\mathbf {g}_2 = 0,\\\\& \\hbar \\left(D_t \\mathbf {g}_0 + \\int dp_0 {\\mathbf {D}}A_0\\right)-2 \\mathbf {p}\\times \\mathbf {g}_1=0,\\\\& \\hbar \\left(D_t \\mathbf {g}_1 + \\int dp_0 {\\mathbf {D}}V_0\\right)-2\\mathbf {p}\\times \\mathbf {g}_0 + 2m\\mathbf {g}_2 =0,\\\\& \\hbar \\left( D_t {\\mathrm {g}}^i_2 - \\int dp_0 D_j S^{ji} \\right) + 2p^i f_3-2m{\\mathrm {g}}^i_1 =0,\\\\& \\hbar \\left( D_t \\mathrm {g}_3^i + \\int dp_0 \\varepsilon ^{ijk} D_j S_{k0} \\right) - 2p^i f_2 = 0.$ The others are the constraint equations: $& \\int dp_0 p_0 V_0 -\\mathbf {p}\\cdot \\mathbf {g}_1 - mf_3=0,\\\\& \\int dp_0 p_0 A_0 -\\ \\mathbf {p}\\cdot \\mathbf {g}_0 = 0,\\\\& \\int dp_0 p_0 P - \\frac{1}{4}\\hbar \\int dp_0 \\varepsilon _{ijk}D^i S^{jk} = 0,\\\\& \\int dp_0 p_0 F + \\frac{1}{2}\\hbar \\int dp_0 D^i S_{0i}-mf_0=0,\\\\& \\int dp_0 p_0 \\mathbf {A}-\\mathbf {p}f_1-\\frac{\\hbar }{2}\\int dp_0 {\\mathbf {D}}\\times \\mathbf {V}-m\\mathbf {g}_3 = 0,\\\\& \\int dp_0 p_0 \\mathbf {V}-\\mathbf {p}f_0-\\frac{\\hbar }{2}\\int dp_0 {\\mathbf {D}}\\times \\mathbf {A}=0,\\\\& \\int dp_0 p^0 S_{0i} -\\mathbf {p} \\times \\mathbf {g}_3 + \\frac{\\hbar }{2}\\int dp_0 D_i F=0,\\\\& \\frac{1}{2}\\epsilon _{ijk}\\int dp_0p^0 S^{jk}-(\\mathbf {p} \\times \\mathbf {g}_2)_i-\\frac{\\hbar }{2}\\int dp_0 D_i P+m\\mathrm {g}_{0i}=0.$ These equations show that not all of the 3D fields are independent.", "In fact, we can express them in terms of $f_0$ and $\\mathbf {g}_0$ as it will be discussed below.", "In the classical limit, (REF ) simplifies and yields the classical on-shell condition $(p^2-m^2) W(x,p) = 0, $ whose solutions are $p_0=\\pm E_p,$ where $E_p=\\sqrt{\\mathbf {p}^2+m^2}.$ Therefore, in the classical limit, the fields can be written as the sum of positive and negative energy solutions: $\\mathcal {C} _a^{(0)}(x,p)= \\mathcal {C} _a^{(0)+}(x,p)\\delta (p_0-E_p)+ \\mathcal {C} _a^{(0)-}(x,p)\\delta (p_0+E_p).$ Thus, at the leading order in $\\hbar ,$ the $p_0$ integrals in (REF )-() can easily be performed and all of the 3D fields can be expressed in terms of $f_0$ and $\\mathbf {g}_0$ in the classical limit as [8] $f_1^{(0)\\pm }&=&\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {g}_0^ {(0)\\pm },\\\\f_2^{(0)\\pm }&= &0,\\\\f_3^{(0)\\pm }&= &\\pm \\frac{m}{E_p}f_0^{(0)\\pm },\\\\\\mathbf {g}_{1}^{(0)\\pm }&= &\\pm 
\\frac{\\mathbf {p}}{E_p}f_0^{(0)\\pm },\\\\\\mathbf {g}_2^{(0)\\pm }&=& \\frac{\\mathbf {p}}{m}\\times \\mathbf {g}_0^{(0)\\pm },\\\\\\mathbf {g}_3^{(0)\\pm }&= &\\pm \\frac{E_p^2 \\mathbf {g}_0^{(0)\\pm }-(\\mathbf {p}\\cdot \\mathbf {g}_0^{(0)\\pm })\\mathbf {p}}{mE_p}.$ To solve the transport and constraint equations to determine the 3D fields in terms of $f_0$ and $\\mathbf {g}_0$ at the $\\hbar $ -order, one can attempt to add a $\\hbar $ -order term to the on-shell condition (REF ).", "However, by inspecting the relativistic semiclassical solutions of (REF )-() given in [30], it can be observed that each field ${\\cal C}_a $ satisfies a different mass shell condition at the $\\hbar $ -order.", "In [31], it was suggested to write $\\mathcal {C} ^\\pm _a(x,p) = \\mathcal {C} _a^{(0)\\pm }(x,p)\\delta (p_0\\mp E_p)+\\hbar {\\cal A}_a^\\pm (p).$ and define the 3D shell shifts as $\\Delta E_{a}^\\pm (x,\\mathbf {p})= \\int dp_0 p_0 {\\cal A}_a^\\pm (p).$ The operator $\\mathbf {D}$ depends on $p_0,$ so that the related energy averages are expressed as $\\int dp_0 {\\mathbf {D}}{\\cal C}_a(x.p) &=& \\int dp_0 \\left(\\mathbf {\\nabla }+\\kappa p_0 \\mathbf {\\omega }\\times \\mathbf {\\nabla }_p \\right)\\left({\\cal C}_a^\\pm (x,p)\\delta (p_0\\mp E_p)+\\hbar A_a^\\pm (p) \\right) \\nonumber \\\\& =& {\\mathbf {D}}^{(0)}_\\pm {\\cal C}_a^\\pm (x,\\mathbf {p} )+ \\hbar \\kappa (\\mathbf {\\omega }\\times \\mathbf {\\nabla }_p) \\Delta {E}_{a}^\\pm , $ where ${\\mathbf {D}}^{(0)}_\\pm &=&\\mathbf {\\nabla } \\pm \\kappa E_p \\mathbf {\\omega }\\times \\mathbf {\\nabla }_p \\pm \\frac{\\kappa }{E_p}\\mathbf {\\omega }\\times \\mathbf {p} \\nonumber \\\\&\\equiv & \\mathbf {\\partial }_\\pm ^{(0)}\\pm \\frac{\\kappa }{E_p}\\mathbf {\\omega }\\times \\mathbf {p}.", "$ Let us now compare this formulation with the equal-time QKE approach of [8], [31].", "There, the energy averages $\\int dp_0 D_\\mu ^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) }{\\cal C}_a(x.p)= (D_t ^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}})},{\\mathbf {D}}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}})} ){\\cal C}_a(x,\\mathbf {p} )$ are given in terms of the electromagnetic fields $\\mathbf {E},$ $\\mathbf {B}$ as $D_t ^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}})}=\\partial _t +\\mathbf {E} \\cdot \\mathbf {\\nabla }_p ,$ and $\\mathbf {D}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) }=\\mathbf {\\nabla }+\\mathbf {B} \\times \\mathbf {\\nabla }_p .$ We set the electric charge $Q=1.$ Observe that one obtains $D_t$ given in (REF ) from $D_t ^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}})}$ by the substitution ${\\mathbf {E}}\\rightarrow \\mathbf {p}\\times \\mathbf {\\omega }.$ However, $\\mathbf {D}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) }$ is quite different from (REF ).", "First of all, although $\\mathbf {D}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) }$ is independent of $\\hbar ,$ in (REF ) there exists a term which is at the order of $\\hbar .$ Also the $\\hbar $ independent terms are not similar.", "Only $\\mathbf {D}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) } $ corresponds to $ \\mathbf {\\partial }^{(0)}$ by ${\\mathbf {B}}\\rightarrow \\kappa E_p \\mathbf {\\omega }.$ Thus, one cannot generate our results from the ones given in [31], by substituting $ \\mathbf {E}, {\\mathbf {B}} $ with $ \\mathbf {p}\\times \\mathbf {\\omega }, \\kappa E_p \\mathbf {\\omega }.$ The transport equations at the first order 
in $\\hbar $ can be read from (REF )-() as $& D_t f_0^{(0)\\pm }+{\\mathbf {D}}_\\pm ^{(0)}\\cdot \\mathbf {g}_1^{(0)\\pm }=0,{}\\\\& D_t f_1^{(0)\\pm }+{\\mathbf {D}}_\\pm ^{(0)}\\cdot \\mathbf {g}_0^{(0)\\pm }+2mf_2^{(1)\\pm }=0,{}\\\\& D_tf_2^{(0)\\pm }+\\mathbf {p}\\cdot \\mathbf {g}_3^{(1)\\pm }-2mf_1^{(1)\\pm }=0,{}\\\\& D_t f_3^{(0)\\pm }-2\\mathbf {p}\\cdot \\mathbf {g}_2^{(1)\\pm }=0,{}\\\\& D_t \\mathbf {g}_0^{(0)\\pm }+{\\mathbf {D}}_\\pm ^{(0)} f_1^{(0)\\pm }-2\\mathbf {p}\\times \\mathbf {g}_1^{(1)\\pm }=0,{}\\\\& D_t \\mathbf {g}_1^{(0)\\pm }+{\\mathbf {D}}_\\pm ^{(0)}f_0^{(0)\\pm }-2\\mathbf {p}\\times \\mathbf {g}_0^{(1)\\pm }+2m\\mathbf {g}_2^{(1)\\pm }=0,{}\\\\& D_t \\mathbf {g}_2^{(0)\\pm }+{\\mathbf {D}}_\\pm ^{(0)}\\times \\mathbf {g}_3^{(0)\\pm }+2\\mathbf {p}f_3^{(1)\\pm }-2m\\mathbf {g}_1^{(1)\\pm }=0,{}\\\\& D_t \\mathbf {g}_3^{(0)\\pm }-{\\mathbf {D}}_\\pm ^{(0)}\\times \\mathbf {g}_2^{(0)\\pm }-2\\mathbf {p}f_2^{(1)\\pm }=0.", "{}$ By plugging (REF ) into (REF )-() and employing the definition (REF ), the constraint equations at the first order in $\\hbar $ are acquired as $& \\pm E_p f_0^{(1)\\pm }+ \\Delta E_{f_0}^\\pm -\\mathbf {p}\\cdot \\mathbf {g}_{1}^{(1)\\pm } -mf_3^{(1)\\pm }=0,\\\\& \\pm E_p f_1^{(1)\\pm }+ \\Delta E_{f_1}^\\pm -\\mathbf {p}\\cdot \\mathbf {g}_{0}^{(1)\\pm }=0,\\\\& \\pm E_p f_2^{(1)\\pm }+ \\Delta E_{f_2}^\\pm +\\frac{1}{2}{\\mathbf {D}}_{\\pm }^{(0)}\\cdot \\mathbf {g}_{3}^{(0)\\pm }=0,\\\\& \\pm E_p f_3^{(1)\\pm }+ \\Delta E_{f_3}^\\pm -\\frac{1}{2}{\\mathbf {D}}_\\pm ^{(0)} \\cdot \\mathbf {g}_{2}^{(0)\\pm }-mf_0^{(1)\\pm }=0,\\\\& \\pm E_p \\mathbf {g}_0^{(1)\\pm }+\\Delta {\\mathbf {E}}_{\\mathbf {g}_0}^\\pm -\\mathbf {p} f_1^{(1)\\pm }-\\frac{1}{2} {\\mathbf {D}}_{\\pm }^{(0)} \\times \\mathbf {g}_{1}^{(0)\\pm }-m \\mathbf {g}_3^{(1)\\pm }=0,\\\\& \\pm E_p \\mathbf {g}_1^{(1)\\pm }+\\Delta {\\mathbf {E}}_{\\mathbf {g}_1}^\\pm -\\mathbf {p}f_0^{(1)\\pm }-\\frac{1}{2} {\\mathbf {D}}_{\\pm }^{(0)} \\times \\mathbf {g}_{0}^{(0)\\pm }=0,\\\\& \\pm E_p \\mathbf {g}_2^{(1)\\pm }+\\Delta {\\mathbf {E}}_{\\mathbf {g}_2}^\\pm -\\mathbf {p}\\times \\mathbf {g}_3^{(1)\\pm }+\\frac{1}{2}{\\mathbf {D}}_\\pm ^{(0)}f_3^{(0)\\pm }=0,\\\\& \\pm E_p \\mathbf {g}_3^{(1)\\pm }+\\Delta {\\mathbf {E}}_{\\mathbf {g}_3}^\\pm +\\mathbf {p}\\times \\mathbf {g}_2^{(1)\\pm }-m\\mathbf {g}_0^{(1)\\pm }=0.$ Once we are acquainted with $\\Delta E_{a}^\\pm (x,\\mathbf {p}),$ the constraint equations (REF )-() can be solved to express the first order components of the fields in terms of $f_0^{(0)\\pm }$ and $\\mathbf {g}_0^{(0)\\pm }.$ Some of the shell shifts can be acquired by making use the covariant formalism as we have presented in Appendix .", "The remaining shell shifts should be determined by using the constraint and transport equations (REF )-().", "In conclusion, we calculated the mass shell shifts as $\\Delta E_{f_0}^\\pm &=-\\frac{\\kappa }{2}\\mathbf {\\omega }\\cdot \\mathbf {g}_0^{(0)\\pm },\\\\\\Delta E_{f_1}^\\pm &=\\mp \\frac{\\kappa }{2E_p}\\mathbf {p}\\cdot \\mathbf {\\omega } f_0^{(0)\\pm },\\\\\\Delta E_{f_2}^\\pm &=\\frac{(1+\\kappa )}{2m}(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot \\mathbf {g}_0^{(0)\\pm },\\\\\\Delta E_{f_3}^\\pm &=\\mp \\frac{1}{2mE_p}(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot (\\mathbf {p}\\times \\mathbf {g}_0^{(0)\\pm })\\mp \\kappa \\Bigg (\\frac{E_p^2 \\mathbf {g}_0^{(0)\\pm }-(\\mathbf {p}\\cdot \\mathbf {g}_0^{(0)\\pm })\\mathbf {p}}{2mE_p}\\Bigg )\\cdot \\mathbf {\\omega },\\\\\\Delta {\\mathbf 
{E}}_{\\mathbf {g}_0}^\\pm &=-\\Bigg (\\frac{\\kappa }{2}\\mathbf {\\omega }+\\frac{\\mathbf {\\omega } \\mathbf {p}^2-\\mathbf {p}(\\mathbf {\\omega }\\cdot \\mathbf {p})}{2E_p^2}\\Bigg ) f_0^{(0)\\pm },\\\\\\Delta {\\mathbf {E}}_{\\mathbf {g}_1}^\\pm &=\\mp \\frac{1}{2E_p}(\\mathbf {p}\\times \\mathbf {\\omega })\\times \\mathbf {g}_0^{(0)\\pm } \\mp \\frac{\\kappa }{2E_p}(\\mathbf {p}\\cdot \\mathbf {g}_0^{(0)\\pm })\\mathbf {\\omega },\\\\\\Delta {\\mathbf {E}}_{\\mathbf {g}_2}^\\pm &=\\frac{m\\mathbf {p}\\times \\mathbf {\\omega }}{2E_p^2}f_0^{(0)\\pm },\\\\\\Delta {\\mathbf {E}}_{\\mathbf {g}_3}^\\pm &=\\mp \\frac{m\\kappa }{2E_p}\\mathbf {\\omega }f_0^{(0)\\pm }.$ It is a curious fact that although $\\mathbf {D}^{(0)}$ given in (REF ) is different from its electromagnetic analogue $\\mathbf {D}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) }$ , we still get the correspondence between the shell shifts given in [31] and the ones calculated here as in (REF )-(), by the substitution ${\\mathbf {E}}\\rightarrow \\mathbf {p}\\times \\mathbf {\\omega }$ and ${\\mathbf {B}} \\rightarrow \\kappa E_p \\mathbf {\\omega }.$ Now, the constraint equations (REF )-() are employed to determine the field components at the first order in $\\hbar ,$ in terms of $f_0^{(0)\\pm }$ and $\\mathbf {g}_0^{(0)\\pm }$ as follows $f_1^{(1)\\pm }&=&\\frac{\\kappa }{2E_p^2}\\mathbf {p}\\cdot \\mathbf {\\omega }f_0^{(0)\\pm }\\pm \\frac{1}{E_p}\\mathbf {p}\\cdot \\mathbf {g}_0^{(1)\\pm },\\\\f_2^{(1)\\pm }&=&\\mp \\frac{(1-\\kappa )}{2mE_p}(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot \\mathbf {g}_0^{(0)\\pm }-\\frac{1}{2m}{\\mathbf {D}}_\\pm \\cdot \\mathbf {g}_0^{(0)\\pm }+\\frac{1}{2mE_p^2}\\mathbf {p}\\cdot (\\mathbf {p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)})\\mathbf {g}_0^{(0)\\pm },\\\\f_3^{(1)\\pm }&=&\\pm \\frac{m}{E_p}f_0^{(1)\\pm }\\mp \\frac{(\\mathbf {p}\\times {\\mathbf {D}}_\\pm ^{(0)})\\cdot \\mathbf {g}_0^{(0)\\pm }}{2mE_p} +\\frac{1}{2mE_p^2}(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot (\\mathbf {p}\\times \\mathbf {g}_0^{(0)\\pm })-\\frac{\\kappa }{2m}\\mathbf {\\omega }\\cdot \\mathbf {g}_0^{(0)\\pm }\\nonumber \\\\&&-\\kappa \\frac{(\\mathbf {p}\\cdot \\mathbf {g}_0^{(0)\\pm })\\mathbf {p}\\cdot \\mathbf {\\omega }}{2mE_p^2},\\\\\\mathbf {g}_1^{(1)\\pm } &=&\\pm \\frac{1}{E_p}\\mathbf {p}f_0^{(1)\\pm }+ \\frac{1}{2E_p^2}(\\mathbf {p}\\times \\mathbf {\\omega })\\times \\mathbf {g}_0^{(0)\\pm }+ \\frac{\\kappa }{2E_p^2}(\\mathbf {p}\\cdot \\mathbf {g}_0^{(0)\\pm })\\mathbf {\\omega }\\pm \\frac{1}{2E_p} {\\mathbf {D}}_{\\pm }^{(0)} \\times \\mathbf {g}_{0}^{(0)\\pm },\\\\\\mathbf {g}_2^{(1)\\pm }&=& \\frac{1}{m} \\mathbf {p}\\times \\mathbf {g}_0^{(1)\\pm }+\\frac{\\mathbf {p}(\\mathbf {p}\\cdot \\mathbf {\\partial }^{(0)}_\\pm )f_0^{(0)\\pm }}{2mE_p^2}\\mp \\frac{1}{2mE_p}(\\mathbf {p}\\times \\mathbf {\\omega })f_0^{(0)\\pm }-\\frac{1}{2m}{\\mathbf {D}}_\\pm ^{(0)}f_0^{(0)\\pm },\\\\\\mathbf {g}_3^{(1)\\pm } &=& \\pm \\frac{E_p}{m}\\mathbf {g}_0^{(1)\\pm }\\mp \\frac{1}{mE_p} \\mathbf {p}(\\mathbf {p}\\cdot \\mathbf {g}_0^{(1)\\pm })+\\frac{1}{2mE_p^2}\\mathbf {p}\\times (\\mathbf {p}\\times \\mathbf {\\omega })f_0^{(0)\\pm }+\\frac{m\\kappa }{2E_p^2}\\mathbf {\\omega }f_0^{(0)\\pm }\\nonumber \\\\&&\\pm \\frac{1}{2mE_p}\\mathbf {p}\\times {\\mathbf {D}}_\\pm ^{(0)}f_0^{(0)\\pm }.$ We determined all of the 3D field components in terms of $f_0$ and $\\mathbf {g}_0$ up to the first order in $\\hbar .$ In the next section we will derive their semiclassical kinetic equations." 
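, "As a simple consistency check of the expressions quoted above (a sketch, not part of the derivation), the shell shifts inherit the classical relation $f_1^{(0)\\pm }=\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {g}_0^{(0)\\pm }$ : contracting $\\Delta {\\mathbf {E}}_{\\mathbf {g}_0}^\\pm $ with $\\pm \\mathbf {p}/E_p$ , the term orthogonal to $\\mathbf {p}$ drops out and one recovers $\\Delta E_{f_1}^\\pm $ , as the following sympy snippet verifies for the upper sign.", "
import sympy as sp

px, py, pz, wx, wy, wz, m, kappa, f0 = sp.symbols('px py pz wx wy wz m kappa f0', real=True)
p = sp.Matrix([px, py, pz])
w = sp.Matrix([wx, wy, wz])
E = sp.sqrt(p.dot(p) + m**2)

# shell shifts quoted above for the upper sign; f0 stands for f_0^{(0)+}
dE_f1 = -kappa / (2 * E) * p.dot(w) * f0
dE_g0 = -(w * (kappa / 2) + (w * p.dot(p) - p * w.dot(p)) / (2 * E**2)) * f0

# the first-order shifts satisfy the analogue of f_1 = (p/E_p) . g_0:
print(sp.simplify(dE_f1 - p.dot(dE_g0) / E))   # -> 0
"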
], [ "Semiclassical kinetic equations of $f_0$ and {{formula:4d24c807-85b4-4dc8-bc03-eca583520494}} ", "Kinetic equation of the particle number density $f_0$ at the zeroth order in $\\hbar $ can easily derived from () and (REF ) as $\\Bigg (D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }^{(0)}_\\pm \\Bigg )f_0^{(0)\\pm }=0.$ By employing () and () we can get kinetic equation of the spin density ${\\mathbf {g}}_0$ at the zeroth order in $\\hbar $ as follows, $\\Bigg (D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )\\mathbf {g}_0^{(0)\\pm } =\\frac{1}{E_p^2}(\\mathbf {p}\\times \\mathbf {\\omega })(\\mathbf {p}\\cdot \\mathbf {g}_0^{(0)\\pm })-\\kappa \\mathbf {\\omega }\\times \\mathbf {g}_0^{(0)\\pm }.$ Let us now derive the kinetic equations of $f_0$ and $\\mathbf {g}_0$ at next-to-leading order in $\\hbar .$ To carry out our calculations the transport equations at the second order in $\\hbar $ are needed.", "They can be acquired by making use of (REF ) in (REF )-(): $& D_t f_0^{(1)\\pm }+{\\mathbf {D}}_\\pm ^{(0)} \\cdot \\mathbf {g}_1^{(1)\\pm }+\\frac{\\kappa \\mathbf {\\omega }}{2E_p}\\mathbf {p}\\cdot (\\mathbf {\\omega }\\times \\mathbf {\\nabla }_p)\\mathbf {g}_0^{(0)\\pm }=0, \\\\& D_t f_1^{(1)\\pm }+{\\mathbf {D}}_\\pm ^{(0)}\\cdot \\mathbf {g}_0^{(1)\\pm }+\\frac{\\kappa (\\mathbf {\\omega }\\cdot \\mathbf {p})}{2E_p^2}\\mathbf {p}\\cdot (\\mathbf {\\omega }\\times \\mathbf {\\nabla }_p)f_0^{(0)\\pm }+2mf_2^{(2)\\pm }=0, \\\\& D_t f_2^{(1)\\pm } + 2\\mathbf {p}\\cdot \\mathbf {g}_3^{(2)\\pm }-2mf_1^{(2)\\pm }=0, \\\\& D_t f_3^{(1)\\pm } - 2\\mathbf {p}\\cdot \\mathbf {g}_2^{(2)\\pm }=0,\\\\& D_t \\mathbf {g}_0^{(1)\\pm }+{\\mathbf {D}}_\\pm ^{(0)} f_1^{(1)\\pm }\\pm \\frac{\\kappa ^2(\\mathbf {\\omega }\\cdot \\mathbf {p})}{2E_p}\\mathbf {\\omega }\\times \\left(\\frac{\\mathbf {p}}{E_p^2}-\\mathbf {\\nabla }_p\\right)f_0^{(0)\\pm }-2\\mathbf {p}\\times \\mathbf {g}_1^{(2)\\pm }=0, \\\\& D_t \\mathbf {g}_1^{(1)\\pm }+{\\mathbf {D}}_\\pm ^{(0)} f_0^{(1)\\pm } -\\frac{\\kappa ^2}{2}(\\mathbf {\\omega }\\times \\mathbf {\\nabla }_p)(\\mathbf {\\omega }\\cdot \\mathbf {g}_0^{(0)\\pm })-2\\mathbf {p}\\times \\mathbf {g}_0^{(2)\\pm }+2m\\mathbf {g}_2^{(2)\\pm }=0, \\\\& D_t \\mathbf {g}_2^{(1)\\pm } + {\\mathbf {D}}_\\pm ^{(0)} \\times \\mathbf {g}_3^{(1)\\pm } \\mp \\frac{m\\kappa ^2}{2E_p}\\mathbf {\\omega }\\times \\left(\\mathbf {\\omega }\\times \\left(\\frac{\\mathbf {p}}{E_p^2}-\\mathbf {\\nabla }_p\\right)\\right)f_0^{(0)\\pm }+2\\mathbf {p}f_3^{(2)\\pm }-2m\\mathbf {g}_1^{(2)\\pm }=0, \\\\& D_t \\mathbf {g}_3^{(1)\\pm }-{\\mathbf {D}}_\\pm ^{(0)} \\times \\mathbf {g}_2^{(1)\\pm } +\\frac{m\\kappa \\mathbf {\\omega }}{2E_p^2} \\mathbf {p}\\cdot (\\mathbf {\\omega }\\times \\mathbf {\\nabla }_p)f_0^{(0)\\pm }-2\\mathbf {p}f_2^{(2)\\pm }=0.", "$ We would like to emphasize the fact that at this order the resemblance between $\\mathbf {D}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) }$ and (REF ) is completely lost due to the presence of the $\\hbar $ -order term in the latter.", "By employing (), () and (REF ), we derived the time evolution of $f_0^{(1)\\pm }$ in term of $\\mathbf {g}_0^{(0)\\pm }$ as $\\begin{aligned}\\Bigg (D_t \\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg ) f_0^{(1)\\pm }=& -\\frac{\\kappa }{2E_p^2}(\\mathbf {\\omega }\\times \\mathbf {p})\\cdot (\\mathbf {\\partial }_\\pm ^{(0)}\\times \\mathbf {g}_0^{(0)\\pm }) \\\\&+ \\frac{1}{2E_p^2}(\\mathbf {p}\\times \\mathbf 
{\\omega }) \\cdot \\left( \\mathbf {\\nabla }\\times \\mathbf {g}_0^{(0)\\pm }\\right)- \\frac{\\kappa }{2E_p^2}\\mathbf {\\omega }\\cdot (\\mathbf {p}\\cdot \\mathbf {\\nabla })\\mathbf {g}_0^{(0)\\pm }.\\end{aligned}$ After some cumbersome calculations by making use of (REF ), (), (), (REF ), () and (), we obtained the dynamical evolution of $\\mathbf {g}_0^{(1)\\pm }$ depending on $f_0^{(0)\\pm }$ as $\\begin{aligned}\\Bigg (D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )\\mathbf {g}_0^{(1)\\pm }=&- \\kappa (\\mathbf {\\omega }\\times \\mathbf {g}_0^{(1)\\pm })+ \\frac{\\mathbf {p}\\cdot \\mathbf {g}_0^{(1)\\pm }}{E_p^2}(\\mathbf {p}\\times \\mathbf {\\omega })\\\\&-\\frac{\\kappa \\mathbf {\\omega }}{2E_p^2} \\mathbf {p}\\cdot \\mathbf {\\nabla }f_0^{(0)\\pm } +\\frac{\\mathbf {p}(\\mathbf {p}\\cdot \\mathbf {\\omega })}{2E_p^4}(\\mathbf {p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)})f_0^{(0)\\pm }\\\\&-\\frac{\\mathbf {p}^2(\\mathbf {p}\\cdot \\mathbf {\\omega })}{2 E_p^4}{\\mathbf {D}}_\\pm ^{(0)}f_0^{(0)\\pm }\\mp \\frac{\\mathbf {p}\\cdot \\mathbf {\\omega }}{2E_p^3}(\\mathbf {p}\\times \\mathbf {\\omega })f_0^{(0)\\pm }\\\\&\\pm \\frac{\\kappa }{2E_p}\\mathbf {p}\\times \\Bigg (-\\mathbf {\\omega }^2 \\mathbf {\\nabla }_p + \\mathbf {\\omega }(\\mathbf {\\omega }\\cdot \\mathbf {\\nabla }_p)+\\frac{1}{E_p^2}\\mathbf {\\omega }(\\mathbf {p}\\cdot \\mathbf {\\omega })\\Bigg )f_0^{(0)\\pm }\\\\&\\mp \\frac{1}{2E_p^3}\\mathbf {p}\\times (\\mathbf {p}\\times \\mathbf {\\omega })D_t f_0^{(0)\\pm }.\\end{aligned}$ We established the semiclassical kinetic equations of $f_0$ and $\\mathbf {g}_0$ .", "It is also possible to deal with the kinetic equations of some other components of the 3D Wigner function like $f_1$ and $\\mathbf {g}_3,$ where the latter is related to magnetic dipole-moment." 
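, "As a minimal numerical sketch (not taken from the original text; the parameter values are purely illustrative), the classical limit of the kinetic equation (REF ) can be handled by the method of characteristics: $f_0^{(0)+}$ is constant along the curves $\\dot{\\mathbf {x}}=\\mathbf {p}/E_p$ , $\\dot{\\mathbf {p}}=(1+\\kappa )\\,\\mathbf {p}\\times \\mathbf {\\omega }$ , obtained by collecting the $\\mathbf {\\nabla }$ and $\\mathbf {\\nabla }_p$ coefficients for the upper sign, so that for $\\kappa =1$ the momentum feels the Coriolis-type force $2\\mathbf {p}\\times \\mathbf {\\omega }$ and precesses about $\\mathbf {\\omega }$ with $|\\mathbf {p}|$ and $E_p$ conserved.", "
import numpy as np
from scipy.integrate import solve_ivp

m, kappa = 1.0, 1.0                # kappa = 1 is the choice that reproduces the Coriolis force
omega = np.array([0.0, 0.0, 0.3])  # uniform vorticity, illustrative value

def characteristics(t, y):
    x, p = y[:3], y[3:]
    E_p = np.sqrt(p @ p + m**2)
    return np.concatenate([p / E_p, (1.0 + kappa) * np.cross(p, omega)])

y0 = np.concatenate([np.zeros(3), np.array([1.0, 0.0, 0.2])])   # initial (x, p)
sol = solve_ivp(characteristics, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12)

# |p| (hence E_p) is conserved along the flow, since dp/dt is orthogonal to p:
p_t = sol.y[3:, :]
print(np.ptp(np.linalg.norm(p_t, axis=0)))   # ~ 1e-9
"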
], [ " Kinetic Theories of the right- and left-handed fermions", "In heavy-ion collisions, because of considering the constituent quarks of the quark-gluon plasma as massless, one expects that the collective dynamics yield the chiral vortical and separation effects due to vorticity.", "We would like to study how the quark mass affects this picture.", "To study the mass corrections, we need the kinetic equations satisfied by the right-handed and left-handed distribution functions $f_{\\scriptscriptstyle { R}}, f_{\\scriptscriptstyle { L}},$ defined by $f_\\chi = \\frac{1}{2}(f_0 +\\chi f_1 ),$ where $\\chi =\\lbrace +,-\\rbrace $ , and $f_+ \\equiv f_{\\scriptscriptstyle { R}}$ and $f_- \\equiv f_{\\scriptscriptstyle { L}}$ .", "However, the 3D kinetic equations (REF ), (REF ) and (REF ), (REF ) are given in terms of $f_0$ and $\\mathbf {g}_0.$ Thus, we have to specify the spin current $\\mathbf {g}_0$ , by respecting the relations (REF ) and (REF ).", "First, let the direction of the spin current be parallel to $\\mathbf {p}.$ Then, (REF ) implies that $\\mathbf {g}_0^{(0)\\pm } = \\pm \\frac{E_p }{\\mathbf {p}^2} \\mathbf {p}f_1^{(0)\\pm }.$ By plugging (REF ) into the classical kinetic equation of the spin current given by (REF ), we find $\\Bigg (D_t \\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )f_1^{(0)\\pm } = 0 .$ Recall that it has the same form with the classical kinetic equation satisfied by $f_0^{(0)\\pm }$ , (REF ).", "Additionally, (REF ) allows us to write the right-hand side of (REF ) in terms of $f_1^{(0)\\pm }:$ $\\begin{aligned}\\Bigg (D_t \\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg ) f_0^{(1)\\pm }=&\\pm \\frac{1}{2E_p\\mathbf {p}^2}\\Bigg ({(1+\\kappa )}(\\mathbf {p}\\times (\\mathbf {p}\\times \\mathbf {\\omega }))-{\\kappa }(\\mathbf {\\omega }\\cdot \\mathbf {p})\\mathbf {p}\\Bigg )\\cdot \\mathbf {\\nabla }f_1^{(0)\\pm }\\\\&+\\frac{\\kappa ^2}{2\\mathbf {p}^2}(\\mathbf {p}\\cdot \\mathbf {\\omega })(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot \\mathbf {\\nabla }_p f_1^{(0)\\pm }.\\end{aligned}$ Now, we desire to find the kinetic equation satisfied by $f_1^{(1)\\pm }$ .", "For this purpose, first observe that the equation (REF ) can be solved as $\\mathbf {g}_0^{(1)\\pm } = \\pm \\frac{E_p\\mathbf {p}}{\\mathbf {p}^2}f_1^{(1)\\pm }\\mp \\frac{\\kappa \\mathbf {\\omega }}{2E_p}f_0^{(0)\\pm }\\pm E_p\\mathbf {p}\\times {\\mathbf {F}}^\\pm ,$ where ${\\mathbf {F}}^\\pm $ is a free vector field which will be fixed shortly.", "Then, by plugging (REF ) into (REF ) and then multiplying it with $\\pm \\mathbf {p} / E_p,$ we find $\\begin{aligned}\\Bigg (D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )f_1^{(1)\\pm }=&-(\\mathbf {p}\\times (\\mathbf {p}\\times \\mathbf {\\omega }))\\cdot {\\mathbf {F}}^\\pm \\mp \\frac{\\kappa \\mathbf {p}\\cdot \\mathbf {\\omega }}{2E_p^3} \\mathbf {p}\\cdot \\mathbf {\\nabla }f_0^{(0)\\pm }\\\\&+\\frac{\\kappa \\mathbf {\\omega }}{2E_p^2}\\cdot \\mathbf {p}\\Bigg (D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )f_0^{(0)\\pm }.\\end{aligned}$ To have an equation compatible with (REF ), we choose ${\\mathbf {F}}^\\pm $ to be ${\\mathbf {F}}^\\pm = \\mp \\frac{(\\kappa +1)}{2E_p\\mathbf {p}^2}\\mathbf {\\nabla }f_0^{(0)\\pm }-\\frac{\\kappa ^2}{2\\mathbf {p}^2}\\mathbf {\\omega }\\times \\mathbf {\\nabla }_p f_0^{(0)\\pm }.$ By inserting it into (REF ) one gets the kinetic equation $\\begin{aligned}\\Bigg 
(D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )f_1^{(1)\\pm }=&\\pm \\frac{(\\kappa +1)}{2E_p\\mathbf {p}^2}(\\mathbf {p}\\times (\\mathbf {p}\\times \\mathbf {\\omega }))\\cdot \\mathbf {\\nabla }f_0^{(0)\\pm }+\\frac{(\\mathbf {p}\\cdot \\mathbf {\\omega })\\kappa ^2}{2\\mathbf {p}^2}(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot \\mathbf {\\nabla }_p f_0^{(0)\\pm }\\\\&\\mp \\frac{\\kappa \\mathbf {\\omega }\\cdot \\mathbf {p}}{2E_p^3} \\mathbf {p}\\cdot \\mathbf {\\nabla }f_0^{(0)\\pm }+\\frac{\\kappa \\mathbf {\\omega }\\cdot \\mathbf {p}}{2E_p^2}\\Bigg (D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )f_0^{(0)\\pm }.\\end{aligned}$ The last term can be set equal to zero due to (REF ).", "However, instead of doing that, we add a similar vanishing term to the right-hand side of (REF ): $\\begin{aligned}\\Bigg (D_t \\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg ) f_0^{(1)\\pm }=& \\pm \\frac{(\\kappa +1)}{2E_p \\mathbf {p}^2}(\\mathbf {p}\\times (\\mathbf {p}\\times \\mathbf {\\omega }))\\cdot \\mathbf {\\nabla }f_1^{(0)\\pm }+\\frac{(\\mathbf {p}\\cdot \\mathbf {\\omega })\\kappa ^2}{2\\mathbf {p}^2}(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot \\mathbf {\\nabla }_pf_1^{(0)\\pm }\\\\&\\mp \\frac{\\kappa }{2E_p} \\frac{\\mathbf {\\omega }\\cdot \\mathbf {p}}{\\mathbf {p}^2}(\\mathbf {p}\\cdot \\mathbf {\\nabla })f_1^{(0)\\pm }+\\frac{\\kappa \\mathbf {\\omega }\\cdot \\mathbf {p}}{2E_p^2}\\Bigg (D_t\\pm \\frac{\\mathbf {p}}{E_p}\\cdot \\mathbf {\\partial }_\\pm ^{(0)}\\Bigg )f_1^{(0)\\pm }.\\end{aligned}$ Notice that adding the last term is equivalent to a shift of $f_0^{(1)\\pm }$ with the term $\\frac{\\kappa \\mathbf {\\omega }\\cdot \\mathbf {p}}{2E_p^2}f_1^{(0)\\pm }.$ By combining (REF ) and (REF ), we find the kinetic equations $\\begin{aligned}&\\Bigg \\lbrace \\Bigg (1-\\hbar \\frac{\\chi \\kappa (\\mathbf {p}\\cdot \\mathbf {\\omega })}{2E_p^2}\\Bigg )\\partial _t+\\Bigg [\\Bigg (1-\\hbar \\frac{\\chi \\kappa (\\mathbf {p}\\cdot \\mathbf {\\omega })}{2E_p^2}\\Bigg )(\\kappa +1)-\\hbar \\frac{\\chi (\\mathbf {p}\\cdot \\mathbf {\\omega })\\kappa ^2}{2\\mathbf {p}^2}\\Bigg ](\\mathbf {p}\\times \\mathbf {\\omega })\\cdot \\mathbf {\\nabla }_p\\\\&+\\Bigg [ \\pm \\frac{\\mathbf {p}}{E_p}\\mp \\hbar \\frac{\\chi (\\kappa +1)}{2E_p \\mathbf {p}^2}\\mathbf {p}\\times (\\mathbf {p}\\times \\mathbf {\\omega })\\pm \\hbar \\frac{\\chi \\kappa m^2 (\\mathbf {p}\\cdot \\mathbf {\\omega })}{4E_p^3\\mathbf {p}^2}\\mathbf {p} \\Bigg ]\\cdot \\mathbf {\\nabla }\\Bigg \\rbrace f_\\chi ^\\pm \\\\&=\\pm \\hbar \\frac{\\chi \\kappa m^2\\mathbf {p}\\cdot \\mathbf {\\omega }}{4E_p^3 \\mathbf {p}^2} \\mathbf {p}\\cdot \\mathbf {\\nabla }f_{-\\chi }^\\pm .\\end{aligned}$ Therefore, we establish the kinetic theory $\\Bigg [\\sqrt{\\eta }_\\chi ^\\pm \\partial _t+(\\sqrt{\\eta }\\dot{\\mathbf {x}})^\\pm _\\chi \\cdot \\mathbf {\\nabla }+(\\sqrt{\\eta }\\dot{\\mathbf {p}})_\\chi ^\\pm \\cdot \\mathbf {\\nabla }_p\\Bigg ]f_{\\chi }^\\pm =\\pm \\hbar \\frac{\\chi m^2 \\mathbf {p}\\cdot \\mathbf {\\omega }}{4E_p^3 \\mathbf {p}^2}\\mathbf {p}\\cdot \\mathbf {\\nabla }f_{-\\chi }^\\pm , $ with $\\sqrt{\\eta }_\\chi ^\\pm &=1-\\hbar \\frac{\\chi (\\mathbf {p}\\cdot \\mathbf {\\omega })}{2E_p^2}, \\\\(\\sqrt{\\eta }\\dot{\\mathbf {x}})^\\pm _\\chi &= \\pm \\frac{\\mathbf {p}}{E_p}\\pm \\hbar \\frac{\\chi \\mathbf {\\omega }}{E_p}\\mp \\hbar \\frac{\\chi (\\mathbf {p}\\cdot \\mathbf {\\omega })\\mathbf 
{p}}{4E_p}\\Bigg ( \\frac{3}{\\mathbf {p}^2}+\\frac{1}{E_p^2}\\Bigg ), \\\\(\\sqrt{\\eta }\\dot{\\mathbf {p}})_\\chi ^\\pm &=\\Bigg [2\\Bigg (1-\\hbar \\frac{\\chi (\\mathbf {p}\\cdot \\mathbf {\\omega })}{2E_p^2}\\Bigg )-\\hbar \\frac{\\chi (\\mathbf {p}\\cdot \\mathbf {\\omega })}{2\\mathbf {p}^2}\\Bigg ](\\mathbf {p}\\times \\mathbf {\\omega }).", "$ We set $\\kappa =1$ for acquiring the Coriolis force correctly.", "The term appearing on the right-hand-side of (REF ) shows that for the massive fermions right- and left-handed distributions cannot be decoupled.", "Getting inspiration from the left- and right-handed decompositions of the distribution functions, $f_0=f_{\\scriptscriptstyle { R}}+f_{\\scriptscriptstyle { L}},\\ f_1=f_{\\scriptscriptstyle { R}}-f_{\\scriptscriptstyle { L}},$ we write the shell shifts in (REF ) and () as $\\Delta E_{f_0}^\\pm =\\Delta E_{f_0{\\scriptscriptstyle { R}}}^\\pm +\\Delta E_{f_0{\\scriptscriptstyle { L}}}^\\pm &= &\\mp \\frac{E_p\\, \\mathbf {p}\\cdot \\mathbf {\\omega }}{2\\mathbf {p}^2}(f_R^{(0)\\pm }-f_L^{(0)\\pm }),\\\\\\Delta E_{f_1}^\\pm = \\Delta E_{f_1{\\scriptscriptstyle { R}}}^\\pm -\\Delta E_{f_1{\\scriptscriptstyle { L}}}^\\pm &= &\\mp \\frac{\\mathbf {p}\\cdot \\mathbf {\\omega }}{2E_p}(f_R^{(0)\\pm }+f_L^{(0)\\pm }).$ Hence, for the left- and right-handed fermions we define $\\Delta E_\\chi ^\\pm = \\mp \\frac{\\chi }{4E_p}\\Bigg (1+\\frac{E_p^2}{\\mathbf {p}^2}\\Bigg )\\mathbf {p}\\cdot \\mathbf {\\omega }.$ Therefore, the dispersion relations are $\\epsilon _{p,\\chi }^{\\pm }=\\pm E_p \\mp \\hbar \\frac{\\chi }{4E_p}\\Bigg (1+\\frac{E_p^2}{\\mathbf {p}^2}\\Bigg )\\mathbf {p}\\cdot \\mathbf {\\omega }.$ The particle number current density can be written in terms of the equilibrium distribution function as $\\mathbf {j}_\\chi ^\\pm =\\int \\frac{d^3 \\mathbf {p}}{(2\\pi )^3} (\\sqrt{\\eta }\\dot{\\mathbf {x}})^{\\pm }_\\chi f_\\chi ^{eq\\pm }(\\epsilon _{p,\\chi }^\\pm ).$ Let the equilibrium distribution function be taken as the Fermi-Dirac distribution: $f_\\chi ^{eq\\pm }(\\epsilon _{p,\\chi }^\\pm )=\\frac{1}{e^{\\pm (\\epsilon _{p,\\chi }^\\pm -\\mu _\\chi )/T}+1},$ where $\\mu _\\chi $ is the chiral chemical potential, $T$ is the temperature and we employed the dispersion relations (REF ).", "We can expand (REF ) in Taylor series as $f_\\chi ^{eq\\pm }(\\epsilon _{p,\\chi }^\\pm ) \\approx f_\\chi ^{eq\\pm }(E_p)\\mp \\hbar \\frac{\\chi }{4E_p}\\Bigg (1+\\frac{E_p^2}{\\mathbf {p}^2}\\Bigg )\\mathbf {p}\\cdot \\mathbf {\\omega } \\frac{df_\\chi ^{eq\\pm }(E_p)}{dE_p},$ where $f_\\chi ^{eq\\pm }(E_p)=\\frac{1}{e^{(E_p\\mp \\mu _\\chi )/T}+1}.$ Notice that the equilibrium distribution function depends only on the magnitude of the momentum.", "Therefore, we can evaluate the angular part of the integral in (REF ), yielding $\\begin{aligned}\\mathbf {j}_\\chi ^\\pm &= \\hbar \\chi \\mathbf {\\omega }\\int \\frac{d |\\mathbf {p}|}{24\\pi ^2}\\mathbf {p}^2 \\Bigg [\\Bigg (\\pm \\frac{8}{E_p} \\pm \\frac{ m^2 }{E_p^3}\\Bigg ) f_\\chi ^{eq\\pm }(E_p) -\\Bigg (\\frac{1}{E_p^2}+\\frac{1}{\\mathbf {p}^2}\\Bigg )\\mathbf {p}^2 \\frac{d f_\\chi ^{eq\\pm }(E_p)}{dE_p}\\Bigg ] .\\end{aligned}$ Since the classical terms vanish, the current densities are at the order of $\\hbar .$ Then, the vector and axial vector current densities, $\\mathbf {j}_{\\scriptscriptstyle { V}}= \\mathbf {j}_{\\scriptscriptstyle { R}}+\\mathbf {j}_{\\scriptscriptstyle { L}}, \\ \\ \\mathbf {j}_{\\scriptscriptstyle { A}}=\\mathbf {j}_{\\scriptscriptstyle { R}}- \\mathbf 
{j}_{\\scriptscriptstyle { L}},$ are accomplished as $\\mathbf {j}_{{\\scriptscriptstyle { V}},{\\scriptscriptstyle { A}}}& = &\\sum _\\pm \\hbar \\mathbf {\\omega }\\int \\frac{d |\\mathbf {p}|}{24\\pi ^2}\\mathbf {p}^2 \\Bigg [\\Bigg (\\pm \\frac{8}{E_p} \\pm \\frac{ m^2 }{E_p^3}\\Bigg ) f_{{\\scriptscriptstyle { V}},{\\scriptscriptstyle { A}}}^{eq\\pm }(E_p) -\\Bigg (\\frac{1}{E_p^2}+\\frac{1}{\\mathbf {p}^2}\\Bigg )\\mathbf {p}^2 \\frac{d f_{{\\scriptscriptstyle { V}},{\\scriptscriptstyle { A}}}^{eq\\pm }(E_p)}{dE_p}\\Bigg ] \\nonumber \\\\&\\equiv & \\sigma _{{\\scriptscriptstyle { V}}, {\\scriptscriptstyle { A}}}\\mathbf {\\omega }.$ We introduced $f_{{\\scriptscriptstyle { V}},{\\scriptscriptstyle { A}}}^\\pm =\\frac{1}{e^{(E_p\\mp \\mu _R)/T}+1}\\mp \\frac{1}{e^{(E_p\\mp \\mu _L)/T}+1}.$ Observe that at zero temperature, the distribution functions transform into the Heaviside step function for positive energy particles and vanish for negative energy particles.", "For simplicity, let us set $\\mu _{\\scriptscriptstyle { R}}=\\mu _{\\scriptscriptstyle { L}}\\equiv \\mu $ .", "Then, the vector current vanishes and the axial vector current gives $\\lim _{T\\rightarrow 0}\\sigma _{\\scriptscriptstyle { A}}=\\frac{\\hbar }{2\\pi ^2}\\Bigg [\\frac{3\\mu ^2-m^2}{3\\mu }\\sqrt{\\mu ^2-m^2}-\\frac{1}{2}m^2\\ln (\\frac{\\mu +\\sqrt{\\mu ^2-m^2}}{m})\\Bigg ] \\theta (\\mu -m).$ In the small mass limit we get $\\lim _{T\\rightarrow 0}\\sigma ^+_A =\\hbar \\frac{\\mu \\sqrt{\\mu ^2-m^2}}{2\\pi ^2} \\theta (\\mu -m).$ This result is in harmony with the field theoretic calculations performed by means of Kubo formula in [33], [34], [35].", "Let us inspect the chiral (massless) limit: First of all, (REF )-() generate the chiral kinetic theory $\\Bigg [\\sqrt{\\eta }_\\chi ^{{\\scriptscriptstyle { C}}\\pm }\\partial _t+(\\sqrt{\\eta }\\dot{\\mathbf {x}})^{{\\scriptscriptstyle { C}}\\pm }_\\chi \\cdot \\mathbf {\\nabla }+(\\sqrt{\\eta }\\dot{\\mathbf {p}})_\\chi ^{{\\scriptscriptstyle { C}}\\pm } \\cdot \\mathbf {\\nabla }_p\\Bigg ]f_{\\chi }^\\pm =0 , $ with $\\sqrt{\\eta }_\\chi ^{{\\scriptscriptstyle { C}}\\pm }&=1-\\hbar \\frac{\\chi \\mathbf {\\omega }\\cdot \\mathbf {p}}{2\\mathbf {p}^2}, \\\\(\\sqrt{\\eta }\\dot{\\mathbf {x}})^{{\\scriptscriptstyle { C}}\\pm }_\\chi &=\\pm \\frac{\\mathbf {p}}{|{\\mathbf {p}}|}\\mp \\hbar \\frac{\\chi }{|{\\mathbf {p}}|^3}\\mathbf {p}(\\mathbf {p}\\cdot \\mathbf {\\omega })\\pm \\hbar \\frac{\\chi }{|{\\mathbf {p}}|}\\mathbf {\\omega }, \\\\(\\sqrt{\\eta }\\dot{\\mathbf {p}})^{{\\scriptscriptstyle { C}}\\pm }_\\chi &=2\\mathbf {p}\\times \\mathbf {\\omega }-\\hbar \\chi \\frac{3(\\mathbf {p}\\cdot \\mathbf {\\omega })}{2\\mathbf {p}^2}(\\mathbf {p}\\times \\mathbf {\\omega }).", "$ Then, (REF ) gives the dispersion relation for chiral particles as $\\epsilon _{p,\\chi }^{{\\scriptscriptstyle { C}}\\pm }=\\pm |{\\mathbf {p}}| \\mp \\hbar \\frac{\\chi }{2}\\hat{\\mathbf {p}}\\cdot \\mathbf {\\omega }.$ It is consistent with the dispersion relation obtained in [36], [9], [37].", "Moreover, the dynamical evolution of the spatial coordinate vector, (), coincides with the one established in [26].", "Let the equilibrium distribution be given by the Fermi-Dirac distribution: $f_\\chi ^{eq\\pm }(\\epsilon _{p,\\chi }^{{\\scriptscriptstyle { C}}\\pm })=\\frac{1}{e^{\\pm (\\epsilon _{p,\\chi }^{{\\scriptscriptstyle { C}}\\pm }-\\mu _\\chi )/T}+1},$ Thus, the chiral particle number current densities are acquired as $\\mathbf {j}_\\chi ^{{\\scriptscriptstyle { C}}\\pm } = 
\\hbar \\chi \\mathbf {\\omega }\\int \\frac{d |\\mathbf {p}|}{3\\pi ^2} \\Bigg (\\pm { |\\mathbf {p}|} f_\\chi ^{eq\\pm }( |\\mathbf {p}|)-\\frac{1}{4}\\mathbf {p}^2\\frac{df_\\chi ^{eq\\pm }( |\\mathbf {p}|)}{d|\\mathbf {p}|}\\Bigg ).$ We can perform the integrals and obtain the vector and axial vector currents as $\\mathbf {j}_{\\scriptscriptstyle { V}}=\\hbar \\frac{\\mu \\mu _{\\scriptscriptstyle { A}}}{2\\pi ^2} \\mathbf {\\omega },\\ \\ \\mathbf {j}_{\\scriptscriptstyle { A}}=\\hbar \\left(\\frac{T^2}{12}+\\frac{\\mu ^2 +\\mu ^2_{\\scriptscriptstyle { A}}}{4\\pi ^2} \\right)\\mathbf {\\omega },$ where $\\mu =\\mu _{\\scriptscriptstyle { R}}+\\mu _{\\scriptscriptstyle { L}},$ $\\mu _{\\scriptscriptstyle { A}}=\\mu _{\\scriptscriptstyle { R}}-\\mu _{\\scriptscriptstyle { L}}.$ These coincide with the results reported in [20].", "Therefore, we conclude that in the massless limit the chiral vortical and separation effects are generated correctly." ], [ "Discussions", "The VQKE of the Wigner function leads to the transport equations of the components of the covariant Wigner function.", "By integrating them over $p_0,$ we write the equations which the components of the 3D Wigner function obey.", "They can be separated into the transport and constraint equations.", "The vector component of the covariant $D_\\mu =(D_t,\\mathbf {D})$ operator depends explicitly on $p_0$ as in ().", "Hence, the transport equations also depend explicitly on $p_0,$ in contrast to the transport equations defined in [8], [31].", "The $p_0$ integrals are performed by employing the on-shell conditions of the covariant fields.", "Then, $\\mathbf {D}$ effectively becomes as in (REF ), which is very different from the $\\mathbf {D}^{({\\scriptscriptstyle { E}}{\\scriptscriptstyle { M}}) }$ appearing in [8], [31].", "Therefore, it is not possible to generalize the method of [31] directly to our case.", "Nevertheless, to study the 3D transport and constraint equations, we follow the method proposed in [31] and let each component of the Wigner function satisfy a different on-shell condition at the $\\hbar $ -order.", "We presented these shell shifts and, by plugging them into the constraint equations, expressed the components of the 3D Wigner function at first order in terms of $f_0, \\mathbf {g}_0.$", "We consider $f_0$ and $\\mathbf {g}_0$ as the independent components.", "The main objective is to establish the semiclassical kinetic equations of the fields which are chosen as the independent set of components.", "After some cumbersome calculations we acquired them as in (REF ), (REF ) and (REF ), (REF ).", "To accomplish the mass corrections to the chiral (massless) kinetic equations, we have fixed the spin current $\\mathbf {g}_0$ in terms of $f_0$ and $f_1.$", "Then, we derived the kinetic equations of the right- and left-handed distribution functions in (REF ), which provide the kinetic theories of the right- and left-handed fermions.", "We acquired their dispersion relations and calculated the particle number current densities by choosing the equilibrium distribution functions appropriately.", "We have shown that the massless case generates the chiral vortical and separation effects correctly.", "Therefore, we succeeded in obtaining the mass corrections to the chiral effects.", "A challenging future research direction is the study of the 3D transport theory of the VQKE in the presence of electromagnetic fields.", "As far as the contributions linear in electromagnetic fields and vorticity are concerned, this can simply be achieved
by gathering the results obtained here and the ones reported in [31].", "However, establishing kinetic equations of $f_0$ and $\\mathbf {g}_0$ up to the first order in $\\hbar $ in the presence of only vorticity or electromagnetic fields are already very difficult.", "Thus, when they are considered together, deriving the semiclassical kinetic equations of $f_0$ and $\\mathbf {g}_0$ will be a demanding task.", "Covariant kinetic equations established in [30] may give some hints to solve this problem.", "Kinetic equations are useful mainly when collisions are taken into account.", "Thus, incorporating scatterings in the 3D formulation is desired.", "Unfortunately we do not know how to do it for the VQKE.", "In principle, this can be achieved by considering the collisions in the covariant approach first and then deal with the 3D VQKE by integrating them over $p_0.$ This will generate collision terms on the right hand side of (REF )-().", "In this respect, the methods employed in [38], [39] can be useful.", "The other method would be to introduce collisions to the kinetic equations of the independent set of fields (REF ), (REF ), (REF ), (REF ).", "This is another challenging open problem." ], [ "Calculation of shell shifts from the covariant formulation", "Semiclassical solutions of the kinetic equations obeyed by the components of 4D Wigner function, (REF )-(), have been presented in [30].", "By inspecting $\\hbar $ -order components of those solutions, one observes that some of them are expressed in the form ${\\cal C}_i^{(1)} = \\beta _i^{(1)} \\delta (p^2-m^2)- \\Delta E_{i}(p) \\delta ^{\\prime }(p^2-m^2),$ where $\\beta _i^{(1)} $ are first order fields.", "One can notice that the 3D mass shell shifts for these fields can be obtained as $\\Delta E_{i}(\\mathbf {p}) = \\int \\Delta E_i(p) \\delta (p^2-m^2) dp_0.$ In this fashion, we calculated the on-shell energy shifts for the following components of 4D Wigner function.", "The scalar field ${\\cal F}:$ ${\\cal F}^{(1)}=m\\delta (p^2-m^2)f_V^1-\\frac{m}{2} \\delta ^{\\prime }(p^2-m^2)f_A^0 \\Sigma ^{(0)}_{\\mu \\nu }w^{\\mu \\nu }.$ $f_A^0,f_V^1$ are scalars and $\\Sigma ^{(0)}_{\\mu \\nu }=-(1/m)\\varepsilon _{\\mu \\nu \\alpha \\beta } p^\\alpha s^\\beta ,$ where $s_\\mu $ is the spin quantization direction four-vector.", "$\\Delta E^\\pm _{f_3}(\\mathbf {p}) =\\pm \\frac{m}{2}\\int dp_0 \\Sigma ^{(0)}_{\\mu \\nu }w^{\\mu \\nu } f_A^0 \\delta (p^2-m^2)= \\mp \\frac{1}{2E_p}(\\mathbf {p}\\times \\mathbf {\\omega })\\cdot \\mathbf {g}_2^{(0)\\pm }-\\kappa \\mathbf {g}_3^{(0)\\pm }\\cdot \\mathbf {\\omega }.$ The axial-vector field ${\\cal A}_\\mu :$ ${\\cal A}^{(1)}_\\mu = \\frac{1}{2}\\epsilon _{\\mu \\nu \\rho \\sigma }p^\\nu \\Sigma ^{(1) \\rho \\sigma }\\delta (p^2-m^2)-\\frac{1}{2}\\epsilon _{\\mu \\nu \\rho \\sigma }w^{\\rho \\sigma }p^\\nu f_V^0\\delta ^{\\prime }(p^2-m^2).$ $\\Sigma ^{(1)}_{ \\mu \\nu }$ is an antisymmetric tensor field and $f_V^0$ is a scalar.", "For ${\\cal A}_0 :$ $\\Delta E_{f_1}^\\pm (\\mathbf {p}) = \\pm \\frac{1}{2}\\int dp_0 \\epsilon _{ijk}w^{jk}p^i f_V^0\\delta (p^2-m^2) = \\mp \\frac{\\kappa }{2E_p}\\mathbf {p}\\cdot \\mathbf {\\omega } f_0^{(0)\\pm }.$ For $\\mathbf {{\\cal A} }:$ $\\Delta {E}^{i\\pm }_{\\mathbf {g}_0}(\\mathbf {p})=\\mp \\frac{1}{2}\\int dp_0\\epsilon ^{i\\nu \\alpha \\beta }w_{\\alpha \\beta }p_\\nu f_V^0 \\delta (p^2-m^2)=-\\Bigg (\\frac{\\kappa }{2}\\mathbf {\\omega }+\\frac{\\mathbf {\\omega } \\mathbf {p}^2-\\mathbf {p}(\\mathbf {\\omega }\\cdot \\mathbf {p})}{2E_p^2}\\Bigg )^i 
f_0^{(0)\\pm }.$ The antisymmetric tensor field $S_{\\mu \\nu }:$ $S^{(1)}_{\\mu \\nu } = m\\Sigma ^{(1)}_{\\mu \\nu } \\delta (p^2-m^2) - m w_{\\mu \\nu } f_V^0\\delta ^{\\prime }(p^2-m^2).$ For $S_{0i}:$ $\\Delta E_{\\mathbf {g}_2}^{i\\pm }(\\mathbf {p}) = \\pm m\\int dp_0 w^{i0}f_V^0 \\delta (p^2-m^2) = \\frac{m(\\mathbf {p}\\times \\mathbf {\\omega })^i}{2E_p^2}f_0^{(0)\\pm }.$ For $S_{ij}:$ $\\Delta E_{\\mathbf {g}_3}^{i\\pm }(\\mathbf {p})&=\\pm \\frac{m}{2}\\int dp_0 \\epsilon ^{ijk}w_{jk}f_V^0 \\delta (p^2-m^2) =\\mp \\frac{m\\kappa }{2E_p}{\\omega }^i f_0^{(0)\\pm }.$ These are the mass shell shifts which we determine from the covariant approach." ] ]
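The reduction used throughout the appendix above is the elementary $p_0$ integration against the on-shell delta function: on the positive-energy branch, $\int dp_0\, h(p_0)\,\delta(p_0^2-E_p^2)=h(E_p)/(2E_p)$, which is where the $1/(2E_p)$ factors in the shell shifts come from. The following minimal numerical sketch (not part of the original text; the sample integrand, parameter values and the Gaussian-regularized delta function are illustrative assumptions) checks this step in Python:

    import numpy as np
    from scipy.integrate import quad

    # On-shell reduction used for the 3D shell shifts:
    #   int dp0  h(p0) delta(p0^2 - E_p^2)  =  h(E_p) / (2 E_p)   (positive-energy branch)
    m, p3 = 1.0, 0.7                     # sample mass and |p| (arbitrary units)
    E = np.sqrt(p3**2 + m**2)            # on-shell energy E_p

    eps = 1e-2                           # width of the regularized delta function
    delta_reg = lambda x: np.exp(-x**2 / (2 * eps**2)) / (np.sqrt(2 * np.pi) * eps)
    h = lambda p0: p0 * p3               # a sample smooth test integrand

    lhs, _ = quad(lambda p0: h(p0) * delta_reg(p0**2 - E**2), 0.0, 10.0, points=[E])
    print(lhs, h(E) / (2 * E))           # the two numbers agree up to O(eps^2)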
2207.10429
[ [ "Multi-photon-addition amplified coherent state" ], [ "Abstract State $g^{\\hat{n}}\\hat{a}^{\\dag m}\\left\\vert \\alpha \\right\\rangle $ and state $\\hat{a}^{\\dag m}g^{\\hat{n}}\\left\\vert \\alpha \\right\\rangle $ are same to state $\\hat{a}^{\\dag m}\\left\\vert g\\alpha \\right\\rangle $, which is called as multi-photon-addition amplified coherent state (MPAACS) by us.", "Here, $\\hat{n}$, $\\hat{a}^{\\dag }$, $\\left\\vert \\alpha \\right\\rangle $, $g$ ( $\\geq 1$), and $m$ are photon number operator, creation operator, coherent state, gain facor, and an interger, respectively.", "We study mathematical and physical properties for these MPAACSs, including normalization, photon component analysis, Wigner function, effective gain, quadrature squeezing, and equivalent input noise.", "Actually, the MPAACS, which contains more nonclassicality, is an amplified version of photon-added coherent state (PACS) introduced by Agrwal and Tara [Phys.", "Rev.", "A 43, 492 (1991)].", "Our work provides theoretical references for implementing amplifiers for light fields." ], [ "Introduction", "In physics, signal amplification is a simple concept.", "However, signal amplification unavoidably comes with noise[1].", "Often, the introduced noise makes it difficult for people to distinguish the amplified signal.", "The downside is that quantum noise will restrict quantum technologies such as quantum cloning[2], [3] and superluminal information transfer[4].", "In order to conquer this restriction, people are more looking forward to taking advantage of noiseless amplification, which can be implemented by probabilistic operations[5], [6].", "In recent years, the theoretical and experimental study on ideal noiseless linear amplification (NLA) has attracted the interests of the researchers[7], [8], [9], [10].", "The NLA has been used for many quantum information tasks such as loss suppression[11], quantm repeater[12], quantum error correction[13], and entanglement distillation[14].", "The NLA can be described by the operator $g^{\\hat{n}}$ (AM), where $\\hat{n}=\\hat{a}^{\\dag }\\hat{a}$ is the photon number operator and $g$ ($g>1$ ) is the gain factor.", "Here $g^{\\hat{n}}\\equiv \\hat{1}$ is just the identity operator if $g=1$ .", "The NLA can be implemented probabilistically by combinating multiple photon addition and subtraction with current technology[15].", "Theoretically, an ideal noiseless amplifier (described by $g^{\\hat{n}}$ ) can map an input state $\\rho _{in}$ into an output state $\\rho _{out}$ , i.e., $g^{\\hat{n}}:\\rho _{in}\\longmapsto \\rho _{out}$ .", "For input Fock state $\\left|n\\right\\rangle $ , we have $g^{\\hat{n}}:\\left|n\\right\\rangle \\longmapsto \\left|n\\right\\rangle $ due to $\\hat{n}\\left|n\\right\\rangle =n\\left|n\\right\\rangle $ .", "For input coherent state (CS), we have $g^{\\hat{n}}:\\left|\\alpha \\right\\rangle \\longmapsto \\left|g\\alpha \\right\\rangle $ because of $g^{\\hat{n}}\\left|\\alpha \\right\\rangle =e^{(g^{2}-1)\\left|\\alpha \\right|^{2}/2}\\left|g\\alpha \\right\\rangle $ .", "Generally, the input states to be amplified by $g^{\\hat{n}}$ include Gaussian and non-Gaussian state[16].", "Moreover, the output states must be re-normalized after operating $g^{\\hat{n}}$ because this operator is unbounded.", "In 2016, Park et al.", "suggested and compared several schemes for non-deterministic NLA of CSs using $\\hat{a}^{\\dag 2}$ , $\\hat{a}\\hat{a}^{\\dag }$ , $\\left( \\hat{a}\\hat{a}^{\\dag }\\right) ^{2}$ , $\\hat{a}^{\\dag 4}$ , 
$\\hat{a}\\hat{a}^{\\dag }\\hat{a}^{\\dag 2}$ , $\\hat{a}^{\\dag 2}\\hat{a}\\hat{a}^{\\dag }$ , which may work as amplifiers for CSs with weak, medium or large amplitudes.", "Among them, the two-photon addition ($\\hat{a}^{\\dag 2}$ ) scheme work more effectively than others as a noiseless amplifier[17].", "Before then, in 1991, Argarwal and Tara had introduced photon-added CSs (PACS) $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ , which can be generated by applying a $m$ -photon addition $\\hat{a}^{\\dag m}$ (AD) on the CS $\\left|\\alpha \\right\\rangle $ , where $m$ is an interger[18].", "It is undeniable that the AD $\\hat{a}^{\\dag m}$ also works as an amplifier to amplify $\\left|\\alpha \\right\\rangle $ to $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ .", "In this paper, we shall study the behaviour of $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ under the action of $g^{\\hat{n}}$ .", "In another words, we shall obtain new quantum states by further applying $g^{\\hat{n}}$ on $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ .", "We will analyze their nonclassical properties and amplification effects.", "The key is to examine the combinatorial effects of AD $\\hat{a}^{\\dag m}$ and AM $g^{\\hat{n}}$ on CSs.", "The remaining paper is organized as follows.", "In Sec.2, we introduce our considered states accompanying with description and normalization.", "In Sec.3, we analyze photon components for them.", "In Sec.4, we study Wigner functions to show character of Gaussianity and nonclassicality.", "Sec.5 is devoted to studying effective gain, quadrature squeezing and equivalent input noise for these states.", "Conclusions are summarized in the last section." ], [ "Generating quantum states", "Based on operators $g^{\\hat{n}}$ , $\\hat{a}^{\\dag m}$ , and CS $\\left|\\alpha \\right\\rangle $ , we introduce a class of new states by two equivalent ways.", "The conceptual generating schemes are shown in Fig.1.", "Figure: (a) Conceptional generating schemes of ADAMCS ψ 1 \\left|\\protect \\psi _{1}\\right\\rangle and AMADCS ψ 2 \\left|\\protect \\psi _{2}\\right\\rangle .", "It is interesting to prove that ψ 1 \\left|\\protect \\psi _{1}\\right\\rangle and ψ 2 \\left|\\protect \\psi _{2}\\right\\rangle are the same MPAACS ψ\\left|\\protect \\psi \\right\\rangle .", "(b) The map α↦a †m gα\\left|\\protect \\alpha \\right\\rangle \\longmapsto a^{\\dag m}\\left|g\\protect \\alpha \\right\\rangle can be implemented by way 1 α↦a †m α↦a †m gα\\left|\\protect \\alpha \\right\\rangle \\longmapsto a^{\\dag m}\\left|\\protect \\alpha \\right\\rangle \\longmapsto a^{\\dag m}\\left|g\\protect \\alpha \\right\\rangle or by way 2 α↦gα↦a †m gα\\left|\\protect \\alpha \\right\\rangle \\longmapsto \\left|g\\protect \\alpha \\right\\rangle \\longmapsto a^{\\dag m}\\left|g\\protect \\alpha \\right\\rangle after employing g n ^ g^{\\hat{n}} and a ^ †m \\hat{a}^{\\dag m} in sequence.Way 1: Employing $\\hat{a}^{\\dag m}$ then $g^{\\hat{n}}$ on $\\left|\\alpha \\right\\rangle $ , we get the addition-amplification coherent state (ADAMCS) $\\left|\\psi _{1}\\right\\rangle =\\frac{1}{\\sqrt{N_{1}}}g^{\\hat{n}}\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle , $ with normalization factor $N_{1}=g^{2m}m!e^{\\left( g^{2}-1\\right) \\left|\\alpha \\right|^{2}}L_{m}(-g^{2}\\left|\\alpha \\right|^{2}), $ where $L_{m}(x)$ is the $m$ th-order Laguerre polynomial[19].", "This map include two processes, i.e., $\\hat{a}^{\\dag m}:$ $\\left|\\alpha \\right\\rangle \\longmapsto \\hat{a}^{\\dag m}\\left|\\alpha 
\\right\\rangle $ and $g^{\\hat{n}}:$ $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle \\longmapsto \\left|\\psi _{1}\\right\\rangle $ , where $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ is the intermediate state.", "Way 2: Employing $g^{\\hat{n}}$ then $\\hat{a}^{\\dag m}$ on $\\left|\\alpha \\right\\rangle $ , we get the amplification-addition coherent state (AMADCS) $\\left|\\psi _{2}\\right\\rangle =\\frac{1}{\\sqrt{N_{2}}}\\hat{a}^{\\dag m}g^{\\hat{n}}\\left|\\alpha \\right\\rangle , $ with normalization factor $N_{2}=m!e^{\\left( g^{2}-1\\right) \\left|\\alpha \\right|^{2}}L_{m}(-g^{2}\\left|\\alpha \\right|^{2}).", "$ This map include two processes, i.e., $g^{\\hat{n}}:$ $\\left|\\alpha \\right\\rangle \\longmapsto \\left|g\\alpha \\right\\rangle $ and $\\hat{a}^{\\dag m}:$ $\\left|g\\alpha \\right\\rangle \\longmapsto \\left|\\psi _{2}\\right\\rangle $ , where $\\left|g\\alpha \\right\\rangle $ is the intermediate state.", "Although operators $g^{\\hat{n}}$ and $\\hat{a}^{\\dag m}$ do not commute with each other (i.e.", "$g^{\\hat{n}}\\hat{a}^{\\dag m}\\ne \\hat{a}^{\\dag m}g^{\\hat{n}}$ ), the relation $g^{\\hat{n}}\\hat{a}^{\\dag m}=g^{m}\\hat{a}^{\\dag m}g^{\\hat{n}}$ is satisfied and leads to $\\left|\\psi _{1}\\right\\rangle =\\left|\\psi _{2}\\right\\rangle $ after normalization.", "Thus, we redefine them as $\\left|\\psi \\right\\rangle $ in the form $\\left|\\psi \\right\\rangle =\\frac{1}{\\sqrt{N}}\\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle , $ with normalization factor $N=m!L_{m}(-g^{2}\\left|\\alpha \\right|^{2})$ .", "In this paper, we call $\\left|\\psi \\right\\rangle $ as MPAACS, which is just an amplified PACS.", "By using $\\left|\\psi _{1}\\right\\rangle $ in Appendix A or by using $\\left|\\psi _{2}\\right\\rangle $ in Appendix B in two parallel ways, we get the expressions of state description, normalization facor, expectation value, density matrix elements, and Wigner function for $\\left|\\psi \\right\\rangle $ .", "Without question, the results in Appendix A are same to those in Appendix B except $N_{1}=g^{2m}N_{2}$ .", "As illustrated in Table I, states including $\\left|0\\right\\rangle $ , $\\left|m\\right\\rangle $ , $\\left|\\alpha \\right\\rangle $ , $\\left|g\\alpha \\right\\rangle $ , and $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ are special cases of $\\left|\\psi \\right\\rangle $ with proper $g$ , $\\alpha $ , and $m$ .", "Table: Special cases of the MPAACS" ], [ "Component analysis of the MPAACS", "The MPAACS can be expanded in terms of Fock states as $\\left|\\psi \\right\\rangle =\\sum _{k=m}^{\\infty }c_{k}\\left|k\\right\\rangle , $ with $c_{k}=\\frac{\\sqrt{k!", "}\\left( g\\alpha \\right) ^{k-m}e^{-g^{2}\\left|\\alpha \\right|^{2}/2}}{\\left( k-m\\right) !\\sqrt{m!L_{m}(-g^{2}\\left|\\alpha \\right|^{2})}}.", "$ It is interesting to note that the components including $\\left|k\\right\\rangle $ ($k<m$ ) are missing in the MPAACS.", "Moreover, when $g=1$ , the result can be reduced to that in the work of Agarwal and Tara[18].", "Accordingly, the density matrix element (DME) for $\\rho =\\left|\\psi \\right\\rangle \\left\\langle \\psi \\right|$ can be expressed as $\\rho _{kl}=\\left\\langle k\\right|\\rho \\left|l\\right\\rangle =c_{k}c_{l}^{\\ast }$ , whose numerical results also can be calculated according to Eq.", "(A.7) or Eq.", "(B.6) in the Appendix.", "The photon number distribution (PND) can be written as $\\rho _{kk}=\\left|c_{k}\\right|^{2}$ , i.e., the diagonal terms of the density matrix.", "Without loss 
of generality, we shall take $\\alpha =\\left|\\alpha \\right|e^{i\\theta _{p}}$ only with $\\theta _{p}=0$ in the numerical analysis.", "Fig.2 presents the PNDs $\\rho _{kk}$ of the MPAACSs with different parameters ($\\left|\\alpha \\right|$ , $g$ , $m$ ).", "The results show that: (a) The effect of photon-added number $m$ can be observed in Fig.2(a) at fixed $\\left|\\alpha \\right|=1$ , $g=2$ and for three different $m $ .", "As $m$ increases, the PNDs $\\rho _{kk}$ approach higher-photon regime where all the $\\left|k\\right\\rangle $ terms with $k<m$ are missing, like a shifted version of original amplified CS ($m=0$ ).", "(b) The effect of gain factor $g$ can be observed in Fig.2(b) at fixed $\\left|\\alpha \\right|=1$ , $m=2$ and for three different $g$ .", "It is obvious to see that the PNDs approach higher-photon regime as $g$ increasing.", "(c) The effect of the field amplitude $\\left|\\alpha \\right|$ can be observed in Fig.2(c) at fixed $g=2$ , $m=2$ and for three different $\\left|\\alpha \\right|$ .", "We find if $\\left|\\alpha \\right|=0 $ , there is the sole component of $\\left|m\\right\\rangle $ .", "If case $\\left|\\alpha \\right|>0$ , the PNDs approach higher-photon regime as $\\left|\\alpha \\right|$ increasing.", "Fig.3 presents the absolute values of the DMEs ($\\left|\\rho _{kl}\\right|$ ) of the MPAACSs showing the effects of parameters ($\\left|\\alpha \\right|$ , $g$ , $m$ ).", "Generally, two adjacent states, corresponding to two adjacent graphs in Fig.3, can be connected through a map with its operation.", "These maps include $\\hat{a}^{\\dag m}:\\left|0\\right\\rangle \\longmapsto \\left|m\\right\\rangle $ for (a)-(b); $D\\left(\\alpha \\right) :\\left|0\\right\\rangle \\longmapsto \\left|\\alpha \\right\\rangle $ for (a)-(c), where $D\\left( \\alpha \\right) =e^{\\alpha a^{\\dag }-\\alpha ^{\\ast }a}$ is the displacement operator; $\\hat{a}^{\\dag m}:\\left|\\alpha \\right\\rangle \\longmapsto \\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ for (c)-(d); $g^{\\hat{n}}:\\left|\\alpha \\right\\rangle \\longmapsto \\left|g\\alpha \\right\\rangle $ for (c)-(e); $g^{\\hat{n}}:\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle \\longmapsto \\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle $ for (d)-(f); $\\hat{a}^{\\dag m}:\\left|g\\alpha \\right\\rangle \\longmapsto \\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle $ for (e)-(f).", "Of course, the map $\\left|m\\right\\rangle \\longmapsto \\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ for (b)-(d) can not be realized simply by $D\\left( \\alpha \\right) $ .", "It can be easily seen from Fig.3 that: (1) The AD $\\hat{a}^{\\dag m}$ leads to the rescale of corrresponding DMEs and the displacement to higher indeces ($\\rho _{k,l}\\rightarrow \\rho _{k+m,l+m}$ ), leaving all $\\rho _{k,l}$ with $k,l<m$ void.", "(2) The AM $g^{\\hat{n}}$ leads to the rescale of corrresponding DMEs.", "Figure: PNDs of ψ\\left|\\protect \\psi \\right\\rangle with different (α\\left|\\protect \\alpha \\right|, gg, mm).", "(a) showing the effectof mm; (b) showing the effect of gg; (c) showing the effect of α\\left|\\protect \\alpha \\right|.Figure: Absolute values of DMEs for ψ\\left|\\protect \\psi \\right\\rangle -(α\\left|\\protect \\alpha \\right|, gg, mm) taking (a) 0\\left|0\\right\\rangle -(0, 1, 0); (b) m\\left|m\\right\\rangle -(0, 1, 2); (c) α\\left|\\protect \\alpha \\right\\rangle -(1, 1, 0); (d) a †m αa^{\\dag m}\\left|\\protect \\alpha \\right\\rangle -(1, 1, 2); (e) gα\\left|g\\protect \\alpha \\right\\rangle -(1, 2, 
0); and (f) a †m gαa^{\\dag m}\\left|g\\protect \\alpha \\right\\rangle -(1, 2, 2).", "Indeed, anytwo adjacent states can be connected through appropriate operations such as DαD\\left( \\protect \\alpha \\right) , a †m a^{\\dag m}, and g n ^ g^{\\hat{n}}." ], [ "Wigner function of the MPAACS", "In phase-space formalism, the Wigner function (WF) is an important quasiprobability distribution, representing the corresponding quantum state[20], [21], [22].", "One can judge Gaussianity or non-Gaussianity from its WF form and non-classicality from its Wigner negativity[23], [24].", "Theoretically, $W_{\\rho }\\left( \\beta \\right) $ can be also obtained by means of the following transformation $W_{\\rho }\\left( \\beta \\right) =\\sum _{k,l=0}^{\\infty }\\rho _{kl}W_{\\left|k\\right\\rangle \\left\\langle l\\right|}\\left( \\beta \\right) $ which is associated with $\\rho _{kl}$ and $W_{\\left|k\\right\\rangle \\left\\langle l\\right|}\\left( \\beta \\right) $ (WF of operator $\\left|k\\right\\rangle \\left\\langle l\\right|$ ).", "But it is very difficult to simulate perfectly as given by Eq.", "(REF ) because of the infinite summation.", "In experiment, the approximate WF can be reconstructed from a truncated density matrix with finite dimension through tomographic analysis.", "Fortunately, analytical WF of the MPAACS can be obtained from Eq.", "(A.8) or Eq.", "(B.7) as follows $W_{\\rho }\\left( \\beta \\right) =\\dfrac{2(-1)^{m}L_{m}(\\left|2\\beta -g\\alpha \\right|^{2})}{\\pi L_{m}(-g^{2}\\left|\\alpha \\right|^{2})}e^{-2\\left|\\beta -g\\alpha \\right|^{2}}, $ in the ($x,y$ ) phase space, where $\\beta =(x+iy)/\\sqrt{2}$ .", "As expected, Eq.", "(REF ) can reduce to the following special cases.", "(1) If $\\alpha =0$ and $m=0$ , then we obtain WF of vacuum state $W_{\\left|0\\right\\rangle }\\left( \\beta \\right) =\\dfrac{2}{\\pi }e^{-2\\left|\\beta \\right|^{2}}.", "$ (2) If $m=0$ and $g=1$ , then we obtain WF of coherent state $W_{\\left|\\alpha \\right\\rangle }\\left( \\beta \\right) =\\dfrac{2}{\\pi }e^{-2\\left|\\beta -\\alpha \\right|^{2}}.", "$ (3) If $m=0$ , then we obtain WF of amplified CS $W_{\\left|g\\alpha \\right\\rangle }\\left( \\beta \\right) =\\dfrac{2}{\\pi }e^{-2\\left|\\beta -g\\alpha \\right|^{2}}.", "$ (4) If $\\alpha =0$ , then we obtain WF of Fock state $W_{\\left|m\\right\\rangle }\\left( \\beta \\right) =\\dfrac{2}{\\pi }(-1)^{m}e^{-2\\left|\\beta \\right|^{2}}L_{m}(4\\left|\\beta \\right|^{2}).", "$ (5) If $g=1$ , then we obtain WF of photon-added CS $W_{\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle }\\left( \\beta \\right) =\\dfrac{2(-1)^{m}L_{m}(\\left|2\\beta -\\alpha \\right|^{2})}{\\pi L_{m}(-\\left|\\alpha \\right|^{2})}e^{-2\\left|\\beta -\\alpha \\right|^{2}}.", "$ Eq.", "(REF ) is just the equation (3.8) in Ref.[18].", "Corresponding to Fig.3 and according to Eqs.", "(REF )-(REF ), we plot WFs in Fig.4 and their sections (with $y=0$ ) in Fig.5(a), as well as their marginal distributions in Fig.5(b).", "Here, the marginal distribution in $x$ direction can be evaluated numerically by $p\\left( x\\right) =\\int _{-\\infty }^{\\infty }W\\left( \\beta \\right) dy,$ where the scaling relations such as $\\int _{-\\infty }^{\\infty }W\\left( \\beta \\right) d^{2}\\beta =1$ and $\\int _{-\\infty }^{\\infty }p\\left( x\\right) dx=1$ must be ensured.", "Figure: WFs WβW\\left( \\protect \\beta \\right) of ψ\\left|\\protect \\psi \\right\\rangle in the phase space (x,y)(x,y).", "Here, ψ\\left|\\protect \\psi \\right\\rangle -(α\\left|\\protect \\alpha 
\\right|, gg, mm) aretaking (a) 0\\left|0\\right\\rangle -(0, 1, 0); (b) m\\left|m\\right\\rangle -(0, 1, 2); (c) α\\left|\\protect \\alpha \\right\\rangle -(1, 1, 0); (d) a †m αa^{\\dag m}\\left|\\protect \\alpha \\right\\rangle -(1, 1, 2); (e) gα\\left|g\\protect \\alpha \\right\\rangle -(1, 2, 0); and (f) a †m gαa^{\\dag m}\\left|g\\protect \\alpha \\right\\rangle -(1, 2, 2).Figure: (a) Sections W(x,y=0)W(x,y=0) and (b) Marginal distribution p(x)p(x) of WβW\\left( \\protect \\beta \\right) for ψ\\left|\\protect \\psi \\right\\rangle in Fig.4 as a function of xx.", "Here, ψ\\left|\\protect \\psi \\right\\rangle -(α\\left|\\protect \\alpha \\right|, gg, mm) taking(red dashed) 0\\left|0\\right\\rangle -(0, 1, 0); (red solid) m\\left|m\\right\\rangle -(0, 1, 2); (blue dashed) α\\left|\\protect \\alpha \\right\\rangle -(1, 1, 0); (blue solid) a †m αa^{\\dag m}\\left|\\protect \\alpha \\right\\rangle -(1, 1, 2); (black dashed) gα\\left|g\\protect \\alpha \\right\\rangle -(1, 2, 0); and (blacksolid) a †m gαa^{\\dag m}\\left|g\\protect \\alpha \\right\\rangle -(1, 2, 2).The left three figures of Fig.4 correspond to vacuum state $\\left|0\\right\\rangle $ , coherent state $\\left|\\alpha \\right\\rangle $ , and amplified CS $\\left|g\\alpha \\right\\rangle $ .", "Their distributions are Gaussian without Wigner negativity and with different central positions.", "The right three figures of Fig.4 correspond to Fock state $\\left|m\\right\\rangle $ , Photon-added CS $\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ , and MPAACS $\\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle $ .", "Their distributions are non-Gaussian and showing Wigner negativity.", "Main characters including Gaussianity (or non-Gaussianity) and nonclassicality (negativity) can also be seen from their corresponding sections in $y=0$ in Fig.5(a).", "Morover, we see from Fig.5(b) that, as $m$ is increased, the width of distribution $p(x)$ becomes narrower comparing to that of $\\left|\\alpha \\right\\rangle $ and $\\left|g\\alpha \\right\\rangle $ (corresponding to $m=0$ ) with $\\left|\\alpha \\right|>0$ .", "We think, this is owe to the effect of AD $\\hat{a}^{\\dag m}$ ." 
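Since the Wigner function above is given in closed form, the features seen in Figs. 4 and 5 (non-Gaussianity and negativity for $m>0$, together with the normalization $\int W(\beta)\,d^2\beta=1$) can be checked by direct numerical evaluation. A minimal sketch (the grid and the parameter choice $(|\alpha|,g,m)=(1,2,2)$ of panel (f) are illustrative; this is not the authors' code):

    import numpy as np
    from scipy.special import eval_laguerre

    # W(beta) = 2 (-1)^m L_m(|2 beta - g alpha|^2) exp(-2 |beta - g alpha|^2)
    #           / (pi L_m(-g^2 |alpha|^2)),   with beta = (x + i y)/sqrt(2)
    def wigner_mpaacs(beta, alpha, g, m):
        num = 2 * (-1)**m * eval_laguerre(m, np.abs(2 * beta - g * alpha)**2)
        return (num * np.exp(-2 * np.abs(beta - g * alpha)**2)
                / (np.pi * eval_laguerre(m, -g**2 * np.abs(alpha)**2)))

    x = np.linspace(-4.0, 6.0, 201)
    y = np.linspace(-5.0, 5.0, 201)
    X, Y = np.meshgrid(x, y)
    beta = (X + 1j * Y) / np.sqrt(2)

    W = wigner_mpaacs(beta, alpha=1.0, g=2.0, m=2)
    d2beta = (x[1] - x[0]) * (y[1] - y[0]) / 2       # d^2 beta = dx dy / 2
    print("min W         =", W.min())                # negative for m > 0 (Wigner negativity)
    print("int W d^2beta =", W.sum() * d2beta)       # close to 1 on a large enough grid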
], [ "Effective gain, quadrature squeezing and equivalent input noise", "Generally, many properties of light states are related to amplitude quadrature $\\hat{x}=\\left( a+a^{\\dag }\\right) /\\sqrt{2}$ and phase quadrature $\\hat{p}=\\left( a-a^{\\dag }\\right) /(i\\sqrt{2})$[25], [26].", "In order to study the combinatorial contributions of AM $g^{\\hat{n}}$ and AD $\\hat{a}^{\\dag m}$ , we compare the properties of $\\left|\\alpha \\right\\rangle $ and $\\left|\\psi \\right\\rangle $ in terms of the effective gain, quadrature squeezing, and equivalent input noise.", "After our full consideration, we only use quadrature operator $\\hat{x}$ to discuss all these properties.", "Effective gain: An effective gain[27], [28] from the input $\\left|\\alpha \\right\\rangle $ to the output $\\left|\\psi \\right\\rangle $ can be defined as the ratio of the expectation values of the quadrature operator $\\hat{x}$ : $g_{eff}=\\frac{\\left\\langle \\hat{x}\\right\\rangle _{\\left|\\psi \\right\\rangle }}{\\left\\langle \\hat{x}\\right\\rangle _{\\left|\\alpha \\right\\rangle }}.", "$ In particularly, we have $g_{eff}^{(m=0)} &=&g, \\\\g_{eff}^{(m=1)} &=&g\\frac{2+g^{2}\\left|\\alpha \\right|^{2}}{1+g^{2}\\left|\\alpha \\right|^{2}}, \\\\g_{eff}^{(m=2)} &=&g\\frac{6+6g^{2}\\left|\\alpha \\right|^{2}+g^{4}\\left|\\alpha \\right|^{4}}{2+4g^{2}\\left|\\alpha \\right|^{2}+g^{4}\\left|\\alpha \\right|^{4}}, \\\\&&\\vdots $ Figure: (a) Effective gain g eff g_{eff}, (b) Variances Δx 2 \\left\\langle \\left(\\Delta x\\right) ^{2}\\right\\rangle , and (c) Equivalent input noise N eq N_{eq}as the function of α\\left|\\protect \\alpha \\right| .", "Here,parameters are taking g=1g=1 (red),2,2 (blue),3,3 (black) and m=0m=0 (dashed),1,1 (dotdashed),2,2 (solid).In Fig.6(a), we plot $g_{eff}$ as a function of $\\left|\\alpha \\right|$ by taking different $g$ and $m$ .", "From Fig.6(a), we find that $g_{eff}\\gtrsim g$ is always right and $g_{eff}$ is a monotonical decreasing function of $\\left|\\alpha \\right|$ for different $g$ and $m>0$ .", "Two limiting cases include: (1) $g_{eff}\\rightarrow (m+1)g$ in the limit of $\\left|\\alpha \\right|\\rightarrow 0$ ; (2) $g_{eff}\\rightarrow g$ in the limit of $\\left|\\alpha \\right|\\rightarrow \\infty $ .", "Quadrature squeezing: Quadrature squeezing, as a nonclassical character of light states, can be evident by measuring the quadrature variances[29], [30].", "The quadrature variance in $\\hat{x}$ can be calculated from $\\left\\langle \\left( \\Delta \\hat{x}\\right) ^{2}\\right\\rangle =\\left\\langle \\hat{x}^{2}\\right\\rangle -\\left\\langle \\hat{x}\\right\\rangle ^{2},$ which is also called as the quadrature flunctuation.", "Similar definition $\\left\\langle \\left( \\Delta \\hat{p}\\right) ^{2}\\right\\rangle $ is available to quadrature $\\hat{p}$ .", "It is well known that $\\left|\\alpha \\right\\rangle $ and $\\left|g\\alpha \\right\\rangle $ are not quadrature squeezing states because of $\\left\\langle \\left( \\Delta \\hat{x}\\right)^{2}\\right\\rangle =\\left\\langle \\left( \\Delta \\hat{p}\\right)^{2}\\right\\rangle =0.5$ .", "While if $\\left\\langle \\left( \\Delta \\hat{x}\\right)^{2}\\right\\rangle $ or $\\left\\langle \\left( \\Delta \\hat{p}\\right)^{2}\\right\\rangle $ is smaller than $0.5$ , then the quantum state is quadrature squeezing.", "Thus, a question naturally arises: is the MPAACS quadrature squeezing?", "In order to answer this question, we only plot $\\left\\langle \\left( \\Delta \\hat{x}\\right) ^{2}\\right\\rangle $ as a function 
of $\\left|\\alpha \\right|$ for different $\\left|\\psi \\right\\rangle $ in Fig.6(b) regardless of $\\left\\langle \\left( \\Delta \\hat{p}\\right) ^{2}\\right\\rangle \\ge 0.5$ .", "Clearly, we find that (1) For $m=0$ case, the $\\hat{x}$ quadrature variance remains constant fluctuation $0.5$ (without quadrature squeezing) for any $g$ and $\\left|\\alpha \\right|$ .", "This is because $\\left|\\psi \\right\\rangle $ has been reduced to $\\left|\\alpha \\right\\rangle $ or $\\left|g\\alpha \\right\\rangle $ in this case.", "(2) For $m=1$ case, the $\\hat{x}$ quadrature exhibits squeezing if $\\left|g\\alpha \\right|>1 $ , which is consistent with the result for SPACS[31].", "(3) For $m=2$ case, the $\\hat{x}$ quadrature exhibits squeezing if $\\left|g\\alpha \\right|>0.938744$ .", "Other similar reduced fluctuations will be exhibited squeezing if $\\left|g\\alpha \\right|>0.900407$ for $m=3$ , $\\left|g\\alpha \\right|>0.873904$ for $m=4$ , $\\left|g\\alpha \\right|>0.854454$ for $m=5$ , and so on.", "(4) All above results show that the MPAACSs except $m=0$ will exhibit quadrature squeezing, only when $\\left|g\\alpha \\right|$ exceeds a certain threshold.", "(5) Moreover, we always have $\\left\\langle \\left( \\Delta \\hat{x}\\right) ^{2}\\right\\rangle \\rightarrow 0.5$ in the limit of $\\left|\\alpha \\right|\\rightarrow \\infty $ .", "Equivalent input noise: Noise of the output $\\left|\\psi \\right\\rangle $ referring to the input $\\left|\\alpha \\right\\rangle $ may be analyzed in terms of the equivalent input noise (EIN): $N_{eq}=\\frac{\\left\\langle \\left( \\Delta \\hat{x}\\right) ^{2}\\right\\rangle _{\\left|\\psi \\right\\rangle }}{g_{eff}^{2}}-\\left\\langle \\left( \\Delta \\hat{x}\\right) ^{2}\\right\\rangle _{\\left|\\alpha \\right\\rangle },$ which tells how much noise has been added to the input noise level.", "In fact, the EIN came from a classical electronics terminology and used to quantify the performance of an amplifier[27], [28], [32], [33].", "For an amplification process, the EIN is negative and indicates the characteristic of noiseless amplification[34].", "In Fig.6(c), the EINs are shown as a function of $\\left|\\alpha \\right|$ for $\\left|\\psi \\right\\rangle $ with different $g$ and $m $ .", "The numerical results reveal that $N_{eq}$ is clearly negative for all $\\left|\\alpha \\right|$ , except case $m=0$ and $g=1$ .", "The main results include: (1) For $m=0$ , we know that $N_{eq}$ remains constant $0.5/g^{2}$ $-0.5$ for any $g$ and all $\\left|\\alpha \\right|$ ; (2) In the limit of $\\left|\\alpha \\right|\\rightarrow 0$ , we see $N_{eq}\\rightarrow 0.5/g^{2}$ $-0.5$ for $m=0$ , $0.375/g^{2}$ $-0.5$ for $m=1$ , $0.277778/g^{2}$ $-0.5$ for $m=2$ , respectively; (3) In the limit of $\\left|\\alpha \\right|\\rightarrow \\infty $ , we see $N_{eq}\\rightarrow $ $0.5/g^{2}$ $-0.5$ for any $g$ ($>1$ ) and different $m$ ; (4) A minimum value of $N_{eq}$ can be see at proper $\\left|\\alpha \\right|$ for each case." 
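All three figures of merit in this section are built from low-order moments of $\hat a$ and $\hat a^\dagger$ in $|\psi\rangle$, so they can be cross-checked directly from the truncated Fock expansion of Sec. 3. A small sketch along these lines (real $\alpha$; the parameter values and the truncation kmax are illustrative assumptions, not taken from the paper), which also reproduces the quoted expression for $g_{eff}^{(m=1)}$:

    import numpy as np
    from scipy.special import eval_laguerre, factorial

    def fock_coeffs(alpha, g, m, kmax=80):           # c_k of the MPAACS, real alpha
        k = np.arange(kmax)
        c = np.zeros(kmax)
        norm = np.sqrt(factorial(m) * eval_laguerre(m, -g**2 * alpha**2))
        kk = k[k >= m]
        c[kk] = (np.sqrt(factorial(kk)) * (g * alpha)**(kk - m)
                 * np.exp(-g**2 * alpha**2 / 2) / (factorial(kk - m) * norm))
        return c

    def quadrature_stats(c):                         # <x>, <(Delta x)^2> for x = (a + a^dag)/sqrt(2)
        k = np.arange(len(c))
        a_mean  = np.sum(np.sqrt(k[1:]) * c[1:] * c[:-1])                 # <a>
        a2_mean = np.sum(np.sqrt(k[2:] * (k[2:] - 1)) * c[2:] * c[:-2])   # <a^2>
        n_mean  = np.sum(k * c**2)                                        # <n>
        x_mean  = np.sqrt(2) * a_mean
        x2_mean = (2 * a2_mean + 2 * n_mean + 1) / 2
        return x_mean, x2_mean - x_mean**2

    alpha, g, m = 1.0, 2.0, 1
    x_psi, var_psi = quadrature_stats(fock_coeffs(alpha, g, m))
    g_eff = x_psi / (np.sqrt(2) * alpha)             # <x>_alpha = sqrt(2) alpha
    n_eq  = var_psi / g_eff**2 - 0.5                 # <(Delta x)^2>_alpha = 1/2

    print(g_eff, g * (2 + g**2 * alpha**2) / (1 + g**2 * alpha**2))   # numerics vs. the m=1 formula
    print(var_psi, n_eq)      # var < 0.5 signals squeezing; n_eq < 0 signals noiseless gain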
], [ "Conclusion and discussion", "We have introduced MPAACSs by applying AM $g^{\\hat{n}}$ and AD $\\hat{a}^{\\dag m}$ on $\\left|\\alpha \\right\\rangle $ and proved that state $g^{\\hat{n}}\\hat{a}^{\\dag m}\\left|\\alpha \\right\\rangle $ and state $\\hat{a}^{\\dag m}g^{\\hat{n}}\\left|\\alpha \\right\\rangle $ are state $\\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle $ .", "From $\\left|\\alpha \\right\\rangle $ to $\\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle $ , the combinatorial effect of $g^{\\hat{n}}$ and $\\hat{a}^{\\dag m}$ is working as an amplifier.", "From the point of view of quantum state engineering, these MPAACSs $\\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle $ are a class of new quantum states, which include many familiar quantum states, such as $\\left|0\\right\\rangle $ , $\\left|m\\right\\rangle $ , $\\left|\\alpha \\right\\rangle $ , $\\left|g\\alpha \\right\\rangle $ , and $\\hat{a}^{\\dag m}\\left|g\\alpha \\right\\rangle $ .", "We have derived the normalization factor for the MPAACS and found that it is related to Lagurrel polynomials.", "Interesting physical properties are given analytically and simulated numerically according to the supplementary materials.", "The main results are summarized as follows.", "As for the effects of AM $g^{\\hat{n}}$ and AD $\\hat{a}^{\\dag m}$ on photon components of the MPAACSs, we find that: (1) The AD $\\hat{a}^{\\dag m}$ leads to the void of the low-photon components (including $\\left|0\\right\\rangle $ , $\\left|1\\right\\rangle $ , $\\cdots $ , $\\left|m-1\\right\\rangle $ ) and the re-layout of photon components.", "(2) The AM $g^{\\hat{n}}$ leads to the re-layout of photon components.", "As for the effects of AM $g^{\\hat{n}}$ and AD $\\hat{a}^{\\dag m}$ on WFs, we find that: (1) The AD $\\hat{a}^{\\dag m}$ is a non-Gaussian operation, which can transform a Gaussian state into a non-gaussian state, accompanying with Wigner negativity.", "(2) The AM $g^{\\hat{n}}$ is a Gaussian operation, which can remain original Gaussianity or non-Gaussianity of quantum state.", "As for the effects of AM $g^{\\hat{n}}$ and AD $\\hat{a}^{\\dag m}$ on amplification, squeezing and noise, we find that: (1) Both $g^{\\hat{n}}$ and $\\hat{a}^{\\dag m}$ can improve the effective gain by changing $g$ and $m$ .", "(2) The quadrature squeezing will exhibit when $\\left|g\\alpha \\right|$ exceeds a certain threshold except $m=0$ .", "(3) The EINs except case $g=1$ and $m=0$ are negative, showing the characteristic of noiseless amplification.", "In our previous works[35], [36], [37], we have introduced several amplified quantum states, such as amplified coherent state, amplified thermal state, and amplified squeezed vacuum, by applying $\\left( g-1\\right) \\hat{n}+1$ or $(g-\\sqrt{2g-1})\\hat{n}^{2}+(\\sqrt{2g-1}-1)\\hat{n}+1$ on the corresponding input states.", "These amplified states can exhibit their respective peculiar nonclassicality.", "Of course, these operators work as amplifiers of realizing signal amplification.", "However, it is actually impossible to implement a perfect noiseless amplifier described by $g^{\\hat{n}}$ , albeit with zero success probability[38].", "So, our present work only provides a theoretical reference for signal amplification in quantum technology or state generation in quantum state engineering." 
], [ "Appendix: Supplemental Materials", "Using techniques such as $g^{\\hat{n}}=:e^{(g-1)\\hat{a}^{\\dag }\\hat{a}}:$ ($:\\cdots :$ denotes normal ordering) and $x^{n}=\\partial _{s}^{n}e^{sx}|_{s=0}$ , we provide the following supplemental materials for all informations discussed in the main text.", "In this appendix, we provide the state description, normalization, expectation value, density matrix elements and Wigner function for $\\left|\\psi _{1}\\right\\rangle $ .", "(a) State description Eq.", "(REF ) can be further written as $\\left|\\psi _{1}\\right\\rangle =\\frac{e^{-\\frac{\\left|\\alpha \\right|^{2}}{2}}}{\\sqrt{N_{1}}}\\partial _{s_{1}}^{m}e^{g\\left( \\alpha +s_{1}\\right) a^{\\dag }}\\left|0\\right\\rangle |_{s_{1}=0}, \\qquad \\mathrm {(A.1)}$ accompanying with conjugate state $\\left\\langle \\psi _{1}\\right|=\\frac{e^{-\\frac{\\left|\\alpha \\right|^{2}}{2}}}{\\sqrt{N_{1}}}\\partial _{t_{1}}^{m}\\left\\langle 0\\right|e^{g\\left( \\alpha ^{\\ast }+t_{1}\\right) a}|_{t_{1}=0},\\qquad \\mathrm {(A.2)}$ which leads to density operator $\\rho _{1}=\\left|\\psi _{1}\\right\\rangle \\left\\langle \\psi _{1}\\right|$ , $\\rho _{1}=\\frac{e^{-\\left|\\alpha \\right|^{2}}}{N_{1}}\\partial _{s_{1}}^{m}\\partial _{t_{1}}^{m}e^{g\\left( \\alpha +s_{1}\\right) a^{\\dag }}\\left|0\\right\\rangle \\left\\langle 0\\right|e^{g\\left( \\alpha ^{\\ast }+t_{1}\\right) a}|_{\\left( s_{1},t_{1}\\right) =0}.", "\\qquad \\mathrm {(A.3)}$ (b) Normalization The normalization factor is $N_{1}=e^{\\left( g^{2}-1\\right) \\left|\\alpha \\right|^{2}}\\partial _{s_{1}}^{m}\\partial _{t_{1}}^{m}e^{g^{2}\\left( t_{1}\\alpha +s_{1}\\alpha ^{\\ast }+s_{1}t_{1}\\right) }|_{\\left( s_{1},t_{1}\\right) =0}.\\qquad \\mathrm {(A.4)}$ Using the following formula $L_{m}\\left( xy\\right) =\\frac{(-1)^{m}}{m!", "}\\partial _{s}^{m}\\partial _{t}^{m}e^{-st+sx+ty}|_{\\left( s,t\\right) =0}, \\qquad \\mathrm {(A.5)}$ we easily obtain the analytical result of $N_{1}$ in Eq.", "(REF ).", "(c) Expectation value Here, we give $\\langle a^{\\dagger k}a^{l}\\rangle _{\\rho _{1}}$ as follows $\\langle a^{\\dagger k}a^{l}\\rangle _{\\rho _{1}}& =\\frac{e^{\\left(g^{2}-1\\right) \\left|\\alpha \\right|^{2}}}{N_{1}}\\partial _{s_{1}}^{m}\\partial _{t_{1}}^{m}\\partial _{f_{1}}^{k}\\partial _{h_{1}}^{l}\\\\& e^{g\\left( h_{1}\\alpha +f_{1}\\alpha ^{\\ast }\\right) +g^{2}\\left(t_{1}\\alpha +s_{1}\\alpha ^{\\ast }+s_{1}t_{1}\\right) }\\\\& e^{g\\left( h_{1}s_{1}+f_{1}t_{1}\\right) }|_{\\left(s_{1},t_{1},f_{1},h_{1}\\right) =0}.", "$ (d) Density matrix elements Here, we give $\\rho _{kl}^{(1)}=\\left\\langle k|\\rho _{1}|l\\right\\rangle $ with the following form $\\rho _{kl}^{(1)}& =\\frac{e^{-\\left|\\alpha \\right|^{2}}}{N_{1}\\sqrt{k!l!", "}}\\partial _{s_{1}}^{m}\\partial _{t_{1}}^{m}\\partial _{f_{1}}^{k}\\partial _{h_{1}}^{l} \\\\& e^{gf_{1}\\left( \\alpha +s\\right) +gh_{1}\\left( \\alpha ^{\\ast }+t\\right)}|_{\\left( s_{1},t_{1},f_{1},h_{1}\\right) =0}.", "$ (e) Wigner function WF of $\\rho _{1}$ has the following form $& W_{\\rho _{1}}\\left( \\beta \\right) \\\\& =\\dfrac{2}{\\pi N_{1}}e^{-\\left( g^{2}+1\\right) |\\alpha |^{2}-2\\left|\\beta \\right|^{2}+2g\\alpha \\beta ^{\\ast }+2g\\alpha ^{\\ast }\\beta } \\\\& \\partial _{s_{1}}^{m}\\partial _{t_{1}}^{m}e^{-g^{2}\\left( t_{1}\\alpha +s_{1}\\alpha ^{\\ast }+s_{1}t_{1}\\right) +2g\\left( t_{1}\\beta +s_{1}\\beta ^{\\ast }\\right) }|_{\\left( s_{1},t_{1}\\right) =0} $ In this appendix, we provide the state description, normalization, expectation value, density 
matrix elements and Wigner function for $\\left|\\psi _{2}\\right\\rangle $ .", "(a) State description Eq.", "(REF ) can be further written as $\\left|\\psi _{2}\\right\\rangle =\\frac{e^{-\\frac{\\left|\\alpha \\right|^{2}}{2}}}{\\sqrt{N_{2}}}\\partial _{s_{2}}^{m}e^{\\left(s_{2}+g\\alpha \\right) a^{\\dag }}\\left|0\\right\\rangle |_{s_{2}=0},\\qquad \\mathrm {(B.1)}$ accompanying with conjugate state $\\left\\langle \\psi _{2}\\right|=\\frac{e^{-\\frac{\\left|\\alpha \\right|^{2}}{2}}}{\\sqrt{N_{2}}}\\partial _{t_{2}}^{m}\\left\\langle 0\\right|e^{\\left( t_{2}+g\\alpha ^{\\ast }\\right) a}|_{t_{2}=0},\\qquad \\mathrm {(B.2)}$ and leads to density operator $\\rho _{2}=\\left|\\psi _{2}\\right\\rangle \\left\\langle \\psi _{2}\\right|$ , $\\rho _{2}=\\frac{e^{-\\left|\\alpha \\right|^{2}}}{N_{2}}\\partial _{s_{2}}^{m}\\partial _{t_{2}}^{m}e^{\\left( s_{2}+g\\alpha \\right) a^{\\dag }}\\left|0\\right\\rangle \\left\\langle 0\\right|e^{\\left(t_{2}+g\\alpha ^{\\ast }\\right) a}|_{\\left( s_{2},t_{2}\\right) =0}.", "\\qquad \\mathrm {(B.3)}$ (b) Normalization The normalization factor is $N_{2}=e^{\\left( g^{2}-1\\right) \\left|\\alpha \\right|^{2}}\\partial _{s_{2}}^{m}\\partial _{t_{2}}^{m}e^{gt_{2}\\alpha +gs_{2}\\alpha ^{\\ast }+s_{2}t_{2}}|_{\\left( s_{2},t_{2}\\right) =0},\\qquad \\mathrm {(B.4)}$ which leads to the analytical result of $N_{2}$ in Eq.", "(REF ).", "(c) Expectation value Here, we give $\\langle a^{\\dagger k}a^{l}\\rangle _{\\rho _{2}}$ as follows $\\langle a^{\\dagger k}a^{l}\\rangle _{\\rho _{2}}& =\\frac{e^{\\left(g^{2}-1\\right) \\left|\\alpha \\right|^{2}}}{N_{2}}\\partial _{s_{2}}^{m}\\partial _{t_{2}}^{m}\\partial _{f_{2}}^{k}\\partial _{h_{2}}^{l}\\\\& e^{g\\left( h_{2}+t_{2}\\right) \\alpha +g\\left(f_{2}+s_{2}\\right) \\alpha ^{\\ast }} \\\\& e^{h_{2}s_{2}+f_{2}t_{2}+s_{2}t_{2}}|_{\\left(s_{2},t_{2},f_{2},h_{2}\\right) =0}.", "$ (d) Density matrix elements Here, we give $\\rho _{kl}^{(2)}=\\left\\langle k|\\rho _{2}|l\\right\\rangle $ with the following form $\\rho _{kl}^{(2)}& =\\frac{e^{-\\left|\\alpha \\right|^{2}}}{N_{2}\\sqrt{k!l!", "}}\\partial _{s_{2}}^{m}\\partial _{t_{2}}^{m}\\partial _{f_{2}}^{k}\\partial _{h_{2}}^{l} \\\\& e^{f_{2}s_{2}+h_{2}t_{2}+g\\left( f_{2}\\alpha +h_{2}\\alpha ^{\\ast }\\right)}|_{\\left( s_{2},t_{2},f_{2},h_{2}\\right) =0}.", "$ (e) Wigner function WF of $\\rho _{2}$ has the following form $& W_{\\rho _{2}}\\left( \\beta \\right) \\\\& =\\dfrac{2}{\\pi N_{2}}e^{-\\left( g^{2}+1\\right) |\\alpha |^{2}-2\\left|\\beta \\right|^{2}+2g\\left( \\alpha \\beta ^{\\ast }+\\alpha ^{\\ast }\\beta \\right) } \\\\& \\partial _{s_{2}}^{m}\\partial _{t_{2}}^{m}e^{-g\\left( t_{2}\\alpha +s_{2}\\alpha ^{\\ast }\\right) -s_{2}t_{2}+2\\left( t_{2}\\beta +s_{2}\\beta ^{\\ast }\\right) }|_{\\left( s_{2},t_{2}\\right) =0} $ This project was supported by the National Natural Science Foundation of China (No.", "11665013)." ] ]
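Identity (A.5) is what converts the double-derivative representations above into the Laguerre-polynomial normalization factors; for any fixed order it can be verified symbolically. A short sketch (the order m=3 is an arbitrary illustrative choice):

    import sympy as sp

    s, t, x, y = sp.symbols('s t x y')
    m = 3   # sample order

    lhs = sp.laguerre(m, x * y)
    rhs = ((-1)**m / sp.factorial(m)
           * sp.diff(sp.exp(-s * t + s * x + t * y), s, m, t, m).subs({s: 0, t: 0}))
    print(sp.expand(lhs - rhs))   # prints 0, confirming Eq. (A.5) at this order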
2207.10452
[ [ "$\\widehat{sl(2)}$ decomposition of denominator formulae of some BKM Lie\n superalgebras -- II" ], [ "Abstract The square-root of Siegel modular forms of CHL Z_N orbifolds of type II compactifications are denominator formulae for some Borcherds-Kac-Moody Lie superalgebras for N=1,2,3,4.", "We study the decomposition of these Siegel modular forms in terms of characters of two sub-algebras: one is a $\\widehat{sl(2)}$ and the second is a Borcherds extension of the $\\widehat{sl(2)}$.", "This is a continuation of our previous work where we studied the case of Siegel modular forms appearing in the context of Umbral moonshine.", "This situation is more intricate and provides us with a new example (for N=5) that did not appear in that case.", "We restrict our analysis to the first N terms in the expansion as a first attempt at deconstructing the Siegel modular forms and unravelling the structure of potentially new Lie algebras that occur for N=5,6." ], [ "Introduction", "In this work, we continue the study of Siegel modular forms that are, in some cases, the denominator formulae for some Borcherds-Kac-Moody (BKM) Lie superalgebras.", "These Siegel modular forms include examples for which the Lie algebra connection is not yet known.", "For such examples, the eventual goal is to prove (or disprove) the existence of Lie algebra whose denominator formulae are given by these Siegel modular forms.", "In our previous work[1], we studied a family of Siegel modular forms that are associated with Umbral moonshine[2].", "Here we consider Siegel modular forms that are associated with $L_2(11)$ -moonshine[3].", "The squares of these Siegel modular forms are the generating function of quarter BPS states in CHL $\\mathbb {Z}_N$ orbifolds (for $N=1,2,\\ldots ,6$ )[4], [5], [6], [7].", "The main tool that we use is to probe the structure of the Lie algebras are two subalgebras: one is a ${\\widehat{sl(2)}}$ subalgebra and the other is a Borcherds extension of the ${\\widehat{sl(2)}}$ subalgebra.", "We rewrite the Siegel modular forms in terms of characters of the sub-algebras – it enables us to cleanly track simple roots that appear in the denominator formulae.", "For simplicity, we focus on the situations when $N$ is prime, i.e., $N=2,3,5$ .", "These are modular forms of weight $k(N)+1=12/(N+1)$ of a level $N$ subgroup of $Sp(4,\\mathbb {Z})$ .", "The connection with Mathieu and $L_{2}(11)$ moonshine leads to a product formula given in Eq.", "(REF ), for the Siegel modular forms[8], [9], [3].", "For the prime cases, it is consistent with the product formulae given by David et al.", "[10] in the context of dyon counting.", "We rewrite the Siegel modular form as follows: $\\Delta ^{(N)}_{k(N)}(\\mathbf {Z}) = s^{1/2}\\,\\phi ^{(N)}_{k(N),1/2}(\\tau ,z) \\times \\Big [ 1 + \\sum _{m=1}^\\infty s^m\\, \\Psi ^{(N)}_{0,m}(\\tau ,z).", ")\\Big ]\\ ,$ The Jacobi forms $\\Psi ^{(N)}_{0,m}(\\tau ,z)$ will be the main object of our study.", "They are Jacobi forms of the congruence subgroup $\\Gamma ^0(N)$ with weight zero and index $m$ .", "We obtain explicit formulae for these Jacobi forms in terms of standard modular forms for $m\\le N$ .", "The analogous expansion in our previous work[1] had non-vanishing terms only for indices that were multiples of $N$ .", "We wish to show that the Siegel modular forms $\\Delta ^{(N)}_{k(N)}(\\mathbf {Z})$ are extensions of the Kac-Moody Lie algebra $\\mathfrak {g}(A^{(N)})$ obtained from the Cartan matrix, $A^{(N)}$ , defined in Eq.", "(REF ).", "We call the extension 
$\\mathcal {B}_N^{CHL}(A^{(N)})$ – the $CHL$ refers to the fact that the square of the modular forms are the generating functions of quarter BPS states in $CHL$ $\\mathbb {Z}_N$ orbifolds[11], [6], [5].", "The Cartan matrices $A^{(N)}$ are obtained from the walls of marginal stability in these models[12].", "These have nice behaviour only for $N=1,2,\\ldots ,6$ .", "The expectation is that for $N\\le 4$ , the extension $\\mathcal {B}_N^{CHL}(A^{(N)})$ is the usual Borcherds extension of $\\mathfrak {g}(A^{(N)})$ which leads to the sum side of the denominator formula given in Eq.", "(REF ).", "The Borcherds correction term is shown symbolically as $T$ in this formula – it is the contribution that one obtains by adding imaginary simple roots i.e., roots with negative or zero norm.", "A Cartan matrix can also be obtained as the matrix of inner products of simple root vectors which generate a root lattice.", "In all the six examples, the Cartan matrix has rank three and the root lattice is in Lorentzian space $\\mathbb {R}^{2,1}$ .", "A special feature of these lattices is that they admit a lattice Weyl vector $\\varrho ^{(N)}$ with inner product $\\langle \\varrho ^{(N)}, \\alpha \\rangle =-1$ where $\\alpha $ is a simple root.", "Such lattices have been studied by Nikulin and the corresponding Lie algebra connection by Gritsenko and Nikulin[13].", "An important result from Gritsenko and Nikulin is that the cases of $N\\le 4$ in our examples can admit Borcherds extensions.", "This is why we expect that $\\mathcal {B}_N^{CHL}(A^{(N)})$ are Borcherds extensions.", "Unlike the examples considered in our previous work[1], we are unaware of a proof that this is indeed the case for $N\\le 4$ .", "The reason one hopes that there might be a Lie algebra for $N=5,6$ is a physical one.", "The dyon counting generating function is provided us with Siegel modular forms that transform covariantly under the Weyl group of $\\mathfrak {g}(A^{(N)})$ .", "We have three examples of this variety, one of which was considered in [1].", "We restrict to the case of $N=5$ for simplicity in this work and the $N=6$ example should work similarly.", "Our goal in this work is a modest one.", "We study two sub-algebras of $\\mathcal {B}_N^{CHL}(A^{(N)})$ , one is an ${\\widehat{sl(2)}}\\in \\mathfrak {g}(A^{(N)} $ and another is a Borcherds extension of the ${\\widehat{sl(2)}}$ that we call $\\mathcal {B}_N^{CHL}({\\widehat{sl(2)}})$ .", "Interestingly, these subalgebras are the best examples to understand the idea behind the Borcherds extension.", "The positive roots of the Lie algebra $\\mathcal {B}_N^{CHL}({\\widehat{sl(2)}})$ that are not in the sub-algebra will organise into a representation of the sub-algebra.", "This is the motivation for us to look into character decompositions of the $\\Psi _{0,m}^{(N)}(\\tau ,z)$ in terms of ${\\widehat{sl(2)}}$ and $\\mathcal {B}_N^{CHL}({\\widehat{sl(2)}})$ .", "The goal of the present paper is a modest one.", "We would like to understand the structure of the irreducible roots that appear in the first $N$ terms.", "The main result of this paper is that we are able to characterize all the terms and they are consistent with our expectations.", "There are some surprises.", "For instance, $\\Psi _{0,2}^{(3)}(\\tau ,z)$ vanishes.", "This is due to perfect cancellations between two different terms.", "The organization of the paper is a follows.", "The introductory section is followed by section 2 where we provide the Lie algebra background as well as develop the notation used in the 
rest of the paper.", "Section 3 is where we obtain vector valued modular forms(vvmf) of $\\Gamma ^0(N)$ .", "The Fourier coefficients of the vvmf can be identified with the multiplicities of roots that appear.", "The roots that appear for $m\\le N$ are all simple.", "Most of them are imaginary and sometimes fermionic.", "All terms that appear to this order are consistent with Borcherds extensions.", "This includes the $N=5$ example where we expect new behaviour at $m=0$ where two real roots appear.", "In section 4, we convert the vvmfs of $\\Gamma ^0(N)$ into vvmfs of the full modular group.", "For one example alone, we are able to identify the vvmf to be a solution of a modular differential equation studied by Gannon[14].", "In all other situations, the rank of the vvmf is too large for us to numerically determine the modular differential equation.", "We conclude in section 5 with some remarks.", "An appendix is devoted to providing the background necessary for the computations that we have done in this paper." ], [ "The Lie algebra background", "A vector in $\\mathbb {R}^{2,1}$ can be represented by a real symmetric $2\\times 2$ matrix[15], [16].", "$\\begin{pmatrix}x \\\\ y \\\\ t\\end{pmatrix}\\longleftrightarrow v=\\begin{pmatrix}t + y & x \\\\ x & t-y\\end{pmatrix}$ with norm $\\langle v,v \\rangle =-2 \\det (v)= 2(x^2 + y^2 -t^2)$ .", "Consider the two vectors in given by $\\alpha _1= \\begin{pmatrix} 2 & 1 \\\\ 1 & 0 \\end{pmatrix}\\text{ and } \\ \\alpha _2=\\begin{pmatrix} 0 & -1 \\\\ -1 & 0 \\end{pmatrix} \\ .$ Starting from these two root vectors construct new root vectors as follows: $\\alpha _{a+2m} = \\left(\\gamma ^{(N)}\\right)^m \\cdot \\alpha _a \\cdot \\left((\\gamma ^{(N)})^T\\right)^m\\text{ for } a=1,2,$ where $\\gamma ^{(N)}=\\left(\\begin{matrix}1 & -1 \\\\ N & 1-N\\end{matrix} \\right)$ .", "Note that $\\gamma ^{(N)}$ and $-\\gamma ^{(N)}$ have identical action on the $\\alpha _i$ .", "For $N\\le 3$ , $\\gamma ^{(N)}$ has finite order and infinite for $N>3$ .", "Let $\\mathbf {X}_N$ denote the ordered sequence of distinct root vectors $\\alpha _i$ generated in this fashion.", "$\\mathbf {X}_N= (\\alpha _i) \\text{ for }i\\in \\mathcal {S}_N={\\left\\lbrace \\begin{array}{ll}(1,2,3\\text{ mod } 3)\\ , & N=1 \\\\(0,1,2,3\\text{ mod } 4)\\ , & N=2 \\\\(0,1,2,3,4,5\\text{ mod } 6)\\ , & N=3 \\\\\\mathbb {Z}\\ , & N=4,5,6\\end{array}\\right.", "}\\ .$ There is a Weyl vector $\\varrho ^{(N)}$ $\\varrho ^{(N)}=\\begin{pmatrix} 1/N & 1/2 \\\\ 1/2 & 1\\end{pmatrix}\\ ,$ with norm $\\langle \\varrho ^{(N)},\\varrho ^{(N)}\\rangle =\\frac{1}{2} -\\frac{2}{N}$ such that $\\langle \\varrho ^{(N)},\\alpha \\rangle = -1$ for all $\\alpha \\in \\mathbf {X}_N$ .", "Let $A^{(N)}$ for $N=1,2,\\ldots ,6$ denote matrices given by the Gram matrix of the root vectors $\\mathbf {X}_N$ $A^{(N)}= (a_{nm}):= \\langle \\alpha _m,\\alpha _n\\rangle \\ .$ One has $a_{nm}= 2 - \\frac{4}{N-4}(\\lambda _N^{n-m} + \\lambda _N^{m-n}-2)$ , where $\\lambda _N$ is any solution of the quadratic equation $\\lambda ^2 -(N-2)\\lambda + 1 =0\\ .$ Let $\\mathfrak {g}(A^{(N)})$ denote the Kac-Moody algebra associated with the Cartan matrix $A^{(N)}$[17].", "Recall that the Kac-Moody algebra, $\\mathfrak {g}(A)$ , associated with a Cartan matrix $A=(a_{mn})$ (with $m,n\\in I$ ) is given by the generators $(e_m,h_m, f_m)$ with Lie brackets $[e_m,f_n]=\\delta _{mn}\\ h_m\\ , \\ [h_m,e_n]= a_{mn}\\ e_n\\ , \\ [h_m,f_n]=-a_{mn}\\ f_n\\ , \\ [h_m,h_n]=0\\ ,$ subject to the Serre relations $(\\text{ad}\\ 
e_m)^{-a_{mn}+1} e_n=0\\ , \\ (\\text{ad}\\ f_m)^{-a_{mn}+1} f_n=0\\quad m\\ne n\\ ,$ where $(\\text{ad} x) y =[x,y]$ .", "The Borcherds extension of a Kac-Moody algebra, a BKM Lie algebra, is obtained by adding imaginary simple roots to $\\mathfrak {g}(A^{(N)})$ .", "A simple description is given by considering the Weyl denominator formula which takes the form: $\\Delta =\\sum _{w\\in W}\\text{det}(w) w \\Big [T\\ e^{-\\varrho }\\Big ] = e^{-\\varrho }\\ \\prod _{\\alpha \\in L_+} (1-e^{-\\alpha })^{\\text{mult}(\\alpha )} \\ .$ In the above formula, $W$ is the Weyl group generated by elementary reflections due to simple roots, $\\varrho $ is the Weyl vector, $L_+$ is the set of positive roots and $m(\\alpha )$ is the multiplicity of the root $\\alpha $ .", "The case when $T=1$ is for the case of Kac-Moody algebras.", "$T$ is the Borcherds correction term incorporates the inclusion of imaginary simple roots.", "(See appendix B of [1] and references therein for a detailed description.)", "A key aspect of the Borcherds extension is that $\\Delta $ is a suitable automorphic form that admits a product formula.", "An example: Let $A=\\begin{pmatrix}2 & -2 \\\\ -2 & 2\\end{pmatrix}$ .", "Then, $\\mathfrak {g}(A)$ is the ${\\widehat{sl(2)}}$ Kac-Moody Lie algebra with simple roots $(\\alpha _1,\\alpha _2)$ and $\\delta =\\alpha _1+\\alpha _2$ is an imaginary root with zero root.", "We will consider a family of Borcherds corrections that appear in this work.", "For $N=1,2,3,5$ , consider a situation where has $12/(N+1)$ distinct imaginary simple roots of weight $\\tfrac{1}{N}(\\delta ,2\\delta ,3\\delta ,\\ldots )$ and $(12/(N+1))-3$ imaginary simple roots of weight $(\\delta ,2\\delta ,3\\delta ,\\ldots )$ .", "The Borcherds correction factor due to these imaginary simple roots takes the form $T_N(\\delta ) = \\prod _{j=1}^{\\infty } \\left(1-e^{-\\tfrac{j\\delta }{N}}\\right)^{\\tfrac{12}{N+1}}\\left(1-e^{-j\\delta }\\right)^{-3+\\tfrac{12}{N+1}}\\ .$ For $N=5$ a negative power appears in the second term in the infinite product.", "The imaginary simple roots in this case correspond to isotropic fermionic simple roots.", "Identifying $e^{-\\delta }\\sim q=\\exp (2\\pi i\\tau )$ , we obtain a function of $\\tau $ .", "Let $T_N(\\tau ) = \\prod _{j=1}^{\\infty } \\left(1-q^{j/N}\\right)^{\\tfrac{12}{N+1}}\\left(1-q^{j}\\right)^{-3+\\tfrac{12}{N+1}}\\ .$ Up to an overall power of $q$ , $T_N(\\tau )$ can be expressed in terms of products of the Dedekind eta function.", "The automorphic form, that is denoted by $\\Delta $ in Eq.", "(REF ), for these examples is given by'the Jacobi form $\\phi _{k(N),1/2}(\\tau ,z)$ defined in Eq.", "(REF ).", "We will refer to these Borcherds-Kac-Moody Lie algebras by $\\mathcal {B}_N^{CHL}({\\widehat{sl(2)}})$ .", "As can be seen, there can be several inequivalent Borcherds extensions of a Kac-Moody Lie algebra." 
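The root construction above is completely explicit, so the Gram/Cartan matrices $A^{(N)}$ and the lattice Weyl vector property $\langle\varrho^{(N)},\alpha\rangle=-1$ can be generated and checked mechanically. A small sketch (the choice $N=5$ and the number of generated roots are illustrative; the bilinear form is obtained by polarizing $\langle v,v\rangle=-2\det v$):

    from fractions import Fraction
    import numpy as np

    N = 5

    def det2(v):
        return v[0][0] * v[1][1] - v[0][1] * v[1][0]

    def ip(v, w):                      # <v,w> obtained by polarizing <v,v> = -2 det(v)
        s = [[v[i][j] + w[i][j] for j in range(2)] for i in range(2)]
        return -det2(s) + det2(v) + det2(w)

    def mmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

    gamma  = [[1, -1], [N, 1 - N]]
    gammaT = [[1, N], [-1, 1 - N]]
    a1, a2 = [[2, 1], [1, 0]], [[0, -1], [-1, 0]]

    roots = [a1, a2]
    for _ in range(3):                 # alpha_{a+2m} = gamma^m . alpha_a . (gamma^T)^m
        a1, a2 = mmul(gamma, mmul(a1, gammaT)), mmul(gamma, mmul(a2, gammaT))
        roots += [a1, a2]

    A = np.array([[ip(u, v) for v in roots] for u in roots])
    print(A[:4, :4])                   # diagonal 2's; compare off-diagonals with a_nm above

    rho = [[Fraction(1, N), Fraction(1, 2)], [Fraction(1, 2), 1]]
    print({ip(rho, r) for r in roots})  # all equal -1: the lattice Weyl vector property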
], [ "Embedding ${\\widehat{sl(2)}}$ in {{formula:98e0b73e-06d5-4a9c-8bda-498c83e0357d}}", "The Cartan matrices.", "$A^{(N)}$ considered in paper I are identical to the ones that appear here as well.", "Thus, the embedding of ${\\widehat{sl(2)}}$ into $\\mathfrak {g}(A^{(N)})$ works here as well.", "Let $(e,h,f)$ be the generators of $sl(2)$ .", "The affine Lie algebra $\\widehat{sl(2)}$ is defined by $\\widehat{sl(2)} = sl(2)\\otimes \\mathbb {C}[t,t^{-1}]\\oplus \\mathbb {C}\\,\\hat{k} \\oplus \\mathbb {C}\\, d\\ ,$ where $\\hat{k}$ is the central extension and $d=-t d/dt$ is the derivation.", "We identify the Lie subalgebra of $\\mathfrak {g}(A^{(N)})$ generated by $e_1, f_1, e_2, f_2, h_1, h_2$ and $h_3$ with $\\widehat{sl(2)}$ Lie algebra.", "We choose the identification similar to the one considered by Feingold and Frenkel[15].", "$e\\otimes 1 =e_2\\ ,\\ f\\otimes 1=f_2\\ ,\\ f \\otimes t =e_1\\ , \\ e\\otimes t^{-1} =f_1 \\ ,$ For the Cartan subalgebra of $\\widehat{sl(2)}$ , using the above identification, we obtain $h_1 =-h\\otimes 1 +\\hat{k}\\ , \\ h_2 = h \\otimes 1 \\ ,\\ h_3= - h\\otimes 1 +4N\\, d\\ .$ The inverse is $h\\otimes 1 = h_2\\ , \\ \\hat{k} = h_1 + h_2 \\ , \\ d = \\frac{1}{4N} (h_2 + h_3) \\ .$" ], [ "The $\\mathcal {B}^{CHL}(A^{(N)})$ Lie algebras", "Let $\\mathcal {B}^{CHL}(A^{(N)})$ denote an extension of the $\\mathfrak {g}(A^{(N)})$ whose denominator formula is given by the Siegel modular forms, $\\Delta ^{(N)}_{k(N)}(\\textbf {Z})$ which we define next.", "Then the BKM Lie algebras $\\mathcal {B}_N^{CHL}({\\widehat{sl(2)}})$ are naturally sub-algebras of $\\mathcal {B}^{CHL}(A^{(N)})$ .", "A connection with Mathieu and $L_{2}(11)$ moonshine leads to the following formula for a Siegel modular form[8], [9], [3].", "Let $g\\in L_2(11)_B$ be an element of order $N\\le 6$ .", "A second-quantized version of moonshine gives the following formula for $\\Delta ^{(N)}_{k(N)}(\\textbf {Z})$ .", "$\\Delta ^{(N)}_{k(N)}(\\textbf {Z}) =s^{1/2} \\phi _{k(N),1/2}(\\tau ,z)\\exp \\left[- \\frac{1}{m}\\sum \\limits _{m=1}^{\\infty }s^m \\psi _{0,1}^{[1,g]}(\\tau ,z)\\Big |T(m)\\right]$ where the Hecke-like operator $T(m)$ is defined as followsHere $ \\psi _{0,1}^{(N)[g^s,g^r]}$ is half the $g^r$ -twisted elliptic genus of $K3$ twined by the element $g^s$ .", "In other words, the trace is over the Hilbert space twisted by $g^r$ with insertion of $g^s$ (`twined') in the trace.", "$\\psi _{0,1}^{(N)[1,g]}(\\tau ,z)\\Big |T(m) := \\sum \\limits _{ad=m} \\sum \\limits _{b=0}^{d-1} \\psi _{0,1}^{(N)[g^{-b},g]}\\left(\\tfrac{a \\tau +b}{d} ,az \\right)$ and $\\phi _{k(N),1/2}(\\tau ,z)=\\frac{\\theta _1(\\tau ,z)}{\\eta (\\tau )^3}\\ \\eta ^{[1,g]}(\\tau )$ are index half Jacobi forms with the eta products $\\eta ^{[1,g]}(\\tau )$ defined in Table REF .", "It has been shown in ref.", "[7] that this leads to a Borcherds-type product formula for $\\Delta ^{(N)}_{k(N)}(\\textbf {Z})$ .", "Consider the Fourier expansion $\\psi ^{[g^{b},g^d]}_{0,1}(\\tau ,z)=\\sum _{n\\in \\mathbb {Z}, n\\ge 0}\\sum _{\\ell \\in \\mathbb {Z}} c^{[b,d]}(n,\\ell )\\ q^{\\frac{n}{N}} r^\\ell \\ ,$ where $g$ is of order $N$ .", "Define $\\tilde{c}^{[\\alpha ,d]}(n,\\ell )$ as follows (with $\\omega _N=\\exp (2\\pi i/N)$ ) $\\tilde{c}^{[\\alpha ,d]}(n,\\ell )=\\frac{1}{N}\\sum _{\\alpha =0}^{N-1} \\ (\\omega _N)^{-\\alpha b} \\ c^{[b,d]}(n,\\ell )\\ .$ Then one has the product formula that is provides the product side of the denominator formula that defines $\\mathcal {B}^{CHL}(A^{(N)})$ .", "$\\Delta 
_{k(N)}^{(N)}(\\mathbf {Z})=q^{1/2N} r^{1/2} s^{1/2} \\times \\prod _{m=0}^\\infty \\prod _{\\alpha =0}^{N-1}\\prod _{\\begin{array}{c}n\\in \\mathbb {Z}-\\frac{\\alpha }{N}\\\\ n\\ge 0\\end{array}}\\prod _{\\ell \\in \\mathbb {Z}}(1-q^{n} r^\\ell s^m)^{ {\\tilde{c}}^{[\\alpha ,m]}(nmN,\\ell )}\\ .$ The modularity of the above formula is not manifest.", "However, it follows from a result in ref.", "[18] that it is a Siegel modular form of a level $N$ subgroup of $Sp(4,\\mathbb {Z})$ .", "The sum side of the Weyl denominator formula is usually obtained from an additive lift.", "There is a construction of Cléry and Gritsenko that leads to a closely related Siegel modular form (at level $N$ ) starting from an index-half Jacobi form [19].", "It has been shown in [3] that the expansion of this Siegel modular form about another cusp (given by the S-transform) matches with the product formula given in Eq.", "(REF ) to fairly high order.", "Combined with modularity, this is enough to prove that the two formulae are equivalent.", "It is not a clean result in the sense that a closed formula was not given; rather, the transformation rules for the Hecke operator were worked out on a case-by-case basis.", "Table: Eta products" ], [ "Covariance under the extended Weyl group", "The extended Weyl group of the root system $\\mathbb {X}_N$ is generated by three types of generators [5], [6], [7]: the elementary Weyl reflections $s_m$ due to the simple roots $\\alpha _m$ for all $m$ in $\\mathcal {S}_N$ (these generate the Weyl group $W$ of $\\mathfrak {g}(A^{(N)})$ ), the generator $\\gamma ^{(N)}$ , and the generator $\\widehat{\\delta }=\\begin{pmatrix}-1 & 1 \\\\ 0 & 1\\end{pmatrix}$ which acts on roots via the action $\\alpha \\rightarrow \\widehat{\\delta }\\cdot \\alpha \\cdot \\widehat{\\delta }^T$ .", "It acts on the simple roots in $\\mathbf {X}_N$ as the involution: $\\widehat{\\delta }: \\alpha _m \\leftrightarrow \\alpha _{3-m}\\ .$ The action of the generators of the extended Weyl group can be translated into an action on the Siegel upper half-space with coordinates $\\mathbf {Z}$ .", "With this in hand, using the modular properties of the Siegel modular forms one can show that $\\begin{split}\\Delta ^{(N)}_{k(N)}(s_m\\cdot \\textbf {Z}) &= - \\Delta ^{(N)}_{k(N)}(\\textbf {Z})\\ , \\\\\\Delta ^{(N)}_{k(N)}(\\gamma ^{(N)}\\cdot \\textbf {Z}) &= + \\Delta ^{(N)}_{k(N)}(\\textbf {Z})\\ , \\\\\\Delta ^{(N)}_{k(N)}(\\widehat{\\delta }\\cdot \\textbf {Z}) &= + \\Delta ^{(N)}_{k(N)}(\\textbf {Z})\\ .\\end{split}$ These properties show that the Siegel modular forms have the necessary covariance under Weyl transformations."
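The exponents $\tilde{c}^{[\alpha,m]}$ entering this product are nothing but a $\mathbb{Z}_N$ Fourier transform of the twined coefficients; the following minimal sketch spells this out (the input values are placeholders, not actual twisted elliptic-genus data).

```python
# Sketch: the Z_N Fourier transform defining c~^{[alpha,d]}(n,l) from the twined
# coefficients c^{[b,d]}(n,l), b = 0,...,N-1, at fixed (n, l, d).
import numpy as np

def c_tilde(c_b, alpha):
    N = len(c_b)
    omega = np.exp(2j * np.pi / N)
    return sum(omega ** (-alpha * b) * c_b[b] for b in range(N)) / N

c_b = np.array([20.0, 4.0])                      # placeholder values for N = 2
print([c_tilde(c_b, a).real for a in range(2)])  # -> [12.0, 8.0]
```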
], [ "Deconstructing the Lie algebra", "The Siegel modular form defined in Eq.", "(REF ) can be expanded as a power series in the variable $s$ .", "The leading term in the expansion is $s^{1/2}\\,\\phi ^{(N)}_{k(N),1/2}(\\tau ,z)$ which is the denominator formula for the sub-algebra $\\mathcal {B}^{CHL}_N({\\widehat{sl(2)}})$ .", "$\\Delta ^{(N)}_{k(N)}(\\textbf {Z}) = s^{1/2}\\,\\phi ^{(N)}_{k(N),1/2}(\\tau ,z) \\Big [ 1 +\\sum _{m=1}^\\infty s^m\\, \\Psi ^{(N)}_{0,m}(\\tau ,z)\\Big ]\\ ,$ The above equation defines the weight zero and index $m$ Jacobi forms $\\Psi ^{(N)}_{0,m}(\\tau ,z)$ .", "Explicit formulae for the Jacobi forms can be obtained by expanding the exponential in Eq.", "(REF ).", "For instance, one obtains $\\Psi ^{(N)}_{0,1}(\\tau ,z) &= -\\psi _{0,1}^{(N)[1,g]}(\\tau ,z)\\ , \\\\\\Psi ^{(N)}_{0,2}(\\tau ,z) &= -\\frac{1}{2}\\left(\\psi _{0,1}^{(N)[1,g]}(\\tau ,z)\\Big |T(2)-(\\psi _{0,1}^{(N)[1,g]}(\\tau ,z))^2\\right) \\ .$ We will study the first $N$ terms in the expansion.", "They can be rewritten in terms of standard modular forms, thereby providing formulae that can be used directly.", "A weak Jacobi form of $\\Gamma _0(N)$ , $\\xi _m$ , of weight zero and index $m$ can be expanded as follows: $\\xi _m(\\tau ,z) = \\sum _{j=0}^m \\alpha _j(\\tau )\\ A(\\tau ,z)^{m-j} B(\\tau ,z)^j\\ ,$ where $\\alpha _j(\\tau )$ are weight $2j$ modular forms of $\\Gamma _0(N)$ .", "However, the $ \\Psi ^{(N)}_{0,m}(\\tau ,z)$ are Jacobi forms of $\\Gamma ^0(N)$ .", "Thus, we identify $\\xi _m$ with the transforms $ \\Psi ^{(N)}_{0,m}(\\tau ,z)|S$ , since these are Jacobi forms of $\\Gamma _0(N)$ .", "This method is useful as the generators of the ring of modular forms of $\\Gamma _0(N)$ are well-known.", "We give the generators for the cases of interest in appendix REF ." ], [ "Details of the examples", "We now present explicit formulae for the Jacobi forms $ \\Psi ^{(N)}_{0,m}(\\tau ,z)|S$ for $N=2,3,5$ and $m=1,\\ldots ,N$ ." ], [ "$N=2$", "The Weyl-Kac-Borcherds denominator formula is given by the weight three Siegel modular form of a level 2 subgroup of $Sp(4,\\mathbb {Z})$ .", "$\\Delta _3(\\mathbf {Z}) = s^{1/2}\\,\\phi ^{(2)}_{3,1/2}(\\tau ,z) \\Big [ 1 + s\\, \\Psi ^{(2)}_{0,1}(\\tau ,z) + s^2\\, \\Psi ^{(2)}_{0,2}(\\tau ,z) + O(s^3)\\Big ]\\ ,$ where $\\phi ^{(2)}_{3,1/2}(\\tau ,z) &= \\theta _1(\\tau ,z)\\,\\eta (\\tau )^4 \\eta (\\tau /2)^4 \\\\\\Psi ^{(2)}_{0,1}(\\tau ,z) & = \\tfrac{1}{3} A(\\tau ,z) -\\tfrac{1}{3} E_2^{(2)}(\\tau /2)\\, B(\\tau ,z) \\\\\\Psi ^{(2)}_{0,2}(\\tau ,z) &=-\\tfrac{1}{72} A(\\tau ,z)^2 -\\tfrac{1}{18} E_2^{(2)}(\\tau /2) A(\\tau ,z) B(\\tau ,z) \\\\&\\quad + \\left(\\tfrac{29}{288} E_2^{(2)}(\\tau /2)^2 -\\tfrac{1}{32} E_4(\\tau /2) \\right) B(\\tau ,z)^2$ are Jacobi forms of $\\Gamma ^0(2)$ .", "We expect to observe two real simple roots in $\\Psi ^{(2)}_{0,2}(\\tau ,z)$ ."
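These expressions follow from simply expanding the exponential in the defining formula for $\Delta^{(N)}_{k(N)}$; a short symbolic sketch, with formal symbols $p_m$ standing in for $\psi^{(N)[1,g]}_{0,1}|T(m)$, reproduces them.

```python
# Sketch: expand exp(-sum_m s^m p_m / m) to low order in s; p_m stands for psi|T(m).
from sympy import symbols, exp, series

s = symbols('s')
p1, p2, p3 = symbols('p1 p2 p3')
arg = -(s * p1 + s**2 * p2 / 2 + s**3 * p3 / 3)
expansion = series(exp(arg), s, 0, 4).removeO().expand()
print(expansion.coeff(s, 1))   # -p1                <->  Psi_{0,1} = -psi
print(expansion.coeff(s, 2))   # p1**2/2 - p2/2     <->  Psi_{0,2} = -(psi|T(2) - psi^2)/2
```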
], [ "$N=3$", "The Weyl-Kac-Borcherds denominator formula is given by the weight three Siegel modular form of a level 3 subgroup of $Sp(4,\\mathbb {Z})$ .", "$\\Delta _2(\\mathbf {Z}) = s^{1/2}\\,\\phi ^{(3)}_{2,1/2}(\\tau ,z) \\Big [ 1 + s\\, \\Psi ^{(3)}_{0,1}(\\tau ,z) + s^2\\, \\Psi ^{(3)}_{0,2}(\\tau ,z) + s^3\\, \\Psi ^{(3)}_{0,3}(\\tau ,z) + O(s^4)\\Big ]\\ ,$ where $\\phi ^{(3)}_{2,1/2}(\\tau ,z) &= \\theta _1(\\tau ,z)\\,\\eta (\\tau )^3 \\eta (\\tau /3)^3 \\\\\\Psi ^{(2)}_{0,1}(\\tau ,z) & = \\tfrac{1}{4} A(\\tau ,z) -\\tfrac{1}{4} E_2^{(3)}(\\tau /3)\\, B(\\tau ,z) \\\\\\Psi ^{(3)}_{0,2}(\\tau ,z) &= 0\\\\\\Psi ^{(3)}_{0,3}(\\tau ,z) &= \\tfrac{1}{864} A(\\tau ,z)^3 -\\tfrac{1}{96} E_2^{(3)}(\\tau /3) A(\\tau ,z)^2 B(\\tau ,z) \\\\& + \\left(\\tfrac{25}{1296} E_2^{(3)}(\\tau /3)^2 -\\tfrac{5}{2592} E_4(\\tau /3) \\right) A(\\tau ,z)B(\\tau ,z)^2 \\\\&+ (-\\tfrac{145}{11664}E_2^{(3)}(\\tau /3)^3 +\\tfrac{85}{23328}E_2^{(3)}(\\tau /3)E_4(\\tau /3) + \\tfrac{1}{1458}E_6(\\tau /3))B(\\tau ,z)^3$ are Jacobi forms of $\\Gamma ^0(3)$ .", "It is interesting to observe that $\\Psi ^{(3)}_{0,2}(\\tau ,z) = 0$ .", "This arises from a cancellation of multiple terms.", "The expectation is that there would have been no real simple roots and imaginary simple roots in this term.", "The vanishing says that there are no imaginary simple roots with negative norm.", "It could also be that there is a Bose-Fermi cancellation i.e., there are equal numbers of bosonic and fermionic roots.", "We expect to see two real simple roots in $\\Psi ^{(3)}_{0,3}(\\tau ,z)$ which is non-vanishing." ], [ "$N=5$", "The Weyl-Kac-Borcherds denominator formula is given by the weight three Siegel modular form of a level 3 subgroup of $Sp(4,\\mathbb {Z})$ .", "$\\Delta _1(\\mathbf {Z}) = s^{1/2}\\,\\phi ^{(5)}_{1,1/2}(\\tau ,z) \\Big [ 1 + \\sum _{m=1}^5s^m\\, \\Psi ^{(5)}_{0,m}(\\tau ,z) + O(s^6)\\Big ]\\ ,$ $\\phi ^{(5)}_{1,1/2}(\\tau ,z) &= \\theta _1(\\tau ,z)\\,\\eta (\\tau /5)^2 \\eta (\\tau )^2 \\\\\\Psi ^{(5)}_{0,1}(\\tau ,z) & = \\tfrac{1}{5} A(\\tau ,z) -\\tfrac{1}{5} E_2^{(5)}(\\tau /5)\\, B(\\tau ,z)$ We have shortened $A(\\tau ,z), B(\\tau ,z)$ to $A,B$ to make equations more compact.", "$\\Psi ^{(5)}_{0,2}(\\tau ,z) &= -\\tfrac{1}{144} A^2 -\\tfrac{1}{72} E_2^{(5)}(\\tau /5) A B \\\\&\\quad + \\left(-\\tfrac{53}{7200} E_2^{(5)}(\\tau /5)^2 +\\tfrac{1}{2400} E_4(\\tau /5) -\\tfrac{19}{200} \\eta (\\tau /5)^4\\eta (\\tau )^4 \\right) B^2\\\\\\Psi ^{(5)}_{0,3}(\\tau ,z) &= \\tfrac{1}{864} A(\\tau ,z)^3 -\\tfrac{1}{288} E_2^{(5)}(\\tau /5) A^2 B \\\\&\\quad + \\left(\\tfrac{17}{4800} E_2^{(5)}(\\tau /5)^2 -\\tfrac{1}{14400} E_4(\\tau /5)+\\tfrac{19}{1200} \\eta (\\tau /5)^4\\eta (\\tau )^4 \\right) A\\,B^2\\\\&+ \\left(-\\tfrac{53}{43200} E_2^{(5)}(\\tau /5)^3 +\\tfrac{1}{14400} E_2^{(5)}(\\tau /5)E_4(\\tau /5) -\\tfrac{19}{1200} E_2^{(5)}(\\tau /5) \\eta (\\tau /5)^4\\eta (\\tau )^4 \\right) B^3\\\\\\Psi ^{(5)}_{0,4}(\\tau ,z)&= \\tfrac{1}{20736} A^4-\\tfrac{1}{5184} E_2^{(5)}(\\tau /5) A^3\\, B \\\\&\\quad + \\left(\\tfrac{17}{57600} E_2^{(5)}(\\tau /5)^2 -\\tfrac{1}{172800} E_4(\\tau /5)+\\tfrac{19}{14400} \\eta (\\tau /5)^4\\eta (\\tau )^4 \\right) A^2\\,B^2\\\\&+ \\left(-\\tfrac{53}{259200} E_2^{(5)}(\\tau /5)^3 +\\tfrac{1}{86400} E_2^{(5)}(\\tau /5)E_4(\\tau /5) -\\tfrac{19}{7200} E_2^{(5)}(\\tau /5) \\eta (\\tau /5)^4\\eta (\\tau )^4 \\right) A\\,B^3\\\\&+ \\left(\\tfrac{2117}{25920000} E_2^{(5)}(\\tau /5)^4 -\\tfrac{1}{28800} E_2^{(5)}(\\tau /5)^2E_4(\\tau /5) +\\tfrac{2641}{360000} E_2^{(5)}(\\tau 
/5)^2 \\eta (\\tau /5)^4\\eta (\\tau )^4 \\right.", "\\\\&\\qquad \\qquad \\left.", "+\\tfrac{11}{8640000} E_4(\\tau /5)^2 + \\tfrac{779}{60000} \\eta (\\tau /5)^8\\eta (\\tau )^8\\right) B^4$ are Jacobi forms of $\\Gamma ^0(5)$ .", "We have not given an explicit formula for $\\Psi ^{(5)}_{0,5}(\\tau ,z)$ as the formula is big and unilluminating." ], [ "Characters of ${\\widehat{sl(2)}}$ and {{formula:1a7f5cf2-31af-40ae-92ed-98b8ba14dd7d}} ", "Consider the following roots $\\alpha ^{(N)}_0 =\\begin{pmatrix}2N-2 & 2N-1 \\\\ 2N-1 & 2N\\end{pmatrix} \\text{ and } \\ \\alpha ^{(N)}_3= \\begin{pmatrix} 0 & 1 \\\\ 1 & 2N \\end{pmatrix} .$ We will track these real simple roots as well as the zero-norm imaginary simple roots $\\delta _N^{\\prime }:=(\\alpha ^{(N)}_3+\\alpha _2) \\quad \\text{and}\\quad \\delta _N^{\\prime \\prime }:=(\\alpha ^{(N)}_0+\\alpha _2)\\ .$ The subscript $N$ is to emphasise that they change with $N$ unlike the zero-norm imaginary simple root $\\delta =(\\alpha _1+\\alpha _2)$ .", "The normalized $\\widehat{sl(2)}$ character at level $k$ , $\\chi _{k,\\ell }(\\tau ,z)$ , is defined by $\\chi _{k,\\ell }(\\tau ,z)&=\\frac{\\theta _{k+2,\\ell +1}(\\tau ,z)-\\theta _{k+2,-\\ell -1}(\\tau ,z)}{\\theta _{2,1}(\\tau ,z)-\\theta _{2,-1}(\\tau ,z)} \\text{ for } k,\\ell \\in \\mathbb {Z}_{\\ge 0} \\text{ and } 0\\le \\ell \\le k\\ ,$ where $\\theta _{m,a}(\\tau ,z) := \\sum _{k\\in \\mathbb {Z}} q^{m(k +\\frac{a}{2m})^2} r^{m(k +\\frac{a}{2m})}\\ .$ For weights $\\tilde{\\Lambda } = a \\delta + b \\alpha _2 + c \\delta _N^{\\prime }$ satisfying the condition $\\langle \\tilde{\\Lambda },\\delta \\rangle <0$ , the character of $\\mathcal {B}_N({\\widehat{sl(2)}})$ when $a=0$ is given by(see [1] for a similar derivation) $ \\widetilde{\\chi }_{k,\\ell }=q^{\\frac{1}{8}-\\frac{(\\ell +1)^2}{4 (k+2)}}\\frac{ \\chi _{k,\\ell }}{T_N(\\tau )}\\ ,$ with $k=4c$ and $\\ell =-2b$ .", "The weights are such that $a\\in \\frac{1}{N}\\mathbb {Z}_{\\ge 0}$ and $c\\in \\frac{1}{N}\\mathbb {Z}_{>0}$ .", "The character with $a\\ne 0$ is then $q^a\\ \\widetilde{\\chi }_{k,\\ell }$ ." ], [ "VVMFs from $\\widehat{sl(2)}$ decomposition", "The Jacobi forms $\\Psi _{0,m}^{(N)}$ can be expanded in terms of characters of ${\\widehat{sl(2)}}$ and those of the Borcherds extension $\\mathcal {B}_N({\\widehat{sl(2)}})$ The decomposition takes the form $\\Psi _{0,m}^{(N)}(\\tau ,z) &= \\sum _{j=-m}^{m} g^{N,m}_{j+1}(\\tau )\\, \\chi _{4m,2m+2j}(\\tau ,z)\\ , \\\\&= \\sum _{j=-m}^{m} f^{N,m}_{j+1}(\\tau )\\, \\widetilde{\\chi }_{4m,2m+2j}(\\tau ,z)\\ ,$ Further, one observes that $g^{N,m}_{j+1}(\\tau )=g^{N,m}_{-j+1}(\\tau )$ .", "This follows from the $\\mathbb {Z}_2$ outer automorphism under which $\\alpha _1\\leftrightarrow \\alpha _2$ and $\\alpha _0\\leftrightarrow \\alpha _3$ .", "Thus one has $(m+1)$ independent functions that we organize into a vector $\\mathbf {g}:=(g_1,g_2,\\ldots ,g_{m+1})^T$ .", "These are rank $(m+1)$ vector valued modular forms of $\\Gamma ^0{(N)}$ .", "Remark: The multiplicities of roots are given the coefficients of the $f_j^{N,m}(\\tau )$ which can be obtained from the $g_j^{N,m}(\\tau )$ using Eq.", "(REF )." 
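As a small consistency check (a sketch; the norm convention $\langle\alpha,\alpha\rangle = -2\det\alpha$ for these $2\times 2$ root matrices is an assumption, consistent with the "norm 2" real roots quoted later), the generator $\widehat{\delta}$ of the extended Weyl group indeed exchanges the two tracked real simple roots:

```python
# Sketch: delta_hat exchanges alpha_0^(N) and alpha_3^(N); both have det = -1.
from sympy import Matrix, symbols, expand

N = symbols('N', positive=True, integer=True)
delta_hat = Matrix([[-1, 1], [0, 1]])
alpha0 = Matrix([[2*N - 2, 2*N - 1], [2*N - 1, 2*N]])
alpha3 = Matrix([[0, 1], [1, 2*N]])

assert (delta_hat * alpha3 * delta_hat.T).applyfunc(expand) == alpha0
assert (delta_hat * alpha0 * delta_hat.T).applyfunc(expand) == alpha3
print(expand(alpha0.det()), alpha3.det())   # -1 -1, i.e. norm 2 in the assumed convention
```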
], [ "$N=2$", "The coefficients of Fourier series $T_2(\\tau )$ give the mutiplicity of the imaginary simple roots $\\delta ^{\\prime }_2$ and $\\delta _2^{\\prime \\prime }$ .", "The coefficient of $q^{y}$ gives the multiplicity of the roots $y\\,\\delta ^{\\prime }_2$ and $y\\,\\delta _2^{\\prime \\prime }$ .", "One has $T_2(\\tau )= 1 \\mathbf {- 4}\\ q^{1/2} +\\mathbf {1}\\ q + O(q^{3/2}))\\ .$ We will see that the expansions below are consistent with these numbers.", "$\\mathbf {g}^{2,1}(\\tau )=\\begin{pmatrix}q^{-1/4}(8 q^{1/2}+40 q+128 q^{3/2}+368 q^2+936 q^{5/2}+2176 q^{3}+\\cdots ) \\\\q^{1/12} (\\mathbf {-4} - 24 q^{1/2} - 88 q - 264 q^{3/2} - 692 q^2 - 1656 q^{5/2} +\\cdots )\\end{pmatrix}$ The leading term in the first row corresponds to the imaginary simple roots $(\\alpha ^{(1)}_3+\\tfrac{1}{2} \\delta )$ and $(\\alpha ^{(1)}_0+\\tfrac{1}{2} \\delta )$ as the constant piece is vanishing.", "This is consistent with simple real roots $\\alpha ^{(1)}_3$ and $\\alpha ^{(1)}_0$ not being present.", "In the second row, the leading term has multiplicity $-4$ and corresponds to the imaginary roots $\\frac{1}{2}\\delta _2^{\\prime }$ and $\\frac{1}{2}\\delta _2^{\\prime \\prime }$ .", "All other terms correspond to imaginary simple roots with negative norm.", "$\\mathbf {g}^{2,2}(\\tau )=\\begin{pmatrix}q^{-1/2}(-4 q^{1/2} + 2 q - 16 q^{3/2} - 2 q^2 - 56 q^{5/2} + 2 q^3 -144 q^{7/2} +\\cdots ) \\\\q^{-1/10}(\\mathbf {-1} + 4 q^{1/2} + q + 8 q^{3/2} - 2 q^2 + 24 q^{5/2} + 2 q^3 +64 q^{7/2} +\\cdots )\\\\q^{1/10} (\\mathbf {1} + 8 q^{1/2} + 28 q^{3/2} + 80 q^{5/2} - q^3 + \\cdots )\\end{pmatrix}$ The leading term in the second row above is the multiplicity of the real simple roots $\\alpha ^{(2)}_0$ and $(\\alpha ^{(2)}_3$ .", "They have multiplicity 1 and the minus sign comes from $\\det (w)$ in the denominator formulae.", "The Lie algebra $\\mathfrak {g}(A^{(2)})$ has four real simple roots.", "Thus, there are no more simple real roots to track.", "In the third/last row, the leading term has multiplicity $+1$ and corresponds to the imaginary roots $\\delta _2^{\\prime }$ and $\\delta _2^{\\prime \\prime }$ .", "Definition 3.1 Let $\\mathcal {I}$ denote the set of imaginary simple roots with negative norm whose multiplicity are given by the Fourier expansions of $f_j^{N,m}(\\tau )$ for $j=1,\\ldots , (m+1)$ and $m=1,\\ldots ,N$ .", "These are not the complete set of imaginary simple roots as more appear when $m>N$ ." 
], [ "$N=3$", "The coefficients of Fourier series $T_2(\\tau )$ give the mutiplicity of the imaginary simple roots proportional to $\\delta ^{\\prime }_3$ and $\\delta _3^{\\prime \\prime }$ .", "The coefficient of $q^{y}$ gives the multiplicity of the roots $y\\,\\delta ^{\\prime }_3$ and $y\\,\\delta _3^{\\prime \\prime }$ .", "One has $T_3(\\tau )= 1\\mathbf {-3}\\ q^{1/3}+ \\mathbf {0}\\ q^{2/3}\\mathbf {-5}\\ q+O(q^{4/3}\\ .$ $\\mathbf {g}^{3,1}(\\tau )= \\frac{3\\eta (\\tau )^3}{\\eta (\\tau /3)^3}\\ \\begin{pmatrix}1 \\\\ -1\\end{pmatrix}= \\begin{pmatrix}q^{-1/4}\\ (3\\ q^{1/3} + O(q^{2/3}) \\\\q^{1/12}\\ (\\mathbf {-3} + O(q^{1/3})\\end{pmatrix}$ The leading term in the first row corresponds to the imaginary simple roots $(\\alpha ^{(1)}_3+\\tfrac{1}{2} \\delta )$ and $(\\alpha ^{(1)}_0+\\tfrac{1}{2} \\delta )$ as the constant piece is vanishing.", "This is consistent with simple real roots $\\alpha ^{(1)}_3$ and $\\alpha ^{(1)}_0$ not being present.", "In the second row, the leading term has multiplicity $-4$ and corresponds to the imaginary roots $\\frac{1}{2}\\delta _2^{\\prime }$ and $\\frac{1}{2}\\delta _2^{\\prime \\prime }$ .", "All other terms correspond to imaginary simple roots with negative norm.", "$\\mathbf {g}^{3,3}(\\tau )= \\begin{pmatrix}q^{-3/4}(14 q + 42 q^{4/3} + 126 q^{5/3} + 308 q^2 + 714 q^{7/3} +1512 q^{8/3} + \\cdots ) \\\\q^{-9/28}(-3 q^{1/3} - 9 q^{2/3} - 38 q^{1} - 99 q^{4/3} - 252 q^{5/3} - 549 q^{2} + \\cdots ) \\\\q^{-1/28} (-\\textbf {1} - 3 q^{1/3} - 9 q^{2/3} - 35 q - 75 q^{4/3} - 180 q^{5/3} -372 q^2 +\\cdots ) \\\\q^{3/28}(\\textbf {5} + 24 q^{1/3} + 72 q^{2/3} + 191 q + 453 q^{4/3} + 999 q^{5/3} +\\cdots )\\end{pmatrix}$" ], [ "$N=5$", "The coefficients of Fourier series $T_5(\\tau )$ give the mutiplicity of the imaginary simple roots $\\delta ^{\\prime }_5$ and $\\delta _5^{\\prime \\prime }$ .", "The coefficient of $q^{y}$ gives the multiplicity of the roots $y\\,\\delta ^{\\prime }_5$ and $y\\,\\delta _5^{\\prime \\prime }$ .", "T One has $T_5(\\tau )= 1 \\mathbf {-2}\\ q^{1/5}\\mathbf {-1}\\ q^{2/5}+\\mathbf {2}\\ q^{3/5}+\\mathbf {1}\\ q^{4/5}+\\mathbf {3}\\ q+O(q^{6/5})\\ .$ hese appear as the leading coefficient in the bottom row of each vvmf $\\mathbf {g}^{5,m}$ for $m=1,\\ldots ,5$ .", "$\\mathbf {g}^{5,1}(\\tau )=\\begin{pmatrix}q^{-1/4}(q^{1/5} + 3 q^{2/5} + 4 q^{3/5} + 7 q^{4/5} + 17 q + 24 q^{6/5} +44 q^{7/5} +\\cdots ) \\\\q^{1/12} (-\\textbf {2} - 3 q^{1/5} - 9 q^{2/5} - 12 q^{3/5} - 21 q^{4/5} - 35 q +\\cdots )\\end{pmatrix}$ $\\mathbf {g}^{5,2}(\\tau )=\\begin{pmatrix}q^{-1/2}(q^{2/5} + q^{3/5} + 2 q^{4/5} + q - 2 q^{6/5} + 7 q^{7/5} + 4 q^{8/5} +8 q^{9/5}+\\cdots ) \\\\q^{-1/10}(q^{1/5} - 2 q^{2/5} - q^{3/5} - 3 q^{4/5} + q + 5 q^{6/5} - 8 q^{7/5} - 3 q^{8/5} +\\cdots )\\\\q^{1/10} (-\\textbf {1} - 3 q^{1/5} + q^{2/5} - 2 q^{3/5} - q^{4/5} - 5 q - 12 q^{6/5} + \\cdots )\\end{pmatrix}$ $\\mathbf {g}^{5,3}(\\tau )= \\begin{pmatrix}q^{-3/4}(q^{3/5} + 4 q^{4/5} + 9 q + 14 q^{6/5} + 33 q^{7/5} + 52 q^{8/5} +126 q^{9/5} + \\cdots ) \\\\q^{-9/28}(-q^{2/5} - 3 q^{3/5} - 15 q^{4/5} - 25 q - 37 q^{6/5} - 74 q^{7/5} -106 q^{8/5}+ \\cdots ) \\\\q^{-1/28} (-3 q^{1/5} - 4 q^{2/5} - 11 q^{3/5} - 2 q^{4/5} - 18 q - 38 q^{6/5} -59 q^{7/5}+\\cdots ) \\\\q^{3/28}(\\textbf {2} + 9 q^{1/5} + 17 q^{2/5} + 41 q^{3/5} + 53 q^{4/5} + 110 q +201 q^{6/5} +\\cdots ) \\end{pmatrix}$ $\\mathbf {g}^{5,4}(\\tau )= \\begin{pmatrix}q^{-1}(q^{4/5} + 2 q + 5 q^{6/5} + 8 q^{7/5} - 2 q^{8/5} + 16 q^{9/5} +13 q^2 + 68 q^{11/5} +\\cdots ) \\\\q^{-5/9}(2 q^{3/5} - 
q^{4/5} - 11 q - 11 q^{6/5} + 24 q^{7/5} - 11 q^{8/5} +11 q^{9/5} +\\cdots ) \\\\q^{-2/9} (-q^{2/5} - 10 q^{3/5} - 7 q^{4/5} - 18 q - 18 q^{7/5} - 103 q^{8/5} -59 q^{9/5} +\\cdots )\\\\(-2 q^{1/5} - q^{2/5} + 14 q^{3/5} + 5 q^{4/5} + 19 q - 14 q^{6/5} +6 q^{7/5} + 123 q^{8/5}+\\cdots )\\\\q^{1/9}(\\textbf {1} + 6 q^{1/5} + 8 q^{2/5} - 6 q^{3/5} + 18 q^{4/5} + 12 q +74 q^{6/5} + 77 q^{7/5} +\\cdots )\\end{pmatrix} $ $\\mathbf {g}^{5,5}(\\tau )= \\begin{pmatrix}q^{-5/4}(q + 2 q^{6/5} + 6 q^{7/5} + 8 q^{8/5} + 14 q^{9/5} - 16 q^2 +40 q^{11/5} + 64 q^{12/5}+\\cdots )\\\\q^{-35/44}(5 q - 4 q^{6/5} - 12 q^{7/5} - 16 q^{8/5} - 28 q^{9/5} + 73 q^2 -74 q^{11/5}+\\cdots )\\\\q^{-19/44}(-21 q + 6 q^{6/5} + 18 q^{7/5} + 24 q^{8/5} + 42 q^{9/5} - 194 q^2 +112 q^{11/5}+\\cdots ) \\\\q^{-7/44}(-2 q^{1/5} - 6 q^{2/5} - 8 q^{3/5} - 14 q^{4/5} + 24 q - 40 q^{6/5} -64 q^{7/5}+\\cdots ) \\\\q^{1/44}(-\\textbf {1} + 4 q^{1/5} + 12 q^{2/5} + 16 q^{3/5} + 28 q^{4/5} - 34 q +72 q^{6/5} +\\cdots ) \\\\q^{5/44} (\\textbf {3} - 2 q^{1/5} - 6 q^{2/5} - 8 q^{3/5} - 14 q^{4/5} + 73 q -44 q^{6/5} - 76 q^{7/5}+\\cdots )\\end{pmatrix} $ In conclusion, we are able to show that $\\Delta _{k(N)}^{(N)}(\\mathbf {Z}) = \\sum _{w\\in W} \\det (w) w\\bigg [(e^{-\\varrho }\\Big (T_N(\\delta ) +(T_N(\\delta ^{\\prime }_N)-1) + (T_N(\\delta ^{\\prime \\prime }_N)-1) \\\\+ \\sum _{a\\in \\mathcal {I}} m(a)\\ e^{-a} + \\cdots \\Big )\\bigg ]$ where the set $\\mathcal {I}$ is as defined in Definition REF .", "The ellipsis refers to contributions from higher orders.", "Additional terms may be added by incorporating the action of the symmetry $\\gamma ^{(N)}$ to make the right hand side manifestly invariant under the extended Weyl group.", "The symmetry under the action of $\\widehat{\\delta }$ is already present.", "Terms such as these fit into the Borcherds extension of $\\mathfrak {g}(A^{(N)})$ .", "For $N=2,3$ , it expected that $\\mathcal {B}^{CHL}_N(A^{(N)})$ is a BKM Lie superalgebra and a suitably enlarged set $\\mathcal {I}$ should do the job.", "As far as we know, an explicit proof is not available in the literature.", "For $N=5$ , we expect a new set of real roots might appear at $m=10$ .", "In particular, is is known that following two real roots of norm 2 could appear as they are present in the product side.", "$\\tilde{\\alpha }_{1} =\\begin{pmatrix} 4 & 9 \\\\ 9 & 20 \\end{pmatrix} \\quad ,\\quad \\tilde{\\alpha }_{2} = \\begin{pmatrix} 6 & 11 \\\\ 11 &20 \\end{pmatrix}\\quad .$ These are associated with the ${\\widehat{sl(2)}}$ characters $\\chi _{40,18}$ and $\\chi _{40,22}$ .", "They should appear as the leading coefficient in the ${\\widehat{sl(2)}}$ character decomposition of $\\Psi _{0,10}^{(5)}(\\mathbf {Z})$ is given below.", "The relevant term in the second row is given in bold face and is vanishing.", "$\\mathbf {g}^{5,10}(\\tau )=\\begin{pmatrix}q^{-5/2}(q^2+2 q^{11/5}+5 q^{12/5}+12 q^{13/5}+27 q^{14/5}+114 q^3+\\cdots ) \\\\q^{-85/42}(\\mathbf {0\\,q^2}+8 q^{11/5}+27 q^{12/5}+20 q^{13/5}+17 q^{14/5}-603 q^3 + \\cdots ) \\\\q^{-67/42}(35 q^2-66 q^{11/5}-207 q^{12/5}-228 q^{13/5}-345 q^{14/5}+\\cdots ) \\\\q^{-17/14}(2 q^{7/5}-8 q^{8/5}-26 q^{9/5}-326 q^2+104 q^{11/5}+461 q^{12/5}+\\cdots )\\\\q^{-37/42}(5 q-16 q^{6/5}-54 q^{7/5}-40 q^{8/5}-34 q^{9/5}+1056 q^2+\\cdots ) \\\\q^{-25/42}(-35 q+66 q^{6/5}+207 q^{7/5}+228 q^{8/5}+345 q^{9/5}+\\cdots )\\\\q^{-5/14}(-q^{2/5}+4 q^{3/5}+13 q^{4/5}+164 q-80 q^{6/5}-318 q^{7/5}+\\cdots ) \\\\q^{-1/6}(2 q^{1/5}+9 q^{2/5}-4 q^{3/5}-25 q^{4/5}-397 q+102 q^{6/5}+\\cdots 
)\\q^{-1/42}(8 q^{1/5}-27 q^{2/5}-20 q^{3/5}-17 q^{4/5}+603 q-352 q^{6/5}+\\cdots )\\q^{1/14}(-3+14q^{1/5}+45 q^{2/5}+44 q^{3/5}+59 q^{4/5}-812 q+\\cdots )\\q^{5/42}(5-16 q^{1/5}-54 q^{2/5}-40q^{3/5}-34 q^{4/5}+1056 q+\\cdots )\\end{pmatrix}$ The leading term in row 1 has $\\mathcal {B}_5^{CHL}({\\widehat{sl(2)}})$ weight vector $(2 \\delta - 10\\alpha _2 + 2 \\delta _5^{\\prime })$ and has positive norm.", "The multiplicities are given by the $\\mathcal {B}_5^{CHL}({\\widehat{sl(2)}})$ character expansion.", "The coefficient of $\\widetilde{\\chi }_{40,20}$ is $f_1^{5,10}(\\tau )&= T_5(\\tau )(q^2+2 q^{11/5}+5 q^{12/5}+12 q^{13/5}+27 q^{14/5}+114 q^3+\\cdots ) \\\\&= \\mathbf {q^2}+2 q^{13/5}+3 q^{14/5}+63 q^3 +\\cdots $ The other potential real roots associated with $q^{11/5}$ and $q^{12/5}$ do not appear.", "We need to understand this real root." ], [ "Vector valued modular forms", "In the previous section, we obtained vector valued modular forms of the congruence group $\\Gamma ^0(N)$ .", "We would like to obtain closed formulae for the Fourier coefficients of these modular forms.", "In [1], this was done by showing that the vvmfs satisfied a modular differential equation.", "However, those examples involved modular forms of the full modular group, $PSL(2,\\mathbb {Z})$ .", "So we construct vector valued modular forms for the full modular group following a two-step procedure.", "We learned this method from the work of Borcherds, who obtains modular forms for the full modular group in this fashion [20].", "This procedure is called lifting by Bajpai in [21].", "First, we convert the Jacobi forms of $\\Gamma ^0(N)$ into Jacobi forms of the full modular group.", "We obtain vector valued Jacobi forms in this fashion.", "Next, we carry out the character decomposition of these vector valued Jacobi forms and obtain vector valued modular forms of the full modular group.", "The price we pay is that the rank of the vector valued modular forms increases by the index of the subgroup in $PSL(2,\\mathbb {Z})$ ."
], [ "Vector Valued Jacobi Forms", "The Jacobi forms $\\Psi _{0,m}^{(N)}(\\tau ,z)$ are belong to $J_{0,m}(\\Gamma ^0(N))$ .", "The Jacobi forms, obtained by the action of $S$ , $\\psi _{0,m}^{(N)[1,g]}(\\tau ,z)\\Big |S\\in J_{0,m}(\\Gamma _0(N))$ .", "For prime $N=2,3,5$ , there are two cusps of width 1 and $N$ respectively.", "We restrict our discussion to only these three cases.", "We form a rank $(N+1)$ vector valued Jacobi Form (vvJF) of the full modular group, $PSL(2,\\mathbb {Z})$ .", "Let $\\psi \\equiv \\Psi _{0,m}^{(N)}(\\tau ,z)$ and define $\\widetilde{\\mathcal {V}}(\\psi ) =\\begin{pmatrix}\\psi (\\tau ,z)|S \\\\ \\psi (\\tau ,z) \\\\ \\psi (\\tau ,z)|T \\\\ \\vdots \\\\ \\psi (\\tau ,z)|T^{N-1}\\end{pmatrix} \\ .$ The first entry is the contribution from the cusp at infinity and the other $N$ are the contribution from the cusp at zero.", "Note that $T^N=1$ at the cusp at zero.", "The vvJF $\\widetilde{\\mathcal {V}}$ is reducible and $T$ has an off-diagonal action.", "We first make a change of basis so that $T$ is diagonal.", "Consider the Jacobi forms (with $\\omega _N=\\exp (2\\pi i/N)$ ) $\\widetilde{\\psi }_i(\\tau ,z) = \\frac{1}{N}\\sum _{j=0}^{N-1} \\omega _N^{ij}\\ \\psi (\\tau ,z)|T^j\\ , \\quad i=0,1,\\ldots , (N-1)\\mod {N}$ Now $T$ has a diagonal action i.e., $\\widetilde{\\psi }_i(\\tau ,z)|T = (\\omega _N)^i \\ \\widetilde{\\psi }_i\\quad \\text{ and }\\quad \\psi (\\tau ,z)|ST=\\psi (\\tau ,z)|S\\ .$ The rank $(N+1)$ vvJF $\\widetilde{\\mathcal {V}}$ is reducible and decomposes into a JF and a rank $N$ vvJF.", "The rank one Jacobi Form is given by the combination $\\mathcal {A}^{(N)}(\\tau ,z):=\\psi (\\tau ,z)|S + N \\widetilde{\\psi }_0(\\tau ,z)\\ .$ and the irreducible rank $N$ vvJF is given by $\\mathcal {V}^{(N)}(\\psi ) :=\\begin{pmatrix}\\psi (\\tau ,z)|S - \\widetilde{\\psi }_0(\\tau ,z)\\phantom{\\Big |} \\\\ \\widetilde{\\psi }_1(\\tau ,z) \\\\ \\vdots \\\\ \\widetilde{\\psi }_{N-1}(\\tau ,z)\\end{pmatrix} \\ .$ The $T$ matrix of the vvJF is $T_V=\\text{diag}(1,\\omega _N,\\ldots , (\\omega _N)^{N-1})$ and the $S$ -matrix can be obtained from the following formulae.", "$\\left(\\psi (\\tau ,z)|S - \\widetilde{\\psi }_0(\\tau ,z)\\right)\\Big |S &= -\\frac{1}{N} \\left(\\psi (\\tau ,z)|S - \\widetilde{\\psi }_0(\\tau ,z)\\right) + \\frac{N+1}{N} \\sum _{j=1}^{N-1} \\widetilde{\\psi }_j(\\tau ,z) \\\\\\psi (\\tau +j,z)|S &= \\psi (\\tau -j^{\\prime },z) \\text{ where } j\\ne 0\\text{ and } jj^{\\prime }=1\\text{ mod }N\\ .$ For fixed $H$ , the S-matrix is independent of the index of the Jacobi form, $\\Psi _{0,m}(\\tau ,z)$ We thus give the S-matrices for the three cases of interest.", "$S_V^{N=2}=\\frac{1}{2}\\begin{pmatrix}-1 & 3 \\\\1 & 1\\end{pmatrix}\\quad ,\\quad S_V^{N=3}=\\frac{1}{3}\\begin{pmatrix}-1 & 4 & 4 \\\\1 & -1 & 2 \\\\1 & 2 & -1\\end{pmatrix}$ $S_V^{N=5}=\\frac{1}{5}\\begin{pmatrix}-1 & 6 & 6 & 6 & 6 \\\\1 & \\frac{1}{2} \\left(3-\\sqrt{5}\\right) & -1-\\sqrt{5} & -1+\\sqrt{5} & \\frac{1}{2} \\left(3+\\sqrt{5}\\right) \\\\1 &-1 -\\sqrt{5} & \\frac{1}{2} \\left(3+\\sqrt{5}\\right) & \\frac{1}{2} \\left(3-\\sqrt{5}\\right) & -1+\\sqrt{5} \\\\1 & \\sqrt{5}-1 & \\frac{1}{2} \\left(3-\\sqrt{5}\\right) & \\frac{1}{2} \\left(3+\\sqrt{5}\\right) & -1-\\sqrt{5} \\\\1 & \\frac{1}{2} \\left(3+\\sqrt{5}\\right) & -1+\\sqrt{5} & -1-\\sqrt{5} & \\frac{1}{2} \\left(3-\\sqrt{5}\\right) \\\\\\end{pmatrix}$" ], [ "Vector valued modular forms", "The procedure of the previous sub-section can be applied to all the Jacobi forms, $\\Psi _{0,m}^{(N)}(\\tau 
,z)$ .", "In the process we obtain one weight zero modular form that we denote by $\\mathcal {A}^{(N)}_m$ and a vvmf of weight zero and rank $N$ that we denote by $\\mathcal {V}^{(N)}_m$ in obvious notation.", "One can decompose the rank $m$ Jacobi form $\\mathcal {V}^{(N)}_m$ in terms of ${\\widehat{sl(2)}}$ characters, $\\chi _{4m,2\\ell }$ for $\\ell =0,\\ldots ,2m$ to obtain a rank $(m+1)N$ vector valued modular form for the full modular group, $PSL(2,\\mathbb {Z})$ .", "Since the rank grows fast, we will first study the $N=2$ case where we get vvmfs of rank 4 and rank 6.", "We are able to completely characterize the rank 4 example.", "The decomposition is as follows: (with $x=(N-1)(m+1)$ ) $\\mathcal {V}^{(N)}_m = \\begin{pmatrix}g_1\\ \\chi _{4m,2m} + g_2\\ (\\chi _{4m,2m-2}+\\chi _{4m,2m+2})+ \\cdots + g_{m+1}\\ (\\chi _{4m,0}+\\chi _{4m,4m}) \\\\\\vdots \\\\g_{x+1} \\chi _{4m,2m}+ g_{x+2}\\ (\\chi _{4m,2m-2}+\\chi _{4m,2m+2}) + \\cdots + g_{N(m+1)}(\\chi _{4m,0}+\\chi _{4m,4m})\\end{pmatrix}\\ ,$ which leads to the vvmf $\\mathcal {G}=(g_1,g_2,\\ldots ,g_{N(m+1)})^T$ .", "The S and T matrices are, however, easy to write out.", "Let $S_\\chi ^{(m)}$ and $T_\\chi ^{(m)}$ denote the matrices obtained from scalar Jacobi forms of index $m$ as was considered in paper I[1].", "Then, the S-matrix for the vvmf obtained from $\\mathcal {V}^{(N)}_m$ is given by $T = T_V^{(N)} \\otimes T_\\chi ^{(m)}\\quad \\text{and}\\quad S = S_V^{(N)} \\otimes S_\\chi ^{(m)}$ In this fashion, we obtain the data needed to determine the modular differential equation." ], [ "An example", "Consider $\\mathcal {V}^{(2)}_1$ which leads to a rank 4 example.", "We obtain the following $T$ and $S$ matrices.", "$T=\\text{diag}\\left(e^{-\\frac{i\\pi }{2}}, e^{\\frac{i\\pi }{6}} ,e^{\\frac{i\\pi }{2}}, e^{-\\frac{i 5 \\pi }{6}}\\right) \\quad ,\\quad S=\\frac{1}{2\\sqrt{3}}\\begin{pmatrix}1 & -2 & -3 & 6 \\\\-1 & -1 & 3 & 3 \\\\-1 & 2 & -1 & 2 \\\\1 & 1 & 1 & 1\\end{pmatrix}$ The first few terms in the Fourier expansion of the vvmf are given below.", "$\\begin{pmatrix}q^{-1/4} \\left(1+36 q+375 q^2+2162 q^3+10017 q^4+38550 q^5+132446 q^6+413478 q^7+\\cdots \\right) \\\\q^{-11/12} \\left(-3 q-93 q^2-681 q^3-3723 q^4-15879 q^5-58974 q^6-195186 q^7+\\cdots \\right)\\\\q^{-3/4} \\left(-8 q-128 q^2-936 q^3-4784 q^4-19968 q^5-72432 q^6-236392 q^7+\\cdots \\right)\\\\q^{-5/12} \\left(24 q+264 q^2+1656 q^3+7848 q^4+31104 q^5+108552 q^6+343992 q^7+\\cdots \\right)\\end{pmatrix}$ Equipped with this data, we can determine the matrix differential equation of Gannon[14] to which $\\mathcal {G}$ is one of the independent solutions.", "The data that we need for a rank $d$ situation are the following: an invertible set of exponents $\\Lambda $ , and a $d\\times d$ matrix $\\chi $ defined by $\\Xi (\\tau ):=\\big (\\mathcal {G}_1(\\tau ),\\mathcal {G}_2(\\tau ),\\ldots ,\\mathcal {G}_d(\\tau )\\big ) = q^\\Lambda \\ (\\mathbf {1}_d + \\chi \\ q + O(q^2))$ For our rank four example, we obtain $\\Lambda &=\\left(-\\frac{1}{4},-\\frac{11}{12} ,-\\frac{3}{4},-\\frac{5}{12}\\right)\\text{ and }\\\\\\chi &=\\begin{pmatrix}-8400 & 1296 & 36 & -15876 \\\\72 & 24 & -3 & -32 \\\\-102 & 54 & -8 & 432 \\\\1125 & 106 & 24 & 2800\\end{pmatrix}\\ .$ leading to the four solutions (column 3 is our solution) ${\\tiny q^\\Lambda =\\begin{pmatrix}-8400 q-651744 q^2-17978112 q^3 & 1296 q+28512 q^2+311040 q^3 & 1+36 q+375 q^2+2162 q^3 &-15876 q-2094498 q^2-84825468 q^3 \\\\72 q+43056 q^2+2127528 q^3 & 24 q+2064 q^2+33336 q^3 & -3 q-93 q^2-681 q^3 & 1-32 
q-50161q^2-3921788 q^3 \\\\1-102 q-30051 q^2-1240398 q^3 & 54 q+2268 q^2+33372 q^3 & -8 q-128 q^2-936 q^3 & 432 q+228096q^2+14648688 q^3 \\\\1125 q+115650 q^2+3602097 q^3 & 1+106 q+3047 q^2+35814 q^3 & 24 q+264 q^2+1656 q^3 & 2800q+518224 q^2+24040112 q^3 \\\\\\end{pmatrix}}+ O(q^4)$" ], [ "Other examples", "We are unable to determine the modular differential equations in the other cases.", "The next lowest rank is six, for which we would need to numerically determine twelve unknown constants.", "We have been unable to do so.", "Up to rank four, it is easy to determine the modular differential equation by making use of an observation of Gannon in [14], which enables us to generate three linearly independent solutions given a single solution.", "This puts rank five within reach of numerical computation." ], [ "The Jacobi Forms $\\mathcal {A}_m^{(N)}$", "The $\\mathcal {A}_m^{(N)}$ are Jacobi forms for the full modular group.", "One can expand these as follows: $\\mathcal {A}_m^{(N)}(\\tau ,z) = \\sum _{j=0}^m h_{2j}(\\tau )\\ A(\\tau ,z)^{m-j} B(\\tau ,z)^j\\ ,$ where $h_{2j}(\\tau )$ ($j=0,1,\\ldots ,m$ ) are modular forms of weight $2j$ .", "Since the ring of modular forms of $PSL(2,\\mathbb {Z})$ is generated by $E_4(\\tau )$ and $E_6(\\tau )$ , we can characterize $\\mathcal {A}_m^{(N)}(\\tau ,z)$ by a few constants.", "$h_2(\\tau )=0$ since there is no weight two modular form for the full modular group.", "In this fashion, we can show that $\\mathcal {A}_1^{(N)}(\\tau ,z) &= A(\\tau ,z) = U_{0,1}(\\tau ,z) \\text{ for } N=2,3,5\\ ,\\\\\\mathcal {A}_2^{(2)}(\\tau ,z) &= -U_{0,2}(\\tau ,z)\\ .$ In these two cases we obtain Umbral Jacobi forms defined in Eq.", "(REF ).", "That is not true in general.", "For instance, $\\mathcal {A}_2^{(3)} &=0\\ , \\\\\\mathcal {A}_3^{(3)} &= \\frac{1}{216} A(\\tau ,z)^3 + \\frac{5}{72} E_4(\\tau )A(\\tau ,z) B(\\tau ,z)^2 -\\frac{2}{27} E_6(\\tau ) B(\\tau ,z)^3\\ne U_{0,3}(\\tau ,z)\\ .$ We are not presenting the Jacobi forms that appear for $N=5$ ."
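Returning to the rank-4 example of the previous subsection, a quick numerical sanity check of the quoted $S$ and $T$ matrices (a sketch; since the multiplier system is not spelled out in the text, the $(ST)^3$ combination is only printed, not asserted):

```python
# Sketch: the quoted S-matrix of the rank-4 example squares to the identity.
import numpy as np

T = np.diag(np.exp(1j * np.pi * np.array([-1/2, 1/6, 1/2, -5/6])))
S = np.array([[ 1, -2, -3, 6],
              [-1, -1,  3, 3],
              [-1,  2, -1, 2],
              [ 1,  1,  1, 1]]) / (2 * np.sqrt(3))

I = np.eye(4)
print(np.allclose(S @ S, I))                               # True
print(np.abs(np.linalg.matrix_power(S @ T, 3) - I).max())  # deviation of (ST)^3 from 1
```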
], [ "Concluding Remarks", "In this paper, we have begun a study of the decomposition of the Siegel modular forms $\\Delta ^{(N)}_{k(N)}(\\mathbf {Z})$ as denominator formulae for a Lie algebra under two sub-algebras of a Lie algebra, $\\mathcal {B}_N^{CHL}(A^{(N)})$ , that we wish to understand.", "There is a natural product formula that provides the product side of the denominator formula – this provides a description of the positive roots with their multiplicities.", "The character decomposition that we study is a probe on the sum side of the denominator formula.", "The work is preliminary as we focused on the first $N$ terms that appear.", "We also need to work out the cases of $N=4,6$ .", "The eventual goal is the following: (i) Rewrite the sum term in terms of orbits of the extended Weyl group, (ii) Verifying that the orbits are indeed Borcherds extensions for $N\\le 4$ , (iii) For the $N=5,6$ examples, we need to have a good description of the terms that don't fit into a Borcherds extension.", "The additive lift for the modular forms $\\Delta ^{(N)}_{k(N)}(\\mathbf {Z})$ was studied in [7].", "This was done by working out the S-transform of the Hecke operator appearing in an additive lift of Cléry and Gritsenko.", "This was done for a case by case basis.", "It would be interesting to carry it out for all cases and obtain a closed formula for the sum side.", "This might enable us to prove that the examples for $N\\le 4$ are indeed Borcherd extensions of $\\mathfrak {g}(A^{(N)})$ .", "Our approach to arriving at modular differential equations was blighted by the large ranks that appeared when we constructed vvmfs for the full modular group.", "The ranks grew as $N(m+1)$ – the factor of $N$ coming in this process.", "Is there a way to write modular differential equations for the congruence subgroup?", "The work of Bajpai might be a way to proceed[21].", "Gottesman has studied rank 2 examples of $\\Gamma _0(2)$ in his work[22].", "Acknowledgements: We thank S. Viswanath for collaboration and numerous discussions." 
], [ "Modular Forms", "Let $\\mathbb {H}$ denote the upper half plane.", "Definition A.1 A modular form, of weight $k$ and character $\\chi $ , is a function $f:\\mathbb {H}\\rightarrow \\mathbb {C}$ such that for $\\gamma =\\begin{pmatrix} a& b \\\\ c& d\\end{pmatrix}\\in PSL(2,\\mathbb {Z})$ , one has $f|_k \\gamma (\\tau ) = \\chi (\\gamma )\\ f(\\tau )\\ ,$ where $f|_k \\gamma (\\tau ) := (c\\tau + d)^{-k}\\ f(\\gamma \\cdot \\tau )\\ ,$ and $\\gamma \\cdot \\tau = \\frac{a\\tau +b}{c\\tau +d}$ .", "The level $N$ sub-group $\\Gamma _0(N)\\subseteq PSL(2,\\mathbb {Z})$ is given by imposing to $\\gamma $ with $c=0\\text{ mod } N$ .", "Similarly, the subgroup $\\Gamma ^0(N)$ is defined by requiring $b=0\\text{ mod } N$ .", "The group $SL(2,\\mathbb {Z})$ is generated by two generators that are conventionally called the $T$ and $S$ .", "One has $T:\\ \\tau \\rightarrow \\tau +1\\quad ,\\quad S:\\ \\tau \\rightarrow -\\frac{1}{\\tau }\\ .$ Let $f(\\tau )$ be a modular form of $PSL(2,\\mathbb {Z})$ with weight $k$ .", "Then, $f(N\\tau )$ is a modular form of $\\Gamma _0(N)$ and $f(\\tau /N)$ is a modular form of $\\Gamma ^0(N)$ with weight $k$.", "Let $j$ be such that $(j,N)=1$ .", "Then we have the following two identities that are very useful.", "$\\begin{split}f\\left(\\tau /N\\right)\\big |_kS &= N^k\\, f\\left(N\\tau \\right)\\\\f\\left(\\tfrac{\\tau +j}{N}\\right)\\big |_kS &= f\\left(\\tfrac{\\tau -j^{\\prime }}{N}\\right)\\end{split}$ with $jj^{\\prime }=1\\mod {N}$ .", "The second line follows from the observation that $\\frac{S\\cdot \\tau +j}{N} = \\frac{j\\tau -1}{N\\tau }= G \\cdot \\left(\\frac{\\tau -j^{\\prime }}{N}\\right)\\ ,$ where $G=\\begin{pmatrix}j & (jj^{\\prime }-1)/N \\\\ N & j^{\\prime }\\end{pmatrix}\\in \\Gamma _0(N)$ ." ], [ "Examples", "A very nice and practical introduction to modular forms is the lectures by Zagier.", "We define the modular forms that appear in our work.", "The Dedekind eta function $\\eta (\\tau )$ is defined by (with $q=\\exp (2\\pi i \\tau )$ ) $\\eta (\\tau )= q^{1/24} \\prod _{m=1}^\\infty (1-q^m)\\ ,$ is a modular form of weight half and character given by a twenty-fourth root of unity.", "The Eisenstein series: Let $E_2(\\tau ) &= 1 -24 \\sum _{n=1}^\\infty \\sigma _1(n)\\ q^n \\ , \\\\E_4(\\tau ) &= 1 +240 \\sum _{n=1}^\\infty \\sigma _3(n)\\ q^n \\ ,\\\\E_6(\\tau ) &= 1 -504 \\sum _{n=1}^\\infty \\sigma _5(n)\\ q^n \\ .", "$ $E_4(\\tau )$ and $E_6(\\tau )$ are holomorphic modular forms of $PSL(2,\\mathbb {Z})$ with weights 4 and 6 respectively.", "They generate the ring of holomorphic modular forms of $PSL(2,\\mathbb {Z})$ .", "Any holomorphic modular form of $PSL(2,\\mathbb {Z})$ can be expressed a polynomial of these two modular forms.", "$E_2(\\tau )$ is not modular but $E_2^*(\\tau ) = E_2(\\tau ) -\\frac{2}{\\text{Im}(\\tau )} \\ ,$ is a non-holmorphic modular form of weight 2.", "The sub-group $\\Gamma _0(N)$ (for $N>1$ ) has holomorphic modular forms of weight 2 given by $E_2^{(N)}(\\tau ) := \\frac{1}{N-1} \\left(N E_2^*(N\\tau )-E_2^*(\\tau )\\right) =\\frac{1}{N-1} \\left(N E_2(N\\tau )-E_2(\\tau )\\right)\\ ,$ where we observe that the non-holomorphic pieces cancel away in writing the definition in the second form.", "Let $\\rho =1^{a_1}2^{a_2}\\cdots N^{a_N}$ be a cycle shape, for a conjugacy class of $M_{24}$ , with $\\sum _j j a_j=24$ .", "Then, the product $\\eta _\\rho (\\tau ):= \\prod _{j=1}^N \\eta (j\\tau )^{a_j}\\ ,$ is a modular form $\\Gamma _0(N)$ with character given by an $N$ -th root of unity (also see for 
a slightly different version)." ], [ "Ring of Generators for $\\Gamma _0(N)$", "Let $M(\\Gamma _0(N))$ denote ring of holomorphic modular forms of $\\Gamma _0(N)$ .", "We list the generators of this ring for the cases of interest(obtained from ).", "$PSL(2,\\mathbb {Z})$ has two generators: $E_4(\\tau )$ and $E_6(\\tau )$ .", "$M(\\Gamma _0(2))$ has two generators: $E_2^{(2)}(\\tau )$ and $E_4(2\\tau )$ .", "$M(\\Gamma _0(3))$ has three generators: $E_2^{(3)}(\\tau )$ , $E_4(3\\tau )$ and $E_6(3\\tau )$ .", "$M(\\Gamma _0(5))$ has three generators: $E_2^{(5)}(\\tau )$ , $E_4(5\\tau )$ and $\\eta (\\tau )^4\\eta (5\\tau )^4$ ." ], [ "Siegel and Jacobi Forms", "The group $Sp(4,\\mathbb {Z})$ is the set of $4\\times 4$ matrices written in terms of four $2\\times 2$ matrices $A$ , $B$ , $C$ , $D$ (with integral entries) as $M=\\left({\\begin{matrix}A & B \\\\C & D\\end{matrix}}\\right)$ satisfying $ A B^T = B A^T $ , $ CD^T=D C^T $ and $ AD^T-BC^T=I $ .", "This group acts naturally on the Siegel upper half space, $\\mathbb {H}_2$ , as $\\mathbf {Z}=\\begin{pmatrix} \\tau & z \\\\ z & \\tau ^{\\prime } \\end{pmatrix}\\longmapsto M\\cdot \\mathbf {Z}\\equiv (A \\mathbf {Z} + B)(C\\mathbf {Z} + D)^{-1} \\ .$ Definition A.2 A Siegel modular form, of weight $k$ with character $v$ with respect to $Sp(4,\\mathbb {Z})$ , is a holomorphic function $F:\\mathbb {H}_2\\rightarrow \\mathbb {C} $ satisfying $F|_k M (\\mathbf {Z}) = v(M)\\ F(\\mathbf {Z})\\ ,$ for all $M\\in Sp(4,\\mathbb {Z}) $ and the slash operation is defined as $F|_k M(\\mathbf {Z}) := \\det (C\\mathbf {Z}+D)^{-k}\\ F(M\\cdot \\mathbf {Z})\\ .$ In the limit $\\tau ^{\\prime }\\rightarrow i\\infty $ or $s=\\exp (2\\pi i\\tau ^{\\prime })\\rightarrow 0$ , a Siegel modular form $\\Phi _k(\\mathbf {Z})$ has the following expansion: $\\Phi _k(\\mathbf {Z}) = \\sum _{m=0}^\\infty s^m \\ \\phi _{k,m}(\\tau ,z) \\ .$" ], [ "Jacobi forms", "In the limit $\\tau ^{\\prime }\\rightarrow i\\infty $ or $s=\\exp (2\\pi i\\tau ^{\\prime })\\rightarrow 0$ , a Siegel modular form $\\Phi _k(\\mathbf {Z})$ has the following Fourier-Jacobi expansion: $\\Phi _k(\\mathbf {Z}) = \\sum _{m=0}^\\infty s^m \\ \\phi _{k,m}(\\tau ,z) \\ .$ The Jacobi group $\\Gamma _J$ is the sub-group of $Sp(4,\\mathbb {Z})$ that preserves the condition $s=0$ .", "The transformation of the Fourier-Jacobi coefficients, $\\phi _{k,m}(\\tau ,z)$ , under the Jacobi group is a natural definition of a Jacobi form.", "It is generated by two sub-groups, one is the modular group $PSL(2,\\mathbb {Z})$ embedded suitably in $Sp(4,\\mathbb {Z})$ and the other is the Heisenberg group defined below.", "The embedding of $\\left({\\begin{matrix} a & b \\\\ c & d\\end{matrix}}\\right)\\in PSL(2,\\mathbb {Z})$ in$Sp(4,\\mathbb {Z})$ is given by $\\widetilde{\\begin{pmatrix} a & b \\\\ c & d \\end{pmatrix}}\\equiv \\begin{pmatrix}a & 0 & b & 0 \\\\0 & 1 & 0 & 0 \\\\c & 0 & d & 0 \\\\0 & 0 & 0 & 1\\end{pmatrix}\\ .$ The above matrix acts on $\\mathbb {H}_2$ as $(\\tau ,z,\\sigma ) \\longrightarrow \\left(\\frac{a \\tau + b}{c\\tau +d},\\ \\frac{z}{c\\tau +d},\\ \\sigma -\\frac{c z^2}{c \\tau +d}\\right)\\ ,$ with $\\det (C\\mathbf {Z} + D)=(c\\tau +d)$ .", "The Heisenberg group, $H(\\mathbb {Z})$ , is generated by $Sp(2,\\mathbb {Z})$ matrices of the form $[\\lambda , \\mu ,\\kappa ]\\equiv \\begin{pmatrix}1 & 0 & 0 & \\mu \\\\\\lambda & 1 & \\mu & \\kappa \\\\0 & 0 & 1 & -\\lambda \\\\0 & 0 & 0 & 1\\end{pmatrix}\\qquad \\textrm {with } \\lambda , \\mu , \\kappa \\in \\mathbb {Z}$ The above matrix 
acts on $\\mathbb {H}_2$ as $(\\tau ,z,\\sigma ) \\longrightarrow \\left(\\tau ,\\ z+ \\lambda \\tau + \\mu ,\\ \\sigma + \\lambda ^2 \\tau + 2 \\lambda z + \\lambda \\mu +\\kappa \\right)\\ ,$ with $\\det (C\\mathbf {Z} + D)=1$ .", "Definition A.3 A Jacobi form of weight $k$ and index $m$ is a map $\\phi : \\mathbb {H}\\times \\mathbb {Z}\\rightarrow \\mathbb {C}$ satisfying $\\Phi |_kM(\\mathbf {Z}) = \\Phi (\\mathbf {Z})\\ .$ where $\\Phi (\\mathbf {Z}) := s^m \\phi _{k,m}(\\tau ,z)$ .", "The power of $s$ cancels the phases that appear for the Heisenberg group in the usual definition." ], [ "Examples", "The genus-one theta functions are defined by $\\theta \\left[\\genfrac{}{}{0.0pt}{}{a}{b}\\right] \\left(\\tau ,z\\right)=\\sum _{l \\in \\mathbb {Z}}q^{\\frac{1}{2} (l+\\frac{a}{2})^2}\\ r^{(l+\\frac{a}{2})}\\ e^{i\\pi lb}\\ ,$ where $a,\\ b\\in (0,1)\\text{ mod }2$ .", "We define $\\theta _1\\left(\\tau ,z\\right)\\equiv i\\ \\theta \\left[\\genfrac{}{}{0.0pt}{}{1}{1}\\right](\\tau ,z)$ , $\\theta _2\\left(\\tau ,z\\right)\\equiv \\theta \\left[\\genfrac{}{}{0.0pt}{}{1}{0}\\right]\\left(z_1,z\\right)$ , $\\theta _3\\left(\\tau ,z\\right)\\equiv \\theta \\left[\\genfrac{}{}{0.0pt}{}{0}{0}\\right]\\left(\\tau ,z\\right)$ and $\\theta _4\\left(\\tau ,z\\right)\\equiv \\theta \\left[\\genfrac{}{}{0.0pt}{}{0}{1}\\right]\\left(\\tau ,z\\right)$ .", "The following two index 1 Jacobi forms (with weights 0 and $-2$ respectively) are important.", "$A_{0,1}(\\tau ,z) &= 4\\left[ \\frac{\\theta _2(\\tau ,z)^2}{\\theta _2(\\tau ,0)^2}+\\frac{\\theta _3(\\tau ,z)^2}{\\theta _3(\\tau ,0)^2}+\\frac{\\theta _4(\\tau ,z)^2}{\\theta _4(\\tau ,0)^2} \\right] = (r^{-1}+10+r) + O(q)\\ ,\\\\B_{-2,1}(\\tau ,z) &=\\eta (\\tau )^{-6} \\theta _1(\\tau ,z)^2= (r^{-1}-2+r) + O(q)\\ .$ We usually drop writing the weight and index of these two basic Jacobi forms.", "All weak Jacobi forms are given by polynomials in these two Jacobi forms with coefficients given by modular forms of appropriate weight.", "Let $f_i =\\theta _i(\\tau ,z)/\\theta _i(\\tau ,0)$ for $i\\in \\lbrace 2,3,4\\rbrace $ .", "The Umbral Jacobi forms at lambency $\\ell $ are weak Jacobi forms of weight zero and index $(\\ell -1)$[2].", "We list the three that are relevant for us.", "$\\begin{split}&U_{0,1}(\\tau ,z) = 4( f_2^2 + f_3^2 + f_4^2)=\\left(\\tfrac{1}{r}+10 +r\\right) + \\cdots ,\\\\&U_{0,2}(\\tau ,z) = 2(f_2^2 f_3^2 + f_3^2 f_4^2 + f_4^2 f_2^2)=\\left(\\tfrac{1}{r}+4 +r\\right)+\\cdots ,\\\\& U_{0,3}(\\tau ,z) = 4 f_2^2 f_3^2 f_4^2=\\left(\\tfrac{1}{r}+2 +r\\right)+\\cdots .\\end{split}$" ], [ "Twisted-Twining Elliptic Genera of $K3$", "Let $g$ denote a finite symplectic automorphism of $K3$ of order $N$ .", "We denote one half of the elliptic genus of $K3$ twisted by $g^r$ and twined by $g^s$ by $\\psi _{0,1}^{[g^s,g^r]}(\\tau ,z)$ $\\psi _{0,1}^{[g^s,g^r]}(\\tau ,z)=\\frac{N}{2}\\, F_{(N)}^{(r,s)}(\\tau ,z)\\ ,$ where $ F_{(N)}^{(r,s)}(\\tau ,z)$ are defined in [10] for prime $N=2,3,5$ as follows: $\\begin{split}F_{(N)}^{(0,0)}(\\tau ,z)&=\\frac{2}{N} A(\\tau ,z) \\\\F_{(N)}^{(0,s)}(\\tau ,z)&=\\frac{2}{N(N+1)}\\left[ A(\\tau ,z)+ N B(\\tau ,z) E^{(N)}_2\\left( \\tau \\right) \\right]\\quad \\text{for}\\; 1 \\le s \\le (N-1) \\\\F_{(N)}^{(r,r l)}(\\tau ,z)&=\\frac{2}{N(N+1)}\\left[ A(\\tau ,z)- B(\\tau ,z) E^{(N)}_2\\left( \\tfrac{\\tau +l}{N} \\right) \\right] \\\\ &\\hspace*{144.54pt} \\text{for}\\; 1 \\le r \\le (N-1),\\;0 \\le l \\le (N-1), \\end{split}$" ] ]
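As a numerical sketch, the leading behaviour $A_{0,1} = r^{-1} + 10 + r + O(q)$ quoted above can be checked directly from the series definition of the genus-one theta functions:

```python
# Sketch: evaluate A_{0,1}(tau,z) = 4(f2^2 + f3^2 + f4^2) from the theta series and
# compare with 1/r + 10 + r at a point where q = exp(2*pi*i*tau) is tiny.
import cmath

def theta(a, b, tau, z, cutoff=12):
    q = cmath.exp(2j * cmath.pi * tau)
    r = cmath.exp(2j * cmath.pi * z)
    return sum(q ** ((l + a / 2) ** 2 / 2) * r ** (l + a / 2) * cmath.exp(1j * cmath.pi * l * b)
               for l in range(-cutoff, cutoff + 1))

tau, z = 5j, 0.31 + 0.07j      # q ~ 1e-14
f = lambda a, b: theta(a, b, tau, z) / theta(a, b, tau, 0)
A = 4 * (f(1, 0) ** 2 + f(0, 0) ** 2 + f(0, 1) ** 2)   # theta_2, theta_3, theta_4
r = cmath.exp(2j * cmath.pi * z)
print(abs(A - (1 / r + 10 + r)))   # of order q, so the O(q) corrections are negligible
```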
2207.10502
[ [ "StreamYOLO: Real-time Object Detection for Streaming Perception" ], [ "Abstract The perception models of autonomous driving require fast inference within a low latency for safety.", "While existing works ignore the inevitable environmental changes after processing, streaming perception jointly evaluates latency and accuracy in a single metric for online video perception, guiding previous works to search for trade-offs between accuracy and speed.", "In this paper, we explore the performance of real-time models on this metric and endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.", "Specifically, we build a simple framework with two effective modules.", "One is a Dual Flow Perception module (DFP).", "It consists of a dynamic flow and a static flow in parallel, capturing the moving tendency and the basic detection feature, respectively.", "Trend Aware Loss (TAL) is the other module, which adaptively generates a loss weight for each object according to its moving speed.", "Realistically, we consider driving scenes with multiple velocities and further propose the Velocity-aware streaming AP (VsAP) to jointly evaluate the accuracy.", "In this realistic setting, we design an efficient mix-velocity training strategy to guide the detector to perceive any velocity.", "Our simple method achieves state-of-the-art performance on the Argoverse-HD dataset and improves sAP and VsAP by 4.7% and 8.2%, respectively, compared to the strong baseline, validating its effectiveness." ], [ "Introduction", "Decision-making biases and errors that occur in autonomous driving put human lives at stake.", "Wrong decisions often become more serious as the velocity of the vehicle increases.", "To avoid such irreversible situations, one line of effort is to perceive the environment and (re)act within a low latency.", "Figure: Illustration of the visualization results of the base detector and our method.", "The green boxes are ground truth, while the red ones are predictions.", "The red arrows mark the shifts of the prediction boxes caused by the processing time delay, while our approach alleviates this issue.", "The lighter the color of the red box, the greater the offset caused by the faster speed.", "Of late, several real-time detectors  [1], [2], [3], [4], [5], [6] with strong performance under low-latency constraints have come to the fore.", "However, they are still mainly studied in an offline setting  [7].", "In a real-world online scenario, no matter how fast the algorithm is, once it has finished handling the latest observation, the state of the world around the vehicle will have changed.", "As shown in Fig.", "REF , deviations between the altered state and the perceived results may trigger unsafe decisions for autonomous driving.", "As the velocity increases, the problem is further exacerbated (see Fig.", "REF ).", "Therefore, for online perception in the real world, algorithms are expected to predict the future world state corresponding to their actual processing time.", "To address this issue,  [7] first proposes a new metric dubbed streaming accuracy, which coherently integrates accuracy and latency into a single metric for real-time online perception.", "It jointly evaluates the results of the entire perception stack at each moment, forcing the algorithm to account for the state of the world at the time its processing completes.", "Under this practical evaluation framework, several strong detectors  [8], [9], [10] show significant performance degradation [7] from the offline setting to streaming 
perception.", "Going one step further,  [7] proposes a meta-detector named Streamer that works with decision-theoretic scheduling, asynchronous tracking, and Kalman filter-based prediction to rescue much of the performance degradation.", "Following this work, Adaptive Streamer [11] employs various approximate executions based on deep reinforcement learning to learn a better trade-off online.", "These works mainly seek a better trade-off plan between speed and accuracy for existing detectors, while the design of a novel streaming perception model has not been well explored.", "Unlike these, Fovea [12] focuses on utilizing KDE-based mapping techniques to help detect small objects.", "This effort actually improves the performance of the meta-detector, but fails to address the intrinsic problem of streaming perception.", "One thing that the above works overlook is the existing real-time object detectors  [5], [6].", "With the addition of powerful data augmentation and a refined architecture, they achieve competitive performance and can run faster than 30 FPS.", "With these \"fast enough\" detectors, there is no room to trade off accuracy versus latency in streaming perception, since the detector's current-frame result is always matched and evaluated against the next frame.", "These real-time detectors can narrow the performance gap between the streaming perception and offline settings.", "Indeed, both the 1st  [13] and 2nd  [14] place solutions of the Streaming Perception Challenge (Workshop on Autonomous Driving at CVPR 2021) utilize real-time models, YOLOX  [6] and YOLOv5  [5], as their base detectors.", "Standing on the shoulders of the real-time models, we find that the performance gaps now all come from the fixed inconsistency between the current processing frame and the next matching frame.", "Therefore, the key to streaming perception switches to forecasting the results of the next frame from the current state.", "Through extensive experiments, we also find that the upper limit of the performance drop is strongly dependent on the driving velocity of the vehicle.", "As shown in Fig.", "REF , the higher the velocity, the larger the performance drop.", "As a special case, there is no performance drop when the vehicle is stationary.", "This phenomenon is reasonable, as driving at high speed is more prone to traffic accidents.", "Therefore, velocity perception is an essential part of forecasting consistent results.", "Unlike heuristic methods such as the Kalman filter  [15] adopted in  [7], in this paper we directly endow the real-time detector with the ability to predict the future of the next frame.", "Specifically, we construct triplets of the last, current, and next frames for training, where the model takes the last and current frames as input and learns to predict the detection results of the next frame.", "To improve the training efficiency, we propose two key design schemes: i) For the model architecture, we design a Dual-Flow Perception (DFP) module to fuse the feature maps of the last and the current frames.", "It consists of a dynamic flow and a static flow.", "The dynamic flow focuses on the motion trend of objects for prediction, while the static flow provides the basic detection features through a residual connection.", "ii) For the training strategy, we introduce a Trend Aware Loss (TAL) to dynamically assign a different weight to the localization and prediction of each object, since we find that objects within one frame may have different moving speeds.", "In realistic driving scenes, vehicles often drive at 
different velocities, including being stationary.", "Therefore, training the detector to perceive the future within a relatively fixed velocity range may leave it unable to make correct decisions at other velocities.", "To this end, we design a mix-velocity training strategy to efficiently train our detector to better perceive any velocity.", "Finally, we further design the Velocity-aware streaming AP (VsAP) metric to coherently evaluate sAP over all velocity settings.", "We conduct comprehensive experiments on the Argoverse-HD [16], [7] dataset, showing significant improvements on the streaming perception task.", "In summary, the contributions of this work are four-fold, as follows: With the strong performance of real-time detectors, we find that the key solution for streaming perception is to predict the results of the next frame.", "This simplified task is easy to structure and learn with a model-based algorithm.", "We build a simple and effective streaming detector that learns to forecast the next frame.", "We propose two adaptation modules, i.e., Dual-Flow Perception (DFP) and Trend Aware Loss (TAL), to perceive the moving trend and predict the future (a minimal sketch of the DFP idea is given after this list).", "We find that perceiving the driving velocity is the key to forecasting consistent future results, and propose a mix-velocity training approach to endow the model with the ability to tackle different driving velocities.", "We also propose VsAP to evaluate the performance in all velocity settings.", "We achieve competitive performance on the Argoverse-HD [16], [7] dataset without bells and whistles.", "Our method improves sAP by 4.7% (and VsAP by 8.2%) compared to the strong baseline of the real-time detector and shows robust forecasting under different moving speeds of the driving vehicle.", "Some preliminary results appear in  [17], and this paper includes them but significantly extends them in the following aspects.", "We point out that training our detector using only a single velocity scale fails to perceive the future at different driving speeds.", "To this end, we propose a mix-velocity training strategy to enable the model to tackle different driving velocities.", "We find that random horizontal flip is key for the detector to perceive the clue of the relative driving direction.", "Therefore, we update all experiments to establish a stronger detector.", "We conduct more experiments to establish our strong pipeline (i.e., sample assignment of anchor points, more fusion operators, more prediction tasks, pre-training policies, and data augmentations) and ablate more factors to verify the superiority of our method (vehicle category, quicker driving speed, image scale, etc.)."
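To make the Dual-Flow Perception idea from the contribution list above more concrete, the following is a minimal sketch rather than the authors' exact implementation; the channel sizes, normalization, and concatenation-based fusion are illustrative assumptions.

```python
# Sketch of the DFP idea: a dynamic flow sees the current and previous FPN features
# to capture the moving tendency, while a static flow keeps the basic detection
# feature of the current frame through a residual connection.
import torch
import torch.nn as nn

class DualFlowPerception(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        half = channels // 2
        self.dynamic = nn.Sequential(                 # fuses both frames
            nn.Conv2d(2 * channels, half, 1), nn.BatchNorm2d(half), nn.SiLU())
        self.static = nn.Sequential(                  # refines the current frame
            nn.Conv2d(channels, half, 1), nn.BatchNorm2d(half), nn.SiLU())
        self.shortcut = nn.Conv2d(channels, half, 1)  # residual path

    def forward(self, feat_curr, feat_prev):
        dyn = self.dynamic(torch.cat([feat_curr, feat_prev], dim=1))
        sta = self.static(feat_curr) + self.shortcut(feat_curr)
        return torch.cat([dyn, sta], dim=1)           # same channel count as the input

feat_curr, feat_prev = torch.randn(2, 1, 256, 76, 120)
print(DualFlowPerception()(feat_curr, feat_prev).shape)   # torch.Size([1, 256, 76, 120])
```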
], [ "Related Works", "Image object detection.", "In the era of deep learning, detection algorithms can be divided into the two-stage  [18], [19], [20] and the one-stage  [21], [9], [22], [1] frameworks.", "Two-stage networks jointly train an RPN and Fast R-CNN in a paradigm that first finds objects and then refines classification and regression.", "This coarse-to-fine manner yields strong performance but suffers from larger latency.", "One-stage detectors abandon the proposal generation stage and directly output bounding boxes with final positioning coordinates and class probabilities.", "The one-step paradigm endows one-stage models with higher inference speed than two-stage ones.", "With the increasingly stringent requirements on model inference speed in real applications, many works such as the YOLO series  [1], [2], [3], [4], [5], [6] focus on pursuing high performance and optimizing inference speed.", "These high-speed models are equipped with a series of advanced techniques, including efficient backbones [23], [24], strong data augmentation strategies [25], [26], etc.", "Our work is based on the state-of-the-art real-time detector YOLOX  [6], which achieves strong performance among real-time detectors.", "Video object detection.", "Streaming perception also involves video object detection (VOD).", "The VOD task aims to alleviate the degradation caused by complex video variations, e.g., motion blur, occlusion, and defocus.", "Attention-based methods [27], [28] employ the attention mechanism [29] to establish a strong information (context, semantics, etc.) association between the local temporal information of key and support frames.", "Flow-based methods utilize optical flow to propagate features from key frames to support frames [30] or to extract the temporal-spatial clues between key frames and support ones [31].", "LSTM-based methods [32], [33], [34] use convolutional long short-term memory (LSTM) [35] to process and save important temporal-spatial clues and enhance key frames.", "Tracking-based methods [36], [37] make use of the motion prediction information to adaptively detect interval frames and track the frames in between.", "These frameworks focus on improving the detection accuracy for key frames, which requires history or future frames for the enhancement.", "They all focus on the offline setting, while streaming perception considers the online processing latency and needs to predict future results.", "Streaming perception.", "The streaming perception task coherently considers latency and accuracy.", "[7] first proposes streaming AP (sAP) to evaluate accuracy while taking the time delay into consideration.", "Facing latency, non-real-time detectors will miss some frames.", "[7] proposes a meta-detector to alleviate this problem by employing the Kalman filter [15], decision-theoretic scheduling, and asynchronous tracking [36].", "[11] lists several factors (e.g., input scales, switchability of detectors, and scene aggregation) and designs a reinforcement learning-based agent to learn a better combination for a better trade-off.", "Fovea [12] employs a KDE-based mapping to magnify the regions of small objects (e.g., long-distance vehicles appearing in the image) so that the detector can detect small objects without being fed a larger-resolution image.", "In short,  [7] and  [11] focus on searching for a better trade-off by utilizing non-learning and learning-based strategies, while Fovea [12] actually improves the upper bound of the detector's performance.", "Instead of utilizing 
non-real-time detectors, both the 1st [13] and 2nd [14] place solutions of the Streaming Perception Challenge (Workshop on Autonomous Driving at CVPR 2021) adopt real-time models (YOLOX [6] and YOLOv5 [5]) as their base detectors.", "Using sufficiently fast models reduces the number of missed frames.", "However, no matter how fast the model is, there will always be a time delay of one unit.", "Based on this observation, we establish a strong baseline with a real-time detector and simplify streaming perception to the task of “predicting the next frame”.", "We further improve performance by employing a learning-based model to explicitly predict the future, instead of searching for a better trade-off between accuracy and latency.", "In addition, we explore and alleviate the streaming perception problem for vehicles driving at different velocities.", "Future prediction.", "Future prediction tasks aim to predict results for unobserved future data.", "Existing work explores this problem mainly for semantic/instance segmentation.", "For semantic segmentation, early works  [38], [39] construct a mapping from past segmentations to future segmentations.", "Recent works  [40], [41], [42], [43] instead predict intermediate segmentation features by employing deformable convolutions, teacher-student learning, flow-based forecasting, LSTM-based approaches, etc.", "For instance segmentation prediction, some approaches predict the pyramid features [44] or jointly predict the features of several pyramid levels [45], [46].", "The above prediction methods do not consider the misalignment between predictions and environment changes caused by processing latency, leaving a gap to real-world application.", "In this paper, we focus on the more practical task of streaming perception.", "To the best of our knowledge, our work is also the first to apply future prediction to object detection, which outputs discrete set predictions, in contrast to existing segmentation prediction tasks.", "Figure: Comparison of different detectors in the streaming perception evaluation framework.", "Each block represents the processing of one frame by the detector, and its length indicates the running time.", "The dashed block indicates the time until the next frame is received.", "Figure: The performance gap between the offline and streaming perception settings on the Argoverse-HD dataset.", "'OF' and 'SP' indicate the offline and streaming perception settings respectively.", "The number after @ is the input scale (the full resolution is $1200 \times 1920$)."
], [ "Streaming Perception", "Streaming perception organizes data as a set of sensor observations.", "To take the model's processing latency into account,  [7] proposes a new metric named streaming AP (sAP) to simultaneously evaluate time latency and detection accuracy.", "As shown in Fig. REF , the streaming benchmark evaluates detection results along a continuous timeline.", "After an image frame has been received and processed, sAP accounts for the elapsed latency and matches the processed output against the ground truth of the real-world state at that moment.", "Taking a non-real-time detector as an example, the output $y_1$ for frame $F_1$ is matched and evaluated against the ground truth of $F_3$ , while the result for $F_2$ is ignored.", "Therefore, under the streaming perception task, non-real-time detectors may miss many image frames and produce results with a large temporal offset, which severely degrades performance compared to offline detection.", "For real-time detectors (whose total processing time per frame is less than the frame interval of the image stream), the streaming perception task becomes explicit and easy.", "As shown in Fig. REF , the shorter the processing time the better; if the detector takes longer than the unit frame interval, a fixed matching pattern may not be optimal, as shown by the decision-theoretic analysis in  [7], so a fast real-time detector avoids these problems.", "This fixed matching pattern not only eliminates missed frames but also reduces the time shift for each matched ground truth.", "In Fig. REF , we compare two detectors, Mask R-CNN [10] and YOLOX [6], at several image scales and investigate the performance gap between the streaming perception and offline settings.", "For low-resolution inputs, the performance gap between the two detectors is small, since both operate in real time.", "However, as the resolution increases, Mask R-CNN's performance drops more because it runs slower.", "For YOLOX, inference remains real-time as the resolution increases, so the performance gap does not widen accordingly." ], [ "Pipeline", "The fixed matching pattern of real-time detectors also enables us to train a learnable model to mine latent moving trends and predict the objects of the next image frame.", "Our approach consists of a basic real-time detector, an offline training schedule, and an online inference strategy, which are described next." ], [ "Base detector", "We choose the recently proposed YOLOX  [6] as our base detector.", "It carries the YOLO series  [1], [2], [3] forward to an anchor-free framework with several tricks, e.g., decoupled heads  [47], [48], strong data augmentations  [25], [26], and advanced label assignment  [49], achieving strong performance among real-time detectors.", "It is also the 1st place solution  [13] of the Streaming Perception Challenge in the Workshop on Autonomous Driving at CVPR 2021.", "Unlike  [13], we remove some engineering acceleration tricks such as TensorRT and change the input scale to half resolution ($600 \times 960$ ) to ensure real-time speed without TensorRT.", "We also discard the extra datasets used in  [13], i.e., BDD100K [50], Cityscapes [51], and nuScenes [52], for pre-training.", "These simplifications certainly degrade detection performance compared to  [13], but they lighten the computational burden and allow extensive experiments.", "We believe these simplifications are orthogonal to our work, and reinstating them can further improve performance."
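To make the sAP matching rule described in the Streaming Perception section above concrete, here is a minimal sketch (not the official benchmark code): each ground-truth query time is paired with the most recent prediction that finished before it. The timestamps, function name, and toy example are illustrative assumptions.

```python
# A toy illustration of streaming-style matching: ground truth at each frame time
# is evaluated against the latest prediction already completed at that time.
def match_predictions_to_frames(frame_times, pred_finish_times, preds):
    """For each frame timestamp, return the latest prediction completed before it."""
    matched = []
    for t in frame_times:
        available = [i for i, tf in enumerate(pred_finish_times) if tf < t]
        matched.append(preds[available[-1]] if available else None)  # None: nothing ready yet
    return matched

# Example with a detector that needs 1.5 frame intervals per image: the output for
# frame 0 only becomes available at t=1.5, so it is evaluated against frame 2's labels.
frame_times = [0, 1, 2, 3, 4]
pred_finish_times = [1.5, 3.0, 4.5]          # detector processes frames 0, 2, 4
preds = ["det(F0)", "det(F2)", "det(F4)"]
print(match_predictions_to_frames(frame_times, pred_finish_times, preds))
# [None, None, 'det(F0)', 'det(F0)', 'det(F2)']
```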
], [ "Future aware label assignment", "We follow YOLOX and employ SimOTA [49] for label assignment.", "This advanced sampling policy first delimits a candidate region based on the object center and ground-truth priors.", "Next, it jointly combines the classification and localization losses into a metric that ranks all proposals.", "Finally, it uses a dynamic top-k strategy to choose positive samples.", "This sampling pipeline works well for training on the still-image task, but in the future-anticipation task it leaves an ambiguity, namely how to formulate the sampling region.", "To this end, we study three sampling choices: the current region, the future region, and their combination.", "We find that defining the region from the ground-truth priors of the future frame works best; it implicitly supervises the model to perceive the future and reduces the difficulty of regression.", "The comparison results are shown in Tab. REF ." ], [ "Training", "Our overall training pipeline is shown in Fig. REF .", "We bind the last frame, the current frame, and the next frame's ground-truth boxes into a triplet $(F_{t-1}, F_t, G_{t+1})$ for training.", "The purpose of this scheme is simple and direct: forecasting the future position of each object requires perceiving its motion cues.", "To this end, we feed two adjacent frames ($F_{t-1}$ , $F_{t}$ ) into the model and use the ground truth of $F_{t+1}$ to supervise the model to directly predict the detection results of the next frame.", "Based on this form of input and supervision, we refactor the training dataset into the formulation $\lbrace (F_{t-1}, F_{t}, G_{t+1})\rbrace _{t=1}^{n_{t}}$ , where $n_{t}$ is the total number of samples.", "We discard the first and last frame of each video clip.", "Reconstructing the dataset in this way is a sweet spot, as we can keep training with a random shuffling strategy and retain the efficiency of normal training on distributed GPUs.", "To better capture the moving trend between the two input frames, we propose a Dual-Flow Perception module (DFP) and a Trend-Aware Loss (TAL), introduced in the next subsections, to fuse the FPN feature maps of the two frames and adaptively capture the moving trend of each object.", "We also investigate an alternative, indirect task that predicts in parallel the current ground-truth boxes $G_t$ and the offsets of the object transition from $G_t$ to $G_{t+1}$ .", "However, in the ablation experiments described in Sec. REF , we find that predicting these additional offsets is always a suboptimal task.", "We also try a weak-supervision variant that predicts the corresponding offsets during training and discards the prediction branch at inference time, which is also suboptimal.", "One reason is that the transition offsets between two adjacent frames are small and contain numerically unstable noise.", "There are also bad cases where the labels of the corresponding objects are unreachable (new objects appear, or the current object disappears in the next frame)."
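A minimal sketch of the triplet refactoring described in the Training subsection above is given below; the frame and annotation containers are illustrative assumptions, not the paper's code.

```python
# Each training sample pairs the previous and current frames with the next frame's labels.
from torch.utils.data import Dataset

class StreamingTripletDataset(Dataset):
    def __init__(self, clips):
        """clips: list of video clips, each a list of (image, annotations) pairs."""
        self.triplets = []
        for frames in clips:
            # skip the first and last frame of each clip, as in the paper
            for t in range(1, len(frames) - 1):
                img_prev, _ = frames[t - 1]
                img_curr, _ = frames[t]
                _, ann_next = frames[t + 1]
                self.triplets.append((img_prev, img_curr, ann_next))

    def __len__(self):
        return len(self.triplets)

    def __getitem__(self, idx):
        # (F_{t-1}, F_t) are the inputs; G_{t+1} supervises the prediction
        return self.triplets[idx]
```

Because every triplet is self-contained, the refactored dataset can be shuffled and distributed across GPUs like an ordinary detection dataset.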
], [ "Inference", "The model takes two image frames as input, which would nearly double the computation and processing time compared to the original detector.", "As shown in Fig. REF , to remove this dilemma we use a feature buffer to store all the FPN feature maps of the previous frame $F_{t-1}$ .", "At inference time, our model only extracts features from the current image frame and then fuses them with the historical features from the buffer.", "With this strategy, our model runs almost as fast as the base detector.", "For the first frame $F_0$ of the stream, we duplicate its FPN feature maps as a pseudo historical buffer to predict results.", "This duplication effectively encodes a “not moving” state, and the resulting static predictions are inconsistent with $F_1$ .", "Fortunately, the performance impact of this situation is minimal, as it is rare." ], [ "Dual-Flow Perception Module (DFP)", "Given the FPN feature maps of the current frame $F_{t}$ and the historical frame $F_{t-1}$ , we assume the learned features should carry two kinds of information for predicting the next frame.", "One is a trend feature that captures the state of motion and estimates its magnitude.", "The other is a basic feature that allows the detector to localize and classify the corresponding objects.", "Therefore, we design a Dual-Flow Perception (DFP) module that encodes the expected features with a dynamic flow and a static flow, as seen in Fig. REF .", "The dynamic flow fuses the FPN features of the adjacent frames to learn motion information.", "It first employs a shared-weight $1\times 1$ convolution layer followed by batch normalization and SiLU  [53] to halve the number of channels of both FPN features.", "Then, it simply concatenates the two reduced features to generate the dynamic features.", "We have studied several other fusing operations, such as element-wise add, the non-local block [54], an STN [55] based on the squeeze-and-excitation network [56], and the correlation layer [57]; concatenation shows the best efficiency and performance.", "As for the static flow, we add the original feature of the current frame through a residual connection.", "In later experiments, we find that the static flow not only provides the basic information for detection but also improves prediction robustness across different moving speeds of the driving vehicle.", "Figure: The inference pipeline.", "We employ a feature buffer to save the historical features of the latest frame and thus only need to extract the current features.", "By directly aggregating the features stored at the previous step, we save the time of processing the last frame again.", "For the beginning of the video, we copy the current FPN features as pseudo historical buffers to predict results.", "Table: Ablation experiments for building a simple and strong pipeline.", "We employ a basic YOLOX-L detector as the baseline for all experiments.", "Our final designs are marked in gray."
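A minimal PyTorch sketch of the DFP module described above is shown below for a single FPN level; the channel size and module names are illustrative assumptions, and the real implementation applies the same idea to every FPN level.

```python
import torch
import torch.nn as nn

class DualFlowPerception(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # shared 1x1 conv + BN + SiLU that halves the channels of each frame's feature
        self.reduce = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.SiLU(inplace=True),
        )

    def forward(self, feat_curr, feat_prev):
        # dynamic flow: reduce both frames with shared weights, then concatenate
        dynamic = torch.cat([self.reduce(feat_curr), self.reduce(feat_prev)], dim=1)
        # static flow: residual connection from the current frame's original feature
        return dynamic + feat_curr

# usage: fuse one FPN level of the current frame with the buffered previous-frame feature
dfp = DualFlowPerception(channels=256)
f_t, f_tm1 = torch.randn(1, 256, 76, 120), torch.randn(1, 256, 76, 120)
fused = dfp(f_t, f_tm1)   # same shape as the input FPN feature
```

At inference time, `f_tm1` would simply be read from the feature buffer rather than recomputed.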
], [ "Trend-Aware Loss (TAL)", "We notice an important fact in streaming: the moving speed of each object within one frame can be quite different.", "These varying trends come from many aspects: the objects' different sizes and motion states, occlusions, or different topological distances.", "Motivated by this observation, we introduce a Trend-Aware Loss (TAL) that adopts an adaptive weight for each object according to its moving trend.", "Generally, we pay more attention to fast-moving objects, as their future states are more difficult to predict.", "To quantitatively measure the moving speed, we introduce a trend factor for each object.", "We calculate an IoU (Intersection over Union) matrix between the ground-truth boxes of $F_{t+1}$ and $F_{t}$ and then take the max along the $F_{t}$ dimension to obtain the matching IoU of the corresponding objects between the two frames.", "A small matching IoU indicates a fast-moving object, and vice versa.", "If a new object appears in $F_{t+1}$ , there is no box to match it and its matching IoU is much smaller than usual.", "We set a threshold $\tau $ to handle this situation and formulate the final trend factor $\omega _i$ for each object in $F_{t+1}$ as: $mIoU_i = \max _j (\lbrace IoU(box^{t+1}_i,box^{t}_{j})\rbrace )$ $\omega _i = \left\lbrace {\begin{array}{*{20}{c}}{1/mIoU_i \qquad mIoU_i \ge \tau }\\{1/\nu \qquad \qquad mIoU_i < \tau }\end{array}}, \right.$ where $\max _j$ denotes the max over boxes in $F_t$ and $\nu $ is a constant weight for newly appearing objects.", "We set $\nu $ to 1.6 (larger than 1) to reduce the attention paid to them, according to a grid search over hyper-parameters.", "Note that simply applying these weights to the loss of each object would change the magnitude of the total loss.", "This may disturb the balance between the losses of positive and negative samples and decrease detection performance.", "Inspired by  [58], [59], we normalize $\omega _{i}$ to $\hat{\omega }_{i}$ so as to keep the sum of the total loss unchanged: $\hat{\omega }_{i} = \omega _i\cdot \frac{{\sum _{i = 1}^N {{\mathcal {L}^{reg}_i}} }}{{\sum _{i = 1}^N {{\omega _i}}{\mathcal {L}^{reg}_i}}},$ where $\mathcal {L}^{reg}_i$ denotes the regression loss of object $i$ .", "Next, we re-weight the regression loss of each object by $\hat{\omega }_{i}$ , and the total loss is: ${\mathcal {L}_{total}} = \sum \limits _{i \in positive}\hat{\omega }_{i}\mathcal {L}^{reg}_i + {\mathcal {L}_{cls}} + {\mathcal {L}_{obj}}.$" ], [ "Velocity-aware streaming AP (VsAP)", "In a realistic driving scene, a vehicle may stand still, drive slowly, or drive fast.", "A good algorithm ought to be able to handle any traffic situation at any driving velocity, including a standstill.", "Studying this issue would require a dataset collected at different driving velocities.", "Such a collection process is time-consuming and labor-intensive, so we instead simulate different velocities using off-the-shelf datasets such as Argoverse-HD  [16], [7].", "We sample two frames at an arbitrary interval to simulate an arbitrary velocity.", "The larger the interval, the faster the vehicle drives.", "We copy two identical frames to simulate the static case.", "Technically, we denote the triplets as $\lbrace (F_{t-1}^{mx}, F_{t}^{mx}, G_{t+1}^{mx})\rbrace _{t=1}^{n_{t}}$ , where $n_{t}$ is the total number of samples and $mx$ denotes m-times velocity.", "To evaluate the comprehensive
performance across different velocities under the streaming perception setting, we propose VsAP, which simply takes the average of sAP over the various velocity settings.", "The calculation is: ${\rm VsAP} = \frac{1}{\left|M\right|}\sum \limits _{i \in M}{\rm sAP}^{i}, \quad M=\lbrace 0,1,\ldots ,6\rbrace .$ To tackle this more practical issue, we propose to train our detector by randomly sampling triplet data at random velocities.", "This training scheme guides the model to perceive different velocities well (performing better on VsAP).", "However, the policy can be incompatible with TAL, as TAL gives higher-speed objects larger loss weights and thus introduces an imbalance of attention across different velocity settings.", "In addition, the matching policy becomes unsuitable as the driving speed increases: it may match the wrong object, leading to unsuccessful training.", "Facing this dilemma, we use the actually adjacent frames (at 30 FPS) to calculate the corresponding IoU, so that triplets of all velocities are trained at an identical loss scale.", "We conduct the experiments on the video autonomous-driving dataset Argoverse-HD  [16], [7] (High-frame-rate Detection), which contains diverse urban outdoor scenes from two US cities.", "It has multiple sensors and high-frame-rate sensor data (30 FPS).", "Following  [7], we only use the center RGB camera and the detection annotations provided by  [7].", "We also follow the train/val split in  [7], where the validation set contains 24 videos with a total of 15k frames." ], [ "Evaluation metrics", "We use sAP  [7] to evaluate all experiments.", "sAP is a metric for streaming perception that simultaneously considers latency and accuracy.", "Similar to the MS COCO metric  [60], it evaluates the average mAP over IoU (Intersection-over-Union) thresholds from 0.5 to 0.95, as well as $\rm AP_{s}$ , $\rm AP_{m}$ , and $\rm AP_{l}$ for small, medium, and large objects.", "In addition, we report our proposed VsAP to evaluate the comprehensive performance across different velocity settings." ], [ "Implementation details", "Unless otherwise specified, we use YOLOX-L  [6] as our default detector and conduct experiments on the 1x velocity setting.", "All of our experiments are fine-tuned from the COCO pre-trained model for 15 epochs.", "We set the batch size to 32 on 8 GTX 2080Ti GPUs.", "We use stochastic gradient descent (SGD) for training.", "We adopt a learning rate of $0.001 \times BatchSize / 64$ (linear scaling [61]) and a cosine schedule with a one-epoch warm-up.", "The weight decay is 0.0005 and the SGD momentum is 0.9.", "The base input size is $600 \times 960$ , while the long side is evenly sampled from 800 to 1120 with a stride of 16.", "We only use horizontal flip and no other data augmentations (such as Mosaic [5], Mixup [25], HSV jitter, etc.", "), since the two adjacent input frames need to stay aligned.", "For inference, we keep the input size at $600 \times 960$ and measure the processing time on a Tesla V100 GPU.", "Table: The effect of the proposed pipeline, DFP, and TAL.", "'Off AP' means the corresponding AP of the base detector in the offline setting.", "'Pipe.'", "denotes the proposed pipeline, marked in gray, while '$\uparrow $ ' indicates the corresponding improvements.", "'($\cdot $ )' indicates the relative improvements based on the strong pipeline."
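A minimal sketch of the VsAP computation defined above is given below; `evaluate_sap` is a placeholder for an actual sAP evaluation at a simulated velocity, not a real API, and the example scores are dummy values.

```python
# VsAP: average sAP over the simulated velocity settings (0x duplicates the frame,
# Nx pairs frames N steps apart).
def vsap(evaluate_sap, velocities=range(0, 7)):
    scores = [evaluate_sap(v) for v in velocities]   # sAP at 0x, 1x, ..., 6x
    return sum(scores) / len(scores)

# usage with dummy per-velocity sAP values
dummy = [36.0, 34.0, 33.0, 32.0, 31.0, 30.0, 29.0]
print(vsap(lambda v: dummy[v]))
```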
], [ "Ablations for Pipeline", "We conduct ablation studies of the pipeline design on six crucial components: the sample-assignment policy, the prediction task, the feature used for fusion, the fusion operation, the pre-training policy, and the data augmentations used for transfer.", "We employ a basic YOLOX-L detector as the baseline for all experiments and keep the other five components unchanged when ablating one.", "With these analyses, we show the effectiveness and superiority of our final designs." ], [ "Sample assignment", "We study the anchor-point region used to assign positive and negative samples, which is a key issue for one-stage detectors.", "The results in Tab. REF show that specifying the anchor (point) region based on the future frame achieves the best improvement.", "This suggests that the assignment scheme imposes implicit supervision and reduces the difficulty of the regression task." ], [ "Prediction task", "We compare the five types of prediction tasks mentioned in Sec. REF .", "As shown in Tab. REF , indirectly predicting offsets from the latest to the current frame (or from the current to the future frame) together with the current bounding boxes performs even worse than the baseline.", "In contrast, directly anticipating the future results achieves a significant improvement (+3.0 AP).", "On top of the direct prediction framework, using either type of offset as weak supervision leads to a performance drop.", "This demonstrates the superiority of directly forecasting the results of the next frame.", "Offsets play an adverse role and may introduce numerically unstable noise." ], [ "Fusion feature", "Integrating the current and previous information is the key to the streaming task.", "For a general detector, there are three patterns of features to integrate: the input, backbone, and FPN patterns.", "Technically, the input pattern directly concatenates two adjacent frames and adjusts the input channels of the first layer.", "The backbone and FPN patterns employ a $1\times 1$ convolution followed by batch normalization and SiLU to halve the channels of each frame's features and then concatenate them.", "We also explore aggregating the features of the FPN and the backbone using the same fusing style.", "As shown in Tab. REF , the input and backbone patterns decrease performance by 0.9 and 0.7 AP.", "By contrast, the FPN pattern significantly boosts performance by 3.0 AP, making it the best choice.", "Additionally, embedding backbone features into the final enhancement node brings a performance drop (0.5 AP).", "These results indicate that fusing FPN features achieves a better trade-off between capturing motion and detecting objects.", "Early fusion may fail to exploit stronger features or may disrupt the pre-trained network parameters.", "Table: Grid search of $\tau $ and $\nu $ in Eq. for TAL.", "Table: Comparison results of different tasks for perceiving the trend.", "'LOC', 'OBJ', and 'CLS' represent box regression, objectness prediction, and classification respectively.", "Figure: Detection results for the 8 corresponding categories on the Argoverse-HD dataset."
], [ "Fusion operation", "We also explore the fusion operation for the FPN features.", "We consider several standard operators (i.e., element-wise add and concatenation) and advanced ones: the spatial transformer network  [55] (STN; to implement STN for variable inputs, we adopt an SE  [56] block to calculate the transformation parameters instead of the flatten operation and fully connected layers of the original STN), the non-local network  [54] (NL; for NL, we use the current feature to compute the values and queries, use the previous feature to generate the keys, and then feed them to the original NL module), and the correlation layer [57].", "Tab. REF displays the performance of these operations.", "We can see that the element-wise add operation drops performance by 0.4 AP, while the other operators achieve considerable gains.", "We suppose that adding values element-wise may destroy the relative information between the two frames and thus fail to learn trend information.", "Among the effective operations, concatenation stands out because of its few parameters and high inference speed." ], [ "Early fine-tuning", "We further study the effect of early fine-tuning in the offline setting, aiming to explore whether it helps the model learn knowledge of the target domain.", "To this end, we first fine-tune our detector with three types of augmentations (inherited from YOLOX [6]) and compare with directly fine-tuning in the online manner.", "The results in Tab. REF reveal that early fine-tuning achieves the same performance as direct fine-tuning (34.2 vs. 34.2).", "Early fine-tuning with augmentation achieves better performance in the offline setting but has a negative impact on the final fine-tuning.", "Therefore, we choose to directly conduct the future prediction task, which does not require redundant multi-stage training." ], [ "Data augmentation", "Next, we explore how to add data augmentation for future prediction.", "This is delicate, since an unsuitable augmentation may cause misalignment between the current and previous features and thus prevent the model from perceiving motion information.", "We apply the corresponding transform to both the input images and the annotations to ensure error-free supervision.", "Flip shows better performance than the other three counterparts.", "Mosaic brings a performance drop, as it effectively shrinks the objects and increases the task difficulty.", "In fact, flip implicitly exposes the model to different driving directions so that it learns to generalize better.", "Based on the above ablation results, we not only establish a stronger baseline, but also obtain a concise training procedure, few extra parameters, and high inference speed."
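A minimal sketch of the aligned horizontal-flip augmentation discussed above is given below, assuming images as (C, H, W) tensors and boxes as (N, 4) [x1, y1, x2, y2] arrays in pixel coordinates; the same flip must be applied to $F_{t-1}$, $F_t$, and the future annotations $G_{t+1}$ so that the two frames and the supervision stay aligned.

```python
import random
import torch

def flip_triplet(img_prev, img_curr, boxes_next, p=0.5):
    if random.random() < p:
        w = img_curr.shape[-1]
        img_prev = torch.flip(img_prev, dims=[-1])   # flip both frames identically
        img_curr = torch.flip(img_curr, dims=[-1])
        boxes_next = boxes_next.clone()
        # mirror the x-coordinates of the future ground-truth boxes
        boxes_next[:, [0, 2]] = w - boxes_next[:, [2, 0]]
    return img_prev, img_curr, boxes_next
```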
], [ "Effect of DFP and TAL", "To validate the effect of DFP and TAL, we conduct extensive experiments on YOLOX detectors of different model sizes.", "In Tab. REF , “Pipe.” denotes our basic pipeline, containing the basic feature fusion and future-prediction training.", "Compared to the baseline detector, the proposed pipeline already improves performance by 1.3 to 3.0 AP across different models.", "On top of these high-performance baselines, DFP and TAL each boost sAP by $\sim $ 1.0 AP independently, and their combination further improves performance by nearly 2.0 AP.", "These facts not only demonstrate the effectiveness of DFP and TAL but also indicate that the contributions of the two modules are almost orthogonal.", "Indeed, DFP adopts a dynamic flow and a static flow to extract the moving-state feature and the basic detection feature separately, enhancing the FPN feature for streaming perception.", "Meanwhile, TAL employs an adaptive weight for each object to predict its particular trend.", "We believe the two modules cover different aspects of streaming perception: architecture and optimization.", "We hope that our simple design of the two modules will lead to future endeavors in these two under-explored aspects." ], [ "Grid search of $\tau $ and $\nu $", "As depicted in Eq. REF , the value of $\tau $ acts as a threshold that flags newly emerging objects, while $\nu $ controls the degree of attention paid to these new objects.", "Increasing $\tau $ too much may mislabel fast-moving objects as new ones and decrease their weights, while setting $\tau $ too small will miss some new objects.", "Therefore, the value of $\tau $ needs to be fine-tuned.", "As for $\nu $ , we set it larger than 1.0 so that the model pays less attention to newly appearing objects.", "We conduct a grid search over the two hyperparameters.", "The results in Tab. REF show that $\tau $ and $\nu $ achieve the best performance at 0.5 and 1.6 respectively.", "Tab. REF also supports the claim that $\nu $ should be larger than 1: when $\nu $ is smaller than 1, it damages model performance, since more attention should not be placed on theoretically unpredictable new objects."
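A minimal sketch of the TAL re-weighting from Eq. REF above is given below, assuming axis-aligned (N, 4) boxes as [x1, y1, x2, y2] tensors; torchvision's `box_iou` supplies the pairwise IoU matrix, and the default $\tau$ and $\nu$ follow the grid-searched values.

```python
import torch
from torchvision.ops import box_iou

def tal_weights(boxes_next, boxes_curr, reg_loss, tau=0.5, nu=1.6):
    """Per-object weights for the regression loss of objects in F_{t+1}."""
    if len(boxes_curr) == 0:
        m_iou = torch.zeros(len(boxes_next))
    else:
        m_iou = box_iou(boxes_next, boxes_curr).max(dim=1).values  # matching IoU per object
    omega = torch.where(m_iou >= tau,
                        1.0 / m_iou.clamp(min=1e-6),               # faster objects -> larger weight
                        torch.full_like(m_iou, 1.0 / nu))          # new objects -> reduced weight
    # normalize so the total regression loss keeps the same magnitude
    omega_hat = omega * reg_loss.sum() / (omega * reg_loss).sum()
    return omega_hat

# usage: weighted regression term of the total loss
# loss_total = (tal_weights(gt_next, gt_curr, reg_loss) * reg_loss).sum() + loss_cls + loss_obj
```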
], [ "Perceiving task for TAL", "Tab. REF compares different tasks for applying TAL.", "The comparison shows that re-weighting only the regression task achieves the best performance.", "Re-weighting the objectness prediction loss causes performance degradation, while re-weighting the classification counterpart gives no further gain.", "This indicates that using the relative speed of objects as a metric to concentrate more on the regression of faster objects promotes future perception.", "Figure: Results on different moving velocity settings.", "'D&T' represents DFP and TAL.", "Figure: Results on different image scales.", "'D&T' represents DFP and TAL.", "The number after 's' is the input scale (the full resolution is $1200 \times 1920$).", "Table: Comparison results of different forecasting manners.", "Figure: Visualization results of the baseline detector and the proposed method.", "The green boxes represent ground-truth boxes, while the red ones represent prediction results.", "Table: Performance comparison with state-of-the-art approaches on the Argoverse-HD dataset.", "Size means the shortest side of the input image, and the input image resolution is 600$\times $ 960 for our models.", "$\dagger $ represents using extra datasets (BDD100K , Cityscapes , and nuScenes ) for pre-training.", "$\ddagger $ means using extra datasets and TensorRT.", "Figure: Results of training and testing at different velocities.", "Subgraph (a) illustrates training with a single velocity and testing on different velocities.", "The stride of the input images is simulated according to the velocity during testing.", "Subgraphs (b)-(h) display the performance of training with various velocities and testing on a single velocity.", "Table: Comparison results of different velocity training strategies.", "'*' means the advanced TAL with velocity estimation using the actual adjacent frames." ], [ "Performance on different categories", "In detail, we compare the performance with respect to different categories.", "The results are displayed in Fig. REF .", "When directly using the base detector, different types of objects behave quite differently.", "Small or slender objects like person, traffic light, and stop sign show a significant performance drop, while large ones like bus and truck are less affected.", "This phenomenon is reasonable, as the shifts of the prediction boxes caused by the processing delay become larger as the object scale decreases.", "For larger objects, these shifts have little effect on the IoU.", "More detailed visualization results can be seen in Fig. REF .", "By embedding our proposed DFP and TAL, the performance gaps between different categories are significantly narrowed.", "It is worth noting that for some categories, like bicycle and motorcycle, the detection ability is close to the oracle performance."
], [ "Robustness at different speeds", "We expect our method not only to alleviate the issue caused by time delay at any speed, but also to reach the offline performance.", "To this end, we further verify the robustness of our model at different moving speeds of the driving vehicle, including the static case.", "To simulate static (0$\times $ speed) and faster-speed (2$\times $ , ..., 6$\times $ ) environments, we re-sample the video frames to build new datasets.", "For the 0$\times $ speed setting, we treat it as a special driving state and re-sample the frames into the triplet $(F_{t}, F_{t}, F_{t})$ .", "This means the previous and current frames are identical and the model should predict unchanged results.", "For the N$\times $ speed setting, we re-build the triplet data as $(F_{t-N}, F_{t}, F_{t+N}), N\in [2,6]$ .", "This corresponds to a faster moving speed of both the ground-truth objects and the driving vehicle.", "Results are listed in Fig. REF .", "For 0$\times $ speed, the predicted results are supposed to be the same as in the offline setting.", "However, if we only adopt the basic pipeline, we see a significant performance drop (-2.0 AP) compared to offline, which means the model fails to infer the static state.", "By adding the DFP module to the basic pipeline, we recover this loss and achieve results comparable to the offline performance.", "This reveals that DFP, especially the static flow, is key to extracting the right moving trend and assisting prediction.", "It is also worth noting that at 0$\times $ speed all the TAL weights equal one, so TAL has no influence.", "For N$\times $ speed, as the objects move faster, the gap between the offline and streaming perception settings widens further.", "Meanwhile, the improvements from our models, including the basic pipeline, DFP, and TAL, also grow.", "These robustness results further demonstrate the superiority of our method." ], [ "Robustness at different scales", "We further verify the performance of our method at different image scales.", "Results are shown in Fig. REF .", "The strong pipeline with the DFP and TAL modules consistently narrows the performance gap between the offline and streaming perception settings.", "This illustrates that our approach and pipeline work at any input scale, making them easy to embed in different devices." ], [ "Comparison with Kalman Filter based forecasting", "We follow the implementation of  [7] and report the results in Tab. REF .", "For ordinary sAP ($1\times $ ), our end-to-end method still outperforms this advanced baseline by 0.5 AP.", "Further, when we simulate and evaluate both with faster motion ($2\times $ speed), our end-to-end model is clearly more robust (33.3 sAP vs. 31.8 sAP).", "Besides, our end-to-end model adds lower extra latency (0.8 ms vs. 3.1 ms, averaged over 5 tests)." ], [ "Visualization results", "As shown in Fig. REF , we present visualization results comparing the base detector and our method.", "For the baseline detector, the predicted bounding boxes suffer from severe time lag.", "The faster the vehicles and pedestrians move, the larger the predictions shift.", "For small objects like traffic lights, the overlap between predictions and ground truth becomes small or even zero.", "In contrast, our method alleviates this mismatch, and the predicted boxes fit the moving objects accurately.", "This visually confirms the effectiveness of our method."
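A minimal sketch of the speed-simulation re-sampling described above is shown below: 0$\times$ copies the same frame, and N$\times$ pairs frames N steps apart with the annotation N steps ahead; the per-video `frames` and `annotations` lists are illustrative assumptions.

```python
def build_speed_triplets(frames, annotations, n):
    """Return (F_prev, F_curr, G_future) triplets simulating an n-times velocity."""
    if n == 0:
        # static driving state: no change between the two inputs or the target
        return [(frames[t], frames[t], annotations[t]) for t in range(len(frames))]
    return [(frames[t - n], frames[t], annotations[t + n])
            for t in range(n, len(frames) - n)]
```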
], [ "Actual analysis", "To further examine more realistic scenes with different driving velocities, we conduct extensive experiments to analyze the patterns that arise under different velocity training settings.", "Afterwards, we carry out comparison experiments to show the superiority of the mix-velocity training strategy combined with DFP and TAL.", "Fig. REF shows the interesting results of single-velocity training and multi-velocity testing.", "It reveals that the closer the test velocity is to the training velocity, the larger the testing performance gain, which is similar to the behavior across different training and testing image scales.", "The line chart shows that DFP and TAL stably narrow the performance drop, at the expense of a small performance decrease in the static setting.", "As shown in Tab. REF , we use VsAP to measure the overall performance across all velocity settings.", "The fixed single-velocity training framework needs to search for a good trade-off; for example, 2x training achieves a better performance (32.36 VsAP).", "In contrast to this time-consuming and ad hoc single-velocity training approach, we use the mix-velocity training manner, which achieves the best performance (32.04 VsAP) and gains an 8.24% improvement.", "We also compare mix-velocity training with the vanilla and the advanced TAL.", "The results show that the vanilla TAL achieves a decent effect but is lower than the single-velocity search manner, which may be caused by the imbalance between loss weights at different speeds.", "In comparison, the advanced TAL that uses the actually adjacent frames achieves higher gains." ], [ "Comparison with state-of-the-art", "We compare our method with other state-of-the-art detectors on the Argoverse-HD dataset.", "As shown in Fig. REF , real-time methods show clear advantages over non-real-time detectors.", "We also report the results of the 1st and 2nd place entries in the Streaming Perception Challenge.", "They involve extra datasets and acceleration tricks that are out of the scope of this work, while our method achieves competitive performance and even surpasses the accuracy of the 2nd place scheme without any tricks.", "By adopting extra datasets for pre-training, our method surpasses the accuracy of the 1st place scheme while only using 600 $\times $ 960 resolution.", "With larger-resolution inputs (1200 $\times $ 1920), our method further achieves better performance (42.3 sAP)." ], [ "Conclusion", "This paper focuses on the streaming perception task, which takes processing latency into account.", "Under this metric, we reveal the superiority of using a real-time detector with the ability to predict the future for online perception.", "We further build a real-time detector with the Dual-Flow Perception module and the Trend-Aware Loss, alleviating the time-lag problem in streaming perception.", "Extensive experiments show that our simple framework achieves superior performance against all state-of-the-art detectors.", "It also obtains robust results across different speed settings.", "Further, we explicitly consider the driving velocity and propose the mix-velocity training strategy with the advanced TAL to train our detector to adapt to various velocities.", "We hope that our simple and effective design will motivate future efforts in this practical and challenging perception task." ] ]
2207.10433
[ [ "The Neural Race Reduction: Dynamics of Abstraction in Gated Networks" ], [ "Abstract Our theoretical understanding of deep learning has not kept pace with its empirical success.", "While network architecture is known to be critical, we do not yet understand its effect on learned representations and network behavior, or how this architecture should reflect task structure.In this work, we begin to address this gap by introducing the Gated Deep Linear Network framework that schematizes how pathways of information flow impact learning dynamics within an architecture.", "Crucially, because of the gating, these networks can compute nonlinear functions of their input.", "We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning.", "Our analysis demonstrates that the learning dynamics in structured networks can be conceptualized as a neural race with an implicit bias towards shared representations, which then govern the model's ability to systematically generalize, multi-task, and transfer.", "We validate our key insights on naturalistic datasets and with relaxed assumptions.", "Taken together, our work gives rise to general hypotheses relating neural architecture to learning and provides a mathematical approach towards understanding the design of more complex architectures and the role of modularity and compositionality in solving real-world problems.", "The code and results are available at https://www.saxelab.org/gated-dln ." ], [ "Introduction", "While neural networks have led to numerous impressive breakthroughs [39], [93], [2], [10], [59], [81], our theoretical understanding of these models has not advanced at the same pace.", "Although we have gained some insight into general principles of network optimization [64], [17], [13], [4], [68], we have a limited understanding of how the specific choice of architecture—that is, the mesoscale pattern of connectivity between hidden layers (Fig.", "REF )—affects a network's behavior [101], [67], [20], [77], [91].", "For example, when training networks on the ImageNet dataset, wide networks perform slightly better on classes reflecting scenes, whereas deep networks are slightly more accurate on classes related to consumer goods [62].", "Understanding the reasons behind these behaviors may lead to more systematic techniques for designing neural networks.", "Figure: A multi-modal network composed of simple modules shared across modalities and tasks.", "How do shared modules and pathways impact representation learning and generalization?Here, we address this gap by introducing and analyzing the Gated Deep Linear Network (GDLN) framework, which illuminates the interplay between data statistics, architecture, and learning dynamics.", "We ground the framework in the Deep Linear Network (DLN) setting as it is amenable to mathematical analysis, and previous works [76], [51], [5], [77] have observed that DLNs exhibit several nonlinear phenomena that are observed in deep neural networks.", "Because of the gating mechanism, however, GDLNs can compute nonlinear functions of their input, making them more expressive than standard deep linear networks.", "Our main contributions are: (i) We introduce the GDLN framework (sec:gdlnframework), which schematizes how pathways of information flow impact learning dynamics within an architecture.", "(ii) We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning (sec:gradientflowdynamics).", "(iii) Our analyses reveal the dynamics of learning in 
structured networks can be conceptualized as a neural race with an implicit bias towards shared representations (sec:neuralracered).", "(iv) We validate our findings on naturalistic datasets, with some relaxed assumptions (sec:experiment)." ], [ "Gated Deep Linear Network Framework", "A fundamental principle of neural network design is that powerful networks can be composed out of simple modules [73], [74], and a key intuition is that a network's compositional architecture should resonate in some way with the task to be performed.", "For instance, a multimodal network might process each modality independently before merging these streams for further processing [61], [30], and a multi-tasking NLP model might process its input through a shared encoder before it splits into task-specific pathways [21], [57], [87].", "Frequently, a network's compositional structure can be conceptualized as an “architecture graph” $\Gamma $ , decorated by network modules, additional interactions, and learning mechanisms.", "Here we introduce a class of networks, Gated Deep Linear Networks (GDLNs), depicted in Fig. REF a, for which we can analytically study the effect of the architecture graph on learning and generalization.", "A GDLN is defined as follows.", "Let $\Gamma $ denote a directed graph with nodes $\mathbf {V}$ and edges $\mathbf {E}$ , with the structure of $\Gamma $ encoded by functions $s(\cdot ), t(\cdot ): \mathbf {E}\rightarrow \mathbf {V}$ mapping an edge to its source and target node, respectively.", "For each $v\in \mathbf {V}$ , let $|v|\in \mathbb {N}$ denote the number of neurons assigned to that node, and let $h_v \in \mathbb {R}^{|v|}$ denote the neural activations of the corresponding network layer.", "For each edge $e \in \mathbf {E}$ , let $W_e$ denote a $|t(e)| \times |s(e)|$ weight matrix assigned to $e$ .", "An input node of $\Gamma $ is a node with only outgoing edges, and an output node is a node with only incoming edges.", "Let $\textrm {In}(\Gamma ), \textrm {Out}(\Gamma ) \subset \mathbf {V}$ denote the sets of input nodes and output nodes of $\Gamma $ , respectively.", "The GDLN associated with $\Gamma $ computes a function as follows: An input example specifies values $x_v\in \mathbb {R}^{|v|}$ for all input nodes $v\in \textrm {In}(\Gamma )$ , and the input nodes are fixed to their values $h_v=x_v$ for $v\in \textrm {In}(\Gamma )$ .", "Then, activation propagates to subsequent layers according to $h_v=g_v\sum _{q \in \mathbf {E}:t(q)=v} g_qW_qh_{s(q)}$ where $g_v$ is the node gate and $g_q$ is the edge gate.", "That is, activity propagates through the network as in standard neural networks, but modulated by gating variables that can act at the node and edge level.", "In essence, these gating variables enable nonlinear computation from input to output, and can be interpreted in several ways: The node-level gating can be viewed as an approximate reduction of ReLU dynamics, as a ReLU neuron's activity can be written as $\max (0,wx)=\textrm {step}(wx)wx$ .", "The gating variables can also be viewed as context-dependent control signals that modulate processing in the network.", "We discuss the extensive connections between the Gated Deep Linear Network and other approaches in Appendix .", "In order to keep the analysis mathematically tractable, we assume that the gating variables are simply specified directly for each input that will be processed, and we consider them to be part of the dataset.", "Figure: Formalism and notation.", "(a) The Gated Deep Linear Network
applies gating variables to nodes and edges in an otherwise linear network.", "(b) The gradient for an edge (here using the orange edge in Panel (a) as an example) can be written in terms of paths through that edge (colored lines).", "Each path is broken into the component antecedent s ¯(p,e)\\bar{s}(p,e) and subsequent t ¯(p,e)\\bar{t}(p,e) to the edge." ], [ "Gradient Flow Dynamics and Reduction", "Having described the design of the network, we now exploit its special properties to understand learning dynamics and their relationship to network structure.", "In particular, we consider training all weights in the network to minimize the $L_2$ loss averaged over a dataset, $\\mathcal {L}(\\lbrace W\\rbrace )=\\left\\langle \\frac{1}{2} \\sum _{v\\in \\textrm {Out}(\\Gamma ) } ||y_v - h_v ||_2^2 \\right\\rangle _{x,y,g}$ where $y_v\\in ^{|v|}$ for $v\\in \\textrm {Out}(\\Gamma )$ are the target outputs for the output layers in the network, and $\\langle \\cdot \\rangle _{x,y,g}$ denotes the average over the training dataset and gating structures.", "The weights in the network can be updated using gradient descent.", "A key virtue of the GDLN formalism is that the gradient flow equations can be compactly expressed in terms of the paths through the network.", "We first lay out our notation, as illustrated in Fig.", "REF b for an example network.", "A path $p$ is a sequence of edges that joins a sequence of nodes in the network graph $\\Gamma $ .", "Let $\\mathcal {P}(e)$ be the set of all paths from any input node to any output node that pass through edge $e$ .", "Let $\\mathcal {T}(v)$ be the set of all paths terminating at node $v$ .", "We denote the source and the target node of the path $p$ as $s(p)$ and $t(p)$ , respectively.", "We denote the component of path $p$ that is subsequent to edge $e$ (i.e., the path whose source node is the target of $e$ , and that otherwise follows $p$ ) as $\\bar{t}(p,e)$ (for the `target' path of $e$ ).", "Similarly, we denote the component of path $p$ that precedes edge $e$ (i.e., the path whose target node is the source of $e$ , and that otherwise follows $p$ ) as $\\bar{s}(p,e)$ (for the `source' path of $e$ ).", "Overloading the notation, we will write $W_p$ where $p$ is a path to indicate the ordered product of all weights along the path $p$ , with the target of $p$ on the left and the source of $p$ on the right.", "Similarly, we write $g_p$ where $p$ is a path to denote the product of the (node and edge) gating variables along the path.", "With this notation, the gradient flow equations can be shown to be (full derivation in Appendix ), $\\tau \\frac{d}{dt}W_e &= & -\\frac{\\partial \\mathcal {L}(\\lbrace W\\rbrace )}{\\partial W_e} \\quad \\forall e \\in E, \\\\&= & \\sum _{p\\in \\mathcal {P}(e)} W_{\\bar{t}(p,e)}^T\\mathcal {E}(p)W^T_{\\bar{s}(p,e)} $ where the error term for path $p$ is $\\mathcal {E}(p) = \\Sigma ^{yx}(p) - \\sum _{j \\in \\mathcal {T}(t(p))} W_j\\Sigma ^{x}(j,p).", "$ Here the dataset statistics which drive learning are collected in the correlation matrices $\\Sigma ^{yx}(p)&=& \\left\\langle g_p y_{t(p)} x_{s(p)}^T\\right\\rangle _{y,x,g} \\\\\\Sigma ^x(j,p)&=&\\left\\langle g_jx_{s(j)}x_{s(p)}^Tg_p\\right\\rangle _{y,x,g} $ where $j$ and $p$ index two paths.", "Hence if there are $N$ paths through the graph from input nodes to output nodes, there are potentially $N$ distinct input-output correlation matrices and $N^2$ distinct input correlation matrices that are relevant to the dynamics.", "Remarkably, no other statistics of 
the dataset are considered by the gradient descent dynamics.", "Notably, these correlation matrices depend not just on the dataset statistics ($x$ and $y$ ), but also on the gating structure $g$ .", "The possible gating structures are limited by the architecture.", "In this way, the architecture of the network influences its learning dynamics.", "In essence, the core simplification enabled by the GDLN formalism is that the gating variables $g$ appear only in these data correlation matrices.", "They do not appear elsewhere in Eqns.", "(REF )-(REF ), which otherwise resemble the gradient flow for a deep linear network [76], [77].", "The effect of the nonlinear gating can thus be viewed as constructing pathway-dependent dataset statistics that are fed to deep linear subnetworks (pathways).", "Figure: XoR solution dynamics.", "(a) The XoR task with positive (red) and negative (blue) examples.", "Input-to-hidden weights from ReLU simulations (magenta) reveal four functional cell types.", "(b) GDLN with four paths, each active on one example.", "(c) Simulations of ReLU dynamics from small weights (red, 10 repetitions) closely track analytical solutions in the GDLN.", "Parameters: N h =128,τ=5/2,σ 0 =.0002.N_h=128,\\tau =5/2,\\sigma _0=.0002.As a simple example of the power of this framework relative to deep linear networks, consider the XoR task (Fig.", "REF a), a canonical nonlinear task that cannot be solved by linear networks.", "By choosing the gating structures to activate a different pathway on each example (Fig.", "REF b), the gated deep linear network can solve this task (Fig.", "REF c blue).", "Crucially, its dynamics (analytically obtained in sec:nonlinearcontextualclassification based on the reduction in the following sections) closely approximates the dynamics of a standard ReLU network trained with backprop (Fig.", "REF c red).", "This result demonstrates that the gated networks are more expressive than their non-gated counterpart, and that gated networks can provide insight into ReLU dynamics in certain settings.", "We note that so far, our analysis does not provide a mechanism to select the gating structure.", "We will return to this point in Section REF , which provides a perspective on the gating structures likely to emerge in large networks." 
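To make the gating mechanism concrete, below is a minimal numpy sketch (our illustration, not the authors' code) of a one-hidden-layer gated linear network trained on XoR, with the gate activating a different hidden group on each example as in the four-path scheme of Fig. REF b; the group size, learning rate, and initialization scale are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)   # inputs
Y = np.array([[-1], [1], [1], [-1]], dtype=float)                 # XoR targets
n_h, groups = 32, 4                                               # hidden units per group
W1 = 1e-2 * rng.standard_normal((groups * n_h, 2))
W2 = 1e-2 * rng.standard_normal((1, groups * n_h))

# gate mask: example i activates only hidden group i
G = np.zeros((4, groups * n_h))
for i in range(groups):
    G[i, i * n_h:(i + 1) * n_h] = 1.0

lr = 0.1
for step in range(2000):
    H = G * (X @ W1.T)               # gated hidden activity, one group per example
    err = Y - H @ W2.T               # output error
    W2 += lr * err.T @ H             # gradients only flow through active paths
    W1 += lr * (G * (err @ W2)).T @ X

print(np.round(H @ W2.T, 3).ravel())  # approaches the XoR targets [-1, 1, 1, -1]
```

Because each gated pathway only ever sees "its" example, the network effectively fits four independent linear sub-networks, which is why a collection of linear pathways can solve this nonlinear task.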
], [ "Exact reduction from decoupled initial conditions", "Our fundamental goal is to understand the dynamics of learning as a function of architecture and dataset statistics.", "In this section, we exploit the simplified form of the gradient flow equations to obtain an exact reduction of the dynamics.", "Our reduction builds on prior work in deep linear networks, and intuitively, shows that the dynamics of gating networks can be expressed succinctly in terms of effective independent 1D networks that govern the singular value dynamics of each weight matrix in the network.", "The reduced dynamics can be substantially more compact, as for instance, a weight matrix of size $N\\times M$ has $NM$ entries but only $\\textrm {min}(M,N)$ singular values.", "Figure: Pathway network solution dynamics.", "(a) The network contains MMdifferent input domains (each consisting of a bank of neurons), MM differentoutput domains, and two hidden layers.", "The task is to learn a mapping fromeach input domain to each output domain.", "The gating structure gateson one input and one output pathway.", "The hidden pathway is alwayson.", "(b) Gated network formalism.", "There are M 2 M^2 pathways through thenetwork from input to output.", "All M 2 M^2 flow through the hidden weight matrix, whileonly MM flow through each input or output weight matrix.", "This fact causes the hiddenlayer to learn faster.", "(c) Small example dataset with hierarchical structure.The task of the network is to produce the 7-dimensional output vector for eachof four items.", "Inputs are random orthogonal vectors for each item.", "(d) Eachinput domain is trained with KK output domains (here K=4K=4), such that someinput-output routes are never seen in training.", "(e) Training loss dynamics forsimulated networks from small random weights (red, 10 repetitions), simulatednetworks from decoupled initial conditions (green), and theoretical predictionfrom Eqn.", "(blue).", "The theory matches the decoupled simulationsexactly, and is a good approximation for small random weights.", "(f) The singular values of the hidden weight matrix (blue) are larger than those in input or output matrices by a factor M\\sqrt{M}.", "Theoretical predictions match simulations well, particularly for larger singular values.", "(g) Representational similarity (or kernel) matrix at the first hidden layer.", "Inputs from different domains are mapped to similar internal representations, revealing a shared representation even for input domains that are never trained with a common output.", "(h) Predicted output at the end of training.", "The network generalizes perfectly to input-output routes that were never seen during training.", "Parameters: M=7,K=4,λ=.02,σ 0 =.2,N h =64.M=7,K=4,\\lambda =.02,\\sigma _0=.2,N_h=64.To accomplish this, we introduce a change of variables based on the singular value decomposition of the relevant dataset statistics.", "Suppose that the dataset correlation matrices are mutually diagonalizable, such that their singular value decompositions have the form $\\Sigma ^{yx}(p)&=& U_{t(p)}S(p) V_{s(p)}^T \\\\\\Sigma ^x(j,p)&=&V_{s(j)}D(j,p) V_{s(p)}^T$ where the set of $U$ and $V$ matrices are orthogonal, and the set of $S$ and $D$ matrices are diagonal.", "That is, there is a distinct orthogonal matrix $U_l$ for each output layer, a distinct orthogonal matrix $V_l$ for each input layer, and diagonal matrices $S(p),D(p)$ for each path through the network.", "Then, following analyses in deep linear networks [76], we consider the following change of 
variables.", "We rewrite the weight matrix on each edge as $W_e(t) = R_{t(e)}B_e(t)R_{s(e)}^T \\quad \\forall e, $ where the matrices $B_e(t)$ are the new dynamical variables, and the matrix $R_v$ associated to each node $v$ in the graph satisfies $R_v^TR_v=I$ .", "Further, for output nodes $v$ , we require $R_v=U_v$ , the output singular vectors in the diagonalizability assumption.", "Similarly, for input nodes, we require $R_v=V_v$ .", "Inserting (REF )-(REF ) into (REF )-(REF ) shows that the dynamics for $B_e$ decouple: if all $B_e(0)$ are initially diagonal, they will remain so under the dynamics (full derivation in Appendix ).", "For this decoupled initialization, the dynamics are $\\tau \\frac{d}{dt}B_e = \\sum _{p\\in \\mathcal {P}(e)} B_{p \\setminus e}\\left[S(p) - \\sum _{j \\in \\mathcal {T}(t(p))} B_jD(j,p) \\right] $ where $B_{p \\setminus e}=B_{\\bar{t}(p,e)}B_{\\bar{s}(p,e)}$ is the product of all $B$ matrices on path $p$ after removing edge $e$ (see Appendix ).", "In essence, this reduction removes competitive interactions between singular value modes, such that the dynamics of the overall network can be described by summing together several “1D networks,” one for each singular value.", "Intuitively, this reduction shows that learning dynamics depend on several factors.", "Input-output correlations Other things being equal, a pathway learning from a dataset with larger input-output singular values will learn faster.", "This fact is well known from prior work on deep linear networks [76].", "Pathway counting Other things being equal, a weight matrix corresponding to an edge that participates in many paths (such that the sum contains many terms) will learn faster.", "This fact is less obvious, as it becomes relevant only if one moves beyond simple feed-forward chains to study complex architectures and gating.", "We now turn to examples that verify and illustrate the rich behavior and consequences of these dynamics." 
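For intuition about the reduced dynamics above, the following minimal sketch (with illustrative singular values, input correlation, and step size, not taken from the paper) integrates the reduction for the simplest case of a single two-edge chain with one path, where it recovers the familiar sigmoidal singular-value trajectories of deep linear networks.

```python
# Single chain of two edges, one path: per singular value,
# tau dB1/dt = B2 (S - B2*B1*D) and tau dB2/dt = B1 (S - B2*B1*D).
import numpy as np

S = np.array([3.0, 1.5, 0.5])     # dataset singular values, one "1D network" each
D = 1.0                           # whitened input correlation (assumption)
tau, dt, steps = 1.0, 0.01, 4000
B1 = np.full_like(S, 1e-3)        # small, balanced, decoupled initialization
B2 = B1.copy()

traj = []
for _ in range(steps):
    err = S - B2 * B1 * D
    B1, B2 = B1 + dt / tau * B2 * err, B2 + dt / tau * B1 * err  # simultaneous Euler update
    traj.append(B2 * B1)

traj = np.array(traj)
# each column rises sigmoidally to S_i, with larger singular values learned first
print(traj[::1000].round(3))
```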
], [ "Applications and consequences", "To fix a specific scenario with rich opportunities for generalization, we consider a “routing” setting, as depicted in Fig.", "REF a.", "In this setting, a network receives inputs from $M$ different input domains and produces outputs across $M$ different output domains.", "The goal is to learn to map inputs from a specific input domain to a specific output domain, with no negative-interference from other input-output domain pairs.", "There are thus $M^2$ possible tasks which can be performed, each corresponding to mapping one of the $M$ input domains to one of the $M$ output domains.", "We assume that the target input-output mapping from the active input domain to the active output domain is the same for all pathways, and defined by a dataset with input correlations $\\langle xx^T \\rangle = VDV^T$ and input-output correlations $\\langle yx^T\\rangle = USV^T$ .", "For the simulations in this section, we take the dataset to contain four examples, and the target output to be a 7-dimensional feature vector with hierarchical structure (Fig.", "REF c), but note that the theory is more general.", "To investigate the possibility of structured generalization, we consider a setting where only a subset of input-output pathways are trained.", "That is, each input domain is trained with only $K\\le M$ output domains, as depicted in Fig.", "REF d, such that some input-output pathways are never observed during training.", "We consider solving this task with a two-hidden layer gated deep linear network depicted in Fig.", "REF a.", "We emphasize that this task is fundamentally nonlinear, because inputs on irrelevant input domains must be ignored.", "We take the gating structure to gate off all first layer pathways except the relevant input domain, and to gate off all third layer pathways but the one to the relevant output domain.", "As shown in Fig.", "REF b, this scheme results in $M^2$ pathways through this network that must be considered in the reduction.", "The resulting pathway correlations are simply given by the original dataset, scaled be the probability that each path is active (see Appendix ) $\\Sigma ^{yx}(p)&=& \\frac{1}{KM}USV^T\\\\\\Sigma ^x(j,p)&=&{\\left\\lbrace \\begin{array}{ll}\\frac{1}{KM}VDV^T \\quad \\textrm {if}~j=p \\\\0 \\quad \\textrm {otherwise}\\end{array}\\right.", "}.$ Crucially, we have a simple “pathway counting” logic behind the reduction: the first and third layer weights are active in $K$ paths (all tasks originating from a given input domain or terminating at a given output domain, respectively), while second layer weights are active in $KM$ trained paths.", "This fact causes the second layer weights to learn more rapidly.", "Assuming that weights start out roughly balanced in each first layer and third layer weigh matrix (a reasonable assumption when starting from small random weights), this yields the reduced dynamics (Appendix ) $\\tau \\frac{d}{dt}B_1 &=& \\frac{1}{M} B_2B_1\\left[S-B_2B_1^2D\\right] \\\\\\tau \\frac{d}{dt}B_2 &=& B_1^2\\left[S-B_2B_1^2D\\right]$ where $B_1$ describes the input and output pathway weights singular values, and $B_2$ describes the hidden layer weight singular values.", "We note that the quantity $MB_1^2-B_2^2$ is conserved under the dynamics.", "Defining the constant $C=MB_1(0)^2-B_2(0)^2$ , we can therefore write the dynamics as $\\tau \\frac{d}{dt}B_2 = \\frac{1}{M}(B_2^2+C)\\left[S-\\frac{1}{M}B_2(B_2^2+C)D\\right].", "$ Remarkably, this equation reveals that the dynamics of this potentially 
large, gated, multilayer network with arbitrary numbers of hidden neurons can be reduced to a single scalar for each singular value in the dataset.", "Each diagonal element of this equation provides a separable differential equation that may be integrated to give an exact formal solution.", "Figure REF e compares the training error dynamics predicted by Eqn.", "REF to full simulations starting from small random weights (i.e.", "scaling Xavier initialization weights by 0.2), verifying that our reduction is a good description of dynamics starting from small random weights.", "Furthermore, as shown in Appendix , the hidden layer weight singular values change by a factor of $\sqrt{M}$ more than the input or output weights, as verified in Fig. REF f. Figure: Pathway race dynamics.", "(a) The same routing task can be solved using a variety of gating schemes that differ in their use of shared representations.", "Every input-output combination can be given a dedicated pathway such that $P=1$ tasks flow through each (left), groups of two input domains and two output domains can share a pathway such that $P=4$ tasks flow through each (middle), or all $P=M^2$ tasks can run through a shared representation (right).", "(b) Singular value dynamics as a function of the number of pathways $P$ flowing through the hidden layers.", "Networks that share more structure learn faster.", "Consequently, in a single network where subparts share structure to different degrees, the maximally shared dynamics dominate the race between pathways.", "(c) Error on trained (blue) and untrained (red) pathways as a function of the fraction $K/M$ of output domains trained with each input domain.", "When few outputs are trained per input domain, the race dynamics do not strongly favor shared structure and so error on untrained domains is large.", "When $\sim 40\%$ of output domains are trained with each input, the shared structure solutions are sufficiently faster to reliably dominate the race and yield generalization to untrained domains.", "Parameters: $M=10,\lambda =.05/K,\sigma _0=.2,N_h=64$." ], [ "Shared representations and generalization", "With this description of the training dynamics of the network, we can then ask what representations emerge in the network over training.", "One way of interrogating the nature of representations in the network is to compute the representational similarity between different input examples from the same input domain, and across different input domains.", "Specifically, we compute the dot product between the neural activity in the first hidden layer in response to different inputs.", "As shown in Fig.", "REF g, the pathway network learns a shared representation, in which each individual example maps to the same representation regardless of what input domain it arrives on.", "That is, the gating and learning dynamics enable the network to learn a representation that is invariant to input domain, and which is abstract in the sense that the representation contains no information about what input domain produced it.", "This shared representation supports zero-shot generalization to untrained input-output pathways, as shown in Fig.", "REF h.
The intuition behind obtaining zero-shot generalization is as follows: Say that we are evaluating the network on a new input-output pair of domains.", "As long as the network has been trained on examples from the current input domain (in conjunction with any output domain), the network will map it to the shared representation.", "Similarly, as long as the network has been trained on examples from the current output domain, it will be able to map this shared representation to the output.", "In this way, training on a subset of $M^2$ tasks is enough to obtain strong generalization to all $M^2$ tasks.", "In essence, this solution accomplishes a factorization of the problem into two interacting but distinct components: the gating variables represent what input domain links to what output domain, providing information about “where” signals should go; while the neural activity represents “what” task-relevant input was presented, regardless of where it came from or where it should be routed to.", "This factorization can permit generalization to untrained pathways provided the gating structure is configured appropriately.", "Figure: Neural race vs. NTK regime: Initialization, shared representation formation, and zero-shot generalization.", "(a) Training loss for pathway networks with different initialization scales σ 0 \\sigma _0.", "Small σ 0 \\sigma _0 yields pathway race dynamics with stage-like drops through training.", "Large σ 0 \\sigma _0 yields NTK-like dynamics with rapid, exponential learning curves.", "(b) Error for trained (blue) and untrained (red) input-output domain combinations as a function of initialization scale.", "While performance on trained domains is excellent for all scales, zero-shot generalization only emerges in the neural race regime.", "(c) Representational similarity between inputs presented to different domains, for small (left column), medium (middle column) and large (right column) initialization scales.", "At small initialization scales, internal representations in the first hidden layer are similar even across domains, indicating one common shared representation.", "Large initialization scales place the network in the NTK regime where random initial connectivity persists throughout learning, yielding distinct high-dimensional random representations for each domain.", "(d) Network output for all input-output combinations for three initialization scales (labeled in panel c).", "Because networks in the NTK regime do not learn a shared representation for different input domains, they do not generalize to untrained pathways." 
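As a concrete illustration of the representational similarity analysis described in the previous subsection, the following toy sketch (synthetic data; not the paper's analysis code) contrasts a fully shared first-layer map with one independent random map per domain: with sharing, the cross-domain blocks of the similarity matrix are identical to the within-domain block.

```python
import numpy as np

# Toy illustration of the representational similarity diagnostic: compare hidden
# activity (via dot products) for the same items presented on different domains.
# A shared map reuses one W for every domain; an unshared network uses a
# separate random map per domain.  All sizes and values are synthetic.
rng = np.random.default_rng(0)
n_items, n_dom, d_in, d_hid = 4, 3, 7, 16
items = rng.standard_normal((n_items, d_in))        # the "what" content

W_shared = rng.standard_normal((d_hid, d_in))
W_per_dom = rng.standard_normal((n_dom, d_hid, d_in))

def gram(hidden):                                    # similarity matrix of hidden activity
    h = hidden.reshape(-1, d_hid)
    return h @ h.T

h_shared = np.stack([items @ W_shared.T for _ in range(n_dom)])
h_unshared = np.stack([items @ W_per_dom[m].T for m in range(n_dom)])

G_s, G_u = gram(h_shared), gram(h_unshared)
# With sharing, the same item gives identical activity on every domain, so the
# cross-domain block matches the within-domain block exactly.
print("shared   : max |within - cross| block difference:",
      np.abs(G_s[:n_items, :n_items] - G_s[:n_items, n_items:2*n_items]).max())
print("unshared : same comparison:",
      np.abs(G_u[:n_items, :n_items] - G_u[:n_items, n_items:2*n_items]).max())
```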
], [ "Neural race dynamics: Implicit bias toward shared representations", "The reductions so far have assumed that the gating structure is specified a priori, and furthermore, that different domains connect to the same singular value modes in the hidden layer.", "That is, the gating structure provides the opportunity for learning a shared representation, but this is not obligatory: different parts of the hidden representation could learn distinct pathways, despite all being gated on.", "Remarkably, the dynamics from small random weights track the trajectory predicted for maximally shared representation, suggesting that the full solution dynamics rapidly converge to the submanifold of decoupled weights.", "Why are shared representations favored under the dynamics?", "To investigate this, we note that the same task and architecture typically permit several gating schemes and singular value mode connectivity patterns that each would obtain zero training error.", "As shown in Fig.", "REF a (left), for instance, the routing task could be solved with an alternative gating structure in which each input-output route receives a dedicated pathway that is gated on only when the task is to connect that specific input-output route.", "This gating scheme would still obtain zero training error, but does not yield any representation sharing.", "Other partial sharing schemes are possible; for instance, representations could be shared across groups of two input and two output domains (Fig.", "REF a, middle).", "What is the impact of these choices on learning dynamics and generalization?", "Taking the case where all routes are trained ($K=M$ ) for simplicity, with no sharing, each pathway participates in just one of the $M^2$ total trained pathways, compared to the fully shared solution where the input and output layers participate in $M$ pathways and the hidden layer participates in $M^2$ .", "From this, we can see that greater sharing leads to faster learning.", "In a combined network that produces its output using both shared and non-shared representations, the dynamics of each pathway will race each other to solve the task; and hence the most-shared structure will dominate.", "To see this quantitatively, we repeat a similar derivation to the preceding section for networks with varying degrees of pathway overlap.", "In particular, we parameterize the degree of pathway overlap with the parameter $P$ that counts the number of pathways flowing through a given hidden layer.", "The resulting reduction is $\\tau \\frac{d}{dt}B_1 &=& \\frac{\\sqrt{P}}{M^2} B_2B_1\\left[S-B_2B_1^2D\\right] \\\\\\tau \\frac{d}{dt}B_2 &=& \\frac{P}{M^2} B_1^2\\left[S-B_2B_1^2D\\right],$ which shows that the learning rate in all layers increases as $P$ increases.", "Solution dynamics for a range of degrees of sharing $P$ are plotted in Fig.", "REF b, which show that greater degrees of sharing reliably leads to faster singular value dynamics.", "Hence, dynamics in GDLNs take the form of a pathway race: when many gating schemes coexist in the same network, the ones that share the most structure–and hence learn the fastest–will come to dominate the solution.", "Therefore gradient flow dynamics in complex network architectures has an implicit bias toward extracting shared representations where possible.", "The strength of this bias increases as each input domain is trained with more output domains.", "As shown in Fig.", "REF c, shared representations begin to dominate reliably when roughly 40% of input-output routes are trained, 
enabling generalization to unseen input-output routes." ], [ "Impact of initialization", "The training and generalization dynamics of deep networks are known to depend on the weight initialization.", "Here we show that initial weight variance exerts a pronounced effect on the emergence of shared representations, and hence generalization abilities.", "As observed in a number of theoretical and empirical works, neural networks can operate in two different initialization regimes [20], [13], [26].", "Sufficiently wide networks initialized with large variance initializations enter the Neural Tangent Kernel regime, where training dynamics follow a simple linear dynamical system and error trajectories exhibit exponential approach to their asymptote [43], [53], [8].", "Intuitively, in this regime, the initial strong random connectivity in the network provides sufficiently rich features to learn the task without substantially changing internal representations.", "In this setting, deep networks behave like kernel machines with a fixed kernel (the neural tangent kernel).", "By contrast, networks initialized with sufficiently small variance initializations learn rich task-specific representations, and their dynamics as we have seen can be more complex [58], [70], [84], [77].", "To show the effect of this transition in our setting, we train pathway networks starting from different random matrices with singular value $\\sigma _0$ .", "For a range of initialization scales, all networks converge to zero training error (Fig.", "REF a).", "As expected, large initialization scales lead to NTK-like exponential dynamics, while small initialization scales lead to progressive stage-like drops in the error consistent with prior analyses of deep linear networks in the rich regime.", "Critically, initialization has a dramatic impact on generalization (Fig.", "REF b), and only small initializations are capable of zero-shot generalization to untrained routes.", "To understand why, we visualize the representational similarity structure for several networks in Fig.", "REF c, which shows a transition from shared to independent representations as networks move from the rich to the lazy regime.", "Finally, Fig.", "REF d shows the breakdown in generalization in the NTK regime.", "Hence our neural race reduction can describe learning in the rich feature learning regime, with non-trivial generalization behavior.", "Figure: Experimental results.", "(a) Each input domain receives inputs that have been subjected to one of MM input transformations.", "The target output for each output domain is also transformed by one of MM output transformations.", "Here the visualization uses rotations in the input and permutations in the output.", "Only a subset of input-output transformation pairs are seen during training.", "(b) Error for trained (blue) and untrained (orange) input-output domain pairs as a function of the percentage of trained pathways (K/MK/M) on the CIFAR dataset with M 2 =1600M^2=1600 total tasks.", "(c) Error on MNIST with M 2 =10 4 M^2=10^4 total tasks.", "Training accuracy is always high while zero-shot transfer to untrained pathways becomes as good as the training performance when ≈\\approx 40% of pathways are trained.", "(d) Error as a function of initialization scale.", "While performance on trained domains is good for all scales, zero-shot generalization only emerges at small inits." 
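Before turning to experiments, the pathway race can be made quantitative with the following minimal sketch (illustrative values, not the authors' code), which integrates the $P$-dependent reduction quoted in the previous subsection and reports how quickly the end-to-end map is learned for different degrees of sharing, together with the steady-state ratio of hidden to input/output singular values.

```python
import numpy as np

# Minimal sketch of the pathway race: integrate the reduction
#   tau dB1/dt = (sqrt(P)/M^2) B2 B1 [S - B2 B1^2 D],
#   tau dB2/dt = (P/M^2)      B1^2   [S - B2 B1^2 D],
# for several degrees of sharing P and compare how quickly the effective map
# B2 B1^2 reaches half of its target S/D.  All numbers are illustrative.
M, tau, S, D = 10, 1.0, 2.0, 1.0
dt, steps = 1e-2, 100_000

for P in (1, 4, M**2):
    b1 = b2 = 0.1
    t_half = None
    for n in range(steps):
        err = S - b2 * b1**2 * D
        b1, b2 = (b1 + dt / tau * np.sqrt(P) / M**2 * b2 * b1 * err,
                  b2 + dt / tau * P / M**2 * b1**2 * err)
        if t_half is None and b2 * b1**2 >= 0.5 * S / D:
            t_half = n * dt
    print(f"P={P:4d}: half-learning time ~ {t_half:7.1f},  "
          f"B2/B1 -> {b2 / b1:5.2f}  (P^(1/4) = {P**0.25:.2f})")
```

Greater sharing (larger $P$) gives a shorter half-learning time, and the ratio of hidden to input/output singular values approaches $P^{1/4}$, consistent with the appendix derivation.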
], [ "Experiments", "So far, we have described mathematical principles relating network architecture to learning dynamics and the nature of learned representations in GDLNs.", "We demonstrated how the network architecture affects generalization in pathway networks when applied to a simple toy dataset.", "In this section, we qualitatively validate our key findings on naturalistic datasets.", "Specifically, we test if GDLNs can exhibit strong zero-shot generalization performance on untrained input-output domain pairs (sec:sharedrepresentationsandgeneralization) and if the link between initialization scale and zero-shot generalization (sec:initialiation) holds when training on naturalistic datasets.", "Several datasets and benchmarks have been proposed to evaluate systematic generalization in neural networks [45], [12], [82], [50], [72], [83].", "However, these naturalistic datasets generally require the network to learn multiple capacities (like spatial reasoning, logical induction, etc.)", "and come with a fixed (and often unclear) extent of training in different domains.", "To develop a setting that remains close to the theory, we create new datasets by composing transformations on popular vision datasets.", "Having fine-grained control over the dataset generation mechanism enables us to understand the effect of parameters like the number of input/output domains.", "Briefly, as depicted in fig:resultsonnaturalimagesa, starting from a base dataset with $n$ inputs and outputs $\\lbrace (x^\\mu ,y^\\mu )\\rbrace ,\\mu =1,\\cdots ,n$ , we generate new tasks by applying one of $M$ input transformations $f^{\\textrm {in}}_i,i=1,\\cdots ,M$ and one of $M$ output transformations $f^{\\textrm {out}}_j,j=1,\\cdots ,M$ (full details deferred to Appendix due to space constraints).", "The task from input domain $i$ to output domain $j$ thus has samples $\\lbrace (f^{\\textrm {in}}_i(x^\\mu ),f^{\\textrm {out}}_j(y^\\mu ))\\rbrace $ .", "We use the MNIST  [23] and CIFAR-10 datasets [48] as base datasets, and rotations and permutations as transformations.", "Relative to our previous experiments, these datasets add real data correlations and distinct transformations on each domain that make finding a shared representation challenging." ], [ "Results", "In fig:resultsonnaturalimages, we evaluate the zero-shot generalization performance of gated deep linear networks on  untrained pathways and study the effect of initialization on their performance.", "Full model and training details are given in Appendix .", "Fig.", "REF (b,c) shows mean accuracy, over trained and untrained pathways, as a function of the fraction of datasets that the model used for training, for CIFAR ($M=40$ ) and MNIST ($M=100$ ) respectively.", "Training accuracy is always high while the zero-shot transfer to untrained pathways becomes near-perfect when $\\approx $ 40% of pathways are trained.", "In Fig.", "REF d, we report the error for trained (blue) and untrained (orange) input-output domain combinations as a function of initialization scale (gain ratio).", "While performance on trained domains is good for all scales, zero-shot generalization only emerges for smaller scales, as in the neural race regime.", "These observations validate our findings (from sec:applications) on naturalistic datasets." 
], [ "Related Work", "Our work is closely related to several areas in machine learning: Deep Linear Networks, the study of dynamics of learning in neural networks, and modular neural networks.", "Deep Linear Networks: [14], [29], [76], [5], [51], [77] showed that deep linear networks exhibit several nonlinear phenomena that are observed in deep neural networks and proposed studying the dynamics of deep linear networks as a surrogate for understanding the dynamics in the deep neural networks.", "[14] described the loss landscape, while [76] developed the theory of gradient descent learning in deep linear neural networks and provided exact solutions to the nonlinear dynamics.", "Motivated by their observation about the similarity in dynamics of linear and non-linear networks and the feasibility of analyzing the gradients in the linear network, we ground the proposed framework in the deep linear network setting.", "Many works have studied the dynamics of deep networks under specific assumptions like linear separability of data [22], deep ReLU networks [91], [88], Tensor Switching Networks [92] (generalization of ReLU to tensor-valued hidden units), the Neural Tanget Kernel limit [43], [28], the Mean Field limit [58], [70], [84] and Ensemble Networks [27] to name a few (see [17], [13], [4] for reviews).", "Similar to these works, we also focus on a specific subset of deep networks, gated deep linear networks, that captures a nonlinear relationship between a network’s input-output maps and its parameters, while being amenable to theoretical analysis.", "Within the setup of deep linear networks, several works have focused on the analysis of convergence rate [76], [6], [5], [24], on understanding inductive biases like implicit regularization [52], [44], [35], [77], [7] and understanding generalization dynamics [51], [65], [40].", "[94], [16] propose and study the Gated Linear Networks (GLNs) as a class of backpropagation-free neural architectures using geometric mixing.", "While GLNs appear similar to GDLNs, there are several differences.", "GDLN is a framework that schematizes how pathways of information flow impact learning dynamics within architecture and studies networks trained using back-propagation.", "Additionally, GLNs are good at online learning and continual learning, while in this work, we use the GDLN framework for understanding zero-shot generalization capabilities.", "In the GLN model, a neuron is defined as a gated geometric mixer of the output of linear networks in the previous layer, while in the GDLN model, the neurons are linear networks where the input is the output of the linear network along the previous path.", "In the GLN setup, multiple input-to-output connections (in successive layers) can be active for the same input, while in the GDLN setup, only one input-output connection (in successive layers) is active for one input.", "In this work, we propose modeling the model architecture as a graph and study how pathways of information flow impact learning dynamics within an architecture.", "Previous works have also proposed analyzing neural networks as directed graphs using the Complex Network Theory [15].", "[78] analyzed the structure and performance of fully connected neural networks, [102] focused on the emergence of motifs in fully connected networks, [90] study deep belief networks using techniques from Complex Network Theory literature and [49] focused on convolution and fully connected networks, with ReLU non-linearity.", "Our Gated Deep Linear Network framework is 
closely related to areas like modular networks [36], [79], [9], [45], [75], routing networks [69] and mixture of experts [42], [46], [19], [100].", "In these works, the common theme is to learn a set of modules (or experts) that can be composed (or selected) using a controller (or a router).", "The modules are generally instantiated as neural networks, while the controller can either be a neural network or a hand-designed policy.", "These approaches have been prominently used in natural language processing [80], [54], [25], [55], computer vision [1], [34], [98], [96] and reinforcement learning [99], [86], [3], [32], [38], [33].", "Our work is also related to previous works in systematic generalization [12], [11], [50], [72], [31] and multi-task learning [18], [103], [47], [66], [71], [56], [60], [95].", "Specifically, we explore the role of model architecture and weight initialization on models' ability to exhibit systematic generalization and multi-task learning." ], [ "Conclusion", "A key intuition in deep learning holds that a network's architecture influences learned representations, and should relate to task structure in order to achieve good performance and generalization.", "Here, we have introduced the Gated Deep Linear Network framework, which reveals how architecture–reflected by a simple nonlinear gating scheme along the edges of an architecture graph–controls pathways of information flow that govern learning dynamics, representation learning, and ultimately generalization.", "Our exact reductions and solutions show that learning dynamics take the form of a race, with greater representational reuse causing faster learning, imparting a bias toward shared representations.", "We validate our key insights on naturalistic datasets and with relaxed assumptions.", "An interesting future research direction will be to explore mechanisms for inferring the optimal architecture and gating for a given setup." ], [ "Acknowledgements", "We thank Hannah Sheahan, Timo Flesch, Devon Jarvis, and Olivier Delalleau for their feedback and comments.", "This work was supported by a Sir Henry Dale Fellowship from the Wellcome Trust and Royal Society (216386/Z/19/Z) to A.S., and the Sainsbury Wellcome Centre Core Grant from Wellcome (219627/Z/19/Z) and the Gatsby Charitable Foundation (GAT3755).", "A.S. is a CIFAR Azrieli Global Scholar in the Learning in Machines & Brains program." 
], [ "The XoR task", "We consider the XoR task with $P=4$ data points that lie at $[\\pm 1~\\pm 1]^T$ as depicted in Fig.", "REF a.", "The task is exclusive-or on the input bits, with a target output of $y=1$ if true and $y=-1$ otherwise.", "We solve this with a GDLN containing four pathways (Fig.", "REF b), with each pathway active on exactly one of the four examples.", "By symmetry, all pathways will have the same loss dynamics and so we need only solve one.", "Consider the pathway active for the example $x=[1~1]^T,y=1$ , that is, whose gating variable $g_1=1$ when this example is presented and $g_1=0$ on the remaining three examples.", "The relevant dataset correlations are $\\Sigma ^{yx}&=&\\langle g_1yx^T \\rangle = 1/P\\begin{bmatrix} 1 & 1\\end{bmatrix} \\\\\\Sigma ^{x}&=&\\langle g_1xx^T\\rangle = 1/P\\begin{bmatrix} 1 & 1 \\\\ 1 & 1\\end{bmatrix} \\\\\\Sigma ^{y}&=&\\langle g_1yy^T\\rangle = 1/P$ The singular value decomposition of the input-output correlations yields the nonlinear singular value $s=\\sqrt{2}/P$ and input singular vector $v=[ 1/\\sqrt{2} ~ 1/\\sqrt{2}]^T$ .", "Applying this singular vector to diagonalize the input correlations yields the associated input variance $d=v^T\\Sigma ^{x}v=2/P$ .", "The effective singular value dynamics of this pathway is given by the deep linear network dynamics with these correlations (see [76], [77]), yielding $a(t)=\\frac{s/d}{1-(1-\\frac{s}{da_0})e^{-2st/\\tau }}$ where $a(t)$ is the singular value in the product of both weight matrices in the pathway, and $a_0$ is the initial effective singular value, related to the initialization variance.", "Finally the loss for this pathway is the loss trajectory $l(t)$ of the associated deep linear network.", "The total loss, by symmetry, is the loss from all four pathways $\\mathcal {L}(t)=Pl(t)$ , $\\mathcal {L}(t) & = &\\frac{1}{2} - Psa(t) + \\frac{P}{2}da(t)^2 \\\\& = & \\frac{1}{2} - \\sqrt{2}a(t) + a(t)^2.$ This analytical expression is exact for GDLNs initialized in the decoupled regime, and it agrees closely with the dynamics of standard ReLU networks trained end-to-end on the task starting from small random weights (Fig.", "REF c).", "Hence GDLNs can learn nonlinear input-output tasks, and in certain settings, describe the dynamics of standard ReLU networks when the gating structure is chosen appropriately." 
], [ "Nonlinear Contextual Classification", "As another simple example, consider a nonlinear contextual classification problem that cannot be solved using deep linear networks but can be solved using the gated deep linear network, again highlighting that the gated networks are more expressive than their non-gated counterpart.", "Consider receiving two-dimensional inputs $x \\in ^2$ where each component $x_i, i=1,2$ is drawn from a uniform distribution between -1 and 1.", "The task of the network is to classify stimuli based either on the first or second input component.", "That is, the target output is $y=x_c$ in context $c\\in {1,2}$ , and each context appears with probability 1/2.", "In this simple scenario (a variant of the XoR task), the same input must be treated in two different ways depending on context, and nonlinearity is required for solving it correctly.", "Now we must choose a gating structure.", "If we choose a single pathway that is always active ($g=1$ for all samples), then we recover a deep linear network.", "The resulting correlation matrices are $\\Sigma ^{yx}&=&\\langle yx^T \\rangle = [1/6~ 1/6] \\\\\\Sigma ^{x}&=&\\langle xx^T\\rangle = 1/3I$ where $I$ is the $2\\times 2$ identity matrix.", "Under the resulting dynamics, the total weights converge to the linear least squares solution $W^{tot}=\\Sigma ^{yx}(\\Sigma ^{x})^{-1}=[1/2 ~ 1/2]$ , the best solution attainable by the linear network.", "Alternatively, we can set the gating variables such that a different pathway is active in each context.", "We then have the collection of correlation matrices $\\Sigma ^{yx}(1)&=& [1/6 ~0]\\\\\\Sigma ^{yx}(2)&=&[0 ~ 1/6]\\\\\\Sigma ^{x}(1,1)&=&\\Sigma ^{x}(2,2)=1/6I\\\\\\Sigma ^{x}(1,2)&=&\\Sigma ^{x}(2,1)=0.$ We thus see that each pathway faces a subproblem defined by just one context.", "For this simple case, the pathways converge to their respective linear least squares solutions.", "In particular, $W^{tot}(1)=[1 ~ 0]$ and $W^{tot}(2)=[0 ~ 1]$ , such that each pathway picks out the correct input coordinate.", "In combination with the gating scheme, these weights exactly solve this nonlinear task, showing that gated linear networks are more expressive than linear networks.", "Interestingly, neuroimaging and electrophysiological recordings from this paradigm suggest that this type of solution is observed in the human and primate brain, as well as in standard ReLU networks trained in the “rich” feature learning regime [26]." 
], [ "Gradient flow dynamics", "The gradient flow equations are $\\tau \\frac{d}{dt}W_e & = & -\\frac{\\partial \\mathcal {L}(\\lbrace W\\rbrace )}{\\partial W_e} \\quad \\forall e \\in E \\\\& = & \\left\\langle \\sum _{p\\in \\mathcal {P}(e)} g_pW_{\\bar{t}(p,e)}^T\\left[y_{t(p)}x^T_{s(p)} - h_{t(p)}x_{s(p)}^T \\right] W^T_{\\bar{s}(p,e)} \\right\\rangle _{y,x,g}\\\\& = & \\left\\langle \\sum _{p\\in \\mathcal {P}(e)} g_pW_{\\bar{t}(p,e)}^T\\left[y_{t(p)}x^T_{s(p)} - \\sum _{j \\in \\mathcal {T}(t(p))} g_jW_jx_{s(j)}x_{s(p)}^T \\right]W^T_{\\bar{s}(p,e)} \\right\\rangle _{y,x,g}\\\\& = & \\sum _{p\\in \\mathcal {P}(e)} W_{\\bar{t}(p,e)}^T\\left[\\left\\langle g_py_{t(p)}x^T_{s(p)}\\right\\rangle _{y,x,g} - \\sum _{j \\in \\mathcal {T}(t(p))} W_j\\left\\langle g_jx_{s(j)}x_{s(p)}^Tg_p \\right\\rangle _{y,x,g} \\right]W^T_{\\bar{s}(p,e)} \\\\& = & \\sum _{p\\in \\mathcal {P}(e)} W_{\\bar{t}(p,e)}^T\\left[\\Sigma ^{yx}(p) - \\sum _{j \\in \\mathcal {T}(t(p))} W_j\\Sigma ^{x}(j,p) \\right]W^T_{\\bar{s}(p,e)}, $ where we have simply rearranged terms and used the linearity of expectation.", "The dynamics reduction can then be obtained by applying the change of variables, $\\tau \\frac{d}{dt}W_e & = & \\sum _{p\\in \\mathcal {P}(e)} W_{\\bar{t}(p,e)}^T\\left[\\Sigma ^{yx}(p) - \\sum _{j \\in \\mathcal {T}(t(p))} W_j\\Sigma ^{x}(j,p) \\right]W^T_{\\bar{s}(p,e)} \\\\\\tau \\frac{d}{dt}\\left(R_{t(e)}B_eR_{s(e)}^T\\right) & = & \\sum _{p\\in \\mathcal {P}(e)} \\left(U_{t(p)}B_{\\bar{t}(p,e)}R_{t(e)}^T\\right)^T\\left[U_{t(p)}S(p) V_{s(p)}^T - \\right.", "\\\\&&\\left.", "\\sum _{j \\in \\mathcal {T}(t(p))} U_{t(j)}B_jV_{s(j)}^TV_{s(j)}D(j,p) V_{s(p)}^T \\right]\\left(R_{s(e)}B_{\\bar{s}(p,e)}V_{s(p)}^T\\right)^T \\\\\\tau \\frac{d}{dt}B_e & = & \\sum _{p\\in \\mathcal {P}(e)} B_{\\bar{t}(p,e)}\\left[S(p) - \\sum _{j \\in \\mathcal {T}(t(p))} B_jD(j,p) \\right]B_{\\bar{s}(p,e)}$ where we have used the fact that $R_v^TR_v=I$ for all nodes, and the fact that $U_{t(j)}=U_{t(p)}$ by definition of the set $\\mathcal {T}(t(p))$ .", "From this we see that if the $B$ variables are initially diagonal they remain so under the dynamics.", "In this case, the dynamics decouple and each element along the diagonal of the $B$ matrices evolves independently of the rest." 
], [ "Routing task and network reduction", "To understand the dynamics in the pathway network, we first collect the relevant input statistics.", "We have $\\Sigma ^{yx}(p)&=& \\left\\langle g_p y_{t(p)} x_{s(p)}^T\\right\\rangle \\\\& = & \\textrm {Pr}(g_p=1)USV^T\\\\& = & \\frac{1}{KM}USV^T\\\\\\Sigma ^x(j,p)&=&\\left\\langle g_jx_{s(j)}x_{s(p)}^Tg_p\\right\\rangle \\\\&=& {\\left\\lbrace \\begin{array}{ll}\\frac{1}{KM}VDV^T \\quad \\textrm {if}~j=p \\\\0 \\quad \\textrm {otherwise}\\end{array}\\right.", "}$ because there are $KM$ total trained paths from input to output and all pathways are gated off except for the active pathway.", "Inserting these data statistics into (REF ), and assuming that initial singular values are equal for all input domains and all output domains (a reasonable approximation when starting from small random weights), we can track only the variables $B_1,B_2,$ and $B_3$ encoding the singular values in the input, hidden, and output weights respectively.", "Next, we note that the first and third layer weights are active in $K$ tasks (all tasks originating from a given input or terminating at a given output, respectively), while second layer weights are active in all $KM$ tasks.", "This yields the reduced dynamics $\\tau \\frac{d}{dt}B_1 &=& \\frac{1}{M} B_3B_2\\left[S-B_3B_2B_1D\\right] \\\\\\tau \\frac{d}{dt}B_2 &=& B_3B_1\\left[S-B_3B_2B_1D\\right] \\\\\\tau \\frac{d}{dt}B_3 &=& \\frac{1}{M}B_2B_1\\left[S-B_3B_2B_1D\\right].$ If we consider `balanced' initial conditions where $B_1(0)=B_3(0)$ , we have $\\tau \\frac{d}{dt}B_1 &=& \\frac{1}{M} B_2B_1\\left[S-B_2B_1^2D\\right] \\\\\\tau \\frac{d}{dt}B_2 &=& B_1^2\\left[S-B_2B_1^2D\\right],$ recovering Eqns.", "(REF )-() of the main text.", "To estimate the ratio of singular values in the first layer to that in the second, we consider its time derivative and calculate the steady state.", "We have $\\frac{d}{dt} B_2/B_1 & = & B_1\\left[S-B_2B_1^2D\\right] - \\frac{1}{M} B_2^2/B_1\\left[S-B_2B_1^2D\\right]\\\\0 & = & B_1 - \\frac{1}{M}B_2^2/B_1\\\\B_2 & = & \\sqrt{M}B_1.$ Hence if training continues for long times (such that the error term does not become zero), the shared portion of the pathway changes more by a factor $\\sqrt{M}$ (and this ratio does not depend on $K$ ).", "We can extend this analysis to the case where all input-output routes are trained but the gating structure sends only $P$ paths through each hidden weight matrix, as considered in Section REF .", "With this gating scheme, the reduction is $\\tau \\frac{d}{dt}B_1 &=& \\frac{\\sqrt{P}}{M^2} B_2B_1\\left[S-B_2B_1^2D\\right] \\\\\\tau \\frac{d}{dt}B_2 &=& \\frac{P}{M^2} B_1^2\\left[S-B_2B_1^2D\\right],$ and the singular value ratio is $\\frac{d}{dt} B_2/B_1 & = & \\frac{P}{M^2}B_1\\left[S-B_2B_1^2D\\right] - \\frac{\\sqrt{P}}{M^2} B_2^2/B_1\\left[S-B_2B_1^2D\\right]\\\\0 & = & PB_1 - \\sqrt{P}B_2^2/B_1\\\\B_2 & = & P^{\\frac{1}{4}}B_1.$ This ratio scales from 1 to $\\sqrt{M}$ as the number of shared paths goes from $P=1$ (no sharing) to $P=M^2$ (full sharing).", "Hence for this architecture, greater sharing causes larger weight changes in the hidden pathway." 
], [ "Experimental details and further results", "This section contains details and hyperparameter settings for the simulations reported in Section , as well as additional visualization of results in Figures REF and REF .", "We start by explaining the general procedure for transforming the existing datasets and then describe the new dataset instances that we create.", "Consider a dataset $D=(X, Y)$ , defined as a tuple of inputs $X$ and targets $Y$ .", "The dataset $D$ has $n$ datapoints, that is, $X \\in R^{n \\times n_{dim}}$ , where $n_{dim}$ is the dimensionality of each inputWhile images are multi-dimensional arrays, they can be represented as flattened 1-d arrays..", "The dataset has $n_{class}$ unique classes, referred by their indices $\\lbrace 0, 1, \\cdots , n_{class}-1\\rbrace $ .", "We want to create new datasets by transforming the given dataset $D$ .", "We assume that we have a list of $M$ input transformations $f_i^{input}: R^{n_{dim}} \\rightarrow R^{n_{dim}} \\forall i \\in \\lbrace 0, \\cdots , M-1\\rbrace $ and $M$ output transformations $f_j^{output}: R^{n_{class}} \\rightarrow R^{n_{class}} \\forall j \\in \\lbrace 0, \\cdots , M-1\\rbrace $ .", "Now, we can define a new dataset, $D_{i,j}$ , as $(X_i, Y_j)$ , where $X_i = f_i^{input}(X)$ , $Y_j = f_j^{output}(Y)$ i.e any transformation of the given dataset is a new dataset.", "We can apply $M$ transformations on the input and $M$ transformations on the output to obtain $M^2$ datasets.", "We consider the following two operations for input transformations: rotation of the input image and permutation of pixels.", "For the $i^{th}$  rotation transformation, we rotate the input images by an angle $\\theta = 180i/M$ degrees.", "For the $i^{th}$  permutation transformation, we apply a random permutation matrix to the flattened input.", "Each of these transformations provides input to one input domain.", "We use the permutation operation as the output transformation, implying that each new output transformation corresponds to a $n_{class}$ -way classification task.", "We use the MNIST [23] and CIFAR-10 datasets [48] to create three datasets: MNIST-Permuted-Input-Permuted-Output-40: 40 transformations on both the input and the output, leading to a total of 1600 datasets.", "The input transformation is permutation of pixels and the output transformation is permutation of the targets.", "MNIST-Permuted-Input-Permuted-Output-100: 100 permutation transformations on both the input and the output, leading to a total of $10^4$ datasets.", "MNIST-Rotated-Input-Permuted-Output-40 40 rotation transformations on the input and 40 permutation transformations on the output.", "CIFAR-Rotated-Input-Permuted-Output-40 40 rotation transformations on the input and 40 permutation transformations on the output.", "In the case of CIFAR-Rotated-Input-Permuted-Output-40 dataset, use a pre-trained ResNet18 [39] modelWe use the following code for pre-training the models: https://github.com/akamaster/pytorch_resnet_cifar10 to map the images into 512 dimensional vectors.", "We pretrain the ResNet18 model on full CIFAR-10 dataset, freeze the pre-trained model and use the first two residual blocks to encode the images from the transformed datasets.", "The output of the (frozen) ResNet encoder is used as input to the gated network." 
], [ "Model and training", "The model consists of $M$ encoders, denoted as ($\\lbrace \\phi _i \\forall i \\in \\lbrace 1, \\cdots , M\\rbrace \\rbrace $ ) and $M$ decoders, denoted as ($\\lbrace \\psi _i \\forall i \\in \\lbrace 1, \\cdots , M\\rbrace \\rbrace $ ) and a shared hidden layer $\\theta $ .", "In the GDLN framework, the encoders correspond to the input nodes, the decoders correspond to the  output nodes and the connection from an encoder, to the hidden layer, to the decoder corresponds to a path.", "The encoders, decoders and the shared hidden layer are all instantiated as linear networks.", "Given a dataset, $D_{i,j}$ , or $(x_i, y_j)$ , we compute the prediction using the following function: $\\psi _j(\\theta (\\phi _i(x_i)))$  Note that we overload the notation to represent the components and the computation using the same symbol." ], [ "Training and Evaluation Setup", "Following the setup in sec:applications, we train a subset of input-output domains such that each input domain is trained with only $K \\le M$ output domains, resulting in $M \\times K$  trained pathways and $M \\times (M-K)$  untrained pathways.", "During evaluation, we report the performance on both the  trained pathways and the untrained pathways.", "Figure: Experimental results.", "Error for trained (blue) and untrained (orange) input-output domain pairs as a function of the percentage of trained pathways (K/MK/M) on: (i) CIFAR dataset, where input is rotated and output is permuted, with M 2 =1600M^2=1600 total tasks, (ii) MNIST dataset, where input and output, both are permuted, with M 2 =10 4 M^2=10^4 total tasks, (iii) MNIST dataset, where input and output, both are permuted, with M 2 =1600M^2=1600 total tasks, and (iv) MNIST dataset, where input is rotated and output is permuted, with M 2 =1600M^2=1600 total tasks.", "(in the order of left to right).", "Training accuracy is always high while zero-shot transfer to untrained pathways becomes as good as the training performance when ≈\\approx 40% of pathways are trained.Figure: Experimental results.", "Error for trained (blue) and untrained (orange) input-output domain pairs as a function of the percentage of trained pathways (K/MK/M) on: (i) MNIST dataset, where input and output, both are permuted, with M 2 =1600M^2=1600 total tasks, and (ii) MNIST dataset, where input is rotated and output is permuted, with M 2 =1600M^2=1600 total tasks (in the order of left to right).", "These models are trained with the leaky-ReLU non-linearity with the negative slope parameter set to be 0.010.01, thus making the model non-linear.", "The training accuracy is always high while zero-shot transfer to untrained pathways becomes as good as the training performance when ≈\\approx 25% of pathways are trained.Figure: Hyperparameter values for CIFAR-Rotated-Input-Permuted-Output-40 (different from the values described in table::commonhp" ] ]
2207.10430
[ [ "High-Harmonic Spectroscopy of Coherent Lattice Dynamics in Graphene" ], [ "Abstract High-harmonic spectroscopy of solids is a powerful tool, which provides access to both electronic structure and ultrafast electronic response of solids, from their band structure and density of states, to phase transitions, including the emergence of the topological edge states, to the PetaHertz electronic response.", "However, in spite of these successes, high harmonic spectroscopy has hardly been applied to analyse the role of coherent femtosecond lattice vibrations in the attosecond electronic response.", "Here we study coherent phonon excitations in monolayer graphene to show how high-harmonic spectroscopy can be used to detect the influence of coherent lattice dynamics, particularly longitudinal and transverse optical phonon modes, on the electronic response.", "Coherent excitation of the in-plane phonon modes results in the appearance of sidebands in the spectrum of the emitted harmonic radiation.", "We show that the spectral positions and the polarisation of the sideband emission offer a sensitive probe of the dynamical symmetries associated with the excited phonon modes.", "Our work brings the key advantage of high harmonic spectroscopy -- the combination of sub-femtosecond to tens of femtoseconds temporal resolution -- to the problem of probing phonon-driven electronic response and its dependence on the dynamical symmetries in solids." ], [ "Introduction", "Strong-field driven high-harmonic generation (HHG) is a nonlinear frequency up-conversion process, which emits radiation at integer multiples of the incident laser frequency [1].", "Taking advantage of major technical advances in mid-infrared sources, the pioneering experiments [2] have extended HHG from gases to solids, stimulating intense research into probing electron dynamics in solids on the natural timescale.", "Today, high-harmonic spectroscopy has been employed to probe different static and dynamic properties of solids, such as band dispersion [3], [4], [5], [6], density of states [7], band defects [8], [9], valley pseudospin [10], [11], [12], [13], Bloch oscillations [14], topology and light-driven phase transitions, including strongly correlated systems  [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], and even combine attosecond temporal with pico-meter spatial resolution of electron trajectories in lattices [25].", "Availability of mid-infrared light sources also enables coherent excitation of a desired phonon mode by tuning the polarisation and frequency of the laser pulse  [26].", "Yet, the analysis of the effect of coherent lattice dynamics on high harmonic generation in solids appears lacking, apart from a lone experiment [27].", "This situation stands in stark contrast to molecular gases, where high-harmonic spectroscopy has been extensively employed to probe nuclear motion in various molecules [28], [29], [30], [31], [32], [33].", "Present work aims to fill this gap and highlight some of the capabilities offered by high harmonic spectroscopy in time-resolving the interplay of femtosecond lattice and attosecond electronic motions.", "Such interplay is essential for many fundamental phenomena, including thermal conductivity [34], optical reflectivity [35], [36], structural phase transition [37], [38], heat capacity [39], and optical properties [40], [41].", "Various spectroscopic methods have been developed to excite and probe phonons, see e.g.", "[42], [43], [44], [45], [46], [47], [48], [49], [50], [51], [52], 
but their temporal resolution is limited by the length of the pulses used.", "Large coherent bandwidth of high harmonic signals offers sub-laser-cycle temporal resolution and the possibility to time-resolve the impact of lattice distortions on the faster electronic response.", "One difficulty in tracking lattice vibrations via highly nonlinear optical response stems from their small amplitude.", "If the corresponding changes in both the band structure and couplings are similarly small, the high-harmonic response hardly changes.", "Yet, large distortions are not needed if the excited phonon mode dynamically changes the symmetry of the unit cell.", "Here we show how coherent phonon dynamics and the associated changes in the lattice symmetry are encoded in the electronic response and the harmonic signal, and how the sub-cycle temporal resolution inherent in the harmonic signal can be used to track the interplay of electronic and lattice dynamics.", "Figure: Hexagonal honeycomb structure of graphene and associated in-plane phonon modes.", "(a) Real-space structure of graphene.", "(b) Brillouin zone in momentum-space withΓ,𝖬,\\mathsf {\\Gamma , M,} and 𝖪\\mathsf {K} as the high symmetry points.", "(c) and (d) are the sketches of atomic vibrations associated with the degenerate E 2g _{2g} phononmodes in real-space.", "Here, modes are labelled as (c) in-plane longitudinal optical (𝗂𝖫𝖮\\textsf {iLO})phonon mode and (d) in-plane transverse optical (𝗂𝖳𝖮\\textsf {iTO}) phonon mode, respectively.We analyse monolayer graphene, which belongs to $\\textbf {D}_{6h}$ point group symmetry, see Fig.", "REF (a).", "It exhibits six phonon branches: three optical and three acoustic.", "Here we focus on the former.", "Out of the three optical phonon modes, one is out-of-plane, the two others are in-plane modes.", "We will consider only the in-plane modes.", "The lattice vibrations corresponding to the in-plane Longitudinal Optical ($\\textsf {iLO}$ ), and the in-plane Transverse Optical ($\\textsf {iTO}$ ) E$_{2g}$ modes are shown in Fig.", "REF (c) and REF (d), respectively.", "The two modes are degenerate at the $\\mathsf {\\Gamma }$ -point, with the phonon frequency equal to 194 meV (oscillation period $\\sim $ 21 femtoseconds [53]); both are Raman active and can be excited with a resonant pulse pair or impulsively by a short pulse with bandwidth covering 194 meV.", "Moreover, it is possible to selectively excite either $\\textsf {iLO}$ or $\\textsf {iTO}$ coherent phonon mode by tuning the polarisation of the pump pulse either along $\\mathsf {\\Gamma -K}$ or $\\mathsf {\\Gamma -M}$ direction, respectively.", "Coherent lattice dynamics should in general introduce periodic modulations of the system parameters and thus of its high-harmonic response.", "In the frequency domain, such modulations add sidebands to the main peaks in the harmonic spectrum.", "We shall see that their position and polarisation encode the information about the frequency and the symmetry of the excited phonon mode, respectively." 
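Before turning to the model details, the following minimal sketch (not the paper's code) evaluates the nearest-neighbour tight-binding dispersion introduced in the next section, using the parameters quoted there ($\gamma_0 = 2.7$ eV, bond length $a = 1.42$ Å, zigzag direction along x); it confirms the gapless Dirac point at K and lists the expected sideband comb $\omega_0 \pm m\omega_{\rm ph}$ for the 2.0 µm drive used later.

```python
import numpy as np

# Minimal sketch of the nearest-neighbour tight-binding dispersion,
#   E(k) = +/- gamma0 |sum_i exp(i k.d_i)|,
# with gamma0 = 2.7 eV and a = 1.42 Angstrom; bond orientation places Gamma-K along x.
gamma0, a = 2.7, 1.42
d = a * np.array([[0.0, 1.0],
                  [ np.sqrt(3) / 2, -0.5],
                  [-np.sqrt(3) / 2, -0.5]])        # nearest-neighbour bond vectors

def conduction_band(k):
    return gamma0 * abs(np.exp(1j * d @ k).sum())  # valence band is the negative

Gamma = np.array([0.0, 0.0])
K = np.array([4 * np.pi / (3 * np.sqrt(3) * a), 0.0])
print("E at Gamma:", round(conduction_band(Gamma), 3), "eV")  # 3*gamma0 = 8.1 eV
print("E at K    :", round(conduction_band(K), 6), "eV")      # 0: gapless Dirac point

# Expected sideband comb: each harmonic of the 2.0 micrometre drive (photon energy
# ~0.62 eV) is dressed at omega_0 +/- m * 194 meV by the coherent E2g phonon.
m = np.arange(-2, 3)
print("first-harmonic sideband energies (eV):", np.round(0.62 + 0.194 * m, 3))
```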
], [ "Theoretical Method", "Carbon atoms are arranged at the corners of a hexagon in the honeycomb lattice of the graphene.", "The unit-cell of graphene has a two-atom basis, usually denoted as A and B atoms.", "The corresponding Brillouin zone in momentum space is shown in Fig.", "REF (b), where $\\mathsf { \\Gamma }$ , $\\mathsf {M}$ , and $\\mathsf {K}$ are the high-symmetry points.", "In our convention, the zigzag and armchair directions of graphene are along $\\mathsf {X}$ -axis ($\\mathsf {\\Gamma }-\\mathsf {K}$ direction) and $\\mathsf {Y}$ -axis ($\\mathsf {\\Gamma }-\\mathsf {M}$ direction), respectively.", "The electronic ground-state of the graphene is described by the nearest-neighbour tight-binding approximation and the corresponding Hamiltonian is written as $\\mathcal {\\hat{H}}_\\textbf {k} = - \\gamma _{0}\\sum _{i\\in nn} ~e^{i\\textbf {k}\\cdot \\textbf {d}_i} \\hat{a}_\\textbf {k}^{\\dagger } \\hat{b}_\\textbf {k} + \\textrm {H. c.} $ Here, the summation is over the nearest neighbour atoms.", "$\\gamma _0$ is the nearest-neighbour hopping energy, which is chosen to be 2.7 eV.", "$\\textbf {d}_i$ is the separation vector between an atom with its nearest neighbour, such that $|\\textbf {d}_i| = a$ = 1.42 Å is the inter-atomic distance, for a lattice parameter $a_0$ of 2.46 Å.", "$\\hat{a}_k^{\\dagger } (\\hat{b}_k)$ is creation (annihilation) operator for atom A (B) in the unit cell.", "The low-energy band-structure of graphene is obtained by solving Eq.", "(REF ) and has zero band-gap and exhibits linear dispersion at $\\mathsf {K}$ -points in the Brillouin zone.", "We treat lattice dynamics classically, and assume that atoms perform harmonic oscillations for short displacements from their equilibrium positions.", "The displacement vector for a particular phonon mode is expressed as $\\textbf {q}(t)= \\textrm {q}_{0}~\\hat{\\textbf {e}}~\\rm {Re}\\left( e^{i\\omega _{ph}t}\\right).$ Here, $\\textrm {q}_{0}$ is the maximum displacement of an atom from its equilibrium position, $\\omega _{\\textrm {ph}} = $ 194 meV is the energy of the E$_{2g}$ phonon mode , and $\\hat{\\textbf {e}}$ is the normalised eigenvector for a particular phonon mode.", "From Figs.", "REF (c) and REF (d), it is clear that $\\hat{\\textbf {e}}_{\\textsf {iLO}}$ = $[1,0,-1,0]/\\sqrt{2}$ and $\\hat{\\textbf {e}}_{\\textsf {iTO}}$ = $[0,1,0,-1]/\\sqrt{2}$ , in which the first (last) two elements are components of A (B) atom.", "Due to coherent phonon excitations, lattice dynamics causes temporal variations in the relative distance between atoms ($\\textbf {d}_i$ ).", "In this case, the corresponding time-dependent Hamiltonian within the tight-binding approximation can be written as [54], [55], [56] $\\mathcal {\\hat{H}}_\\textbf {k}(t) = - \\gamma (t)\\sum _{i\\in nn} e^{i\\textbf {k}\\cdot \\textbf {d}_i(t)} \\hat{a}_\\textbf {k}^{\\dagger } \\hat{b}_\\textbf {k} + \\textrm {H. 
c.}$ Here, the hopping energy is modelled as an exponentially decaying function of the relative displacement between nearest-neighbour atoms as $\gamma (t)$ = $\gamma _0~e^{-(|\textbf {d}_{i}(t)|-a)/\delta }$ , in which $\delta $ is the width of the decay function chosen to be 0.184$a_0$  [57].", "The interaction among laser, electrons and the coherently excited phonon mode in graphene is modelled by solving the following equations of motion for the single-particle density matrix.", "By updating the Hamiltonian modified by the lattice dynamics, the semiconductor Bloch equations in the co-moving frame $|n,\textbf {k}+\textbf {A}(t)\rangle $ are extended, and the equations of motion read as $\frac{d}{dt}\rho ^{\textbf {k}}_{vv} = i\textbf {E}(t)\cdot \textbf {d}_{vc}(\textbf {k}_t, t)\rho ^{\textbf {k}}_{cv} + \textrm {c.c.}$ and $\frac{d}{dt}\rho ^{\textbf {k}}_{cv} = \left[-i\varepsilon _{cv}(\textbf {k}_t, t)-\frac{1}{T_2}\right]\rho ^{\textbf {k}}_{cv} + i\textbf {E}(t)\cdot \textbf {d}_{cv}(\textbf {k}_t, t)\left[\rho ^{\textbf {k}}_{vv}-\rho ^{\textbf {k}}_{cc}\right].$", "Here, $\textbf {E}(t)$ and $\textbf {A}(t)$ are, respectively, the electric field and the vector potential corresponding to the laser field, which are related as $\textbf {E}(t)$ = $-d\textbf {A}(t)/dt$ , and $\textbf {k}_t$ is the shorthand notation for $\textbf {k} + \textbf {A}(t)$ .", "$\varepsilon _{cv}(\textbf {k})$ and $\textbf {d}_{cv}(\textbf {k})$ are, respectively, the band-gap energy and dipole matrix elements between valence and conduction bands at $\textbf {k}$ .", "$\textbf {d}_{cv}(\textbf {k})$ is defined as $\textbf {d}_{cv}(\textbf {k}) = i\langle c,\textbf {k} |\nabla _\textbf {k}|v,\textbf {k}\rangle $ .", "Also, $\rho _{cc}^{\textbf {k}}(t) = 1 - \rho _{vv}^{\textbf {k}}(t)$ , and $\rho ^{\textbf {k}}_{vc}(t) = \rho ^{\textbf {k}*}_{cv}(t)$ .", "A phenomenological term to take care of the interband decoherence is added with a constant dephasing time $T_2$ .", "We calculate the matrix elements at each time-step during temporal evolution of the coherently excited phonon mode, which results in the additional time-dependence in the matrix elements.", "As long as the maximum displacement of the atoms is small and the time-step is small compared to the phonon time-period, the matrix elements at consecutive time-steps are updated smoothly.", "We solve the coupled differential equations described in Eq.", "() using the fourth-order Runge-Kutta method with a time-step of 0.01 fs.", "We sampled the Brillouin zone with a 251$\times $ 251 grid.", "The current at any $\textbf {k}$ point in the Brillouin zone is defined as $\textbf {J}(\textbf {k}, t) = \sum _{m,n \in \lbrace c,v\rbrace } \rho _{mn} ^{\textbf {k}} (t) \textbf {p}_{nm}(\textbf {k}_t, t).$ Here, $\textbf {p}_{nm}$ are the momentum matrix-elements defined as $\textbf {p}_{nm}(\textbf {k}) = \langle n,\textbf {k}|\nabla _\textbf {\textbf {k}}\mathcal {\hat{H}}_\textbf {k}| m,\textbf {k}\rangle $ .", "The total current, $\textbf {J}(t)$ , can be calculated by integrating $\textbf {J}(\textbf {k}, t)$ over the entire Brillouin zone.", "The high-harmonic spectrum is simulated as $\mathcal {I}(\omega ) = \left| \mathcal {FT} \left( \dfrac{\rm {d}}{{\rm d}t} \textbf {J}(t) \right) \right|^2.$ Here, $\mathcal {FT}$ stands for the Fourier transform.", "High-order harmonics are generated from monolayer graphene, with or without coherent lattice dynamics, using a linearly polarised pulse with a wavelength of 2.0 $\mu $ m and peak intensity of 1$ \times 10^{11} $ W/cm$^2$ .", "The pulse is 100 fs long and has a sin-squared envelope.", "The laser parameters used in this work are below the damage threshold of graphene [58].", "Similar laser parameters have been used to investigate electron dynamics in graphene via
intense laser pulse  [59], [60], [61].", "The value of the dephasing time $T_2 = $ 10 fs is used throughout this work [62].", "The observations we made here are consistent for other values of $T_2$ in the range 5 - 30 fs.", "Both in-plane E$_{2g}$ phonon modes are considered here.", "Results presented in this work correspond to a maximum 0.03a$_0$ displacement of atoms from their equilibrium positions during coherent lattice dynamics.", "However, our findings remain valid for displacements ranging from 0.01a$_0$ to 0.05a$_0$ with respect to the equilibrium positions." ], [ "Results and Discussion", "High-harmonic spectra for monolayer graphene, with and without coherent lattice dynamics, are presented in Fig.", "REF .", "The spectrum corresponding to graphene without lattice dynamics is shown by the grey shaded area as a reference.", "Owing to the inversion symmetry of graphene, the reference spectrum in grey colour exhibits only odd harmonics (consistent with earlier reports, e.g.", "Refs.", "[62], [61], [63].)", "We assume that coherent phonon dynamics is excited prior to the high-harmonic probe.", "When one of the E$_{2g}$ phonon modes in graphene is coherently excited, the harmonic spectra display sidebands along with the main odd harmonic peaks, as reflected in Fig.", "REF .", "The energy difference between the adjacent sidebands matches the phonon energy ($\omega _{\textrm {ph}}$ ).", "The sideband intensity is sensitive to the phonon amplitude but is clearly visible already for amplitudes above 0.01 of the lattice constant.", "Here we present the case of the amplitude equal to 0.03 of the lattice constant.", "Figure: High-harmonic spectra of monolayer graphene with and without coherent lattice dynamics.", "(a) and (c) High-harmonic spectra corresponding to the coherent $\textsf {iLO}$ E$_{2g}$ phonon mode, with the probe harmonic pulse polarised along the $\mathsf {\Gamma K}$ and $\mathsf {\Gamma M}$ directions, respectively.", "(b) and (d) Same as (a) and (c) except that the $\textsf {iTO}$ E$_{2g}$ phonon mode is coherently excited.", "In all the cases, sidebands corresponding to the first harmonic are marked at frequencies ($\omega _{0} \pm m \omega _{\textrm {ph}}$ ), where $\omega _0$ is the frequency of the harmonic generating probe pulse, and $\omega _{\textrm {ph}}$ is the phonon frequency.", "The harmonics with the grey shaded area are the reference spectra and represent the spectra of graphene without phonon excitation.", "The unit cell of graphene with the corresponding phonon eigenvector and the polarisation of the harmonic generating probe pulse are shown in the respective insets.", "Red (blue) colour corresponds to the polarisation of emitted radiation parallel (perpendicular) to the polarisation of the harmonic generating probe pulse.", "As E$_{2g}$ phonon modes preserve the inversion centre, only odd harmonics are generated.", "When the coherent $\textsf {iLO}$ mode and the probe harmonic pulse (along $\mathsf {\Gamma -K}$ ) are in the same direction, the even-order sidebands are polarised along $\mathsf {\Gamma -K}$ (red colour), whereas the odd-order sidebands are polarised perpendicular to $\mathsf {\Gamma -K}$ (blue colour), i.e., along the $\mathsf {\Gamma - M}$ direction [see Fig.", "REF (a)].", "When the polarisation of the probe pulse changes from $\mathsf {\Gamma -K}$ to the $\mathsf {\Gamma - M}$ direction, the polarisation of the sidebands remains the same with respect to the laser polarisation.", "In this case, the even-order sidebands are polarised along $\mathsf
{\Gamma -M}$ (blue colour), whereas odd-order sidebands are polarised along $\mathsf {\Gamma -K}$ (red colour) [see Fig.", "REF (c)].", "In both the cases, the main harmonic peaks are always polarised along the direction of the probe pulse.", "The situation is simpler in the case of coherent $\textsf {iTO}$ mode excitation.", "Both the main harmonic peaks and the sidebands are polarised along the direction of the probe pulse [see Figs.", "REF (b) and (d)].", "Thus, we see that the polarisation of the sidebands yields information about the symmetries of the excited phonon modes.", "We now investigate how the dynamical changes in symmetries differ from a similar static variation in the high-harmonic spectra.", "Consider the static case with the maximum displacement of atoms, along a particular phonon mode direction, equal to 3$\%$ of the lattice parameter from their equilibrium positions.", "Figure REF compares high-harmonic spectra for the statically-deformed and undeformed graphene (grey color).", "The probe polarisation is along $\mathsf {\Gamma -K}$ and $\mathsf {\Gamma -M}$ directions in the top and bottom panels of Fig.", "REF , respectively.", "When graphene is deformed along the $\textsf {iLO}$ phonon mode, odd harmonics are generated along parallel and perpendicular directions with respect to the laser polarisation as shown in Figs.", "REF (a) and REF (c), respectively.", "However, only odd harmonics, parallel to the laser polarisation, are generated when graphene is deformed in accordance with the $\textsf {iTO}$ mode [see Figs.", "REF (b) and (d)].", "Figure: High-harmonic spectra of monolayer deformed graphene.", "(a) and (c) When the atoms in graphene are maximally displaced, from their equilibrium position, along the $\textsf {iLO}$ phonon mode.", "(b) and (d) Similar to (a) and (c) but atoms are displaced along the $\textsf {iTO}$ phonon mode.", "The harmonic spectrum of undeformed graphene is shown in the grey shaded area for reference.", "The unit cell of the deformed graphene lattice and the polarisation of the harmonic generating probe pulse are shown in the respective insets.", "Red (blue) colour corresponds to the polarisation of emitted radiation parallel (perpendicular) to the polarisation of the harmonic generating probe pulse.", "The emergence of the parallel and perpendicular components in the first case and of only the parallel component in the second case can be explained as follows: the monolayer graphene has $\sigma _{x}$ and $\sigma _{y}$ symmetry planes, in addition to the inversion centre.", "When the polarisation of the probe laser is along the high symmetry direction ($\mathsf {\Gamma -K}$ or $\mathsf {\Gamma -M}$ ), there is no perpendicular component of the current.", "However, if the polarisation of the probe pulse is along any direction other than these high-symmetry directions, symmetry constraints allow the generation of odd harmonics perpendicular to the direction of the laser polarisation.", "Recently, the same symmetry concept was employed in twisted bilayer graphene to correlate the twist angle with its high-harmonic spectrum [64].", "It is straightforward to see that the distortion due to the $\textsf {iLO}$ phonon mode breaks the symmetries of the reflection planes in monolayer graphene.", "The absence of the reflection symmetry planes along X and Y directions guarantees the generation of harmonics in both $\mathsf {\Gamma -K}$ and $\mathsf {\Gamma -M}$ directions as shown in Figs.", "REF (a) and (c).", "On the other hand, the $\textsf {iTO}$ phonon mode preserves both symmetry planes
and, as a result, only harmonics along the laser polarisation are allowed [Figs.", "REF (b) and (d)].", "In short, the presence or absence of the perpendicular current results from the transient breaking of the symmetry planes, which can be correlated to the results in Fig.", "REF .", "To understand the mechanism behind the sideband generation and the associated polarisation properties during coherent lattice dynamics, we need to consider how the symmetries of the system change dynamically during the probe pulse.", "To this end, let us consider the dynamical symmetries (DSs) of the system, accounting for both the coherent lattice dynamics and the probe pulse.", "We apply the Floquet formalism to the periodically driven system represented by the Hamiltonian described by Eq.", "(REF ), which satisfies $\\hat{\\mathcal {H}}_\\textbf {k}(t) = \\hat{\\mathcal {H}}_\\textbf {k}(t + \\tau _{\\textrm {ph}})$ , where $\\tau _{\\textrm {ph}}$ is the time period corresponding to $\\omega _{\\textrm {ph}}$ .", "The Hamiltonian obeys the time-dependent Schrödinger equation and its solution is obtained in the basis of the Floquet states as $|\\psi _n^{\\rm F}(t)\\rangle = e^{-i\\epsilon _n^{\\rm F}t}|\\phi _n^{\\rm F}(t)\\rangle $ .", "Here, $\\epsilon _n^{\\rm F}$ is the quasi-energy corresponding to the $n^{\\rm th}$ Floquet state, and $|\\phi _n^{\\rm F}(t)\\rangle $ is the time-periodic part of the wave function, such that $|\\phi _n^{\\rm F}(t+\\tau _{\\textrm {ph}})\\rangle = |\\phi _n^{\\rm F}(t)\\rangle $ .", "The DSs of a Floquet system are the combined spatio-temporal symmetries, which provide different kinds of selection rules, as discussed in Refs.", "[65], [66].", "In the presence of the probe pulse, the laser-graphene interaction within the tight-binding approximation can be modelled with the Peierls substitution as $\\hat{\\mathcal {H}}_{\\textbf {k}}(t) \\rightarrow \\hat{\\mathcal {H}}_{\\textbf {k}+\\textbf {A}(t)}(t)$ .", "For the sake of simplicity, we employ a perturbative approach to understand the polarisation of the sidebands, since the sidebands are much weaker than the main harmonic peaks.", "Let us expand $\\hat{\\mathcal {H}}_{\\textbf {k}+\\textbf {A}(t)}(t)$ in terms of $i\\textbf {A}(t)\\cdot \\textbf {d}_i(t)$ as $\\hat{\\mathcal {H}}_{\\textbf {k}+\\textbf {A}(t)}(t) \\approx \\hat{\\mathcal {H}}_{\\textbf {k}}(t) + \\textbf {A}(t)\\cdot \\nabla _\\textbf {k} \\hat{\\mathcal {H}}_{\\textbf {k}}(t).", "$ The second term in the above equation can be treated as a perturbation $\\hat{\\mathcal {H}}_{\\textbf {k}}^\\prime (t) = \\textbf {A}(t)\\cdot \\hat{\\textbf {J}}(t)$ , where $\\hat{\\textbf {J}} = \\nabla _{\\textbf {A}(t)}\\hat{\\mathcal {H}}_{\\textbf {k}+\\textbf {A}(t)}$ is the current operator in the Bloch basis.", "In Eq.", "(REF ), higher-order terms are neglected.", "By following Ref.", "[66] and assuming that the electron is initially in the Floquet state $|\\phi _i^{\\rm F}\\rangle $ , we can solve the time-dependent Schrödinger equation within first-order perturbation theory, and the $\\mu ^{\\rm th}$ component of the current can be written as $\\begin{split}J_\\mu (t) &= \\langle \\phi _i^{\\rm F}(t)| \\hat{J}_\\mu (t) | \\phi _i^{\\rm F}(t)\\rangle \\\\&-\\sum _{e\\ne i} \\int _{-\\infty }^t i dt^{\\prime } e^{-i\\omega _{ei}(t-t^{\\prime })}\\chi ^{\\rm F}_{\\mu \\nu }(t,t^{\\prime }) A_\\nu (t^{\\prime })+ \\textrm {c.c.", "}\\end{split}$ Here, $\\chi ^{\\rm F}_{\\mu \\nu }(t,t^{\\prime }) = \\langle \\phi 
_i^{\\rm F}(t)| \\hat{J}_\\mu (t) | \\phi _e^{\\rm F}(t)\\rangle \\langle \\phi _e^{\\rm F}(t^{\\prime })| \\hat{J}_\\nu (t^{\\prime }) | \\phi _i^{\\rm F}(t^{\\prime })\\rangle $ .", "From the above equation, it is apparent that the second term corresponds to the generation of the sidebands via a Raman process.", "The symmetry constraint for the $m^{\\rm th}$ -order sideband can be written as $\\hat{X}^t\\textbf {E}_{s,m}(t) [\\hat{X}^t\\textbf {E}(t)]^\\dagger $ = $\\textbf {E}_{s,m} \\textbf {E}^\\dagger (t)$ , provided the spatial symmetries of $\\hat{X}^t$ and of the probe pulse are the same [66].", "Here, $\\textbf {E}_{s,m}(t)$ and $\\textbf {E}(t)$ are, respectively, the electric fields associated with the $m^{\\rm th}$ -order sideband and with the probe laser; and $\\hat{X}^t$ is the dynamical symmetry operation.", "The quantity $\\textbf {E}_{s,m}(t)\\textbf {E}(t)^\\dagger $ is denoted by $\\mathcal {R}_m(t)$ and is known as the Raman tensor [66].", "Thus the selection rules for the sidebands depend on the invariance of the Raman tensor under the DSs of the Floquet system.", "There are two DSs corresponding to the coherent $\\textsf {iLO}$ phonon mode, as shown in Fig.", "REF .", "We define $\\tau _n$ as the time translation by $\\tau _{\\textrm {ph}}/n$ , $\\hat{C}_{n\\mu }$ as the rotation by 2$\\pi /n$ about the $\\mu $ -axis, $\\hat{\\sigma }_\\mu $ as the reflection with respect to the $\\mu $ -axis, and $\\hat{\\mathcal {T}}$ as the time-reversal operator.", "The symmetry operations $\\mathcal {D}_1 = \\hat{\\sigma }_x \\cdot \\tau _2$ [see Fig.", "REF (a)] and $\\mathcal {D}_2 = \\hat{\\sigma _{x}}$ [see Fig.", "REF (b)] leave the system invariant.", "Figure: Schematic representations of the dynamical symmetries of the Floquet Hamiltonian: (a) $\\hat{\\mathcal {D}}_{1} = \\hat{\\sigma _{x}}\\cdot \\hat{\\tau }_2$ , (b) $\\hat{\\mathcal {D}}_{2} = \\hat{\\sigma _{x}}$ .", "The arrows show the displacements of the atoms for a particular phonon mode.", "The selection rules for the sidebands and their polarisation directions are obtained from the DSs shown in Fig.", "REF and require the condition $\\hat{\\mathcal {D}}\\mathcal {R}_m(t) = \\mathcal {R}_m(t)$ .", "We write the temporal part of the $m^{\\rm th}$ -order sideband as $e^{i(\\omega _0 \\pm m\\omega _{\\textrm {ph}})t+\\phi _0}$ , and that of the probe laser pulse as $e^{i\\omega _0 t}$ .", "In this situation, the Raman tensor is explicitly written as $\\mathcal {R}_m(t) = e^{i(\\pm m \\omega _{\\textrm {ph}}t + \\phi _0)} \\begin{bmatrix}E_{s,m_{x}}E^{*}_{x} & E_{s,m_{x}}E^{*}_{y} \\\\E_{s,m_{y}}E^{*}_{x} & E_{s,m_{y}}E^{*}_{y}\\end{bmatrix}.$ When the probe laser is polarised along the X-axis, the invariance condition for the Raman tensor $\\hat{\\mathcal {D}_1}\\mathcal {R}_m(t) = \\mathcal {R}_m(t)$ reduces to $e^{i(\\pm m\\omega _{\\textrm {ph}}t)} \\begin{bmatrix}E_{s,m_{x}} \\\\E_{s,m_{y}}\\end{bmatrix}= e^{i[\\pm m(\\omega _{\\textrm {ph}}t +\\pi )]} \\begin{bmatrix}E_{s,m_{x}} \\\\-E_{s,m_{y}}\\end{bmatrix}.$ The selection rule for the $m^{\\rm th}$ -order sideband is as follows: when $m$ is odd (even), the polarisation of the sideband will be along the Y (X) direction.", "Our observations in Fig.", "$\\ref {HHGlattice}$ (a) are consistent with Eq.", "(REF ).", "When the $\\textsf {iLO}$ phonon mode is excited and the probe pulse is along the $\\mathsf {\\Gamma -M}$ direction, $\\hat{\\sigma }_y \\cdot \\tau _2$ and $\\hat{C}_{2Z}$ are the DSs which leave the Raman tensor invariant.", "It is straightforward 
to see that the selection rules for the $m^{\\rm th}$ -order sideband become: when $m$ is odd (even), the polarisation of the sidebands will be along the X (Y) direction.", "On the other hand, when the $\\textsf {iTO}$ phonon mode is excited and the probe pulse is along the $\\mathsf {\\Gamma -K}$ ($\\mathsf {\\Gamma -M}$ ) direction, $\\hat{\\sigma }_x$ ($\\hat{\\sigma }_y$ ) is the DS which leaves the Raman tensor invariant [see Fig.", "REF (b)].", "This symmetry restricts the polarisation of the sidebands to be along the direction of the probe pulse.", "Our results are consistent with the observations made in Fig.", "$\\ref {HHGlattice}$ .", "With increased probe intensity, higher-order harmonics and sidebands will appear.", "To summarise, we have established that high-harmonic spectroscopy is sensitive to coherent lattice dynamics in solids.", "The high-harmonic spectrum is modulated by the frequency of the excited phonon mode within the solid.", "Both in-plane E$_{2g}$ Raman-active phonon modes of monolayer graphene lead to the generation of higher-order sidebands, along with the main harmonic peaks.", "In the case of $\\textsf {iLO}$ phonon mode excitation, the even- and odd-order sidebands are polarised parallel and perpendicular to the polarisation of the harmonic-generating probe pulse, respectively.", "In the case of the $\\textsf {iTO}$ phonon mode, all sidebands are polarised along the polarisation of the harmonic-generating probe pulse.", "The polarisations of the sidebands are dictated by the dynamical symmetries of the combined system, which includes the phonon modes and the probe laser pulse.", "Therefore, the polarisation properties are a sensitive probe of these dynamical symmetries.", "The presence of a high-harmonic signal perpendicular to the polarisation of the probe pulse is a signature of lattice-excitation-driven breaking of the reflection symmetry planes.", "The present work paves the way for probing phonon-driven processes in solids and non-linear phononics with sub-cycle temporal resolution." ], [ "Acknowledgements", "We acknowledge fruitful discussions with Sumiran Pujari (IIT Bombay), Dipanshu Bansal (IIT Bombay) and Klaus Reimann (MBI Berlin).", "G. D. acknowledges support from Science and Engineering Research Board (SERB) India (Project No.", "MTR/2021/000138).", "D.K.", "acknowledges support from CRC 1375 “NOA–Nonlinear optics down to atomic scales\", Project C4, funded by the Deutsche Forschungsgemeinschaft (DFG)." ] ]
2207.10440
[ [ "Splitting schemes for FitzHugh--Nagumo stochastic partial differential\n equations" ], [ "Abstract We design and study splitting integrators for the temporal discretization of the stochastic FitzHugh--Nagumo system.", "This system is a model for signal propagation in nerve cells where the voltage variable is solution of a one-dimensional parabolic PDE with a cubic nonlinearity driven by additive space-time white noise.", "We first show that the numerical solutions have finite moments.", "We then prove that the splitting schemes have, at least, the strong rate of convergence $1/4$.", "Finally, numerical experiments illustrating the performance of the splitting schemes are provided." ], [ "Introduction", "The deterministic FitzHugh–Nagumo system is a simplified two-dimensional version of the famous Hodgkin–Huxley model which describes how action potentials propagate along an axon.", "Noise is omnipresent in neural systems and arises from different sources: it could be internal noise (such as random synaptic input from other neurons) or external noise, see for instance [31] for details.", "It was noted in [44] that the addition of an appropriate amount of noise in the model helps to detect weak signals.", "All this has attracted a large body of works on the analysis of the influence of external random perturbations in neurons in the recent years, see for instance [30], [31], [33], [38], [41], [44], [45], [47], [48].", "In this article, we consider the stochastic FitzHugh–Nagumo system $\\left\\lbrace \\begin{aligned}&\\frac{\\partial }{\\partial t} u(t,\\zeta )=\\frac{\\partial ^2}{\\partial \\zeta ^2} u(t,\\zeta )+u(t,\\zeta )-u^3(t,\\zeta )-v(t,\\zeta )+ \\frac{\\partial ^2}{\\partial t\\partial \\zeta } W(t,\\zeta ),\\\\&\\frac{\\partial }{\\partial t} v(t,\\zeta )=\\gamma _1 u(t,\\zeta )-\\gamma _2 v(t,\\zeta )+\\beta ,\\\\&\\frac{\\partial }{\\partial \\zeta }u(t,0)=\\frac{\\partial }{\\partial \\zeta }u(t,1)=0,\\\\&u(0,\\zeta )=u_0(\\zeta ),v(0,\\zeta )=v_0(\\zeta ),\\end{aligned}\\right.$ for $\\zeta \\in (0,1)$ and $t\\ge 0$ .", "The objective of this article is to design and analyse numerical integrators, which treat explicitly the nonlinearity, for the temporal discretization of the system above, based on splitting strategies.", "In the stochastic partial differential equation (SPDE) above, the unknowns $u=\\bigl (u(t)\\bigr )_{t\\ge 0}$ and $v=\\bigl (v(t)\\bigr )_{t\\ge 0}$ are $L^2(0,1)$ -valued stochastic processes, with initial values $u_0,v_0\\in L^2(0,1)$ , see Section  and the standard monograph [21] on stochastic evolution equations in Hilbert spaces.", "In addition, $\\gamma _1,\\gamma _2,\\beta \\in \\mathbb {R}$ are three real-valued parameters, $\\Delta =\\frac{\\partial ^2}{\\partial \\zeta ^2}$ is the Laplace operator endowed with homogeneous Neumann boundary conditions, and $\\bigl (W(t)\\bigr )_{t\\ge 0}$ is a cylindrical Wiener process, meaning that the component $u$ is driven by space-time white noise.", "The component $u$ represents the voltage variable while the component $v$ the recovery variable.", "The noise represents random fluctuations of the membrane potential, see [44] for a related model with a scalar noise.", "Note that in the considered system only the evolution of the voltage variable $u$ is driven by a Wiener process.", "Having noise for the evolution of the recovery variable $v$ would correspond to modelling different biological phenomena which are not treated in this work.", "The major difficulty in the theoretical and numerical analysis of 
the SPDE system above is the nonlinearity $u-u^3$ appearing in the evolution of the component $u$ : this nonlinearity is not globally Lipschitz continuous and has polynomial growth.", "As proved in [3], using a standard explicit discretization like the Euler–Maruyama method would yield numerical schemes which usually do not converge: more precisely, moment bounds, uniform with respect to the time step size, would not hold for such methods.", "For an efficient numerical simulation of the above SPDE system, we propose to exploit a splitting strategy to define integrators and we show that appropriate moment bounds and strong error estimates can be obtained.", "In a nutshell, the main idea of a splitting strategy is to decompose the vector field appearing in the evolution equation into several parts, in order to exhibit subsystems which can be integrated exactly (or easily).", "One then composes the (exact or approximate) flows associated with the subsystems to define integrators applied to the original problem.", "Splitting schemes have a long history in the numerical analysis of ordinary and partial differential equations, see for instance [5], [25], [29], [36] and references therein.", "Splitting integrators have recently been applied and analysed in the context of stochastic ordinary and partial differential equations.", "Without being exhaustive, we refer the interested reader to [1], [2], [6], [15], [18], [28], [37] for the finite-dimensional context and to [4], [10], [11], [12], [14], [19], [20], [22], [23], [32], [34], [35], [39] for the context of SPDEs.", "The main result of this paper is a strong convergence result, with rate of convergence $1/4$ , for easy-to-implement splitting integrators (see Equation (REF ) in Subsection REF ) for the time discretization of the SPDE defined above; see Theorem REF for a precise statement.", "To the best of our knowledge, Theorem REF is the first strong convergence result obtained for a time discretization scheme applied to the stochastic FitzHugh–Nagumo SPDE system.", "The first non-trivial step of the analysis is to obtain suitable moment bounds for the splitting scheme, see Theorem REF .", "Note that the proof of the moment bounds of Theorem REF is inspired by the article [13] where splitting schemes for the stochastic Allen–Cahn equation $\\text{d} u(t)=\\Delta u(t)\\,\\text{d} t+(u(t)-u^3(t))\\,\\text{d} t+ \\text{d} W(t)$ were studied.", "The proof of the strong convergence error estimates of Theorem REF is inspired by the article [12].", "However, one needs a dedicated and detailed analysis since the considered stochastic FitzHugh–Nagumo system is not a parabolic stochastic evolution system, and several arguments are non-trivial.", "Note also that the construction of the splitting scheme is inspired by the recent article [16] which treats a finite-dimensional version $\\left\\lbrace \\begin{aligned}&\\text{d} u(t)=(u(t)-u^3(t)-v(t))\\,\\text{d} t,\\\\&\\text{d} v(t)=(\\gamma _1 u(t)-\\gamma _2 v(t)+\\beta )\\,\\text{d} t+\\text{d} B(t),\\\\&u(0)=u_0,v(0)=v_0,\\end{aligned}\\right.$ of the stochastic FitzHugh–Nagumo system (where the finite-dimensional noise $B$ is in the $v$ -component).", "We now review the literature related to this work.", "The recent article [16] analyses the strong convergence of splitting schemes for a class of semi-linear stochastic differential equations (SDEs) as well as the preservation of possible structural properties of the problem.", "Applications of the proposed schemes to the stochastic FitzHugh–Nagumo SDE are 
also presented.", "The work [46] performs extensive numerical simulations on the FitzHugh–Nagumo equation with space-time white noise in $1d$ .", "A finite difference discretization is used in space, while the classical Euler–Maruyama scheme is used in time.", "The article [7] studies numerically the FitzHugh–Nagumo equation with colored noise in $2d$ .", "In particular, the authors use a finite element discretization in space and the semi-implicit Euler–Maruyama scheme in time.", "The two previously mentioned works employ a crude explicit discretization of the nonlinearity and may therefore suffer from the moment-bound issues discussed above.", "The work [24] proves convergence (without rates) of a fully-discrete numerical scheme, based on a Galerkin method in space and the tamed Euler scheme in time, for a general SPDE with super-linearly growing operators.", "This is then applied to the FitzHugh–Nagumo equation with space-time white noise in $1d$ .", "The articles [42] and [43] prove strong convergence rates for a finite difference spatial discretization of the FitzHugh–Nagumo equation with space-time white noise in $1d$ .", "This article is organized as follows.", "The setting is given in Section ; in particular, this allows us to state a well-posedness result for the considered stochastic FitzHugh–Nagumo system.", "The splitting strategy, the proposed integrators and the main results of the paper are then presented in Sections REF , REF and REF , respectively.", "Several auxiliary results are stated and proved in Section .", "Section  gives the proofs of Theorems REF and REF .", "Finally, numerical experiments are provided in Section ." ], [ "Setting", "This section is devoted to introducing the functional framework, the linear and nonlinear operators, and the Wiener process.", "This allows us to consider the stochastic FitzHugh–Nagumo SPDE system as a stochastic evolution equation in the classical framework of [21]."
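Before introducing the functional framework, the following minimal Python sketch illustrates the splitting strategy on the finite-dimensional FitzHugh–Nagumo SDE recalled in the introduction (noise acting on the v-component): the cubic subsystem and the linear subsystem are integrated exactly and then composed with a Gaussian increment. The composition order mimics the map phi_tau = phi_tau^L o phi_tau^NL used below for the SPDE; this is one natural Lie–Trotter step, not necessarily the exact scheme analysed in [16], and all parameter values are arbitrary illustrative choices.

import numpy as np
from scipy.linalg import expm

def phi_nl(u, v, t, beta):
    # exact flow of du/dt = u - u^3, dv/dt = beta (same closed formula as in the SPDE setting below)
    u_new = u / np.sqrt(u**2 + (1.0 - u**2) * np.exp(-2.0 * t))
    return u_new, v + beta * t

def phi_l(u, v, t, gamma1, gamma2):
    # exact flow of the linear subsystem d(u,v)/dt = B(u,v) with B = [[0,-1],[gamma1,-gamma2]]
    M = expm(t * np.array([[0.0, -1.0], [gamma1, -gamma2]]))
    return M[0, 0] * u + M[0, 1] * v, M[1, 0] * u + M[1, 1] * v

def lie_trotter_step(u, v, tau, beta, gamma1, gamma2, rng):
    # one Lie-Trotter step: nonlinear flow, then linear flow, then the Wiener increment on v
    u, v = phi_nl(u, v, tau, beta)
    u, v = phi_l(u, v, tau, gamma1, gamma2)
    return u, v + np.sqrt(tau) * rng.standard_normal()

# illustrative run (parameter values are arbitrary, not taken from the text)
rng = np.random.default_rng(0)
u, v = 0.0, 0.0
tau, beta, gamma1, gamma2 = 1e-2, 0.056, 0.08, 0.064
for _ in range(1000):
    u, v = lie_trotter_step(u, v, tau, beta, gamma1, gamma2, rng)

Note that no step-size restriction is needed for the boundedness of such iterates, since the cubic term is handled by its exact flow rather than by an explicit Euler increment.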
], [ "Functional framework", "Let us first introduce the infinite-dimensional, separable Hilbert space $H=L^2(0,1)$ of square integrable functions from $(0,1)$ to $\\mathbb {R}$ .", "This space is equipped with the inner product $\\langle \\cdot ,\\cdot \\rangle _H$ and the norm $\\Vert \\cdot \\Vert _H$ which satisfy $\\langle u_1,u_2\\rangle _H=\\displaystyle \\int _0^1 u_1(\\zeta )u_2(\\zeta )\\,\\text{d}\\zeta ,\\quad \\Vert u\\Vert _{H}=\\sqrt{\\langle u,u\\rangle _H},$ respectively, for all $u_1,u_2,u\\in H$ .", "Let us then introduce the product space $\\mathcal {H}=H\\times H$ , which is also an infinite-dimensional, separable Hilbert space, with the inner product $\\langle \\cdot ,\\cdot \\rangle _\\mathcal {H}$ and the norm $\\Vert \\cdot \\Vert _\\mathcal {H}$ defined by $\\langle x_1,x_2\\rangle _\\mathcal {H}=\\langle u_1,u_2\\rangle _H+\\langle v_1,v_2\\rangle _H,\\quad \\Vert x\\Vert _\\mathcal {H}=\\sqrt{\\Vert u\\Vert _H^2+\\Vert v\\Vert _H^2},$ for all $x_1=(u_1,v_1),x_2=(u_2,v_2),x=(u,v)\\in \\mathcal {H}$ .", "Let also $E=\\mathcal {C}^0([0,1])$ be the space of continuous functions from $[0,1]$ to $\\mathbb {R}$ , and set $\\mathcal {E}=E\\times E$ .", "Then $E$ and $\\mathcal {E}$ are separable Banach spaces, with the norms $\\Vert \\cdot \\Vert _E$ and $\\Vert \\cdot \\Vert _\\mathcal {E}$ defined by $\\Vert u\\Vert _E=\\max _{\\zeta \\in [0,1]}|u(\\zeta )|,\\quad \\Vert x\\Vert _\\mathcal {E}=\\max \\bigl (\\Vert u\\Vert _E,\\Vert v\\Vert _E\\bigr )$ for all $u\\in E$ and $x=(u,v)\\in \\mathcal {E}$ .", "Let us denote the inner product and the norm in the finite-dimensional Euclidean space $\\mathbb {R}^2$ by $\\langle \\cdot ,\\cdot \\rangle $ and $\\Vert \\cdot \\Vert $ respectively.", "If $M$ is a $2\\times 2$ real-valued matrix, let $M=\\underset{x\\in \\mathbb {R}^2;~\\Vert x\\Vert =1}{\\sup }~\\Vert Mx\\Vert $ .", "Finally, in the sequel, $\\mathbb {N}=\\lbrace 1,2,\\ldots \\rbrace $ denotes the set of integers and $\\mathbb {N}_0=\\lbrace 0\\rbrace \\cup \\mathbb {N}=\\lbrace 0,1,\\ldots \\rbrace $ denotes the set of nonnegative integers.", "We often write $j\\ge 1$ (resp.", "$j\\ge 0$ ) instead of $j\\in \\mathbb {N}$ (resp.", "$j\\in \\mathbb {N}_0$ )." 
], [ "Linear operators", "This subsection presents the material required to use the semigroup approach for SPDEs, see for instance [21].", "For all $j\\in \\mathbb {N}$ , set $\\lambda _j=(j\\pi )^2$ and $e_j(\\zeta )=\\sqrt{2}\\cos (j\\pi \\zeta )$ for all $\\zeta \\in [0,1]$ .", "In addition, set $\\lambda _0=0$ and $e_0(\\zeta )=1$ for all $\\zeta \\in [0,1]$ .", "Then $\\bigl (e_j\\bigr )_{j\\ge 0}$ is a complete orthonormal system of $H$ , and one has $\\Delta e_j=-\\lambda _je_j$ for all $j\\ge 0$ , where $\\Delta $ denotes the Laplace operator with homogeneous Neumann boundary conditions.", "For all $u\\in H$ and all $t\\ge 0$ , set $e^{t\\Delta }u=\\sum _{j\\ge 0}e^{-t\\lambda _j}\\langle u,e_j\\rangle _{H} e_j.$ Then, for any $u_0\\in H$ , the mapping $(t,\\zeta )\\mapsto u(t,\\zeta )=e^{t\\Delta }u_0(\\zeta )$ is the unique solution of the heat equation on $(0,1)$ with homogeneous Neumann boundary conditions and initial value $u(0,\\cdot )=u_0$ : $\\left\\lbrace \\begin{aligned}&\\frac{\\partial u(t,\\zeta )}{\\partial t}=\\Delta u(t,\\zeta ),\\quad t>0,~\\zeta \\in (0,1),\\\\&\\frac{\\partial u(t,0)}{\\partial \\zeta } =\\frac{\\partial u(t,1)}{\\partial \\zeta } =0,\\quad t>0,\\\\&u(0,\\zeta )=u_0(\\zeta ),\\quad \\zeta \\in (0,1).\\end{aligned}\\right.$ For all $\\alpha \\in [0,2]$ , set $&H^\\alpha =\\left\\lbrace u\\in H;~\\sum _{j\\ge 0}\\lambda _j^{\\alpha }\\langle u,e_j\\rangle _{H}^2<\\infty \\right\\rbrace ,\\\\&(-\\Delta )^{\\frac{\\alpha }{2}} u=\\sum _{j\\ge 0}\\lambda _j^{\\frac{\\alpha }{2}}\\langle u,e_j\\rangle _{H} e_j,\\quad u\\in H^\\alpha .$ Observe that $H^0=H=L^2(0,1)$ .", "The Laplace operator $\\Delta $ with homogeneous Neumann boundary conditions is a self-adjoint unbounded linear operator on $H$ , with domain $D(\\Delta )=H^2$ .", "We also let $\\mathcal {H}^\\alpha =H^\\alpha \\times H$ for all $\\alpha \\in [0,2]$ .", "Let us now introduce the linear operator $\\Lambda $ , defined as follows: for all $x=(u,v)\\in \\mathcal {H}^2$ , set $\\Lambda x=\\begin{pmatrix} -\\Delta u\\\\ 0\\end{pmatrix}.$ Then $\\Lambda $ is a self-adjoint unbounded linear operator on $\\mathcal {H}$ , with domain $D(\\Lambda )=\\mathcal {H}^2$ .", "For all $x=(u,v)\\in \\mathcal {H}$ and $t\\ge 0$ , set $e^{-t\\Lambda }x=\\begin{pmatrix} e^{t\\Delta }u\\\\ v\\end{pmatrix}.$ Regularity estimates for this operator are presented in Section  below." 
], [ "Nonlinear operator", "Let $\\beta ,\\gamma _1,\\gamma _2\\in \\mathbb {R}$ be parameters of the model.", "Define the mapping $F:\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ such that for all $x=(u,v)\\in \\mathbb {R}^2$ one has $F(x)=\\begin{pmatrix} u-u^3-v\\\\ \\gamma _1 u- \\gamma _2v+\\beta \\end{pmatrix}.$ In order to define splitting schemes, it is convenient to introduce two auxiliary mappings $F^{\\rm NL}:\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ and $F^{\\rm L}:\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ defined as follows: for all $x=(u,v)\\in \\mathbb {R}^2$ , set $&F^{\\rm NL}(x)=\\begin{pmatrix} u-u^3 \\\\ \\beta \\end{pmatrix} \\\\&F^{\\rm L}(x)=\\begin{pmatrix} -v\\\\ \\gamma _1 u-\\gamma _2v\\end{pmatrix}=Bx,$ where the matrix $B$ is defined by $B=\\begin{pmatrix} 0 & -1\\\\ \\gamma _1 & -\\gamma _2\\end{pmatrix}.$ One then has $F(x)=F^{\\rm NL}(x)+F^{\\rm L}(x)$ for all $x\\in \\mathbb {R}^2$ .", "The mapping $F^{\\rm L}$ is globally Lipschitz continuous: for all $x_1,x_2\\in \\mathbb {R}^2$ one has $\\Vert F^{\\rm L}(x_2)-F^{\\rm L}(x_1)\\Vert \\le B\\Vert x_2-x_1\\Vert .$ However $F$ and $F^{\\rm NL}$ are only locally Lipschitz continuous, and satisfy a one-sided Lipschitz continuity property: there exists $C\\in (0,\\infty )$ such that for all $x_1,x_2\\in \\mathbb {R}^2$ one has $\\langle x_2-x_1,F^{\\rm NL}(x_2)-F^{\\rm NL}(x_1)\\rangle \\le C\\Vert x_2-x_1\\Vert ^2,\\quad \\langle x_2-x_1,F(x_2)-F(x_1)\\rangle \\le C\\Vert x_2-x_1\\Vert ^2.$ In the sequel, an abuse of notation is used for simplicity: the same notation is employed for a mapping $f:\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ and for the associated Nemytskii operator defined on $\\mathcal {H}$ or on $\\mathcal {E}$ by $f(u,v)=f(u(\\cdot ),v(\\cdot ))$ ." ], [ "Wiener process", "It remains to define the noise that drives the stochastic FitzHugh–Nagumo system.", "Let $\\bigl (W(t)\\bigr )_{t\\ge 0}$ be a cylindrical Wiener process on $H$ : given a sequence $\\bigl (\\beta _j(\\cdot )\\bigr )_{j\\ge 0}$ of independent standard real-valued Wiener processes, defined on a probability space $(\\Omega ,\\mathcal {F},\\mathbb {P})$ equipped with a filtration $\\bigl (\\mathcal {F}_t\\bigr )_{t\\ge 0}$ which satisfies the usual conditions and where $\\mathbb {E}[\\cdot ]$ denotes the expectation operator on the probability space, set $W(t)=\\sum _{j\\ge 0}\\beta _j(t)e_j.$ For all $t\\ge 0$ , define $\\mathcal {W}(t)=\\begin{pmatrix} W(t)\\\\ 0\\end{pmatrix}=\\sum _{j\\ge 0}\\beta _j(t)\\begin{pmatrix} e_j\\\\0\\end{pmatrix},$ then $\\bigl (\\mathcal {W}(t)\\bigr )_{t\\ge 0}$ is a generalized $\\mathcal {Q}$ -Wiener process on $\\mathcal {H}$ , with the covariance operator $\\mathcal {Q}=\\begin{pmatrix}I & 0\\\\ 0 & 0\\end{pmatrix}.$ Note that almost surely $W(t)\\notin H$ and $\\mathcal {W}(t)\\notin \\mathcal {H}$ for all $t>0$ .", "However, for all $T\\ge 0$ , the Itô stochastic integrals $\\int _0^T L(t)\\,\\text{d}W(t)$ and $\\int _0^T \\mathcal {L}(t)\\,\\text{d}\\mathcal {W}(t)$ are well-defined $H$ -valued and $\\mathcal {H}$ -valued random variables respectively, if $\\bigl (L(t)\\bigr )_{0\\le t\\le T}$ and $\\bigl (\\mathcal {L}(t)\\bigr )_{0\\le t\\le T}$ are adapted processes which satisfy $\\sum _{j\\ge 0}\\int _0^T \\mathbb {E}[\\Vert L(t)e_j\\Vert _H^2]\\,\\text{d}t<\\infty $ and $\\sum _{j\\ge 0}\\int _0^T \\mathbb {E}[\\Vert \\mathcal {L}(t)\\begin{pmatrix}e_{j}\\\\ 0\\end{pmatrix}\\Vert _{\\mathcal {H}}^2]\\,\\text{d}t<\\infty $ respectively.", "Observe that for all $T\\ge 0$ one has $\\sum 
_{j\\ge 0} \\int _{0}^{T}\\Vert e^{t\\Delta }e_j\\Vert _H^2\\,\\text{d}t=\\sum _{j\\ge 0} \\int _{0}^{T}\\Vert e^{-t\\Lambda }\\begin{pmatrix}e_j\\\\0\\end{pmatrix}\\Vert _\\mathcal {H}^2\\,\\text{d}t\\le T+\\sum _{j\\ge 1}\\lambda _j^{-1}<\\infty .$ Therefore, for all $t\\ge 0$ one can define the $H$ -valued random variable $Z(t)$ and the $\\mathcal {H}$ -valued random variable $\\mathcal {Z}(t)$ , called the stochastic convolutions, by $\\begin{aligned}Z(t)&=\\int _0^t e^{(t-s)\\Delta }\\,\\text{d}W(s),\\\\\\mathcal {Z}(t)&=\\int _0^t e^{-(t-s)\\Lambda }\\,\\text{d}\\mathcal {W}(s).\\end{aligned}$ The processes $\\bigl (Z(t)\\bigr )_{t\\ge 0}$ and $\\bigl (\\mathcal {Z}(t)\\bigr )_{t\\ge 0}$ are interpreted as the mild solutions of the stochastic evolution equations $\\text{d}Z(t)&=\\Delta Z(t)\\,\\text{d}t+\\text{d}W(t),\\\\\\text{d}\\mathcal {Z}(t)&=-\\Lambda \\mathcal {Z}(t)\\,\\text{d}t+\\text{d}\\mathcal {W}(t)$ with initial values $Z(0)=0$ and $\\mathcal {Z}(0)=0$ .", "Note that $\\mathcal {Z}(t)=\\begin{pmatrix}Z(t)\\\\0\\end{pmatrix}$ for all $t\\ge 0$ ." ], [ "The stochastic FitzHugh–Nagumo SPDE system", "In this work, we study numerical schemes for the FitzHugh–Nagumo stochastic system for signal propagation in nerve cells.", "This system is written as the stochastic evolution system $\\left\\lbrace \\begin{aligned}&\\text{d}u(t)=\\Delta u(t)\\,\\text{d}t+(u(t)-u^3(t)-v(t))\\,\\text{d}t+ \\text{d}W(t),\\\\&\\text{d}v(t)=(\\gamma _1 u(t)-\\gamma _2v(t)+\\beta )\\,\\text{d}t,\\\\&u(0)=u_0,v(0)=v_0,\\end{aligned}\\right.$ where the unknowns $u(\\cdot )=\\bigl (u(t)\\bigr )_{t\\ge 0}$ and $v(\\cdot )=\\bigl (v(t)\\bigr )_{t\\ge 0}$ are $H$ -valued stochastic processes, and with initial values $u_0\\in H$ and $v_0\\in H$ .", "Recall that Neumann boundary conditions are used in the above system.", "Using the notation introduced above and setting $X(t)=(u(t),v(t))$ for all $t\\ge 0$ , the stochastic evolution system (REF ) is treated in the sequel as the stochastic evolution equation $\\text{d}X(t)=-\\Lambda X(t)\\,\\text{d}t+F(X(t))\\,\\text{d}t+\\text{d}\\mathcal {W}(t),\\, X(0)=x_0,$ with the initial value $x_0=(u_0,v_0)\\in \\mathcal {H}$ .", "For all $T\\in (0,\\infty )$ , a stochastic process $\\bigl (X(t)\\bigr )_{0\\le t\\le T}$ is called a mild solution of (REF ) if it has continuous trajectories with values in $\\mathcal {H}$ , and if for all $t\\in [0,T]$ one has $X(t)=e^{-t\\Lambda }x_0+\\int _0^t e^{-(t-s)\\Lambda }F(X(s))\\,\\text{d}s+\\int _0^t e^{-(t-s)\\Lambda }\\,\\text{d}\\mathcal {W}(s).$ In the framework presented in this section, the stochastic evolution equation (REF ) admits a unique global mild solution, for any initial value $x_0\\in \\mathcal {H}^{2\\alpha }\\cap \\mathcal {E}$ and for $\\alpha \\in [0,\\frac{1}{4})$ , see Proposition REF below.", "For simplicity, the initial values $u_0,v_0$ , resp.", "$x_0$ , appearing in (REF ), resp.", "(REF ), are deterministic.", "It would be straightforward to extend the results below for random initial values which are independent of the Wiener process and are assumed to satisfy appropriate moment bounds, using a conditioning argument." 
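For later reference, the stochastic convolution can be sampled exactly on a time grid, mode by mode: the coefficient z_j(t) = <Z(t), e_j>_H satisfies z_j(t_{n+1}) = e^{-lambda_j tau} z_j(t_n) + xi_{j,n}, where the xi_{j,n} are independent centred Gaussian random variables with variance int_0^tau e^{-2 lambda_j s} ds = (1 - e^{-2 lambda_j tau}) / (2 lambda_j), equal to tau for lambda_0 = 0. The following Python sketch implements this recursion for a finite number of modes; the truncation level and step size are illustrative choices.

import numpy as np

def stochastic_convolution_step(z, tau, lam, rng):
    # one step of Z(t_{n+1}) = e^{tau Delta} Z(t_n) + int_{t_n}^{t_{n+1}} e^{(t_{n+1}-s) Delta} dW(s),
    # written on the spectral coefficients z_j = <Z, e_j>_H; the stochastic integral contributes
    # independent centred Gaussians with variance (1 - exp(-2 lambda_j tau)) / (2 lambda_j),
    # which equals tau for the constant mode lambda_0 = 0
    var = np.full_like(lam, tau)
    pos = lam > 0.0
    var[pos] = (1.0 - np.exp(-2.0 * lam[pos] * tau)) / (2.0 * lam[pos])
    return np.exp(-lam * tau) * z + np.sqrt(var) * rng.standard_normal(lam.shape)

# illustrative use with 257 modes and step size tau = 1e-3
rng = np.random.default_rng(1)
lam = (np.pi * np.arange(257))**2
z = np.zeros_like(lam)                    # Z(0) = 0
for _ in range(1000):
    z = stochastic_convolution_step(z, 1e-3, lam, rng)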
], [ "Splitting schemes", "The time-step size of the integrators defined below is denoted by $\\tau $ .", "Without loss of generality, it is assumed that $\\tau \\in (0,\\tau _0)$ , where $\\tau _0$ is an arbitrary positive real number, and that there exists $T\\in (0,\\infty )$ and $N\\in \\mathbb {N}$ such that $\\tau =T/N$ .", "The notation $t_n=n\\tau $ for $n\\in \\lbrace 0,\\ldots ,N\\rbrace $ is used in the sequel.", "The increments of the Wiener processes $\\bigl (W(t)\\bigr )_{t\\ge 0}$ and $\\bigl (\\mathcal {W}(t)\\bigr )_{t\\ge 0}$ are denoted by $\\delta W_n=W(t_{n+1})-W(t_n),\\quad \\delta \\mathcal {W}_n=\\mathcal {W}(t_{n+1})-\\mathcal {W}(t_n)=\\begin{pmatrix} \\delta W_n\\\\0 \\end{pmatrix}.$ The proposed time integrators for the SPDE (REF ) are based on a splitting strategy.", "Recall that the main principle of splitting integrators is to decompose the vector field of the evolution problem in several parts, such that the arising subsystems are exactly (or easily) integrated.", "We define these subsystems in Subsection REF , then give the definitions of three splitting schemes in Subsection REF and state the main results of this article in Subsection REF ." ], [ "Solutions of auxiliary subsystems", "The construction of the proposed splitting schemes is based on the combination of exact or approximate solutions of the three subsystems considered below.", "$\\bullet $ The nonlinear differential equation (considered on the Euclidean space $\\mathbb {R}^2$ ) $\\left\\lbrace \\begin{aligned}&\\frac{\\text{d}x^{\\rm NL}(t)}{\\text{d}t}=F^{\\rm NL}(x^{\\rm NL}(t)),\\\\&x^{\\rm NL}(0)=x_0\\in \\mathbb {R}^2\\end{aligned}\\right.$ admits a unique global solution $\\bigl (x^{\\rm NL}(t)\\bigr )_{t\\ge 0}$ .", "This solution has the following exact expression, see for instance [13]: for all $t\\ge 0$ and $x_0=(u_0,v_0)\\in \\mathbb {R}^2$ , one has $x^{\\rm NL}(t)=\\phi _t^{\\rm NL}(x_0)=\\begin{pmatrix} \\frac{u_0}{\\sqrt{u_0^2+(1-u_0^2)e^{-2t}}} \\\\ v_0+\\beta t\\end{pmatrix}.$ $\\bullet $ The linear differential equation (considered on the Euclidean space $\\mathbb {R}^2$ ) $\\left\\lbrace \\begin{aligned}&\\frac{\\text{d}x^{\\rm L}(t)}{\\text{d}t}=F^{\\rm L}(x^{\\rm L}(t)),\\\\&x^{\\rm L}(0)=x_0\\in \\mathbb {R}^2\\end{aligned}\\right.$ admits a unique global solution $\\bigl (x^{\\rm L}(t)\\bigr )_{t\\ge 0}$ .", "This solution has the following expression: for all $t\\ge 0$ and $x_0=(u_0,v_0)\\in \\mathbb {R}^2$ , one has $x^{\\rm L}(t)=\\phi _t^{\\rm L}(x_0)=e^{tB}x_0.$ $\\bullet $ The stochastic evolution equation (considered on the Hilbert space $\\mathcal {H}$ ) $\\left\\lbrace \\begin{aligned}&\\text{d}X^{\\rm s}(t)=-\\Lambda X^{\\rm s}(t)\\,\\text{d}t+\\text{d}\\mathcal {W}(t)\\\\&X^{\\rm s}(0)=x_0\\in \\mathcal {H}\\end{aligned}\\right.$ admits a unique global solution $\\bigl (X^{\\rm s}(t)\\bigr )_{t\\ge 0}$ .", "This solution has the following expression: for all $t\\ge 0$ and $x_0=(u_0,v_0)\\in \\mathcal {H}$ , one has $X^{\\rm s}(t)=e^{-t\\Lambda }x_0+\\int _0^t e^{-(t-s)\\Lambda }\\text{d}\\mathcal {W}(s)=\\begin{pmatrix} e^{t\\Delta }u_0+\\int _0^t e^{(t-s)\\Delta }\\,\\text{d}W(s)\\\\ v_0\\end{pmatrix},$ see (REF ) for the expression of the stochastic convolution.", "For all $n\\in \\lbrace 0,\\ldots ,N-1\\rbrace $ , set $X^{\\rm s,exact}_{n}=X^{\\rm s}(t_{n})$ , then one has the following recursion formula $X^{\\rm s,exact}_{n+1}=e^{-\\tau \\Lambda }X^{\\rm s,exact}_n+\\int _{t_n}^{t_{n+1}}e^{-(t_{n+1}-s)\\Lambda }\\,\\text{d}\\mathcal {W}(s)$ recalling the 
notation $t_n=n\\tau $ .", "Instead of using the exact solution (REF ) of the stochastic convolution (REF ), one can use approximate solutions $\\bigl (X_n^{\\rm s,exp}\\bigr )_{n\\ge 0}=\\bigl (u_n^{\\rm s,exp},v_n^{\\rm s,exp}\\bigr )_{n\\ge 0}$ and $\\bigl (X_n^{\\rm s,imp}\\bigr )_{n\\ge 0}=\\bigl (u_n^{\\rm s,imp},v_n^{\\rm s,imp}\\bigr )_{n\\ge 0}$ defined by an exponential Euler scheme and a linear implicit Euler scheme respectively: $X_{n+1}^{\\rm s,exp}=e^{-\\tau \\Lambda }\\Bigl (X_n^{\\rm s,exp}+\\delta \\mathcal {W}_n\\Bigr )=\\begin{pmatrix} e^{\\tau \\Delta }\\bigl (u_n^{\\rm s,exp}+\\delta W_n\\bigr )\\\\ v_n^{\\rm s,exp}\\end{pmatrix},$ and $X_{n+1}^{\\rm s,imp}=\\bigl (I+\\tau \\Lambda \\bigr )^{-1}\\Bigl (X_n^{\\rm s,imp}+\\delta \\mathcal {W}_n\\Bigr )=\\begin{pmatrix} (I-\\tau \\Delta )^{-1}\\bigl (u_n^{\\rm s,imp}+\\delta W_n\\bigr )\\\\ v_n^{\\rm s,imp}\\end{pmatrix},$ with initial values $X_0^{\\rm s,exp}=X_0^{\\rm s,imp}=x_0=(u_0,v_0)\\in \\mathcal {H}$ , $u_0^{\\rm s,exp}=u_0^{\\rm s,imp}=u_0\\in H$ and $v_0^{\\rm s,exp}=v_0^{\\rm s,imp}=v_0\\in H$ ." ], [ "Definition of the splitting schemes", "We are now in position to introduce the three splitting schemes studied in this article.", "They are constructed using a Lie–Trotter strategy, where first the subsystems (REF ), (REF ) are solved exactly using the flow maps (REF ) and (REF ) respectively, and where the subsystem (REF ) is either solved exactly using (REF ) or approximately using (REF ) or (REF ).", "For the composition of the first two subsystems, define the mapping $\\phi _\\tau :\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ as follows: for all $\\tau \\in (0,\\tau _0)$ , set $\\phi _\\tau =\\phi _\\tau ^{\\rm L}\\circ \\phi _\\tau ^{\\rm NL}.$ Using the expression (REF ) for the exact solution (REF ) of (REF ) leads to the definition of the following explicit splitting scheme for the stochastic FitzHugh–Nagumo SPDE system (REF ): $X_{n+1}^{\\rm LT, exact}=e^{-\\tau \\Lambda }\\phi _\\tau \\bigl (X_n^{\\rm LT, exact}\\bigr )+\\int _{t_n}^{t_{n+1}}e^{-(t_{n+1}-s)\\Lambda }\\,\\text{d}\\mathcal {W}(s).$ Using the exponential Euler scheme (REF ) to approximate the solution of (REF ) leads to the definition of the following explicit splitting scheme for (REF ): $X_{n+1}^{\\rm LT, expo}=e^{-\\tau \\Lambda }\\phi _\\tau \\bigl (X_n^{\\rm LT, expo}\\bigr )+e^{-\\tau \\Lambda }\\delta \\mathcal {W}_n.$ Using the linear implicit Euler scheme (REF ) to approximate the solution of (REF ) leads to the definition of the following splitting scheme for (REF ): $X_{n+1}^{\\rm LT, imp}=(I+\\tau \\Lambda )^{-1}\\phi _\\tau \\bigl (X_n^{\\rm LT, imp}\\bigr )+(I+\\tau \\Lambda )^{-1}\\delta \\mathcal {W}_n.$ For these three Lie–Trotter splitting schemes (REF ), (REF ) and (REF ), the same initial value is imposed: $X_0^{\\rm LT, exact}=X_0^{\\rm LT, expo}=X_0^{\\rm LT, imp}=x_0\\in \\mathcal {H}.$ Before proceeding with the statements of the main results, let us give several observations and auxiliary tools.", "Observe that the three schemes (REF ), (REF ) and (REF ) can be written using the single formulation $X_{n+1}=\\mathcal {A}_\\tau \\phi _\\tau (X_n)+\\int _{t_n}^{t_{n+1}}\\mathcal {B}_{t_{n+1}-s}\\,\\text{d}\\mathcal {W}(s)$ which is used in the analysis below.", "The expressions of the linear operators $\\mathcal {A}_\\tau $ and $\\mathcal {B}_{t_{n+1}-s}$ for each of the three schemes are given by: $\\mathcal {A}_\\tau =e^{-\\tau \\Lambda }, \\mathcal {B}_{t_{n+1}-s}=e^{-(t_{n+1}-s)\\Lambda }$ for the scheme (REF ) 
$\\mathcal {A}_\\tau =\\mathcal {B}_{t_{n+1}-s}=e^{-\\tau \\Lambda }$ for the scheme (REF ), and $\\mathcal {A}_\\tau =\\mathcal {B}_{t_{n+1}-s}=(I+\\tau \\Lambda )^{-1}$ for the scheme (REF ).", "For any value $\\tau \\in (0,\\tau _0)$ of the time-step size, introduce the mapping $\\psi _{\\tau }:\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ defined as follows: for all $x\\in \\mathbb {R}^2$ , $\\psi _\\tau (x)=\\frac{\\phi _\\tau (x)-x}{\\tau }.$ The Lie–Trotter splitting scheme (REF ) is then written as $X_{n+1}=\\mathcal {A}_\\tau X_n+\\tau \\mathcal {A}_\\tau \\psi _\\tau (X_n)+\\int _{t_n}^{t_{n+1}}\\mathcal {B}_{t_{n+1}-s}\\,\\text{d}\\mathcal {W}(s)$ and can thus be interpreted as a numerical scheme applied to the auxiliary stochastic evolution equation $\\text{d}X_\\tau (t)=-\\Lambda X_\\tau (t)\\,\\text{d}t+\\psi _\\tau (X_\\tau (t))\\,\\text{d}t+\\text{d}\\mathcal {W}(t),\\, X_\\tau (0)=x_0.$ Note that the SPDE (REF ) is similar to the original problem (REF ), however the nonlinearity $F$ is replaced by the auxiliary mapping $\\psi _\\tau $ ." ], [ "Main results", "In this subsection, we state the main results of this article.", "First, we give moment bounds for the three splitting schemes (REF ), see Theorem REF .", "Then, we give strong error estimates, with rate of convergence $1/4$ , for the numerical approximations of the solution of the stochastic FitzHugh–Nagumo SPDE system (REF ), see Theorem REF .", "Theorem 3.1 For all $T\\in (0,\\infty )$ and $p\\in [1,\\infty )$ , there exists $C_p(T)\\in (0,\\infty )$ such that for all $x_0\\in \\mathcal {E}$ one has $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\underset{0\\le n\\le N}{\\sup }~\\mathbb {E}[\\Vert X_n\\Vert _\\mathcal {E}^p]\\le C_p(T)\\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^p\\bigr ),$ where $\\bigl (X_n\\bigr )_{n\\ge 0}$ is given by (REF ) (with initial value $X_0=x_0$ ), and where $T=N\\tau $ with $N\\in \\mathbb {N}$ .", "The proof of this theorem is postponed to Section .", "Remark 3.2 The nonlinear mapping $F$ is not globally Lipschitz continuous and has polynomial growth.", "Therefore, if one employs a standard implicit-explicit scheme applied directly to the original SPDE $\\mathcal {X}_{n+1}=\\mathcal {A}_\\tau \\mathcal {X}_n+\\tau \\mathcal {A}_\\tau F(\\mathcal {X}_n)+\\int _{t_n}^{t_{n+1}}\\mathcal {B}_{t_{n+1}-s}\\,\\text{d}\\mathcal {W}(s)$ with $\\mathcal {X}_0=x_0$ , where the same notation as for the scheme (REF ) is used, one has $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\underset{0\\le n\\le N}{\\sup }~\\mathbb {E}[\\Vert \\mathcal {X}_n\\Vert _\\mathcal {E}^p]=\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\underset{0\\le n\\le N}{\\sup }~\\mathbb {E}[\\Vert \\mathcal {X}_n\\Vert _\\mathcal {H}^p]=\\infty ,$ see for instance [3] for the stochastic Allen–Cahn equation and [24].", "As a consequence Theorem REF is not a trivial result and illustrates the superiority of the proposed explicit splitting scheme compared with a crude explicit discretization method.", "We are now in position to state our strong convergence result.", "Its proof is given in Section .", "Theorem 3.3 For all $T\\in (0,\\infty )$ , $p\\in [1,\\infty )$ and $\\alpha \\in [0,\\frac{1}{4})$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $x_0=(u_0,v_0)\\in \\mathcal {H}^{2\\alpha }\\cap \\mathcal {E}$ , all $\\tau \\in (0,\\tau _0)$ , one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X(t_n)-X_n\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)\\tau ^{\\alpha }\\bigl 
(1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr ).$ The order of convergence $1/4$ obtained in Theorem REF is consistent with the temporal Hölder regularity property of the trajectories $t\\mapsto X(t)\\in \\mathcal {H}$ .", "It is also consistent with the strong convergence rate obtained in [12] for the stochastic Allen–Cahn equation.", "However new arguments are required to study the FitzHugh–Nagumo system which is not a parabolic SPDE problem, and which has a cubic nonlinearity.", "Let us state two of the main auxiliary results which are used in the proofs of the main results.", "These propositions are proved in Subsection REF .", "Proposition 3.4 For all $\\tau \\in (0,\\tau _0)$ , the mapping $\\phi _\\tau :\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ defined by (REF ) is globally Lipschitz continuous.", "In addition, for all $\\tau \\in (0,\\tau _0)$ and all $x_1,x_2\\in \\mathbb {R}^2$ one has $\\Vert \\phi _\\tau (x_2)-\\phi _\\tau (x_1)\\Vert \\le e^{(1+B)\\tau }\\Vert x_2-x_1\\Vert .$ Proposition 3.5 There exists $C(\\tau _0)\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ , the mapping $\\psi _\\tau :\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ defined by (REF ) satisfies the following properties: for all $x_1,x_2\\in \\mathbb {R}^2$ , one has $&\\langle x_2-x_1,\\psi _\\tau (x_2)-\\psi _\\tau (x_1)\\rangle \\le C(\\tau _0)\\Vert x_2-x_1\\Vert ^2\\\\&\\Vert \\psi _\\tau (x_2)-\\psi _\\tau (x_1)\\Vert \\le C(\\tau _0)\\bigl (1+\\Vert x_1\\Vert ^3+\\Vert x_2\\Vert ^3\\bigr )\\Vert x_2-x_1\\Vert ,$ and for all $x\\in \\mathbb {R}^2$ one has $\\Vert \\psi _\\tau (x)-F(x)\\Vert \\le C(\\tau _0)\\tau \\bigl (1+\\Vert x\\Vert ^5\\bigr ).$ Finally, one has $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\Vert \\psi _\\tau (0)\\Vert <\\infty .$ The inequality (REF ) states that $\\psi _\\tau $ satisfies a one-sided Lipschitz continuity property which is uniform with respect to $\\tau \\in (0,\\tau _0)$ .", "This is similar to the property (REF ) satisfied by $F$ .", "It is straightforward to check that $\\psi _\\tau $ is in fact globally Lipschitz continuous for any fixed $\\tau \\in (0,\\tau _0)$ , however this property does not hold uniformly with respect to $\\tau \\in (0,\\tau _0)$ .", "Instead, one has the one-sided Lipschitz continuity property (REF ) and the local Lipschitz continuity property () which are both uniform with respect to $\\tau \\in (0,\\tau _0)$ ." ], [ "Preliminary results", "In this section we state and prove several results which are required for the analysis of the three splitting schemes of type (REF ).", "In particular, we give properties of the semigroup (Proposition REF ), we then prove the properties of the auxiliary mappings $\\phi _\\tau $ (Proposition REF ) and $\\psi _\\tau $ (Proposition REF ), and finally we study the well-posedness and moment bounds for the mild solution of the considered SPDE." 
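Before turning to these preliminary results, and to make the splitting schemes defined in the previous section concrete, here is a minimal Python sketch of one step of the Lie–Trotter scheme in its linear implicit Euler variant, X_{n+1} = (I + tau Lambda)^{-1} ( phi_tau(X_n) + delta W_n ). The finite-difference Neumann Laplacian, the grid approximation of the space-time white noise increment and all parameter values are our own illustrative choices and are not taken from the text; only the splitting structure follows the definitions above.

import numpy as np
from scipy.linalg import expm

def neumann_fd_laplacian(N):
    # second-order finite differences on [0,1] with homogeneous Neumann boundary conditions
    h = 1.0 / (N - 1)
    A = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    A[0, 1] = 2.0
    A[-1, -2] = 2.0      # mirror (ghost) points encode the Neumann conditions
    return A / h**2

def phi_tau(u, v, tau, beta, gamma1, gamma2):
    # phi_tau = phi_tau^L o phi_tau^NL, applied pointwise on the grid (Nemytskii operators)
    u = u / np.sqrt(u**2 + (1.0 - u**2) * np.exp(-2.0 * tau))   # exact cubic flow
    v = v + beta * tau
    M = expm(tau * np.array([[0.0, -1.0], [gamma1, -gamma2]]))  # exact linear flow e^{tau B}
    return M[0, 0] * u + M[0, 1] * v, M[1, 0] * u + M[1, 1] * v

def lt_imp_step(u, v, tau, A, beta, gamma1, gamma2, rng):
    # one step X_{n+1} = (I + tau Lambda)^{-1} ( phi_tau(X_n) + delta W_n ): only the u-component
    # is smoothed by (I - tau Delta)^{-1} and forced by the space-time white noise increment
    N = u.size
    h = 1.0 / (N - 1)
    pu, pv = phi_tau(u, v, tau, beta, gamma1, gamma2)
    dW = np.sqrt(tau / h) * rng.standard_normal(N)              # discrete cylindrical noise increment
    u_new = np.linalg.solve(np.eye(N) - tau * A, pu + dW)
    return u_new, pv

# illustrative run (spatial discretization and parameters are not taken from the text)
rng = np.random.default_rng(2)
N, tau, beta, gamma1, gamma2 = 128, 1e-3, 1.0, 1.0, 1.0
A = neumann_fd_laplacian(N)
u, v = np.zeros(N), np.zeros(N)
for _ in range(1000):
    u, v = lt_imp_step(u, v, tau, A, beta, gamma1, gamma2, rng)

Repeating such a run for several step sizes, against a reference solution computed with a much smaller step, gives one way to observe numerically the strong convergence rate 1/4 stated above.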
], [ "Properties of the semigroup", "In this subsection, we study properties of the semigroup generated by the linear operator $\\Lambda $ in the stochastic FitzHugh–Nagumo system (REF ).", "In addition, estimates for the operator $(I+\\tau \\Lambda )^{-1}$ used in the semi-linear splitting schemes (REF ) and (REF ) are also provided.", "Proposition 4.1 The semigroup $\\bigl (e^{-t\\Lambda }\\bigr )_{t\\ge 0}$ defined by (REF ) satisfies the following properties: $\\bullet $ For all $t\\ge 0$ , $e^{-t\\Lambda }$ is a bounded linear operator from $\\mathcal {H}$ to $\\mathcal {H}$ and from $\\mathcal {E}$ to $\\mathcal {E}$ .", "In addition, for all $t\\ge 0$ one has $\\underset{x\\in \\mathcal {H}\\setminus \\lbrace 0\\rbrace }{\\sup }~\\frac{\\Vert e^{-t\\Lambda }x\\Vert _\\mathcal {H}}{\\Vert x\\Vert _\\mathcal {H}}=1,\\quad \\underset{x\\in \\mathcal {E}\\setminus \\lbrace 0\\rbrace }{\\sup }~\\frac{\\Vert e^{-t\\Lambda }x\\Vert _\\mathcal {E}}{\\Vert x\\Vert _\\mathcal {E}}=1.$ $\\bullet $ Smoothing property.", "For all $\\alpha \\in [0,\\infty )$ , there exists a real number $C_\\alpha \\in (0,\\infty )$ such that, for all $(u,v)\\in \\mathcal {H}$ and all $t\\in (0,\\infty )$ , one has $\\Vert e^{-t\\Lambda }\\begin{pmatrix}(-\\Delta )^{\\alpha }u\\\\ v\\end{pmatrix}\\Vert _{\\mathcal {H}}\\le C_\\alpha \\min (1,t)^{-\\alpha }\\Vert (u,v)\\Vert _{\\mathcal {H}}.$ $\\bullet $ Temporal regularity.", "For all $\\mu ,\\nu \\ge 0$ with $\\mu +\\nu \\le 1$ , there exists a real number $C_{\\mu ,\\nu }\\in (0,\\infty )$ such that, for all $x=(u,v)\\in \\mathcal {H}^{2\\nu }$ and all $t_1,t_2\\in (0,\\infty )$ , one has $\\Vert e^{-t_2\\Lambda }x-e^{-t_1\\Lambda }x\\Vert _{\\mathcal {H}}\\le C_{\\mu ,\\nu }\\frac{|t_2-t_1|^{\\mu +\\nu }}{\\min (t_2,t_1)^\\mu }\\Vert (-\\Delta )^\\nu u\\Vert _H.$ $\\bullet $ On the one hand, since the eigenvalues $\\bigl (\\lambda _j\\bigr )_{j\\ge 0}$ of $-\\Delta $ are nonnegative, it is straightforward to see that for all $x=(u,v)\\in \\mathcal {H}$ and $t\\ge 0$ one has $e^{-t\\Lambda }x\\in \\mathcal {H}$ , and $\\Vert e^{-t\\Lambda }x\\Vert _\\mathcal {H}^2=\\Vert e^{t\\Delta }u\\Vert _H^2+\\Vert v\\Vert _H^2\\le \\Vert u\\Vert _{H}^2+\\Vert v\\Vert _{H}^2=\\Vert x\\Vert _{\\mathcal {H}}^2.$ This proves that $e^{-t\\Lambda }$ is a bounded linear operator from $\\mathcal {H}$ to $\\mathcal {H}$ for all $t\\ge 0$ , and that $\\underset{x\\in \\mathcal {H}\\setminus \\lbrace 0\\rbrace }{\\sup }~\\frac{\\Vert e^{-t\\Lambda }x\\Vert _\\mathcal {H}}{\\Vert x\\Vert _\\mathcal {H}}\\le 1.$ On the other hand, using the formula for the Green function of the heat equation with homogeneous Neumann boundary conditions, the semigroup $\\bigl (e^{t\\Delta }\\bigr )_{t\\ge 0}$ defined by (REF ) satisfies the following properties: for all $t\\ge 0$ and $u\\in E$ , one has $e^{t\\Delta }u\\in E$ and $\\Vert e^{t\\Delta }u\\Vert _{E}\\le \\Vert u\\Vert _E$ .", "As a consequence, for all $x=(u,v)\\in \\mathcal {E}$ , one has $e^{-t\\Lambda }x=(e^{t\\Delta }u,v)\\in \\mathcal {E}$ and $\\Vert e^{-t\\Lambda }x\\Vert _\\mathcal {E}=\\max \\bigl (\\Vert e^{t\\Delta }u\\Vert _E,\\Vert v\\Vert _E\\bigr )\\le \\max \\bigl (\\Vert u\\Vert _E,\\Vert v\\Vert _E\\bigr )=\\Vert x\\Vert _\\mathcal {E}.$ To conclude the proof of (REF ), it suffices to check that for $x=(0,v)$ and all $t\\ge 0$ one has $e^{-t\\Lambda }x=x$ .", "$\\bullet $ The smoothing property (REF ) is a straightforward consequence of the smoothing property for the semigroup $\\bigl (e^{t\\Delta }\\bigr )_{t\\ge 0}$ : 
for all $\\alpha \\in [0,\\infty )$ , $t\\ge 0$ and $u\\in H$ , one has (recall that $\\lambda _0=0$ ) $\\Vert e^{t\\Delta }(-\\Delta )^\\alpha u\\Vert _H^2=\\sum _{j\\ge 1}e^{-2t\\lambda _j}\\lambda _j^{2\\alpha }\\langle u,e_j\\rangle _H^2\\le \\underset{\\xi \\in (0,\\infty )}{\\sup }~\\bigl (\\xi ^{2\\alpha }e^{-2\\xi }\\bigr )~t^{-2\\alpha }\\Vert u\\Vert _H^2.$ As a consequence, for all $\\alpha \\in [0,\\infty )$ , $t\\ge 0$ and $x=(u,v)\\in \\mathcal {H}$ , one has $\\Vert e^{-t\\Lambda }\\begin{pmatrix}(-\\Delta )^{\\alpha }u\\\\ v\\end{pmatrix}\\Vert _{\\mathcal {H}}^2=\\Vert e^{t\\Delta }(-\\Delta )^{\\alpha }u\\Vert _H^2+\\Vert v\\Vert _H^2\\le C_\\alpha ^2 t^{-2\\alpha }\\Vert u\\Vert _H^2+\\Vert v\\Vert _H^2\\le C_\\alpha ^2 \\min (1,t)^{-2\\alpha }\\Vert x\\Vert _{\\mathcal {H}}^2.$ $\\bullet $ The regularity property (REF ) is a straightforward consequence of the following regularity property for the semigroup $\\bigl (e^{t\\Delta }\\bigr )_{t\\ge 0}$ : for all $\\mu ,\\nu \\in [0,1]$ with $\\mu +\\nu \\le 1$ , $0\\le t_1\\le t_2$ and $u\\in H^{2\\nu }$ , one has $\\Vert e^{t_2\\Delta }u-e^{t_1\\Delta }u\\Vert ^2_H&=\\Vert (e^{(t_2-t_1)\\Delta }-I)e^{t_1\\Delta }u\\Vert ^2_H\\\\&=\\sum _{j\\ge 1}\\bigl (e^{-(t_2-t_1)\\lambda _j}-1\\bigr )^2e^{-2t_1\\lambda _j}\\langle u,e_j\\rangle _H^2\\\\&\\le 2^{2(\\mu +\\nu )}(t_2-t_1)^{2(\\mu +\\nu )}\\sum _{j\\ge 1}\\lambda _j^{2(\\mu +\\nu )}e^{-2t_1\\lambda _j}\\langle u,e_j\\rangle _H^2\\\\&\\le 2^{2(\\mu +\\nu )}\\underset{\\xi \\in (0,\\infty )}{\\sup }~\\bigl (\\xi ^{2\\mu }e^{-2\\xi }\\bigr )~\\frac{(t_2-t_1)^{2(\\mu +\\nu )}}{t_1^{2\\mu }}\\sum _{j\\ge 1}\\lambda _j^{2\\nu }\\langle u,e_j\\rangle _H^2\\\\&\\le 2^{2(\\mu +\\nu )}\\underset{\\xi \\in (0,\\infty )}{\\sup }~\\bigl (\\xi ^{2\\mu }e^{-2\\xi }\\bigr )~\\frac{(t_2-t_1)^{2(\\mu +\\nu )}}{t_1^{2\\mu }}\\Vert (-\\Delta )^\\nu u\\Vert _H^2.$ As a consequence, for all $\\mu ,\\nu \\in [0,1]$ with $\\mu +\\nu \\le 1$ , $0\\le t_1\\le t_2$ and $x=(u,v)\\in H^{2\\nu }\\times H$ , one has $\\Vert e^{-t_2\\Lambda }x-e^{-t_1\\Lambda }x\\Vert _{\\mathcal {H}}=\\Vert e^{t_2\\Delta }u-e^{t_1\\Delta }u\\Vert _H\\le C_{\\mu ,\\nu }\\frac{|t_2-t_1|^{\\mu +\\nu }}{t_1^\\mu }\\Vert (-\\Delta )^\\nu u\\Vert _H.$ The proof of Proposition REF is thus completed.", "In the sequel, the following properties are also used for the analysis of the splitting scheme (REF ) for which a linear implicit Euler method is used for the approximation (REF ) of the stochastic convolution: for all $t\\ge 0$ , $(I+t\\Lambda )^{-1}$ is a bounded linear operator from $\\mathcal {H}$ to $\\mathcal {H}$ and from $\\mathcal {E}$ to $\\mathcal {E}$ , and one has $\\underset{x\\in \\mathcal {H}\\setminus \\lbrace 0\\rbrace }{\\sup }~\\frac{\\Vert (I+t\\Lambda )^{-1}x\\Vert _\\mathcal {H}}{\\Vert x\\Vert _\\mathcal {H}}=1,\\quad \\underset{x\\in \\mathcal {E}\\setminus \\lbrace 0\\rbrace }{\\sup }~\\frac{\\Vert (I+t\\Lambda )^{-1}x\\Vert _\\mathcal {E}}{\\Vert x\\Vert _\\mathcal {E}}=1.$ The proof of the inequality (REF ) is straightforward.", "Indeed, for all $x\\in \\mathcal {H}$ or $x\\in \\mathcal {E}$ , and all $t\\ge 0$ , one has $(I+t\\Lambda )^{-1}x=\\int _0^\\infty e^{-(I+t\\Lambda )s}x\\,\\text{d}s.$ Using (REF ), one then obtains the inequalities $&\\Vert (I+t\\Lambda )^{-1}x\\Vert _\\mathcal {H}\\le \\int _{0}^{\\infty }e^{-s}\\Vert e^{-ts\\Lambda }x\\Vert _\\mathcal {H}\\,\\text{d}s\\le \\int _{0}^{\\infty }e^{-s}\\,\\text{d}s\\Vert x\\Vert _\\mathcal {H}=\\Vert x\\Vert _\\mathcal {H}\\\\&\\Vert 
(I+t\\Lambda )^{-1}x\\Vert _\\mathcal {E}\\le \\int _{0}^{\\infty }e^{-s}\\Vert e^{-ts\\Lambda }x\\Vert _\\mathcal {E}\\,\\text{d}s\\le \\int _{0}^{\\infty }e^{-s}\\,\\text{d}s\\Vert x\\Vert _\\mathcal {E}=\\Vert x\\Vert _\\mathcal {E}.$ Like in the proof of (REF ), choosing $x=(0,v)$ gives $(I+t\\Lambda )^{-1}x=x$ for all $t\\ge 0$ , and thus concludes the proof of (REF )." ], [ "Proofs of Propositions ", "In order to prove Propositions REF and REF which state properties of the mappings $\\phi _\\tau :\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ and $\\psi _\\tau :\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ defined by (REF ) and (REF ), it is convenient to introduce the auxiliary mappings $\\phi _t^{\\rm AC}:\\mathbb {R}\\rightarrow \\mathbb {R}$ and $\\psi _t^{\\rm AC}:\\mathbb {R}\\rightarrow \\mathbb {R}$ , defined as follows: for all $t\\in (0,\\infty )$ and $u\\in \\mathbb {R}$ , set $\\phi _t^{\\rm AC}(u)=\\frac{u}{\\sqrt{u^2+(1-u^2)e^{-2t}}},\\quad \\psi _t^{\\rm AC}(u)=\\frac{\\phi _t^{\\rm AC}(u)-u}{t}.$ The mapping $\\phi _t^{\\rm AC}$ is the flow map associated with the nonlinear differential equation, see the subsystem (REF ), $\\frac{\\text{d}u^{\\rm AC}(t)}{\\text{d}t}=u^{\\rm AC}(t)-(u^{\\rm AC}(t))^3,$ meaning that $u^{\\rm AC}(t)=\\phi _t^{\\rm AC}(u^{\\rm AC}(0))$ for all $t\\ge 0$ .", "The properties of the mappings $\\phi _\\tau ^{\\rm AC}$ and $\\psi _\\tau ^{\\rm AC}$ stated in Lemma REF are given by [13].", "Lemma 4.2 There exists $C(\\tau _0)\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ , the mappings $\\phi _\\tau ^{\\rm AC}:\\mathbb {R}\\rightarrow \\mathbb {R}$ and $\\psi _\\tau ^{\\rm AC}:\\mathbb {R}\\rightarrow \\mathbb {R}$ satisfy the following properties: $\\bullet $ For all $\\tau \\in (0,\\tau _0)$ and $u_1,u_2\\in \\mathbb {R}$ , one has $|\\phi _\\tau ^{\\rm AC}(u_2)-\\phi _\\tau ^{\\rm AC}(u_1)|\\le e^{\\tau }|u_2-u_1|.$ $\\bullet $ For all $\\tau \\in (0,\\tau _0)$ and $u_1,u_2\\in \\mathbb {R}$ , one has $&\\bigl (u_2-u_1\\bigr )\\bigl (\\psi _\\tau ^{\\rm AC}(u_2)-\\psi _\\tau ^{\\rm AC}(u_1)\\bigr )\\le C(\\tau _0)|u_2-u_1|^2, \\\\&|\\psi _\\tau ^{\\rm AC}(u_2)-\\psi _\\tau ^{\\rm AC}(u_1)|\\le C(\\tau _0)\\bigl (1+|u_1|^3+|u_2|^3\\bigr )|u_2-u_1|,$ and for all $\\tau \\in (0,\\tau _0)$ and $u\\in \\mathbb {R}$ , one has $|\\psi _\\tau (u)-(u-u^3)|\\le C(\\tau _0)\\tau \\bigl (1+|u|^5\\bigr ).$ We are now in position to prove Proposition REF .", "The result is straightforward: $\\phi _\\tau $ is the composition of the two globally Lipschitz continuous mappings $\\phi _\\tau ^{\\rm L}$ and $\\phi _\\tau ^{\\rm NL}$ .", "The proof is given to exhibit the dependence of the Lipschitz constant with respect to the time-step size $\\tau \\in (0,\\tau _0)$ .", "[Proof of Proposition REF ] Note that for all $\\tau \\in (0,\\tau _0)$ and $x=(u,v)\\in \\mathbb {R}^2$ one has $\\phi _\\tau ^{\\rm NL}(x)=\\begin{pmatrix}\\phi _\\tau ^{\\rm AC}(u)\\\\ v+\\beta \\tau \\end{pmatrix}.$ Using the definition (REF ) and the inequality (REF ) from Lemma REF , one then obtains the following inequality: for all $\\tau \\in (0,\\tau _0)$ and all $x_1=(u_1,v_1),x_2=(u_2,v_2)\\in \\mathbb {R}^2$ , one has $\\Vert \\phi _\\tau (x_2)-\\phi _\\tau (x_{1})\\Vert ^2&=\\Vert \\phi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x_2))-\\phi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x_1))\\Vert ^2\\\\&=\\Vert e^{\\tau B}\\bigl (\\phi _\\tau ^{\\rm NL}(x_2)-\\phi _\\tau ^{\\rm NL}(x_1)\\bigr )\\Vert ^2\\\\&\\le e^{2\\tau B}\\Vert \\phi _\\tau ^{\\rm NL}(x_2)-\\phi _\\tau 
^{\\rm NL}(x_1)\\Vert ^2\\\\&\\le e^{2\\tau B}\\bigl (|\\phi _\\tau ^{\\rm AC}(u_2)-\\phi _\\tau ^{\\rm AC}(u_1)|^2+|v_2-v_1|^2\\bigr )\\\\&\\le e^{2\\tau B}\\bigl (e^{2\\tau }|u_2-u_1|^2+|v_2-v_1|^2\\bigr )\\\\&\\le e^{2\\tau (1+B)}\\Vert x_2-x_1\\Vert ^2.$ This concludes the proof of Proposition REF .", "In order to prove Proposition REF , the main tool is the following expression for the mapping $\\psi _\\tau $ defined by (REF ): for all $\\tau \\in (0,\\tau _0)$ and $x\\in \\mathbb {R}^2$ , one has $\\psi _\\tau (x)=\\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))+\\psi _\\tau ^{\\rm NL}(x),$ where the mappings $\\psi _\\tau ^{\\rm L}$ and $\\psi _\\tau ^{\\rm NL}$ are given by $&\\psi _\\tau ^{\\rm L}(x)=\\frac{\\phi _\\tau ^{\\rm L}(x)-x}{\\tau }=\\frac{e^{\\tau B}-I}{\\tau }x\\\\&\\psi _\\tau ^{\\rm NL}(x)=\\frac{\\phi _\\tau ^{\\rm NL}(x)-x}{\\tau }=\\begin{pmatrix} \\psi _\\tau ^{\\rm AC}(u)\\\\ \\beta \\end{pmatrix}$ for all $\\tau \\in (0,\\tau _0)$ and $x=(u,v)\\in \\mathbb {R}^2$ .", "The proof of the equality (REF ) is straightforward: using (REF ), one has $\\psi _\\tau (x)&=\\frac{\\phi _\\tau (x)-x}{\\tau }=\\frac{\\phi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))-\\phi _\\tau ^{\\rm NL}(x)}{\\tau }+\\frac{\\phi _\\tau ^{\\rm NL}(x)-x}{\\tau }\\\\&=\\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))+\\psi _\\tau ^{\\rm NL}(x).$ Having the identity (REF ) at hand, we are now in position to prove Proposition REF .", "[Proof of Proposition REF ] Note that the mapping $\\psi _\\tau ^{\\rm L}:\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ is linear and therefore is globally Lipschitz continuous.", "In addition, for all $\\tau \\in (0,\\tau _0)$ and $x_1,x_2\\in \\mathbb {R}^2$ , one has $\\Vert \\psi _\\tau ^{\\rm L}(x_2)-\\psi _\\tau ^{\\rm L}(x_1)\\Vert \\le \\frac{e^{\\tau B}-I}{\\tau }\\Vert x_2-x_1\\Vert \\le \\frac{e^{\\tau _0 B}-1}{\\tau _0}\\Vert x_2-x_1\\Vert ,$ using the inequalities $\\frac{e^{\\tau B}-I}{\\tau }&=\\sum _{k=1}^{\\infty }\\frac{\\tau ^{k-1}}{k!", "}B^k\\le \\sum _{k=1}^{\\infty }\\frac{\\tau ^{k-1}}{k!", "}B^k\\le \\sum _{k=1}^{\\infty }\\frac{\\tau _0^{k-1}}{k!", "}B^k=\\frac{e^{\\tau _0 B}-1}{\\tau _0}.$ Let us first prove the one-sided Lipschitz continuity property (REF ): for all $\\tau \\in (0,\\tau _0)$ and $x_1,x_2\\in \\mathbb {R}^2$ , using the identity (REF ), then the Cauchy–Schwarz inequality and (REF ), one has $\\langle x_2-x_1,\\psi _\\tau (x_2)-\\psi _\\tau (x_1)\\rangle &=\\langle x_2-x_1,\\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x_2))-\\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x_1))\\rangle \\\\&+\\langle x_2-x_1,\\psi _\\tau ^{\\rm NL}(x_2)-\\psi _\\tau ^{\\rm NL}(x_1)\\rangle \\\\&\\le \\frac{e^{\\tau _0 B}-1}{\\tau _0} \\Vert x_2-x_1\\Vert \\Vert \\phi _\\tau ^{\\rm NL}(x_2)-\\phi _\\tau ^{\\rm NL}(x_2)\\Vert \\\\&+\\langle x_2-x_1,\\psi _\\tau ^{\\rm NL}(x_2)-\\psi _\\tau ^{\\rm NL}(x_1)\\rangle .$ On the one hand, using the same arguments as in the proof of Proposition REF , one has $\\Vert \\phi _\\tau ^{\\rm NL}(x_2)-\\phi _\\tau ^{\\rm NL}(x_1)\\Vert \\le e^\\tau \\Vert x_2-x_1\\Vert \\le e^{\\tau _0} \\Vert x_2-x_1\\Vert .$ On the other hand, for all $x=(u,v)\\in \\mathbb {R}^2$ one has $\\psi _\\tau ^{\\rm NL}(x)=\\begin{pmatrix} \\psi _\\tau ^{\\rm AC}(u)\\\\ \\beta \\end{pmatrix}.$ Using the inequality (REF ) from Lemma REF , one then obtains $\\langle x_2-x_1,\\psi _\\tau ^{\\rm NL}(x_2)-\\psi _\\tau ^{\\rm NL}(x_1)\\rangle \\le e^\\tau \\Vert x_2-x_1\\Vert ^2\\le e^{\\tau _0}\\Vert x_2-x_1\\Vert ^2.$ Gathering the 
results then gives $\\langle x_2-x_1,\\psi _\\tau (x_2)-\\psi _\\tau (x_1)\\rangle \\le \\Bigl (\\frac{e^{\\tau _0 B}-1}{\\tau _0}+1\\Bigr )e^{\\tau _0}\\Vert x_2-x_1\\Vert ^2,$ which concludes the proof of the inequality (REF ).", "Let us now prove the local Lipschitz continuity property ().", "Using the identity (REF ) and the inequality (), for all $\\tau \\in (0,\\tau _0)$ and $x_1=(u_1,v_1),x_2=(u_2,v_2)\\in \\mathbb {R}^2$ , one has $\\Vert \\psi _\\tau (x_2)-\\psi _\\tau (x_1)\\Vert &\\le \\Vert \\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x_2))-\\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x_1))\\Vert +\\Vert \\psi _\\tau ^{\\rm NL}(x_2)-\\psi _\\tau ^{\\rm NL}(x_1)\\Vert \\\\&\\le \\frac{e^{\\tau _0 B}-1}{\\tau _0}\\Vert \\phi _\\tau ^{\\rm NL}(x_2)-\\phi _\\tau ^{\\rm NL}(x_1)\\Vert +C(\\tau _0)\\bigl (1+|u_1|^3+|u_2|^3\\bigr )|u_2-u_1|\\\\&\\le \\Bigl (\\frac{e^{\\tau _0 B}-1}{\\tau _0}e^{\\tau _0}+C(\\tau _0)\\Bigr )\\bigl (1+\\Vert x_1\\Vert ^3+\\Vert x_2\\Vert ^3\\bigr )\\Vert x_2-x_1\\Vert .$ Let us now prove the error estimate (REF ).", "Using the identities (REF ) and (REF ), for all $\\tau \\in (0,\\tau _0)$ and $x=(u,v)\\in \\mathbb {R}^2$ , one has $\\Vert \\psi _\\tau (x)-F(x)\\Vert \\le \\Vert \\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))-F^{\\rm L}(x)\\Vert +\\Vert \\psi _\\tau ^{\\rm NL}(x)-F^{\\rm NL}(x)\\Vert .$ On the one hand, using the inequality (REF ), the expressions of the linear mappings $F^{\\rm L}$ and $\\psi _\\tau ^{\\rm L}$ and the definition of $\\psi _\\tau ^{\\rm NL}$ , one has $\\Vert \\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))-F^{\\rm L}(x)\\Vert &\\le \\Vert \\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))-F^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))\\Vert +\\Vert F^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))-F^{\\rm L}(x)\\Vert \\\\&\\le \\frac{e^{\\tau B}-I-\\tau B}{\\tau }\\Vert \\phi _\\tau ^{\\rm NL}(x)\\Vert +\\tau B\\Vert \\psi _\\tau ^{\\rm NL}(x)\\Vert .$ Note that $\\phi _\\tau ^{\\rm NL}(0)=(\\phi _\\tau ^{\\rm AC}(0),\\beta \\tau )=(0,\\beta \\tau )$ and $\\psi _\\tau ^{\\rm NL}(0)=(\\psi _\\tau ^{\\rm AC}(0),\\beta )=(0,\\beta )$ .", "In addition, one has $\\frac{e^{\\tau B}-I-\\tau B}{\\tau }\\le \\sum _{k=2}^{\\infty }\\frac{\\tau ^{k-1}}{k!", "}B^k\\le \\tau \\sum _{k=2}^{\\infty }\\frac{\\tau _0^{k-2}}{k!", "}B^k=\\tau \\frac{e^{\\tau _0B}-1-\\tau _0B}{\\tau _0}.$ Therefore, using the inequalities (REF ) from Proposition REF and () from Lemma REF , one has $\\Vert \\psi _\\tau ^{\\rm L}(\\phi _\\tau ^{\\rm NL}(x))-F^{\\rm L}(x)\\Vert \\le C(\\tau _0)\\tau (1+\\Vert x\\Vert ^4).$ On the other hand, using the inequality (REF ) from Lemma REF , one has $\\Vert \\psi _\\tau ^{\\rm NL}(x)-F^{\\rm NL}(x)\\Vert =|\\psi _\\tau ^{\\rm AC}(u)-(u-u^3)|\\le C(\\tau _0)\\tau \\bigl (1+|u|^5\\bigr ).$ Gathering the estimates then gives the inequality $\\Vert \\psi _\\tau (x)-F(x)\\Vert \\le C(\\tau _0)\\tau (1+\\Vert x\\Vert ^5),$ which concludes the proof of (REF ).", "It remains to prove the inequality (REF ).", "The proof is straightforward: using (REF ) and the equalities $\\phi _\\tau ^{\\rm AC}(0)=\\psi _\\tau ^{\\rm AC}(0)=0$ , one has $\\psi _\\tau (0)=\\frac{e^{\\tau B}-I}{\\tau }\\phi _\\tau ^{\\rm NL}(0)+\\psi _\\tau ^{\\rm NL}(0)=e^{\\tau B}\\begin{pmatrix} 0\\\\ \\beta \\end{pmatrix}.$ Therefore one gets $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\Vert \\psi _\\tau (0)\\Vert \\le e^{\\tau _0B}|\\beta |.$ The proof of Proposition REF is thus completed.", "Let us conclude this subsection with a remark concerning the order of 
the composition of the two subsystems to define the splitting schemes, see equation (REF ).", "Remark 4.3 Let $\\hat{\\phi }_\\tau :\\mathbb {R}^2\\rightarrow \\mathbb {R}^2$ be defined as follows: for all $\\tau \\in (0,\\tau _0)$ , set $\\hat{\\phi }_\\tau =\\phi _\\tau ^{\\rm NL}\\circ \\phi _\\tau ^{\\rm L}.$ Compared with the definition (REF ) of $\\phi _\\tau $ , the order of the composition of the integrators $\\phi _\\tau ^{\\rm L}$ and $\\phi _\\tau ^{\\rm NL}$ associated with the subsystems (REF ) and (REF ) respectively is reversed.", "Define also $\\hat{\\psi }_\\tau (x)=\\frac{\\hat{\\phi }_\\tau (x)-x}{\\tau }$ for all $\\tau \\in (0,\\tau _0)$ and $x\\in \\mathbb {R}^2$ .", "Using the mapping $\\hat{\\phi }_\\tau $ , modifying the definition of the scheme (REF ) gives the alternative splitting scheme $\\hat{X}_{n+1}=\\mathcal {A}_\\tau \\hat{\\phi }_\\tau (\\hat{X}_n)+\\int _{t_n}^{t_{n+1}}\\mathcal {B}_{t_{n+1}-s}\\,\\text{d}\\mathcal {W}(s)$ for the approximation of the stochastic evolution equation (REF ).", "Precisely, alternatives of the splitting schemes (REF ), (REF ) and (REF ) are obtained from the formulation (REF ).", "However, the analysis performed in this paper does not encompass the case of the scheme (REF ), due to missing properties for the mapping $\\hat{\\psi }_\\tau $ , compared with $\\psi _\\tau $ , as explained below.", "Note that the result of Proposition REF also holds with $\\phi _\\tau $ replaced by $\\hat{\\phi }_\\tau $ .", "However, it is not clear whether the one-sided Lipschitz continuity property (REF ) from Proposition REF holds also with $\\psi _\\tau $ replaced by $\\hat{\\psi }_\\tau $ (uniformly with respect to $\\tau \\in (0,\\tau _0)$ ).", "The proof of the inequality (REF ) exploits the global Lipschitz continuity property (REF ) of the auxiliary mapping $\\psi _\\tau ^{\\rm L}$ , which is a linear mapping from $\\mathbb {R}^2$ to $\\mathbb {R}^2$ .", "Instead of the identity (REF ), one has $\\hat{\\psi }_\\tau (x)=\\psi _\\tau ^{\\rm NL}(\\phi _\\tau ^{\\rm L}(x))+\\psi _\\tau ^{\\rm L}(x),$ and since $\\psi _\\tau ^{\\rm NL}$ is not globally Lipschitz continuous uniformly with respect to $\\tau \\in (0,\\tau _0)$ , the arguments of the proof above cannot be repeated for the splitting scheme (REF )." 
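To make the mappings used in this subsection concrete, the following minimal Python sketch implements the flows $\\phi _\\tau ^{\\rm L}$ and $\\phi _\\tau ^{\\rm NL}$ , the two compositions $\\phi _\\tau $ and $\\hat{\\phi }_\\tau $ , and the increment maps, and numerically checks the identity $\\psi _\\tau =\\psi _\\tau ^{\\rm L}\\circ \\phi _\\tau ^{\\rm NL}+\\psi _\\tau ^{\\rm NL}$ together with the consistency error of order $\\tau $ with respect to $F=F^{\\rm L}+F^{\\rm NL}$ . It is an illustration only: the closed-form Allen–Cahn flow used for $\\phi _\\tau ^{\\rm AC}$ is an assumption consistent with the bounds invoked above, and the entries of the coupling matrix $B$ and the value of $\\beta $ are placeholders rather than the values fixed by the model.

```python
import numpy as np
from scipy.linalg import expm

# Placeholders: the 2x2 coupling matrix B and the constant beta are fixed by the
# model earlier in the paper; the numerical values below are illustrative only.
B = np.array([[0.0, -1.0], [0.08, -0.064]])
beta = 0.7

def phi_AC(tau, u):
    # Closed-form flow of the Allen-Cahn ODE u' = u - u**3 (assumed form of phi_tau^AC).
    return u / np.sqrt(np.exp(-2.0 * tau) + u**2 * (1.0 - np.exp(-2.0 * tau)))

def phi_L(tau, x):
    # Exact flow of the linear subsystem x' = B x.
    return expm(tau * B) @ x

def phi_NL(tau, x):
    # Flow of the nonlinear subsystem (u' = u - u**3, v' = beta).
    u, v = x
    return np.array([phi_AC(tau, u), v + beta * tau])

def phi(tau, x):
    # phi_tau = phi_tau^L o phi_tau^NL, the ordering analysed above.
    return phi_L(tau, phi_NL(tau, x))

def phi_hat(tau, x):
    # Reversed ordering of Remark 4.3.
    return phi_NL(tau, phi_L(tau, x))

def psi(tau, x, flow=phi):
    # Increment map psi_tau(x) = (phi_tau(x) - x) / tau.
    return (flow(tau, x) - x) / tau

def psi_L(tau, x):
    return ((expm(tau * B) - np.eye(2)) @ x) / tau

def psi_NL(tau, x):
    u, v = x
    return np.array([(phi_AC(tau, u) - u) / tau, beta])

# Check of the identity psi_tau = psi_tau^L o phi_tau^NL + psi_tau^NL ...
tau, x = 1e-2, np.array([0.3, -1.2])
print(np.allclose(psi(tau, x), psi_L(tau, phi_NL(tau, x)) + psi_NL(tau, x)))  # True
# ... of the consistency error with F = F^L + F^NL, which is of order tau ...
F = B @ x + np.array([x[0] - x[0]**3, beta])
print(np.linalg.norm(psi(tau, x) - F))  # small, of order tau
# ... and of the difference between the two composition orderings (order tau**2 per step).
print(np.linalg.norm(phi(tau, x) - phi_hat(tau, x)))
```

The script only illustrates the algebraic structure of the integrators; it plays no role in the proofs.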
], [ "Moment bounds for the solutions of the stochastic evolution equations (", "Let us first state the moment bounds for the stochastic convolution defined by (REF ).", "Lemma 4.4 Let $\\bigl (\\mathcal {Z}(t)\\bigr )_{t\\ge 0}$ be defined by (REF ).", "For all $T\\in (0,\\infty )$ and $p\\in [1,\\infty )$ , one has $\\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[\\Vert \\mathcal {Z}(t)\\Vert _\\mathcal {E}^p]<\\infty .$ Let us only provide the sketch of the proof.", "To deal with homogeneous Neumann boundary conditions, it is convenient to introduce $Z_0(t)=\\langle Z(t),e_0\\rangle e_0=\\beta _0(t)e_0$ and ${Z}_\\perp (t)=Z(t)-Z_0(t)$ for all $t\\ge 0$ .", "Let also $\\mathcal {Z}_0(t)=\\begin{pmatrix} Z_0(t)\\\\0\\end{pmatrix}$ and ${\\mathcal {Z}}_\\perp (t)=\\mathcal {Z}(t)-\\mathcal {Z}_0(t)$ .", "On the one hand, one has $\\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[\\Vert \\mathcal {Z}_0(t)\\Vert _\\mathcal {E}^p]=\\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[\\Vert Z_0(t)\\Vert _E^p]\\le \\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[|\\beta _0(t)|^p]\\Vert e_0\\Vert _E^p\\le CT^{\\frac{p}{2}}.$ On the other hand, applying the temporal and spatial increment bounds [21] and the Kolmogorov regularity criterion [27] gives $\\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[\\Vert \\mathcal {Z}_\\perp (t)\\Vert _\\mathcal {E}^p]=\\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[\\Vert Z_\\perp (t)\\Vert _E^p]\\le C(T)<\\infty .$ Combining the moment bounds for $\\mathcal {Z}_0(t)$ and $\\mathcal {Z}_\\perp (t)$ then concludes the proof of Lemma REF .", "We now state well-posedness and moment bounds properties, first for the solutions to the stochastic FitzHugh–Nagumo SPDE system (REF ), second for the solutions to the auxiliary SPDE (REF ).", "Proposition 4.5 For any initial value $x_0\\in \\mathcal {H}$ , the stochastic evolution equation (REF ) admits a unique global mild solution $\\bigl (X(t)\\bigr )_{t\\ge 0}$ , in the sense that (REF ) is satisfied.", "Moreover, for all $T\\in (0,\\infty )$ and all $p\\in [1,\\infty )$ , there exists $C_p(T)\\in (0,\\infty )$ such that for all $x_0\\in \\mathcal {E}$ one has $\\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[\\Vert X(t)\\Vert _\\mathcal {E}^p]\\le C_p(T)\\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^p\\bigr ).$ Proposition 4.6 For any initial value $x_0\\in \\mathcal {H}$ and for all $\\tau \\in (0,\\tau _0)$ , the stochastic evolution equation (REF ) admits a unique global mild solution $\\bigl (X_\\tau (t)\\bigr )_{t\\ge 0}$ , in the sense that $X_\\tau (t)=e^{-t\\Lambda }x_0+\\int _0^t e^{-(t-s)\\Lambda }\\psi _\\tau (X_\\tau (s))\\,\\text{d}s+\\int _0^t e^{-(t-s)\\Lambda }\\,\\text{d}\\mathcal {W}(s)$ is satisfied for all $t\\ge 0$ .", "Moreover, for all $T\\in (0,\\infty )$ and all $p\\in [1,\\infty )$ , there exists $C_p(T,\\tau _0)\\in (0,\\infty )$ such that for all $x_0\\in \\mathcal {E}$ one has $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\underset{0\\le t\\le T}{\\sup }~\\mathbb {E}[\\Vert X_\\tau (t)\\Vert _\\mathcal {E}^p]\\le C_p(T)\\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^p\\bigr ).$ The detailed proofs of Propositions REF and REF are omitted.", "However let us emphasize that the main arguments used in the proofs are, on the one hand, the one-sided Lipschitz continuity properties (REF ) and (REF ) of $F$ and $\\psi _\\tau $ respectively, and on the other hand, the moment bounds on $\\mathcal {Z}(t)$ from Lemma REF .", "Observe that the mapping $\\psi _\\tau $ is globally Lipschitz continuous for any $\\tau >0$ , therefore 
the existence and uniqueness of the mild solution $\\bigl (X_\\tau (t)\\bigr )_{t\\ge 0}$ satisfying (REF ) follows from standard fixed point arguments, see for instance [21].", "The proof of the moment bounds (REF ) requires some care: indeed, one needs to obtain upper bounds which are uniform with respect to $\\tau \\in (0,\\tau _0)$ , and applying [21] would not be appropriate since the Lipschitz constant of $\\psi _\\tau $ is unbounded for $\\tau \\in (0,\\tau _0)$ .", "Introducing $Y_\\tau (t)=X_\\tau (t)-\\mathcal {Z}(t)$ , one obtains the moment bounds (REF ) using the one-sided Lipschitz continuity property (REF ) from Proposition REF , which is uniform with respect to $\\tau \\in (0,\\tau _0)$ .", "Similar arguments are used to prove Proposition REF .", "Propositions REF and REF are variants of [13] for the analysis of the stochastic Allen–Cahn equation and we refer to [17] for a more general version.", "Some arguments need to be adapted since the considered systems (REF ) and (REF ) are not parabolic systems.", "Finally, let us state the following result which is required in Section  below.", "Lemma 4.7 For all $T\\in (0,\\infty )$ , $p\\in [1,\\infty )$ and $\\alpha \\in [0,\\frac{1}{4})$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $x_0=(u_0,v_0)\\in \\mathcal {H}^{2\\alpha }\\cap \\mathcal {E}$ , all $\\tau \\in (0,\\tau _0)$ and $t_1,t_2\\in [0,T]$ , one has $\\bigl (\\mathbb {E}[\\Vert X_\\tau (t_2)-X_\\tau (t_1)\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)|t_2-t_1|^{\\alpha }\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^4+\\Vert x_0\\Vert _\\mathcal {E}^4\\bigr ).$ Let $0\\le t_1<t_2\\le T$ , using the mild form (REF ) of the auxiliary stochastic evolution equation, we obtain the estimate $\\bigl (\\mathbb {E}\\left[ \\Vert X_\\tau (t_2)-X_\\tau (t_1)\\Vert _\\mathcal {H}^p \\right]\\bigr )^{\\frac{1}{p}}&\\le \\Vert e^{-t_2\\Lambda }x_0-e^{-t_1\\Lambda }x_0\\Vert _{\\mathcal {H}}+\\bigl (\\mathbb {E}\\left[ \\Vert \\mathcal {Z}(t_2)-\\mathcal {Z}(t_1)\\Vert _\\mathcal {H}^p \\right]\\bigr )^{\\frac{1}{p}} \\\\&\\quad +\\int _0^{t_1}\\bigl (\\mathbb {E}\\left[ \\Vert \\left( e^{-(t_2-s)\\Lambda }-e^{-(t_1-s)\\Lambda } \\right)\\psi _\\tau (X_\\tau (s))\\Vert _\\mathcal {H}^p \\right]\\bigr )^{\\frac{1}{p}}\\,\\text{d}s\\\\&\\quad +\\int _{t_1}^{t_2}\\bigl (\\mathbb {E}\\left[ \\Vert e^{-(t_2-s)\\Lambda }\\psi _\\tau (X_\\tau (s))\\Vert _\\mathcal {H}^p \\right]\\bigr )^{\\frac{1}{p}}\\,\\text{d}s,$ where we recall that $\\mathcal {Z}(t)$ denotes the stochastic convolution (REF ).", "The first term on the right-hand side is estimated using the inequality (REF ) in order to get $\\Vert e^{-t_2\\Lambda }x_0-e^{-t_1\\Lambda }x_0\\Vert _\\mathcal {H}\\le |t_2-t_1|^\\alpha \\Vert (-\\Lambda )^\\alpha x_0\\Vert _\\mathcal {H}.$ The second term corresponds to the temporal regularity of the stochastic convolution $\\bigl (\\mathbb {E}\\left[ \\Vert \\mathcal {Z}(t_2)-\\mathcal {Z}(t_1)\\Vert _\\mathcal {H}^p \\right]\\bigr )^{\\frac{1}{p}}\\le |t_2-t_1|^\\alpha .$ This is obtained combining the proofs of Lemma REF and of [8].", "The last two terms are estimated using the polynomial growth $\\Vert \\psi _\\tau (x)\\Vert \\le C(\\tau _0)\\left(1+\\Vert x\\Vert \\right)^4$ , see equations () and (REF ) in Proposition REF .", "Indeed, one has $\\Vert \\left( e^{-(t_2-s)\\Lambda }-e^{-(t_1-s)\\Lambda } \\right)\\psi _\\tau (X_\\tau (s))\\Vert _\\mathcal {H}&\\le C_\\alpha \\frac{|t_2-t_1|^\\alpha }{|t_1-s|^\\alpha }\\Vert \\psi _\\tau 
(X_\\tau (s)) \\Vert _{\\mathcal {H}}\\\\&\\le C_\\alpha (\\tau _0)\\frac{|t_2-t_1|^\\alpha }{|t_1-s|^\\alpha }\\left(1+\\Vert X_\\tau (s)\\Vert ^4_{\\mathcal {E}}\\right)$ and $\\Vert e^{-(t_2-s)\\Lambda }\\psi _\\tau (X_\\tau (s))\\Vert _\\mathcal {H}\\le \\left(1+\\Vert X_\\tau (s)\\Vert ^4_{\\mathcal {E}}\\right)$ for the last term.", "One concludes the proof using the moment bounds of the solution of the auxiliary stochastic evolution equation, see Proposition REF ." ], [ "Proofs of the main results", "In this section, we provide the detailed proofs for the main results of the present work.", "We start by proving moment bounds for the three splitting schemes (Theorem REF ).", "We then prove the strong error estimates with rate of convergence at least $1/4$ (Theorem REF )." ], [ "Proof of Theorem ", "The proof of the moment bounds (REF ) given below is inspired by the proof of [13] and requires some auxiliary tools.", "Given the time-step size $\\tau \\in (0,\\tau _0)$ , introduce the auxiliary scheme $\\bigl (\\mathcal {Z}_n\\bigr )_{n\\ge 0}$ defined as follows: for all $n\\ge 0$ , $\\mathcal {Z}_{n+1}=\\mathcal {A}_\\tau \\mathcal {Z}_n+\\int _{t_n}^{t_{n+1}}\\mathcal {B}_{t_{n+1}-s}\\,\\text{d}\\mathcal {W}(s)$ with initial value $\\mathcal {Z}_0=0$ , using the same notation as for the general expression (REF ) of the three splitting schemes (REF ), (REF ) and (REF ).", "One has the following moment bounds for the solution of the scheme (REF ).", "Recall that one has $T=N\\tau $ for some integer $N\\in \\mathbb {N}$ .", "Lemma 5.1 For all $T\\in (0,\\infty )$ and $p\\in [1,\\infty )$ , one has $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\underset{0\\le n\\le N}{\\sup }~\\mathbb {E}[\\Vert \\mathcal {Z}_n\\Vert _\\mathcal {E}^p]<\\infty .$ Lemma REF is a variant of [13], using the same arguments as in the sketch of proof of Lemma REF above.", "The proof of Lemma REF is therefore omitted.", "We are now in position to provide the proof of Theorem REF .", "[Proof of Theorem REF ] For all $n\\in \\lbrace 0,\\ldots ,N\\rbrace $ , set $r_n=X_n-\\mathcal {Z}_n.$ Using the definitions (REF ) and (REF ) and the definition (REF ) of the mapping $\\psi _\\tau $ , for all $n\\in \\lbrace 0,\\ldots ,N-1\\rbrace $ , one has $r_{n+1}&=X_{n+1}-\\mathcal {Z}_{n+1}=\\mathcal {A}_\\tau \\Bigl (\\phi _\\tau (X_n)-\\mathcal {Z}_n\\Bigr )\\\\&=\\mathcal {A}_\\tau \\Bigl (\\phi _\\tau (r_n+\\mathcal {Z}_n)-\\phi _\\tau (\\mathcal {Z}_n)\\Bigr )+\\tau \\mathcal {A}_\\tau \\psi _\\tau (\\mathcal {Z}_n).$ On the one hand, using the inequalities (REF ) and (REF ) and the global Lipschitz continuity property (REF ) of $\\phi _\\tau $ (see Proposition REF ), one has $\\Vert \\mathcal {A}_\\tau \\Bigl (\\phi _\\tau (r_n+\\mathcal {Z}_n)-\\phi _\\tau (\\mathcal {Z}_n)\\Bigr )\\Vert _\\mathcal {E}\\le \\Vert \\phi _\\tau (r_n+\\mathcal {Z}_n)-\\phi _\\tau (\\mathcal {Z}_n)\\Vert _\\mathcal {E}\\le e^{\\tau (1+B)}\\Vert r_n\\Vert _\\mathcal {E}.$ On the other hand, using the inequalities (REF ) and (REF ), the local Lipschitz continuity property () of $\\psi _\\tau $ (see Proposition REF ) and the upper bound (REF ), one has $\\Vert \\mathcal {A}_\\tau \\psi _\\tau (\\mathcal {Z}_n)\\Vert _\\mathcal {E}\\le C(\\tau _0)\\bigl (1+\\Vert \\mathcal {Z}_n\\Vert _\\mathcal {E}^4\\bigr ).$ Therefore one obtains the following inequality $\\Vert r_{n+1}\\Vert _\\mathcal {E}\\le e^{\\tau (1+B)}\\Vert r_n\\Vert _\\mathcal {E}+C(\\tau _0)\\bigl (1+\\Vert \\mathcal {Z}_n\\Vert _\\mathcal {E}^4\\bigr ),$ and by a 
straightforward argument, using the fact that $N\\tau =T$ , one has the estimate: $\\Vert r_n\\Vert _\\mathcal {E}\\le C(T,\\tau _0)\\Bigl (\\Vert r_0\\Vert _\\mathcal {E}+\\sum _{k=0}^{n-1}\\bigl (1+\\Vert \\mathcal {Z}_k\\Vert _\\mathcal {E}^4\\bigr )\\Bigr ),$ for all $n\\in \\lbrace 0,\\ldots ,N\\rbrace $ .", "Finally, for all $p\\in [1,\\infty )$ , using the moment bound (REF ) from Lemma REF , one obtains for all $n\\in \\lbrace 0,\\ldots ,N\\rbrace $ $\\bigl (\\mathbb {E}[\\Vert r_n\\Vert _\\mathcal {E}^p]\\bigr )^{\\frac{1}{p}}\\le C(T,\\tau _0)\\Bigl (\\Vert r_0\\Vert _\\mathcal {E}+\\sum _{k=0}^{n-1}\\bigl (1+\\bigl (\\mathbb {E}[\\Vert \\mathcal {Z}_k\\Vert _\\mathcal {E}^{4p}]\\bigr )^{\\frac{1}{p}}\\bigr )\\Bigr )\\le C_p(T,\\tau _0)\\Bigl (\\Vert r_0\\Vert _\\mathcal {E}+1\\Bigr ).$ Since $X_n=r_n+\\mathcal {Z}_n$ owing to (REF ), using the moment bound above and the moment bound (REF ) from Lemma REF then concludes the proof of the moment bound (REF ).", "The proof of Theorem REF is thus completed." ], [ "Proof of Theorem ", "Recall that the numerical scheme is given by (REF ).", "It is straightforward to check that for all $n\\ge 0$ one has $X_{n}=\\mathcal {A}_\\tau ^n x_0+\\tau \\sum _{k=0}^{n-1}\\mathcal {A}_\\tau ^{n-k}\\psi _\\tau (X_k)+\\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}\\mathcal {A}_\\tau ^{n-k-1}\\mathcal {B}_{t_{k+1}-s}\\,\\text{d}\\mathcal {W}(s).$ Let us introduce the auxiliary process $\\bigl (X_n^{\\rm aux}\\bigr )_{n\\ge 0}$ which is defined as follows: for all $n\\ge 0$ one has $X_{n}^{\\rm aux}=\\mathcal {A}_\\tau ^n x_0+\\tau \\sum _{k=0}^{n-1}\\mathcal {A}_\\tau ^{n-k}\\psi _\\tau (X_\\tau (t_k))+\\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}\\mathcal {A}_\\tau ^{n-k-1}\\mathcal {B}_{t_{k+1}-s}\\,\\text{d}\\mathcal {W}(s),$ where we recall that $t_k=k\\tau $ and that $\\bigl (X_\\tau (t)\\bigr )_{t\\ge 0}$ is the unique mild solution of the auxiliary stochastic evolution equation (REF ).", "Note that for all $n\\ge 0$ one has $X_{n+1}^{\\rm aux}=\\mathcal {A}_\\tau X_n^{\\rm aux}+\\tau \\mathcal {A}_\\tau \\psi _\\tau (X_\\tau (t_n))+\\int _{t_n}^{t_{n+1}}\\mathcal {B}_{t_{n+1}-s}\\,\\text{d}\\mathcal {W}(s).$ Lemma 5.2 For all $T\\in (0,\\infty )$ and $p\\in [1,\\infty )$ , there exists $C_p(T)\\in (0,\\infty )$ such that for all $x_0\\in \\mathcal {E}$ one has $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\underset{0\\le n\\le N}{\\sup }~\\mathbb {E}[\\Vert X_n^{\\rm aux}\\Vert _\\mathcal {E}^p]\\le C_p(T)\\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^p\\bigr ).$ [Proof of Lemma REF ] Using the discrete mild formulation (REF ) of $X_n^{\\rm aux}$ , the inequalities (REF ) and (REF ), the local Lipschitz continuity property () of $\\psi _\\tau $ and the upper bound (REF ) (see Proposition REF ), for all $\\tau \\in (0,\\tau _0)$ and $n\\ge 0$ one has $\\Vert X_n^{\\rm aux}\\Vert _\\mathcal {E}\\le \\Vert x_0\\Vert _\\mathcal {E}+C(\\tau _0)\\tau \\sum _{k=0}^{n-1}\\bigl (1+\\Vert X_\\tau (t_k)\\Vert _\\mathcal {E}^4\\bigr )+\\Vert \\mathcal {Z}_n\\Vert _\\mathcal {E}.$ It suffices to use the moment bounds (REF ) for the auxiliary process $X_\\tau $ from Proposition REF and (REF ) for the Gaussian random variables $\\mathcal {Z}_n$ from Lemma REF , and the Minkowskii inequality, to conclude the proof of the moment bounds (REF ).", "The proof of Lemma REF is thus completed.", "Observe that for all $n\\in \\lbrace 0,\\ldots ,N\\rbrace $ the error $X(t_n)-X_n$ can be decomposed as follows: $X(t_n)-X_n=X(t_n)-X_\\tau (t_n)+X_\\tau (t_n)-X_n^{\\rm aux}+X_n^{\\rm 
aux}-X_n.$ In order to prove Theorem REF , it suffices to prove error bounds for the three error terms appearing in the right-hand side of (REF ).", "They are given in Lemma REF , Lemma REF and Lemma REF respectively.", "The proofs of these technical lemmas are presented at the end of the section.", "Lemma 5.3 For all $T\\in (0,\\infty )$ and $p\\in [1,\\infty )$ , there exists $C_p(T,\\tau _0)\\in (0,\\infty )$ such that for all $x_0\\in \\mathcal {E}$ and all $\\tau \\in (0,\\tau _0)$ , one has $\\underset{t\\in [0,T]}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X(t)-X_\\tau (t)\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_p(T,\\tau _0)\\tau \\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^5\\bigr ).$ Lemma 5.4 For all $T\\in (0,\\infty )$ , $p\\in [1,\\infty )$ and $\\alpha \\in [0,\\frac{1}{4})$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $x_0=(u_0,v_0)\\in \\mathcal {H}^{2\\alpha }\\cap \\mathcal {E}$ , all $\\tau \\in (0,\\tau _0)$ , one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X_\\tau (t_n)-X_n^{\\rm aux}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)\\tau ^{\\alpha }\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr ).$ Lemma 5.5 For all $T\\in (0,\\infty )$ , $p\\in [1,\\infty )$ and $\\alpha \\in [0,\\frac{1}{4})$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $x_0=(u_0,v_0)\\in \\mathcal {H}^{2\\alpha }\\cap \\mathcal {E}$ , all $\\tau \\in (0,\\tau _0)$ , one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X_n^{\\rm aux}-X_n\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)\\tau ^{\\alpha }\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr ).$ With the auxiliary error estimates given above, it is straightforward to give the proof of Theorem REF .", "[Proof of Theorem REF ] Using the decomposition of the error (REF ), using the Minkowskii inequality and the error estimates (REF ), (REF ) and (REF ), one obtains the following result: for all $\\alpha \\in [0,\\frac{1}{4})$ and $p\\in [1,\\infty )$ , there exists $C_{\\alpha ,p}\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X(t_n)-X_n\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}&\\le \\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X(t_n)-X_\\tau (t_n)\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\\\&+\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X_\\tau (t_n)-X_n^{\\rm aux}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\\\&+\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X_n^{\\rm aux}-X_n\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\\\&\\le C_p(T,\\tau _0)\\tau \\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^5\\bigr )\\\\&+C_{\\alpha ,p}(T)\\tau ^{\\alpha }\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr )\\\\&+C_{\\alpha ,p}(T)\\tau ^{\\alpha }\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr )\\\\&\\le C_{\\alpha ,p}(T)\\tau ^{\\alpha }\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr ).$ This concludes the proof of the inequality (REF ) and the proof of Theorem REF is thus completed.", "Let us now give the proofs of the auxiliary error estimates.", "Note that the proof of Lemma REF requires the error estimate (REF ) from Lemma REF .", "[Proof of Lemma REF ] For all 
$t\\ge 0$ and $\\tau \\in (0,\\tau _0)$ , set $R_\\tau (t)=X_\\tau (t)-X(t).$ The auxiliary process $\\bigl (R_\\tau (t)\\bigr )_{t\\ge 0}$ is the unique solution of the evolution equation $\\frac{\\text{d}R_\\tau (t)}{\\text{d}t}&=-\\Lambda R_\\tau (t)+\\psi _\\tau (X_\\tau (t))-\\psi _\\tau (X(t))+\\psi _\\tau (X(t))-F(X(t))$ with the initial value $R_\\tau (0)=0$ .", "Therefore one obtains, almost surely, for all $t\\ge 0$ $\\frac{1}{2}\\frac{\\text{d}\\Vert R_\\tau (t)\\Vert _{\\mathcal {H}}^2}{\\text{d}t}&=\\langle R_\\tau (t),-\\Lambda R_\\tau (t)\\rangle _{\\mathcal {H}}+\\langle R_\\tau (t),\\psi _{\\tau }(X_\\tau (t))-\\psi _{\\tau }(X(t))\\rangle _{\\mathcal {H}}\\\\&\\quad +\\langle R_\\tau (t), \\psi _\\tau (X(t))-F(X(t))\\rangle _{\\mathcal {H}}.$ First, one has $\\langle R_\\tau (t),-\\Lambda R_\\tau (t)\\rangle _{\\mathcal {H}}\\le 0.$ Second, using the one-sided Lipschitz continuity property (REF ) from Proposition REF for $\\psi _\\tau $ (uniformly with respect to $\\tau \\in (0,\\tau _0)$ ), one has $\\langle R_\\tau (t),\\psi _{\\tau }(X_\\tau (t))-\\psi _{\\tau }(X(t))\\rangle _{\\mathcal {H}}\\le C(\\tau _0)\\Vert R_\\tau (t)\\Vert _{\\mathcal {H}}^2.$ Finally, using the Cauchy–Schwarz and Young inequalities and the error estimate (REF ) from Proposition REF , one has $\\langle R_\\tau (t), \\psi _\\tau (X(t))-F(X(t))\\rangle _{\\mathcal {H}}&\\le \\Vert R_\\tau (t)\\Vert _\\mathcal {H}\\Vert \\psi _\\tau (X(t))-F(X(t))\\Vert _\\mathcal {H}\\\\&\\le \\frac{1}{2}\\Vert R_\\tau (t)\\Vert _\\mathcal {H}^2+\\frac{1}{2}\\Vert \\psi _\\tau (X(t))-F(X(t))\\Vert _\\mathcal {H}^2\\\\&\\le \\frac{1}{2}\\Vert R_\\tau (t)\\Vert _\\mathcal {H}^2+C(\\tau _0)\\tau ^2\\bigl (1+\\Vert X(t)\\Vert _\\mathcal {E}^{10}\\bigr ).$ Gathering the upper bounds above and using Gronwall's lemma, one obtains, almost surely, for all $t\\in [0,T]$ $\\Vert R_\\tau (t)\\Vert _\\mathcal {H}^2\\le C(T,\\tau _0)\\tau ^2\\int _{0}^{T}\\bigl (1+\\Vert X(s)\\Vert _\\mathcal {E}^{10}\\bigr )\\,\\text{d}s.$ Using the moment bound (REF ) from Proposition REF , one then obtains for all $t\\in [0,T]$ and all $p\\in [2,\\infty )$ $\\bigl (\\mathbb {E}[\\Vert R_\\tau (t)\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{2}{p}}&\\le C(T,\\tau _0)\\tau ^2\\int _{0}^{T}\\bigl (1+\\mathbb {E}[\\Vert X(s)\\Vert _\\mathcal {E}^{5p}]^{\\frac{2}{p}}\\bigr )\\,\\text{d}s\\\\&\\le C(T,\\tau _0)\\tau ^2\\bigl (1+\\underset{s\\in [0,T]}{\\sup }~\\mathbb {E}[\\Vert X(s)\\Vert _\\mathcal {E}^{5p}]^{\\frac{2}{p}}\\bigr )\\\\&\\le C_p(T,\\tau _0)\\tau ^2\\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^{10}\\bigr ).$ This estimate has been proved for $p\\in [2,\\infty )$ , however it is also valid for $p\\in [1,2)$ .", "This concludes the proof of the error estimate (REF ) and of Lemma REF .", "In order to prove Lemma REF , let us recall the following useful standard inequality: $\\underset{n\\in \\mathbb {N},z\\in [0,\\infty )}{\\sup }~n|\\frac{1}{(1+z)^n}-e^{-nz}|+\\underset{n\\in \\mathbb {N},z\\in [0,\\infty )}{\\sup }~\\frac{|\\frac{1}{(1+z)^n}-e^{-nz}|}{\\min (1,z)}<\\infty .$ In addition, for all $\\alpha \\in [0,1]$ , $n\\in \\mathbb {N}$ and $z\\in [0,\\infty )$ , one has $\\min (1,z)\\le z^\\alpha $ .", "See Section  in the appendix for a proof.", "[Proof of Lemma REF ] Using the mild formulations (REF ) for $X_\\tau (t_n)$ and (REF ) for $X_n^{\\rm aux}$ , one obtains the following decomposition of the error: for all $n\\ge 0$ , one has $X_\\tau (t_n)-X_n^{\\rm aux}=E_{n}^{\\tau ,1}+E_{n}^{\\tau ,2}+E_{n}^{\\tau ,3}+E_{n}^{\\tau 
,4}+E_{n}^{\\tau ,5},$ where $E_{n}^{\\tau ,1}&=(e^{-n\\tau \\Lambda }-\\mathcal {A}_\\tau ^{n})x_0\\\\E_{n}^{\\tau ,2}&=\\mathcal {Z}(t_n)-\\mathcal {Z}_n\\\\E_{n}^{\\tau ,3}&=\\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}e^{-(t_n-s)\\Lambda }\\bigl (\\psi _\\tau (X_\\tau (s))-\\psi _\\tau (X_{\\tau }(t_{k}))\\bigr )\\,\\text{d}s\\\\E_{n}^{\\tau ,4}&=\\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}\\bigl (e^{-(t_n-s)\\Lambda }-e^{-(t_n-t_{k})\\Lambda }\\bigr )\\psi _\\tau (X_{\\tau }(t_{k}))\\,\\text{d}s\\\\E_{n}^{\\tau ,5}&=\\tau \\sum _{k=0}^{n-1}\\bigl (e^{-(t_n-t_{k})\\Lambda }-\\mathcal {A}_\\tau ^{n-k}\\bigr )\\psi _\\tau (X_\\tau (t_{k})).$ Let us now give estimates for those five error terms.", "$\\bullet $ If the splitting schemes (REF ) and (REF ) are considered, one has $\\mathcal {A}_\\tau =e^{-\\tau \\Lambda }$ and thus $E_n^{\\tau ,1}=0$ for all $n\\ge 0$ .", "If the splitting scheme (REF ) is considered, one has $\\mathcal {A}_\\tau =(I+\\tau \\Lambda )^{-1}$ , thus using the inequality (REF ), for all $n\\in \\lbrace 0,\\ldots ,N\\rbrace $ , one has $\\Vert E_n^{\\tau ,1}\\Vert _\\mathcal {H}^2&=\\Vert \\bigl (e^{n\\tau \\Delta }-((I-\\tau \\Delta )^{-1})^n\\bigr )u_0\\Vert _{H}^2\\\\&=\\sum _{j=1}^{\\infty }(\\frac{1}{(1+\\tau \\lambda _j)^n}-e^{-n\\tau \\lambda _j}\\bigr )^2\\langle u_0,e_j\\rangle _{H}^2\\\\&\\le C_\\alpha \\sum _{j=1}^{\\infty }(\\tau \\lambda _j)^{2\\alpha }\\langle u_0,e_j\\rangle _{H}^2\\\\&\\le C_\\alpha \\tau ^{2\\alpha }\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^2.$ Therefore one obtains the following upper bound: for all $\\alpha \\in [0,\\frac{1}{4})$ , there exists $C_\\alpha \\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert E_n^{\\tau ,1}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha }\\tau ^\\alpha \\Vert (-\\Delta )^\\alpha u_0\\Vert _H$ $\\bullet $ Note that if the splitting scheme (REF ) is considered ($X_n=X_n^{\\rm LT, exact}$ for all $n\\ge 0$ ), one has $E_n^{\\tau ,2}=0$ for all $n\\ge 0$ .", "If the splitting schemes (REF ) and (REF ) are considered, for all $n\\ge 0$ one has $E_n^{\\tau ,2}=\\mathcal {Z}(t_n)-\\mathcal {Z}_n=\\begin{pmatrix} Z(t_n)-Z_n\\\\ 0\\end{pmatrix},$ with $Z_n=Z_n^{\\rm LT, expo}$ (resp.", "$Z_n=Z_n^{\\rm LT, imp}$ ) if the scheme (REF ) (resp.", "the scheme (REF )) is considered.", "Here, we denote $Z_{n+1}^{\\rm LT, expo}&=e^{\\tau \\Delta }\\Bigl (Z_{n}^{\\rm LT, expo}+\\delta W_n\\Bigr )\\\\Z_{n+1}^{\\rm LT, imp}&=(I-\\tau \\Delta )^{-1}\\Bigl (Z_{n}^{\\rm LT, imp}+\\delta W_n\\Bigr ).$ One has the following mean-square error estimate, which are standard results in the analysis of numerical schemes for parabolic semilinear stochastic partial differential equations, see for instance [40]: for all $\\alpha \\in [0,\\frac{1}{4})$ , there exists $C_\\alpha \\in (0,\\infty )$ such that $\\underset{n\\ge 0}{\\sup }~\\mathbb {E}[\\Vert Z(t_n)-Z_n\\Vert _H^2]\\le C_\\alpha \\tau ^{2\\alpha },$ if $Z_n=Z_n^{\\rm LT, expo}$ and $Z_n=Z_n^{\\rm LT, imp}$ .", "Since $Z(t_n)-Z_n$ is a $H$ -valued Gaussian random variable, one obtains the following upper bound: for all $\\alpha \\in [0,\\frac{1}{4})$ and $p\\in [1,\\infty )$ , there exists $C_{\\alpha ,p}\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert E_n^{\\tau ,2}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}\\tau ^\\alpha .$ $\\bullet $ Using the inequality (REF ) and the local 
Lipschitz continuity property () of $\\psi _\\tau $ (Proposition REF ), one obtains $\\Vert E_n^{\\tau ,3}\\Vert _\\mathcal {H}&\\le \\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}\\Vert e^{-(t_n-s)\\Lambda }\\bigl (\\psi _\\tau (X_\\tau (s))-\\psi _\\tau (X_{\\tau }(t_{k}))\\bigr )\\Vert _{\\mathcal {H}}\\,\\text{d}s\\\\&\\le \\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}\\Vert \\bigl (\\psi _\\tau (X_\\tau (s))-\\psi _\\tau (X_{\\tau }(t_{k}))\\bigr )\\Vert _{\\mathcal {H}}\\,\\text{d}s\\\\&\\le C(\\tau _0)\\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}\\bigl (1+\\Vert X_\\tau (s)\\Vert _\\mathcal {E}^3+\\Vert X_\\tau (t_k)\\Vert _\\mathcal {E}^3\\bigr )\\Vert X_\\tau (s)-X_{\\tau }(t_{k})\\Vert _{\\mathcal {H}}\\,\\text{d}s.$ Using the Minkowskii and Cauchy–Schwarz inequalities, the moment bound (REF ) (Proposition REF ) and the regularity estimate (REF ) (Lemma REF ), one has $\\bigl (\\mathbb {E}[\\Vert E_n^{\\tau ,3}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}&\\le C(\\tau _0)\\sum _{k=0}^{n-1}\\int _{t_k}^{t_{k+1}}\\bigl (1+\\underset{r\\in [t_{k},t_{k+1}]}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X_\\tau (r)\\Vert _\\mathcal {E}^{6p}]\\bigr )^{\\frac{1}{2p}}\\bigr )\\bigl (\\mathbb {E}[\\Vert X_\\tau (s)-X_{\\tau }(t_{k})\\Vert _{\\mathcal {H}}^{2p}]\\bigr )^{\\frac{1}{2p}}\\,\\text{d}s\\\\&\\le C_{\\alpha ,p}(T)\\tau ^\\alpha (1+\\Vert x_0\\Vert _\\mathcal {E}^3)\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^4+\\Vert x_0\\Vert _\\mathcal {E}^4\\bigr ).$ Therefore one obtains the following upper bound: for all $\\alpha \\in [0,\\frac{1}{4})$ , $p\\in [1,\\infty )$ and $T\\in (0,\\infty )$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert E_n^{\\tau ,3}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)\\tau ^\\alpha \\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr ).$ $\\bullet $ Using the inequality (REF ) from Proposition REF (with $\\mu =\\alpha \\in [0,1)$ and $\\nu =0$ ) and the local Lipschitz continuity property () of $\\psi _\\tau $ combined with the bound (REF ) (Proposition REF ), one has for all $s\\in [t_{k},t_{k+1}]$ $\\Vert \\bigl (e^{-(t_n-s)\\Lambda }-e^{-(t_n-t_{k})\\Lambda }\\bigr )\\psi _\\tau (X_{\\tau }(t_{k}))\\Vert _\\mathcal {H}&\\le C_{\\alpha }\\frac{|s-t_k|^\\alpha }{(t_n-s)^\\alpha }\\Vert \\psi _\\tau (X_{\\tau }(t_{k}))\\Vert _\\mathcal {H}\\\\&\\le C_{\\alpha }\\frac{\\tau ^\\alpha }{(t_n-s)^\\alpha }\\bigl (1+\\Vert X_\\tau (t_k)\\Vert _\\mathcal {E}^4\\bigr ).$ Using the Minkoswskii inequality, the moment bounds (REF ) from Proposition REF , and the fact that $\\int _0^T s^{-\\alpha }\\,\\text{d}s<\\infty $ for $\\alpha \\in [0,1)$ , one obtains the following upper bound: for all $\\alpha \\in [0,\\frac{1}{4})$ , $p\\in [1,\\infty )$ and $T\\in (0,\\infty )$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert E_n^{\\tau ,4}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)\\tau ^\\alpha \\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^4\\bigr ).$ $\\bullet $ Note that if the splitting schemes (REF ) and (REF ) are considered, one has $\\mathcal {A}_\\tau =e^{-\\tau \\Lambda }$ and thus $E_n^{\\tau ,5}=0$ for all $n\\ge 0$ .", "If the splitting scheme (REF ) is considered, one has $\\mathcal {A}_\\tau =(I+\\tau \\Lambda )^{-1}$ .", "Using the inequality (REF ), for all 
$x=(u,v)\\in \\mathcal {H}$ and all $0\\le k\\le n-1$ one has $\\Vert (e^{-(t_n-t_k)\\Lambda }x-\\mathcal {A}_\\tau ^{n-k}x\\Vert _\\mathcal {H}=\\Vert e^{(n-k)\\tau \\Delta }u-((I-\\tau \\Delta )^{-1})^{n-k}u\\Vert _H\\le \\frac{C\\Vert u\\Vert _H}{(n-k)}\\le \\frac{C\\Vert x\\Vert _\\mathcal {H}}{(n-k)^\\alpha }.$ As a consequence, using the Minkowskii inequality, the local Lipschitz continuity property () of $\\psi _\\tau $ combined with the bound (REF ) (Proposition REF ) and the moment bounds (REF ) from Proposition REF , one has $\\bigl (\\mathbb {E}[\\Vert E_n^{\\tau ,5}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}&\\le \\tau \\sum _{k=0}^{n-1}\\frac{C}{(n-k)^\\alpha }\\bigl (1+\\bigl (\\mathbb {E}[\\Vert X_\\tau (t_k)\\Vert _\\mathcal {E}^{4p}]\\bigr )^{\\frac{1}{p}}\\bigr )\\\\&\\le C_p(T)\\tau \\sum _{\\ell =1}^{n}\\frac{1}{t_{\\ell }^\\alpha } \\tau ^\\alpha \\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^4\\bigr ).$ Using the fact that for all $\\alpha \\in [0,1)$ one has $\\underset{\\tau \\in (0,\\tau _0)}{\\sup }~\\tau \\sum _{\\ell =1}^{N}\\frac{1}{t_{\\ell }^\\alpha }<\\infty ,$ one obtains the following upper bound: for all $\\alpha \\in [0,\\frac{1}{4})$ , $p\\in [1,\\infty )$ and $T\\in (0,\\infty )$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert E_n^{\\tau ,5}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)\\tau ^\\alpha \\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^4\\bigr ).$ We are now in position to conclude the proof: using the decomposition of the error (REF ) and the upper bounds (REF ), (REF ), (REF ),  (REF ) and (REF ), one obtains the following upper bound: for all $\\alpha \\in [0,\\frac{1}{4})$ , $p\\in [1,\\infty )$ and $T\\in (0,\\infty )$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X_\\tau (t_n)-X_n^{\\rm aux}\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}\\le C_{\\alpha ,p}(T)\\tau ^\\alpha \\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^7+\\Vert x_0\\Vert _\\mathcal {E}^7\\bigr ).$ This concludes the proof of the inequality (REF ) and the proof of Lemma REF is completed.", "Note that the proof of Lemma REF above does not use Gronwall inequalities arguments.", "[Proof of Lemma REF ] Using the expressions (REF ) and (REF ) for $X_n^{\\rm aux}$ and $X_n$ , and the definition (REF ) of the mapping $\\psi _\\tau $ , for all $n\\in \\lbrace 0,\\ldots ,N-1\\rbrace $ one obtains $X_{n+1}^{\\rm aux}-X_{n+1}=\\mathcal {A}_\\tau \\bigl (X_n^{\\rm aux}-X_n\\bigr )+\\tau \\mathcal {A}_\\tau \\bigl (\\psi _\\tau (X_\\tau (t_n))-\\psi _\\tau (X_n)\\bigr ).$ Writing $\\psi _\\tau (X_\\tau (t_n))=\\psi _\\tau (X_\\tau (t_n))-\\psi _\\tau (X_n^{\\rm aux})+\\psi _\\tau (X_n^{\\rm aux}),$ and using again the identity (REF ), one obtains $X_{n+1}^{\\rm aux}-X_{n+1}=\\mathcal {A}_\\tau \\bigl (\\phi _\\tau (X_n^{\\rm aux})-\\phi _\\tau (X_n)\\bigr )+\\tau \\mathcal {A}_\\tau \\bigl (\\psi _\\tau (X_\\tau (t_n))-\\psi _\\tau (X_n^{\\rm aux})\\bigr ).$ On the one hand, using the inequalities (REF ) (Proposition REF ), if $\\mathcal {A}_\\tau =e^{-\\tau \\Lambda }$ and (REF ), if $\\mathcal {A}_\\tau =(I+\\tau \\Lambda )^{-1}$ , and the global Lipschitz continuity property (REF ) of $\\phi _\\tau $ (Proposition REF ), one obtains $\\Vert \\mathcal {A}_\\tau \\bigl (\\phi _\\tau (X_n^{\\rm aux})-\\phi _\\tau (X_n)\\bigr 
)\\Vert _\\mathcal {H}&\\le \\Vert \\phi _\\tau (X_n^{\\rm aux})-\\phi _\\tau (X_n)\\Vert _\\mathcal {H}\\\\&\\le e^{\\tau (1+B)}\\Vert X_n^{\\rm aux}-X_n\\Vert _\\mathcal {H}.$ On the other hand, using the inequalities (REF ) (Proposition REF ), if $\\mathcal {A}_\\tau =e^{-\\tau \\Lambda }$ and (REF ), if $\\mathcal {A}_\\tau =(I+\\tau \\Lambda )^{-1}$ , and the local Lipschitz continuity property () of $\\psi _\\tau $ (Proposition REF ), one obtains $\\Vert \\mathcal {A}_\\tau \\bigl (\\psi _\\tau (X_\\tau (t_n))-\\psi _\\tau (X_n^{\\rm aux})\\bigr )\\Vert _\\mathcal {H}&\\le \\Vert \\psi _\\tau (X_\\tau (t_n))-\\psi _\\tau (X_n^{\\rm aux})\\Vert _\\mathcal {H}\\\\&\\le C(\\tau _0)\\Bigl (1+\\Vert X_\\tau (t_n)\\Vert _\\mathcal {E}^3+\\Vert X_n^{\\rm aux}\\Vert _\\mathcal {E}^3\\Bigr )\\Vert X_\\tau (t_n)-X_n^{\\rm aux}\\Vert _\\mathcal {H}.$ By a straightforward argument, since $X_0^{\\rm aux}=X_0=x_0$ , for all $n\\in \\lbrace 0,\\ldots ,N\\rbrace $ , one has $\\Vert X_n^{\\rm aux}-X_n\\Vert _\\mathcal {H}\\le C(\\tau _0)e^{T(1+B)}\\tau \\sum _{k=1}^{N}\\Bigl (1+\\Vert X_\\tau (t_k)\\Vert _\\mathcal {E}^3+\\Vert X_k^{\\rm aux}\\Vert _\\mathcal {E}^3\\Bigr )\\Vert X_\\tau (t_k)-X_k^{\\rm aux}\\Vert _\\mathcal {H}.$ Using the Minkowski and Cauchy–Schwarz inequalities, the moment bounds (REF ) and (REF ) from Proposition REF and Lemma REF respectively, and the error estimate (REF ) from Lemma REF , one obtains the following strong error estimate: for all $\\alpha \\in [0,\\frac{1}{4})$ , $p\\in [1,\\infty )$ and $T\\in (0,\\infty )$ , there exists $C_{\\alpha ,p}(T)\\in (0,\\infty )$ such that for all $\\tau \\in (0,\\tau _0)$ one has $\\underset{0\\le n\\le N}{\\sup }~\\bigl (\\mathbb {E}[\\Vert X_n^{\\rm aux}-X_n\\Vert _\\mathcal {H}^p]\\bigr )^{\\frac{1}{p}}&\\le C(T)\\tau \\sum _{k=1}^{N}\\Bigl (1+\\bigl (\\mathbb {E}[\\Vert X_\\tau (t_k)\\Vert _\\mathcal {E}^{6p}]\\bigr )^{\\frac{1}{2p}}+\\bigl (\\mathbb {E}[\\Vert X_k^{\\rm aux}\\Vert _\\mathcal {E}^{6p}]\\bigr )^{\\frac{1}{2p}}\\Bigr )\\\\&\\quad \\times \\bigl (\\mathbb {E}[\\Vert X_\\tau (t_k)-X_k^{\\rm aux}\\Vert _\\mathcal {H}^{2p}]\\bigr )^{\\frac{1}{2p}}\\\\&\\le C_{\\alpha ,p}(T)\\tau ^{\\alpha }\\bigl (1+\\Vert x_0\\Vert _\\mathcal {E}^3\\bigr )\\bigl (1+\\Vert (-\\Delta )^\\alpha u_0\\Vert _H^4+\\Vert x_0\\Vert _\\mathcal {E}^4\\bigr ).$ This concludes the proof of the inequality (REF ) and the proof of Lemma REF is thus completed." ], [ "Numerical experiments", "This section presents numerical experiments to support and illustrate the above theoretical results.", "To perform these numerical experiments, we consider the stochastic FitzHugh–Nagumo SPDE system (REF ) with Neumann boundary conditions on the interval $[0,1]$ .", "The spatial discretization is performed using a standard finite difference method with mesh size denoted by $h$ .", "In order to obtain a linear system with a symmetric matrix, we use centered differences for the numerical discretization of the Laplacian, while first order differences are used for the discretization of the Neumann boundary conditions.", "The initial values are given by $u_0(\\zeta )=\\cos (2\\pi \\zeta )$ and $v_0(\\zeta )=\\cos (2\\pi \\zeta )$ .", "For the temporal discretization, we use the three Lie–Trotter splitting integrators (REF ), (REF ) and (REF ) studied in this paper, denoted below by LTexact, LTexpo, LTimp respectively."
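Before turning to the experiments, the following minimal sketch illustrates how one time step of an LTexpo-type discretization can be assembled with the spatial setup just described. Everything not stated explicitly above is an assumption flagged in the comments: the operator $\\Lambda $ is taken to act as the Neumann Laplacian on the $u$ component only, the coupling matrix $B$ and the constant $\\beta $ enter through placeholder definitions of the exact flows $\\phi _\\tau ^{\\rm L}$ and $\\phi _\\tau ^{\\rm NL}$ , the noise is treated as space-time white noise acting on $u$ with the usual grid scaling $\\sqrt{\\tau /h}$ , and the stochastic-convolution increment is approximated by $e^{\\tau \\Delta }\\delta W_n$ as in the recursion for $Z_n^{\\rm LT, expo}$ above. This is a sketch for orientation, not the implementation used to produce the figures.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (gamma_1, gamma_2, beta follow the experiments below;
# the form of the coupling matrix B is an assumption, not the paper's definition).
gamma1, gamma2, beta = 0.08, 0.8 * 0.08, 0.7
B = np.array([[0.0, -1.0], [gamma1, -gamma2]])

J = 128                      # coarser grid than the h = 2**-10 used in the paper
h = 1.0 / J
zeta = np.linspace(0.0, 1.0, J + 1)

# Symmetric finite-difference Laplacian with first-order Neumann boundary rows.
D = np.zeros((J + 1, J + 1))
for j in range(1, J):
    D[j, j - 1], D[j, j], D[j, j + 1] = 1.0, -2.0, 1.0
D[0, 0], D[0, 1] = -1.0, 1.0
D[J, J], D[J, J - 1] = -1.0, 1.0
D /= h**2

tau = 2.0**-10
E_tau = expm(tau * D)        # heat semigroup e^{tau * Delta_h}, acting on u only
e_tauB = expm(tau * B)       # exact flow of the bounded linear coupling

def phi_AC(tau, u):
    # Closed-form flow of u' = u - u**3, applied pointwise on the grid (assumed form).
    return u / np.sqrt(np.exp(-2.0 * tau) + u**2 * (1.0 - np.exp(-2.0 * tau)))

def lt_expo_step(u, v, rng):
    # Nonlinear flow phi_tau^NL: u' = u - u**3 and v' = beta, pointwise in space.
    u1, v1 = phi_AC(tau, u), v + beta * tau
    # Linear-coupling flow phi_tau^L = exp(tau*B), pointwise in space.
    u2 = e_tauB[0, 0] * u1 + e_tauB[0, 1] * v1
    v2 = e_tauB[1, 0] * u1 + e_tauB[1, 1] * v1
    # Heat semigroup on u plus an approximate stochastic-convolution increment
    # e^{tau*Delta}(dW_n), using the usual grid proxy for space-time white noise.
    dW = np.sqrt(tau / h) * rng.standard_normal(u.shape)
    return E_tau @ (u2 + dW), v2

rng = np.random.default_rng(0)
u = np.cos(2.0 * np.pi * zeta)
v = np.cos(2.0 * np.pi * zeta)
for _ in range(int(1.0 / tau)):   # integrate on [0, T] = [0, 1]
    u, v = lt_expo_step(u, v, rng)
```

The LTexact and LTimp variants would differ only in how the heat semigroup and the stochastic term are treated, replacing the last step accordingly.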
], [ "Evolution plots", "Let us first display one sample of the numerical solutions of the stochastic FitzHugh–Nagumo system (REF ) with the parameters $\\gamma _1=0.08$ , $\\gamma _2=0.8\\gamma _1$ and $\\beta =0.7$ .", "The SPDE is discretized with finite differences with mesh $h=2^{-10}$ .", "We consider the time interval $[0,T]=[0,1]$ and apply the integrators with time step size $\\tau =2^{-15}$ .", "The results are presented in Figure REF .", "The general behaviour of the numerical solutions given by the three splitting schemes is the same.", "However, one can observe a spatial smoothing effect in the $u$ component of the solution when the schemes LTexpo–(REF ) or to some extent LTimp–(REF ) are applied: for a given time step size, the spatial regularity of the numerical solution is increased compared with the one of the exact solution.", "On the contrary, the scheme LTexact–(REF ) preserves the spatial regularity of the solution for any value of the time step size.", "We refer to the recent preprint [9] for the analysis of this phenomenon for parabolic semilinear SPDEs.", "Let us emphasize that the phenomenon is due to the way the stochastic convolution is computed, exactly for the scheme LTexact–(REF ) or approximately for the schemes LTexpo–(REF ) and LTimp–(REF ).", "Figure: Space-time evolution plots of uu and vv using the Lie–Trotter splitting schemes LTexact, LTexpo, and LTimp." ], [ "Mean-square error plots", "To illustrate the rates of strong convergence for the Lie–Trotter splitting schemes stated in Theorem REF , we consider the stochastic FitzHugh–Nagumo system (REF ) with the parameters $\\gamma _1=\\gamma _2=\\beta =1$ , with $T=1$ and apply a finite difference method with $h=2^{-9}$ for spatial discretization.", "We apply the Lie–Trotter splitting schemes with time steps ranging from $2^{-10}$ to $2^{-18}$ .", "The reference solution is computed using the scheme LTexact–(REF ) with time step size $\\tau _{\\rm ref}=2^{-18}$ .", "The expectation is approximated using $M_s=100$ samples.", "We have checked that the Monte Carlo error is negligible.", "A plot in logarithmic scales for the mean-square errors $\\bigl (\\mathbb {E}[\\Vert X(t_N)-X_N\\Vert _\\mathcal {H}^2]\\bigr )^{\\frac{1}{2}}$ is given on the left-hand side of Figure REF .", "We observe that the strong rate of convergence for the three considered Lie–Trotter splitting schemes is at least $1/4$ , which illustrates the result stated in Theorem REF .", "Furthermore, the numerical experiments suggest that for the scheme LTexact–(REF ) the order of convergence is $1/2$ , which is not covered by Theorem REF .", "The fact that using an accelerated exponential Euler scheme where the stochastic convolution is computed exactly yields higher order of convergence is known for parabolic semilinear stochastic PDEs driven by space-time white noise, under appropriate conditions, see for instance [26] or [9].", "However, the stochastic FitzHugh–Nagumo equations considered in this article are not parabolic systems therefore it is not known how to prove the observed higher order strong rate of convergence.", "This question may be studied in future works.", "The right-hand side of Figure REF shows the errors for the variant (REF ) of the splitting scheme (REF ) introduced in Remark REF : the mapping $\\phi _\\tau =\\phi _\\tau ^{\\rm L}\\circ \\phi _\\tau ^{\\rm NL}$ given by (REF ) is replaced by $\\hat{\\phi }_\\tau =\\phi _\\tau ^{\\rm NL}\\circ \\phi _\\tau ^{\\rm L}$ given by (REF ).", "As explained in Remark REF , this 
type of Lie–Trotter schemes is not covered by the results in Section REF , more precisely the moment bounds in Theorem REF cannot be proved by the techniques used in this article.", "However, the numerical experiments are similar to those on the left-hand side of Figure REF and suggest that the strong order of convergence for this variant is at least $1/4$ , and that higher order convergence with rate $1/2$ may be obtained for the variant of the scheme LTexact–(REF ).", "Figure: Mean-square errors as a function of the time step:Lie–Trotter splitting schemes: left (φ τ =φ τ L ∘φ τ NL \\phi _\\tau =\\phi _\\tau ^{\\rm L}\\circ \\phi _\\tau ^{\\rm NL})and right (φ τ =φ τ NL ∘φ τ L \\phi _\\tau =\\phi _\\tau ^{\\rm NL}\\circ \\phi _\\tau ^{\\rm L}) (⋄\\diamond for LTexact,□\\square for LTexpo, for LTimp).The dotted lines have slopes 1/21/2 and 1/41/4." ], [ "Acknowledgements", "The work of CEB is partially supported by the following project SIMALIN (ANR-19-CE40-0016) operated by the French National Research Agency.", "The work of DC is partially supported by the Swedish Research Council (VR) (projects nr.", "$2018-04443$ ).", "The computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at UPPMAX, Uppsala University." ], [ "Proof of the inequality (", "Let us first state two elementary inequalities: for all $0\\le a\\le b$ and $n\\in \\mathbb {N}$ , one has $0\\le b^n-a^n\\le nb^{n-1}(b-a),$ for all $z\\in [0,\\infty )$ , one has $0\\le \\frac{1}{1+z}-e^{-z}\\le C\\min (1,z^2).$ As a consequence, for all $n\\in \\mathbb {N}$ and $z\\in [0,\\infty )$ one has $0\\le \\frac{1}{(1+z)^n}-e^{-nz}\\le \\frac{n}{(1+z)^{n-1}}\\bigl (\\frac{1}{1+z}-e^{-z}\\bigr ).$ [Proof of (REF )] For all $n\\ge 3$ and $z\\in [0,\\infty )$ , one has $n|\\frac{1}{(1+z)^n}-e^{-nz}|&\\le \\frac{Cn^2z^2}{(1+z)^{n-1}}\\le \\frac{Cn^2z^2}{1+(n-1)z+\\frac{(n-1)(n-2)}{2}z^2}\\le \\frac{2Cn^2}{(n-1)(n-2)}\\le C.$ The cases $n=1$ and $n=2$ are treated separately, one has $\\underset{z\\in [0,\\infty )}{\\sup }~|\\frac{1}{(1+z)}-e^{-z}|+\\underset{z\\in [0,\\infty )}{\\sup }~2|\\frac{1}{(1+z)^2}-e^{-2z}|<\\infty .$ This concludes the proof of the first inequality.", "To prove the second inequality, observe first that one has $\\underset{n\\in \\mathbb {N},z\\in [0,\\infty )}{\\sup }~|\\frac{1}{(1+z)^n}-e^{-nz}|\\le 2.$ In addition, for all $n\\ge 2$ and $@miscz\\in [0,\\infty )$ , one has $\\frac{|\\frac{1}{(1+z)^n}-e^{-nz}|}{z}&\\le \\frac{Cnz}{(1+z)^{n-1}}\\le \\frac{Cnz}{1+(n-1)z}\\le \\frac{Cn}{n-1}\\le C.$ The case $n=1$ is treated separately: using the inequality $\\min (1,z^2)\\le z$ one has $\\underset{z\\in [0,\\infty )}{\\sup }~\\frac{|\\frac{1}{1+z}-e^{-z}|}{z}\\le C.$ Gathering the results concludes the proof of the second inequality." ] ]
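For orientation, the two suprema shown above to be finite can also be checked numerically; the short script below evaluates them on a coarse grid of $n$ and $z$ . It is an illustration only and plays no role in the proof.

```python
import numpy as np

# Evaluate sup over n, z of n*|1/(1+z)^n - exp(-n z)| and of |1/(1+z)^n - exp(-n z)|/min(1, z)
# on a finite grid (illustrative check of the bounds proved above).
z = np.linspace(1e-6, 50.0, 20000)
sup1, sup2 = 0.0, 0.0
for n in range(1, 201):
    diff = np.abs(np.exp(-n * np.log1p(z)) - np.exp(-n * z))  # |1/(1+z)^n - e^{-nz}|
    sup1 = max(sup1, float((n * diff).max()))
    sup2 = max(sup2, float((diff / np.minimum(1.0, z)).max()))
print(sup1, sup2)  # both stay of order one, consistent with the stated inequalities
```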
2207.10484
[ [ "Magnetic imaging with spin defects in hexagonal boron nitride" ], [ "Abstract Optically-active spin defects hosted in hexagonal boron nitride (hBN) are promising candidates for the development of a two-dimensional (2D) quantum sensing unit.", "Here, we demonstrate quantitative magnetic imaging with hBN flakes doped with negatively-charged boron-vacancy (V$_{\\rm B}^-$) centers through neutron irradiation.", "As a proof-of-concept, we image the magnetic field produced by CrTe$_2$, a van der Waals ferromagnet with a Curie temperature slightly above $300$ K. Compared to other quantum sensors embedded in 3D materials, the advantages of the hBN-based magnetic sensor described in this work are its ease of use, high flexibility and, more importantly, its ability to be placed in close proximity to a target sample.", "Such a sensing unit will likely find numerous applications in 2D materials research by offering a simple way to probe the physics of van der Waals heterostructures." ], [ "separate-uncertainty Magnetic imaging with spin defects in hexagonal boron nitride P. Kumar F. Fabre A. Durand T. Clua-Provost Laboratoire Charles Coulomb, Université de Montpellier and CNRS, 34095 Montpellier, France J. Li J. H. Edgar Tim Taylor Department of Chemical Engineering, Kansas State University, Manhattan, Kansas 66506, USA N. Rougemaille J. Coraux Univ.", "Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France X. Marie P. Renucci C. Robert Université de Toulouse, INSA-CNRS-UPS, LPCNO, 135 Avenue Rangueil, 31077 Toulouse, France I. Robert-Philip B. Gil G. Cassabois A. Finco V. Jacques Laboratoire Charles Coulomb, Université de Montpellier and CNRS, 34095 Montpellier, France Optically-active spin defects hosted in hexagonal boron nitride (hBN) are promising candidates for the development of a two-dimensional (2D) quantum sensing unit.", "Here, we demonstrate quantitative magnetic imaging with hBN flakes doped with negatively-charged boron-vacancy (V$_\\text{B}^-$ ) centers through neutron irradiation.", "As a proof-of-concept, we image the magnetic field produced by CrTe2, a van der Waals ferromagnet with a Curie temperature slightly above 300.", "Compared to other quantum sensors embedded in 3D materials, the advantages of the hBN-based magnetic sensor described in this work are its ease of use, high flexibility and, more importantly, its ability to be placed in close proximity to a target sample.", "Such a sensing unit will likely find numerous applications in 2D materials research by offering a simple way to probe the physics of van der Waals heterostructures.", "Quantum sensing technologies based on solid-state spin defects have already shown a huge potential to cover the growing need for high-precision sensors [1], both for fundamental research and for industrial applications.", "The most advanced quantum sensing platforms to date rely on optically-active spin defects embedded in three-dimensional (3D) materials [2].", "A prime example is the nitrogen-vacancy (NV) center in diamond [3], [4], [5], which has already found a wide range of applications in condensed matter physics [6], life sciences [7] and geophysics [8].", "Despite such success, NV-based quantum sensing technologies still face several limitations that mainly result from the 3D structure of the diamond host matrix.", "They include (i) a limited proximity between the quantum sensor and the target sample, which hampers its sensitivity, and (ii) the inability to engineer ultrathin and flexible diamond layers, which 
precludes an easy transfer of the quantum sensing unit onto the samples to be probed as well as its integration into complex multifunctional devices.", "An emerging strategy to circumvent these limitations consists in using spin defects embedded in a van der Waals crystal that could be exfoliated down to the monolayer limit [9], [10].", "Such a 2D quantum sensing foil would offer atomic-scale proximity to the probed sample together with an increased versatility and flexibility for device integration.", "Hexagonal boron nitride (hBN) is currently the most promising van der Waals crystal for the design of quantum sensing foils [9], [10].", "This insulating material, which can be easily exfoliated down to few atomic layers while maintaining chemical stability, is extensively used for encapsulation of van der Waals heterostructures [11].", "Furthermore, hBN hosts a broad diversity of optically-active point defects owing to its wide bandgap [12], [13], [14], [15].", "For some of these defects, the electron spin resonance (ESR) can be detected optically, offering a key resource for quantum sensing applications [9].", "While several spin-active defects with unknown microscopic structures have been recently isolated at the single level in hBN [16], [17], [18], most studies to date have been focused on ensembles of negatively-charged boron vacancy (V$_\\text{B}^-$ ) centers [19] [Fig.", "REF (a)].", "Under green laser illumination, this defect produces a broadband photoluminescence (PL) signal in the near infrared.", "In addition, the V$_\\text{B}^-$ center features magneto-optical properties very similar to those of the NV defect in diamond, with a spin triplet ground state whose ESR frequencies can be measured via optically-detected magnetic resonance methods [19], [20].", "Despite a low quantum yield that has so far prevented its optical detection at the single scale, ensembles of V$_\\text{B}^-$ centers have recently found first applications as quantum sensors in van der Waals heterostructures [21], [22].", "In this work, we analyze the performances of hBN flakes doped with V$_\\text{B}^-$ centers by neutron irradiation for quantitative magnetic field imaging.", "As a proof-of-concept, we image the magnetic field produced by CrTe2, a van der Waals ferromagnet with in-plane magnetic anisotropy, which exhibits a Curie temperature ($T_{\\rm c}$ ) slightly above 300 [23], [24], [25], [26].", "Magnetic flakes with a thickness of a few tens of nanometers were obtained by mechanical exfoliation of a bulk $1T$ -CrTe2 crystal and then transferred on a SiO2/Si substrate.", "For magnetic imaging, we rely on a monoisotopic h10BN crystal grown from a Ni-Cr flux [27], which was irradiated with thermal neutrons with a dose of about 2.6e16n.", "The interest of isotopic purification with 10B lies in its very large neutron capture cross section, which ensures an efficiently creation of V$_\\text{B}^-$ centers via neutron transmutation doping [28], [29].", "hBN flakes mechanically exfoliated from this neutron-irradiated crystal were deposited above the CrTe2 flakes.", "Besides providing magnetic imaging capabilities, the V$_\\text{B}^-$ -doped hBN capping layer also protects CrTe2 from degradation of its magnetic properties.", "For each sample, the thickness of the different layers was inferred by atomic force microscopy (AFM).", "All experiments described below are carried out under ambient conditions with a scanning confocal microscope employing a green laser excitation, a high-numerical aperture microscope 
objective (NA$=0.7$ ) and a photon counting detection module.", "At each point of the sample, optical illumination combined with microwave excitation enables the measurement of the ESR frequencies of the V$_\\text{B}^-$ center by recording its spin-dependent PL intensity [19].", "In this work, the microwave excitation is delivered via an external loop antenna placed close to the sample.", "Figure: (a) Atomic structure of the V B - _\\text{B}^- center in hBN with a simplified diagram of its energy levels showing the ESR transitions ν ± \\nu _{\\pm } in the spin triplet ground state.", "(b) Optically-detected ESR spectrum recorded on a 85-nm thick hBN flake with an out-of-plane bias field B b =7B_b={7}{} and an optical power of 2.1.", "The solid line is a fit to the data with Lorentzian functions.", "(c) Magnetic field sensitivity η B \\eta _B as a function of the optical pumping power.", "The black dashed line is a guide to the eye.", "Inset: Evolution of δD\\delta D with the optical power.", "The red dashed line is a fit to the data with a linear function.We start by studying a heterostructure consisting of a 64-nm-thick CrTe2 flake covered with a 85-nm-thick hBN sensing layer [see Fig.", "REF (a)].", "Before discussing magnetic imaging results, we first qualify the magnetic field sensitivity of the V$_\\text{B}^-$ -doped hBN layer as a function of the optical excitation power.", "To this end, optically-detected ESR spectra were recorded far from the CrTe2 flake while applying a bias magnetic field $B_b={7}{}$ along the $c$ -axis of hBN (i.e.", "perpendicular to the layers).", "A typical spectrum is shown in Fig.", "REF (b).", "The ESR frequencies are given by $\\nu _{\\pm }= D\\pm \\gamma _eB_b$ , where $\\gamma _e={28}{/}$ is the electron gyromagnetic ratio and $D\\sim {3.47}{}$ denotes the zero-field splitting parameter of the V$_\\text{B}^-$ spin triplet ground state [Fig.", "REF (a)].", "From such a spectrum, the magnetic field sensitivity $\\eta _B$ can be inferred as [30] $\\eta _B\\approx 0.7 \\times \\frac{1}{\\gamma _e}\\times \\frac{\\Delta \\nu }{\\mathcal {C}\\sqrt{\\mathcal {R}}} \\ ,$ where $\\mathcal {R}$ is the rate of detected photons, $\\mathcal {C}$ the contrast of the magnetic resonance and $\\Delta \\nu $ its linewidth [Fig.", "REF (b)].", "From the parameters $\\lbrace \\mathcal {R}, \\mathcal {C}, \\Delta \\nu \\rbrace $ measured at different optical powers, we extract the power-dependent magnetic field sensitivity.", "As shown in Fig.", "REF (c), $\\eta _B$ improves with increasing laser power and reaches an optimal value of $\\sim {60}{\\sqrt{}}$ .", "Note that in the considered power range, the PL signal does not reach saturation and the ESR linewidth $\\Delta \\nu $ remains almost constant (see SI).", "The magnetic sensitivity is therefore mainly limited by the ESR contrast $\\mathcal {C}$ , which could be improved by optimizing the orientation of the microwave magnetic field, e.g.", "by depositing the hBN flakes on top of a coplanar waveguide [31].", "Interestingly, the gain in sensitivity with optical power is accompanied by a slight reduction of the zero-field splitting parameter $D=(\\nu _++\\nu _-)/2$ , whose relative shift $\\delta D$ is plotted in the inset of Fig.", "REF (c).", "This shift results from a temperature variation $\\delta T$ of the hBN layer [9], [32], which can be phenomenologically described near room temperature by the relation $\\delta D= \\varepsilon \\ \\delta T$ where $\\varepsilon \\sim {-0.7}{/}$  [9].", "Our measurements thus 
indicate that optical illumination leads to heating of the hBN sensing layer.", "Fitting the data with a linear function leads to an optically-induced heating efficiency around 1/.", "To mitigate this effect, which is detrimental for studying the magnetic order in materials featuring a $T_{\\rm c}$ near room temperature like CrTe2, magnetic imaging experiments were carried out at low optical power ($\\sim {0.6}{}$ ), leading to a slightly degraded magnetic field sensitivity $\\eta _B\\sim {130}{\\sqrt{}}$ [Fig.", "REF (c)].", "Figure: (a) Optical image of the CrTe2 (64) / hBN (85) heterostructure.", "(b) AFM image indicating that the magnetic layer has a constant thickness.", "(c) PL image recorded across the same area as in (b).", "(d) Corresponding map of the magnetic field component B z B_z obtained by measuring the Zeeman shift of the lower ESR frequency of the V B - _\\text{B}^- centers.", "Experiments are carried out with a bias magnetic field B b =7B_b={7}{} and an optical power of 0.6.", "The acquisition time per pixel is 4.2.", "(e) Simulated map of the magnetic field component B z B_z for a uniform in-plane magnetization 𝐌\\mathbf {M} (black arrow) with an azimuthal angle φ M =297\\phi _M={297} and a norm M=60/M={60}{/}.A PL raster scan of the CrTe2 (64) / hBN (85) heterostructure is shown in Fig.", "REF (c).", "The increased PL signal above the CrTe2 flake is related to its metallic character leading to reflection effects [31].", "Magnetic field imaging is performed by recording an ESR spectrum at each point of the scan, from which the Zeeman shift of the lower ESR frequency $\\nu _-$ is extracted.", "Here, we only track one magnetic resonance of the V$_\\text{B}^-$ center to reduce data acquisition time.", "After subtracting the offset linked to the bias magnetic field $B_b$ , the Zeeman shift is simply given by $\\Delta _z=\\gamma _e B_z$ , where $B_z$ is the magnetic field projection along the V$_\\text{B}^-$ quantization axis, which corresponds to the $c$ -axis of hBN.", "A map of the magnetic field component $B_z$ produced by the CrTe2 flake is shown in Fig.", "REF (d).", "The magnetic field is mainly generated at the edges of the flake as expected for a uniformly magnetized flake with homogeneous thickness [26].", "Furthermore, the recorded magnetic field distribution directly confirms that the CrTe2 flake features an in-plane magnetization.", "Indeed, considering a uniform out-of-plane magnetization, the magnetic field component $B_z$ would be identical at all edges of the flake (see SI).", "To perform a quantitative analysis of the recorded magnetic map, we simulate the distribution of the magnetic field component $B_z$ produced by a CrTe2 flake with constant thickness and uniform in-plane magnetization $\\mathbf {M}$ .", "This vector is characterized by its azimuthal angle $\\phi _M$ in the ($x, y$ ) sample plane and norm $M$ .", "The geometry of the flake used for the simulation is extracted from a topography image recorded by AFM [Fig.", "REF (b)].", "By comparing the overall structure of the experimental magnetic image with simulations obtained for various values of the angle $\\phi _M$ , we first identify the magnetization orientation $\\phi _M \\sim {297}$ [Fig.", "REF (e)].", "Considering solely shape anisotropy, the in-plane magnetization should be stabilized along the long axis of the CrTe2 flake.", "Our result indicates a deviation from this simple case, which suggests that CrTe2 exhibits a non-negligible magnetocrystalline anisotropy, in agreement with recent 
works [25], [26].", "Having identified the orientation of the magnetization vector, we then estimate its norm $M$ by analyzing the stray field amplitude recorded at the edges of the CrTe2 flake.", "Besides being linked to the magnetization norm, the Zeeman shift of the ESR frequency measured at each pixel of the scan also results from both (i) a lateral-averaging due to the diffraction-limited spatial resolution of the magnetic microscope and (ii) an averaging over the vertical ($z$ ) distribution of optically-active V$_\\text{B}^-$ centers in the hBN sensing layer.", "Indeed, neutron irradiation creates V$_\\text{B}^-$ centers throughout the hBN volume, in contrast to ion implantation techniques which lead to the creation of defects at a depth linked to the ion implantation energy [33], [34].", "Taking into account these two averaging processes (see SI), a fair agreement is obtained between the simulated and experimental magnetic maps for $M \\sim {60}{/}$ [Fig.", "REF (e)].", "This value is two times larger than the one recently measured for micron-sized CrTe2 flakes without hBN encapsulation [26].", "This discrepancy is attributed to a partial degradation of non-encapsulated CrTe2 flakes resulting in a reduced effective magnetization.", "As already indicated above, the magnetic image shown in Fig.", "REF (d) was recorded at a low optical excitation power to mitigate heating effects.", "The same experiments performed at larger optical powers show a weaker stray magnetic field at the edges of the flake (see SI).", "Since CrTe2 has a $T_{\\rm c}$ close to room temperature, any slight heating of the sample significantly reduces the sample magnetization.", "Figure: (a) Optical image of an heterostructure consisting of a 15-nm thick hBN layer deposited on top of a CrTe2 flake.", "(b) AFM image of the CrTe2 flake showing large thickness variations.", "(c) PL image recorded around the CrTe2 flake.", "(d) Corresponding map of the magnetic field component B z B_z obtained by measuring the Zeeman shift of the lower ESR frequency of the V B - _\\text{B}^- centers.", "Experiments are carried out with a bias magnetic field B b =7B_b={7}{} and an optical power of 2.4.", "The acquisition time per pixel is 15.3.Next we study a heterostructure involving a thinner (15) hBN sensing layer in order to rely on an ensemble of V$_\\text{B}^-$ centers localized closer to the magnetic sample, thus reducing vertical averaging effects [Fig.", "REF (a)].", "While the PL signal increases for a thick hBN layer deposited on CrTe2 owing to reflection effects, the PL scan of this second heterostructure now reveals a strong PL quenching of the V$_\\text{B}^-$ centers located above the magnetic layer [Fig.", "REF (c)].", "This effect also results from the metallic character of CrTe2.", "Indeed, when the V$_\\text{B}^-$ centers are placed in close proximity to the CrTe2 flake, energy transfer to the metal quenches their PL signal by opening additional non-radiative decay channels [35], [36].", "Even though such a PL quenching impairs the magnetic sensitivity of the hBN layer, magnetic imaging can still be performed by increasing the optical pumping power [Fig.", "REF (d)].", "Despite an improved proximity of the V$_\\text{B}^-$ centers, the overall amplitude of the stray magnetic field is not stronger than the one previously measured with a thicker hBN layer.", "This is due to laser-induced heating of the magnetic flake, which reduces its magnetization, as discussed above.", "For this sample, a quantitative analysis of the 
stray field distribution is a difficult task because of large variations of the CrTe2 thickness.", "Indeed, a topography image recorded by AFM shows that the magnetic flake is divided into two main parts whose average thickness varies from $\\sim $ 115 to $\\sim $ 31, with local fluctuations exceeding 20% due to wrinkles [Fig.", "REF (b)].", "Since magnetic stray fields are produced at each thickness step of the magnetic layer with a possible reorientation of the in-plane magnetization [26], precise magnetic simulations can hardly be performed.", "Considering a simplified geometry of the CrTe2 flake with a constant thickness (115), the magnetic field produced at the top-left edge is reproduced for a magnetization $M \\sim 40$  kA/m.", "Although qualitative, this analysis confirms the reduction of the magnetization induced by laser-induced heating of the magnetic layer.", "In summary, we have shown that V$_\\text{B}^-$ spin defects hosted in hBN layers can be used for quantitative magnetic imaging with a sensitivity around ${100}{\\sqrt{}}$ and a spatial resolution limited by diffraction at the micron scale.", "Although much better performances can be obtained with other quantum sensors such as the NV defect in diamond, the key advantages of the hBN-based magnetic sensor described in this work are its ease of use, high flexibility and, more importantly, its ability to be placed in close proximity to a target sample.", "Such a sensing unit will likely find numerous applications in 2D materials research by offering a simple way to probe in situ the physics of van der Waals heterostructrures, with optimal performance obtained for the study of non-metallic 2D materials, for which PL quenching effects can be avoided.", "An improvement in magnetic sensitivity by at least one order of magnitude could be achieved by optimizing the microwave excitation used to perform ESR spectroscopy [31] while spatial resolution below the diffraction limit might be reached by relying on super-resolution optical imaging methods [37], [38], [39].", "To release the full potential of hBN-based quantum sensing foils, a remaining challenge, however, is to demonstrate that V$_\\text{B}^-$ centers can be stabilized in an atomically-thin hBN layer to achieve an ultimate atomic-scale proximity with the probed sample.", "Acknowledgements - This work was supported by the French Agence Nationale de la Recherche under the program ESR/EquipEx+ (grant number ANR-21-ESRE-0025), the Institute for Quantum Technologies in Occitanie through the project BONIQs, an Office of Naval Research award number N000142012474 and by the U.S. Department of Energy, Office of Nuclear Energy under DOE Idaho Operations Office Contract DE-AC07-051D14517 as part of a Nuclear Science User Facilities experiment.", "We acknowledge Abdellali Hadj-Azzem for providing the bulk $1T$ -CrTe$_2$ samples and the support of The Ohio State University Nuclear Reactor Laboratory and the assistance of Susan M. White, Lei Raymond Cao, Andrew Kauffman, and Kevin Herminghuysen for the irradiation services provided.", "Data Availability Statement - The data that support the findings of this study are openly available in Zenodo at https://zenodo.org/record/6802738 with the identifier 10.5281/zenodo.6802738." ] ]
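As a companion to the sensitivity analysis above, the following minimal Python sketch evaluates the photon-shot-noise sensitivity formula $\eta_B \approx 0.7\,\Delta\nu/(\gamma_e\,\mathcal{C}\sqrt{\mathcal{R}})$ and the Zeeman-shift-to-field conversion $B_z=\Delta_z/\gamma_e$ used above; the photon rate, contrast and linewidth entered below are hypothetical placeholder values, not the measured ones.

```python
import math

GAMMA_E = 28e9  # electron gyromagnetic ratio, in Hz per tesla (28 GHz/T)

def sensitivity(rate_photons, contrast, linewidth_hz):
    """Shot-noise-limited field sensitivity (T/sqrt(Hz)) of a single ODMR line."""
    return 0.7 * linewidth_hz / (GAMMA_E * contrast * math.sqrt(rate_photons))

def field_from_zeeman_shift(shift_hz):
    """Field projection (T) along the V_B^- quantization axis for a given Zeeman shift."""
    return shift_hz / GAMMA_E

# Hypothetical ODMR parameters (placeholders): 1e6 photons/s, 1% contrast, 200 MHz linewidth.
eta = sensitivity(rate_photons=1e6, contrast=0.01, linewidth_hz=200e6)
print(f"eta_B ~ {eta * 1e6:.0f} uT/sqrt(Hz)")
print(f"B_z for a 1 MHz Zeeman shift: {field_from_zeeman_shift(1e6) * 1e6:.1f} uT")
```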
2207.10477
[ [ "Defective Colouring of Hypergraphs" ], [ "Abstract We prove that the vertices of every $(r + 1)$-uniform hypergraph with maximum degree $\Delta$ may be coloured with $c(\frac{\Delta}{d + 1})^{1/r}$ colours such that each vertex is in at most $d$ monochromatic edges.", "This result, which is best possible up to the value of the constant $c$, generalises the classical result of Erd\H{o}s and Lov\'asz who proved the $d = 0$ case." ], [ "Introduction", "Hypergraph colouring is a widely studied field with numerous deep results [26], [10], [9], [8], [13], [14], [15], [4], [20], [21], [22].", "In a seminal contribution, [12] proved that every $(r + 1)$ -uniform hypergraph with maximum degree $\Delta $ has a vertex-colouring with at most $c\Delta ^{1/r}$ colours and with no monochromatic edge, where $c$ is an absolute constant.", "The proof is a simple application of what is now called the Lovász local lemma, introduced in the same paper.", "Indeed, hypergraph colouring was the motivation for the development of the Lovász local lemma, which has become a staple of probabilistic combinatorics.", "This paper proves the following generalisation of the result of [12].", "Here a vertex-colouring of a hypergraph is $d$ -defective if each vertex is in at most $d$ monochromatic edges.", "The case $d = 0$ corresponds to the result of [12] mentioned above.", "For all integers $r\geqslant 1$ and $d\geqslant 0$ and $\Delta \geqslant \max \lbrace d + 1, 50^{100r^3}\rbrace $ , every $(r + 1)$ -uniform hypergraph $G$ with maximum degree at most $\Delta $ has a $d$ -defective $k$ -colouring, where $k \leqslant 100 \bigl (\tfrac{\Delta }{d + 1}\bigr )^{1/r}.$ Several notes on main are in order.", "The bound on the number of colours in the theorem of [12] and in main is best possible (up to the multiplicative constant) because of complete hypergraphs: let $G$ be the $(r + 1)$ -uniform complete hypergraph on $n$ vertices, which has maximum degree $\Delta = \binom{n - 1}{r}\leqslant (\frac{en}{r})^r$ .", "In any $d$ -defective $k$ -colouring of $G$ , at least $\frac{n}{k}$ vertices receive the same colour, implying $d\geqslant \binom{n/k - 1}{r} > (\frac{n}{2kr})^r \geqslant \frac{\Delta }{(2ek)^r}$ .", "Thus $k \geqslant \frac{1}{2e}(\frac{\Delta }{d})^{1/r} $ , which is within a constant factor of the upper bound in main.", "It remains tight even for $(r + 1)$ -uniform hypergraphs with no complete $(r + 2)$ -vertex subhypergraph.", "For example, a hypergraph construction by [10] has this property: let $e_i$ denote the $r$ -dimensional vector with 1 in the $i\textsuperscript {th}$ coordinate and 0 elsewhere.", "Let $G$ be the $(r + 1)$ -uniform hypergraph with vertex set $\lbrace 1, \cdots , n\rbrace ^r$ and whose edges are $\lbrace v, v_1, \cdots , v_r\rbrace $ where, for each $i$ , $v_i - v$ is a positive multiple of $e_i$ .", "Any $r + 2$ vertices induce at most two edges, so $G$ contains no $(r + 2)$ -clique.", "$G$ has maximum degree $\Delta = (n - 1)^r < n^r$ .", "Suppose that $V(G)$ is coloured with $k \leqslant (\Delta /(d + 1))^{1/r}/r < n/(r(d + 1)^{1/r})$ colours.", "Then there is a monochromatic set $S \subseteq V(G)$ of size at least $(d + 1)^{1/r} r n^{r - 1}$ .", "Apply the following iterative deletion procedure to $S$ : if, for some coordinate $j$ and integers $a_1, \cdots , a_{j - 1}, a_{j + 1}, \cdots , a_r \in \lbrace 1, \cdots , n\rbrace $ , there are fewer than $(d + 1)^{1/r}$ vertices in $S$ whose $i\textsuperscript {th}$ coordinate is $a_i$ for all $i \ne j$ , then 
delete all these vertices.", "Let $S^{\prime }$ be the set remaining after applying all such deletions.", "Each step deletes fewer than $(d + 1)^{1/r}$ vertices and at most $rn^{r - 1}$ steps occur so $S^{\prime }$ is non-empty.", "Let $v \in S^{\prime }$ have the smallest coordinate sum.", "By definition of $S^{\prime }$ , for each $i$ , there are at least $(d + 1)^{1/r}$ vertices $v_i \in S^{\prime }$ with $v_i - v$ being a positive multiple of $e_i$ .", "Hence, $v$ has degree at least $d + 1$ in $S^{\prime }$ .", "Therefore, every $d$ -defective colouring of $G$ uses more than $(\Delta /(d + 1))^{1/r}/r$ colours.", "The assumption $\Delta \geqslant d + 1$ in main is reasonable, since if $\Delta \leqslant d$ then one colour suffices.", "The assumption that $\Delta \geqslant 50^{100r^3}$ enables the uniform constant 100 in the bound on $k$ .", "Of course, one could drop the assumption and replace 100 by some constant $c_r$ depending on $r$ .", "The graph ($r = 1$ ) case of main (with $\lfloor \frac{\Delta }{d + 1}\rfloor +1$ colours) is easily proved by a max-cut type argument (see maxcut) as was first observed by [23].", "See [28] for a comprehensive survey on defective graph colouring.", "If $G$ is a linear hypergraph (that is, any two edges intersect in at most one vertex), then main may be proved directly with the Lovász local lemma.", "Non-linear hypergraphs are hard because the number of neighbours of a vertex $v$ is not precisely determined by the degree of $v$ .", "See the start of sec:proof for details.", "main can be rephrased as saying that for any $k$ , $G$ has a $k$ -colouring with monochromatic degree $\mathcal {O}(\frac{\Delta }{k^r})$ for fixed $r$ .", "This is similar to a result of [6] who showed that for any $k$ every $(r + 1)$ -uniform hypergraph with $m$ edges has a $k$ -colouring with $\mathcal {O}(\frac{m}{k^r})$ monochromatic edges of each colour.", "In this light, main is a variant on so-called judicious partitions [18], [30], [17], [5], [29], [27], [7], [1], [6]." ], [ "Notation", "Let $G$ be a hypergraph, which consists of a finite vertex-set $V(G)$ and an edge-set $E(G)\subseteq 2^{V(G)}$ .", "Let $e(G) := \vert E(G) \vert $ .", "$G$ is $r$ -uniform if every edge has size $r$ .", "The link hypergraph of a vertex $v$ in $G$ , denoted $G_v$, is the hypergraph with vertex-set $V(G)\setminus \lbrace v\rbrace $ and edge-set $\lbrace e\subseteq V(G)\setminus \lbrace v\rbrace \colon e \cup \lbrace v\rbrace \in E(G)\rbrace $ .", "If $G$ is $(r + 1)$ -uniform, then $G_{v}$ is $r$ -uniform.", "The degree of a set of vertices $S\subseteq V(G)$ , denoted $\deg (S)$, is the number of edges in $G$ that contain $S$ .", "We often omit set braces, so $\deg (x)$ and $\deg (u, v)$ denote the number of edges containing $x$ and the number of edges containing both $u$ and $v$ , respectively.", "Let $\Delta (G) := \max \lbrace \deg (v) \colon v \in V(G)\rbrace $ ."
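The notions introduced in this section are simple to compute mechanically, which is convenient for experimenting with small examples; the following minimal Python sketch (on a hypothetical toy 3-uniform hypergraph, so $r = 2$) implements the link hypergraph, the degree of a vertex set, and the monochromatic degree of a colouring, from which $d$-defectiveness can be checked directly.

```python
# Toy 3-uniform hypergraph (r = 2): hypothetical edges on vertices 0..4.
edges = [frozenset(e) for e in [(0, 1, 2), (0, 1, 3), (1, 2, 3), (2, 3, 4)]]

def link(v):
    """Edges of the link hypergraph G_v, i.e. e minus {v} for every edge e containing v."""
    return [e - {v} for e in edges if v in e]

def deg(S):
    """Number of edges containing every vertex of S."""
    S = frozenset(S)
    return sum(1 for e in edges if S <= e)

def monochromatic_degree(v, colouring):
    """Number of monochromatic edges containing v."""
    return sum(1 for e in edges if v in e and len({colouring[u] for u in e}) == 1)

def is_d_defective(colouring, d):
    """Does every vertex lie in at most d monochromatic edges?"""
    return all(monochromatic_degree(v, colouring) <= d for v in set().union(*edges))

colouring = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
print(deg({1}), deg({1, 2}))                 # deg(1) = 3 and deg(1, 2) = 2
print([sorted(e) for e in link(1)])          # the link hypergraph G_1
print(is_d_defective(colouring, d=1))        # True: each vertex is in at most 1 monochromatic edge
```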
], [ "Probabilistic Tools", "We use the following standard probabilistic tools.", "[Lovász local lemma [12]] Let $\mathcal {A}$ be a set of events in a probability space such that each event in $\mathcal {A}$ occurs with probability at most $p$ and for each event $A \in \mathcal {A}$ there is a collection $\mathcal {A}^{\prime }$ of at most $d$ other events such that $A$ is independent from the collection $(B \colon B \notin \mathcal {A}^{\prime } \cup \lbrace A\rbrace )$ .", "If $4pd \leqslant 1$ , then with positive probability no event in $\mathcal {A}$ occurs.", "[Markov's inequality] If $X$ is a nonnegative random variable and $a>0$ , then $\mathbb {P}(X\geqslant a) \leqslant \frac{\mathbb {E}(X)}{a}.$ [Chernoff bound] Let $X \sim \operatorname{Bin}(n, p)$ .", "For any $\varepsilon \in [0, 1]$ , $\mathbb {P}(X \geqslant (1 + \varepsilon )\mathbb {E}(X)) \leqslant \exp (-\varepsilon ^2 np/3)$ and $\mathbb {P}(X \leqslant (1 - \varepsilon )\mathbb {E}(X)) \leqslant \exp (-\varepsilon ^2 np/2).$ We will need a version of Chernoff for negatively correlated random variables, for example, see [19].", "Boolean random variables $X_1, \cdots , X_n$ are negatively correlated if, for all $S \subseteq \lbrace 1, \cdots , n\rbrace $ , $\mathbb {P}(X_i = 1 \textnormal { for all } i \in S) \leqslant \prod _{i \in S} \mathbb {P}(X_i = 1).$ [Chernoff for negatively correlated variables] Suppose $X_1, \cdots , X_n$ are negatively correlated Boolean random variables with $\mathbb {P}(X_i = 1) \leqslant p$ for all $i$ .", "Then, for any $t \geqslant 0$ , $\mathbb {P}\bigl (\sum _i X_i \geqslant pn + t\bigr ) \leqslant \exp (-2t^2/n).$ Finally we need McDiarmid's bounded differences inequality [24].", "[McDiarmid's inequality] Let $T_1, \cdots , T_n$ be $n$ independent random variables.", "Let $X$ be a random variable determined by $T_1, \cdots , T_n$ , such that changing the value of $T_j$ (while fixing the other $T_i$ ) changes the value of $X$ by at most $c_j$ .", "Then, for any $t \geqslant 0$ , $\mathbb {P}(X \geqslant \mathbb {E}(X) + t) \leqslant \exp \Bigl (-\frac{2 t^2}{\sum _{i} c_{i}^2}\Bigr ).$" ], [ "Proof", "For motivation we first consider a naïve application of the Lovász local lemma.", "Suppose $G$ is a linear $(r + 1)$ -uniform hypergraph.", "Colour $G$ with $k := {100(\Delta /(d + 1))^{1/r}}$ colours uniformly at random.", "For each set $F$ of $d + 1$ edges all containing a common vertex, let $B_F$ be the event that the vertex set of $F$ is monochromatic.", "Then, since $G$ is linear, $p := \mathbb {P}(B_F) = k^{-r(d + 1)}$ .", "For a fixed $F$ , the number of $F^{\prime }$ sharing a vertex with $F$ is at most $D := (r(d + 1) + 1) \Delta (r + 1) \binom{\Delta }{d}$ : we have specified the vertex shared with $F$ , the edge containing that vertex, the common vertex of the edges in $F^{\prime }$ , and finally the remaining $d$ edges of $F^{\prime }$ .", "Now $D \leqslant 3r^2 d \Delta (e\Delta /d)^d$ and so $4pD \leqslant 4 \cdot 100^{-r(d + 1)} \bigl (\tfrac{d + 1}{\Delta }\bigr )^{d + 1} \cdot 3r^2 e^d d^{-d + 1} \Delta ^{d + 1} = 12 r^2 e^d \cdot 100^{-r(d + 1)} \cdot d(d + 1) \bigl (\tfrac{d + 1}{d}\bigr )^d \leqslant 24 r^2 d^2 e^{d + 1} \cdot 100^{-r(d + 1)} \leqslant 1.$ Hence, by the Lovász local lemma, there is a colouring in which no $B_F$ occurs: that is, there is a $d$ -defective $k$ -colouring of $G$ .", "It was crucial in this argument that $G$ was linear so that the powers of 
$\Delta $ in $D$ and $p$ cancelled out exactly.", "For non-linear $G$ , the number of neighbours of a vertex $v$ is not determined by the degree of $v$ and so $p$ may be larger without a corresponding decrease in $D$ .", "A more involved argument is required." ], [ "First Steps", "Here we outline our colouring strategy before diving into the details.", "We are given an $(r + 1)$-uniform hypergraph $G$ with maximum degree $\Delta $ and wish to colour its vertices so that every vertex is in at most $d$ monochromatic edges.", "For a fixed colouring $\phi $ , the monochromatic degree of a vertex $v$ , denoted $\deg _{\phi }(v)$, is the number of monochromatic edges containing $v$ (which must have colour $\phi (v)$ ).", "First we colour the vertices of $G$ uniformly at random with $k$ colours where $k = {49(\frac{\Delta }{d + 1})^{1/r}}$ .", "Say a vertex is bad if its monochromatic degree is greater than $d$ and good otherwise.", "We are aiming for a colouring in which every vertex is good.", "The expected monochromatic degree of a vertex $v$ in such a colouring is $k^{-r} \deg (v) \leqslant k^{-r} \Delta \leqslant 49^{-r} (d + 1)$ .", "In particular, each individual vertex has small (certainly, by Markov's inequality, at most $49^{-r}$ ) probability of being bad.", "However, the goodness of a vertex $v$ depends on the colours assigned to vertices in the neighbourhood of $v$ and so $49^{-r}$ is not a sufficiently small probability to conclude (by, say, the Lovász local lemma) that there is a particular colouring for which all vertices are good.", "Instead of colouring all of $G$ with a single uniform colouring, we do so over many rounds.", "After a round (where we coloured a hypergraph $G$ ), any good vertices will keep their colours and be discarded (they have been coloured appropriately).", "Let $G^{\prime }$ be the subhypergraph of $G$ induced by the bad vertices.", "In the next round we uniformly and randomly colour the vertices of $G^{\prime }$ with a new palette of colours completely disjoint from those used in previous rounds.", "If the palettes all have the same size and the process runs for too many rounds, then we will end up using too many colours.", "However, if $\Delta (G^{\prime }) \leqslant 2^{-r} \Delta (G)$ , then we can use half the number of colours in the next round and so use $\mathcal {O}((\frac{\Delta }{d + 1})^{1/r})$ colours across all the rounds.", "Thus, our aim is to prove the following nibble-style lemma from which main easily follows.", "Fix integers $r,\Delta ,d$ with $r\geqslant 1$ and $d\geqslant 0$ and $\Delta \geqslant \max \lbrace d + 1, 50^{50r^2}\rbrace $ .", "Then every $(r + 1)$ -uniform hypergraph $G$ with maximum degree at most $\Delta $ has a partial colouring with at most $49(\frac{\Delta }{d + 1})^{1/r} $ colours such that every coloured vertex has monochromatic degree at most $d$ and the subhypergraph $G^{\prime }$ of $G$ induced by the uncoloured vertices satisfies $\Delta (G^{\prime }) \leqslant 2^{-r} \Delta $ .", "[Proof of main assuming nibble] We start with an $(r + 1)$-uniform hypergraph $G$ with maximum degree at most $\Delta = \Delta _{0}$ for some $\Delta _{0} \geqslant \max \lbrace d + 1, 50^{100r^3}\rbrace $ .", "Apply nibble to get a partial colouring of $G$ where: every vertex has monochromatic degree at most $d$ , at most $49(\frac{\Delta _0}{d + 1})^{1/r}$ colours are used, and the subhypergraph $G_1$ of $G$ induced by uncoloured vertices has $\Delta (G_1) \leqslant \Delta _1 = 2^{-r} 
\Delta _0$ .", "Iterate this procedure (using a palette of new colours each round) to obtain, for $i=0,1,\dots $ , an induced subhypergraph $G_i$ of $G$ with $\Delta (G_i) \leqslant \Delta _i = 2^{-ri} \Delta $ such that $G[V(G) - V(G_i)]$ has been coloured with at most $49 \bigl (\tfrac{\Delta _0}{d + 1}\bigr )^{1/r} + 49 \bigl (\tfrac{\Delta _1}{d + 1}\bigr )^{1/r} + \cdots + 49 \bigl (\tfrac{\Delta _{i - 1}}{d + 1}\bigr )^{1/r} = 49 \bigl (\tfrac{\Delta }{d + 1}\bigr )^{1/r} (1 + 2^{-1} + \cdots + 2^{-(i - 1)}) \leqslant 98 \bigl (\tfrac{\Delta }{d + 1}\bigr )^{1/r}$ colours where every monochromatic degree is at most $d$ .", "Continue carrying out rounds of colouring until $\Delta _i < d + 1$ or $\Delta _i < 50^{50r^2}$ .", "First suppose that $\Delta _i < d + 1$ and so $\Delta (G_i) \leqslant d$ .", "Use a single new colour on the entirety of $G_i$ to give a $d$ -defective colouring of $G$ .", "Now suppose that $d + 1 \leqslant \Delta _i < 50^{50r^2}$ .", "Properly colour $G_i$ with $\Delta (G_i) + 1 \leqslant 50^{50r^2}$ colours.", "This gives a $d$ -defective colouring of $G$ with at most $98 \bigl (\tfrac{\Delta }{d + 1}\bigr )^{1/r} + 50^{50r^2} \leqslant 100 \bigl (\tfrac{\Delta }{d + 1}\bigr )^{1/r}$ colours.", "The final inequality uses the fact that $\Delta \geqslant 50^{100r^3}$ and $d + 1 \leqslant 50^{50r^2}$ .", "Recall that a vertex is bad for a colouring $\phi $ if it has monochromatic degree at least $d + 1$ .", "Say that an edge $e$ is bad for a colouring $\phi $ if every vertex in $e$ is bad (note that a bad edge is not necessarily monochromatic).", "Furthermore, say that a vertex is terrible for a colouring $\phi $ if it is incident to more than $2^{-r} \Delta $ bad edges.", "nibble says that there is some colouring for which no vertex is terrible.", "The key to the proof of nibble is to show that a vertex is terrible with low probability.", "In the remainder of the paper, we use the definitions of good, bad, and terrible given above and also set $k := {49(\frac{\Delta }{d + 1})^{1/r}}$ .", "Let $\Delta \geqslant 50^{50r^2}$ .", "Let $G$ be an $(r + 1)$ -uniform hypergraph with maximum degree at most $\Delta $ .", "In a uniformly random $k$ -colouring of the vertices of $G$ , each vertex $v$ of $G$ is terrible with probability at most $\Delta ^{-5}$ .", "[Proof of nibble assuming EachVertexTerrible] Randomly and independently assign each vertex of $G$ one of $k$ colours.", "For each vertex $v$ , let $A_v$ be the event that $v$ is terrible.", "By EachVertexTerrible, $\mathbb {P}(A_v)\leqslant \Delta ^{-5}$ .", "The event $A_v$ depends solely on the colours assigned to vertices in the closed second neighbourhood of $v$ .", "Thus if two vertices $v$ and $w$ are at distance at least 5 in $G$ , then $A_v$ and $A_w$ are independent.", "Thus each event $A_v$ is mutually independent of all but at most $2(r\Delta )^4$ other events $A_w$ .", "Since $4 \Delta ^{-5} \cdot 2(r\Delta )^4 = 8 r^4 /\Delta \leqslant 1$ , by the Lovász local lemma, with positive probability, no event $A_v$ occurs.", "Thus, there exists a $k$ -colouring $\phi $ of $G$ such that no vertex is terrible.", "Let $G^{\prime }$ be the subgraph of $G$ induced by the bad vertices.", "Since no vertex is terrible, $\Delta (G^{\prime })\leqslant 2^{-r}\Delta $ .", "It remains to prove EachVertexTerrible, which we do in FinalProof.", "We have now reduced the question to a local property of a random $k$ -colouring.", "A vertex $v$ is terrible 
if it is bad and at least $2^{-r} \Delta $ of the edges in its link graph $G_v$ are bad.", "Analysing the dependence between the badness of different edges in $G_v$ is difficult.", "We sidestep this issue by using a decomposition into sunflowers.", "A sunflower with $p$ petals is a collection $A_1, \cdots , A_p$ of sets for which $A_1 \setminus K, \cdots , A_p \setminus K$ are pairwise disjoint where $K := A_1 \cap \cdots \cap A_p$ (that is, $A_i \cap A_j = K$ for all distinct $i, j$ ).", "$K$ is the core of the sunflower and $A_1 \setminus K, \cdots , A_p \setminus K$ are its petals.", "If $A_1, \cdots , A_p$ are distinct edges of a uniform hypergraph that form a sunflower, then the petals are all non-empty with the same size.", "The core may be empty in which case the sunflower is a matching of size $p$ .", "In a random colouring, the colourings on different petals of a sunflower are independent.", "Hence, it will be useful to partition the edges of hypergraphs into sunflowers with many petals together with a few edges left over.", "[Sunflower decomposition] Let $H$ be an $r$ -uniform hypergraph and $a$ an integer.", "There are edge-disjoint subhypergraphs $H_1, \cdots , H_s$ of $H$ such that: Each $H_i$ is a sunflower with exactly $a$ petals.", "$H^{\prime } = H - (E(H_1) \cup \cdots \cup E(H_s))$ has fewer than $(ra)^r$ edges.", "Let $H_1, \cdots , H_s$ be a maximal collection of edge-disjoint subhypergraphs of $H$ where each $H_i$ is a sunflower with exactly $a$ petals.", "So $H^{\prime }$ contains no sunflower with $a$ petals.", "By the Erdős-Rado sunflower lemma [11], $e(H^{\prime }) \leqslant r!\,(a - 1)^r < (ra)^r$ (see [2], [3], [25] for improved bounds).", "The proof of EachVertexTerrible uses a sunflower decomposition to show that if a vertex is terrible, then some reasonably large set of vertices $S$ must have at least a $3^{-r}$ proportion of its vertices being bad.", "As noted above, each vertex is bad with probability at most $49^{-r}$ and so we expect $49^{-r} \vert S \vert $ bad vertices in $S$ .", "We are able to show that the number of bad vertices in (a suitable) $S$ is not much more than the expected number with very small failure probability.", "This is accomplished in Slemmalargek and Slemmasmallk below, which correspond respectively to the case of large and small $k$ ."
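The sunflower condition $A_i \cap A_j = K$ is easy to test, and on very small hypergraphs the maximal collection in the sunflower decomposition can be found by brute force; the following minimal Python sketch (exponential-time, intended only for toy inputs such as the hypothetical edges below) mirrors the greedy extraction used in the proof above.

```python
from itertools import combinations

def is_sunflower(sets):
    """True if all pairwise intersections equal the common intersection (the core)."""
    sets = [frozenset(s) for s in sets]
    core = frozenset.intersection(*sets)
    return all(a & b == core for a, b in combinations(sets, 2))

def greedy_sunflower_decomposition(edges, a):
    """Repeatedly remove a sunflower with exactly a petals; return (sunflowers, leftover edges).

    Brute force over all a-subsets of the remaining edges, so only suitable for tiny inputs;
    it mirrors the maximal-collection argument in the sunflower decomposition lemma."""
    remaining = [frozenset(e) for e in edges]
    sunflowers = []
    found = True
    while found:
        found = False
        for cand in combinations(remaining, a):
            if is_sunflower(cand):
                sunflowers.append([sorted(e) for e in cand])
                for e in cand:
                    remaining.remove(e)
                found = True
                break
    return sunflowers, [sorted(e) for e in remaining]

edges = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (8, 9, 10)]
flowers, leftover = greedy_sunflower_decomposition(edges, a=3)
print(flowers)   # one sunflower with core {1} and three pairwise disjoint petals
print(leftover)  # edges left over once no sunflower with 3 petals remains
```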
], [ "When $k$ is large: $k \geqslant \Delta ^{1/(6r)}$", "When $k$ is large we expect a medium-sized vertex-set $S$ to have close to $\vert S \vert $ different colours appearing on it (that is, to be close to rainbow).", "If two vertices have different colours, then the events that they are bad will be negatively correlated and hence we expect only a small proportion of $S$ to be bad.", "The negative correlation is made precise in Chernoffforbad and the upper tail concentration of the number of bad vertices in $S$ is established in Slemmalargek.", "Let $S = \lbrace v_1, \cdots , v_{\ell }\rbrace $ be a set of at most $k$ vertices in $G$ and let $D$ be the event that $v_1, \cdots , v_{\ell }$ are all given different colours.", "Let $X$ be the number of bad vertices in $S$ .", "Then, in a uniformly random $k$ -colouring of $V(G)$ , for any $t \geqslant 0$ , $\mathbb {P}(X \geqslant \ell \cdot 49^{-r} + t \mid D) \leqslant \exp \bigl (-2t^2/\ell \bigr ).$ Let $B_j$ be the event $\lbrace v_j \textnormal { is bad}\rbrace $ and $X_j$ be the indicator random variable for $B_j$ so $X = \sum _{j} X_j$ .", "For an edge $e$ containing a vertex $v$ , the probability $e$ is monochromatic is $k^{-r}$ .", "Hence, the expected number of monochromatic edges containing $v$ is at most $\Delta k^{-r}$ .", "Thus, $\mathbb {P}(X_j = 1) \leqslant 49^{-r}$ by Markov's inequality (Markov).", "Fix distinct colours $c_1, \cdots , c_{\ell }$ and let $V_j$ be the set of vertices given colour $c_j$ .", "Conditioned on the event $C_j = \lbrace v_j \textnormal { is coloured } c_j\rbrace $ , $B_j$ is increasing in $V_j$ , while $D$ is non-increasing in $V_j$ .", "Hence, by the Harris inequality [16], $\mathbb {P}(B_j \cap D \mid C_j) \leqslant \mathbb {P}(B_j \mid C_j) \mathbb {P}(D \mid C_j)$ .", "Using this and the symmetry of the colours gives $\mathbb {P}(B_j \mid D) = \mathbb {P}(B_j \mid D \cap C_j) = \frac{\mathbb {P}(B_j \cap D \mid C_j)}{\mathbb {P}(D \mid C_j)} \leqslant \mathbb {P}(B_j \mid C_j) = \mathbb {P}(B_j).$ But $\mathbb {P}(B_j) \leqslant 49^{-r}$ , so $\mathbb {E}(X \mid D) = \sum _j \mathbb {P}(B_j \mid D) \leqslant \ell \cdot 49^{-r}$ .", "Let $C$ be the event $\lbrace \textnormal {each } v_i \textnormal { is coloured } c_i\rbrace $ .", "Conditioned on $C$ , $B_j$ is increasing in $V_j$ and non-increasing in all other $V_i$ .", "We claim the $B_i$ are negatively correlated on the event $C$ .", "For $k = 2$ this is just the Harris inequality.", "Fix $k > 2$ and let $S$ be a set of indices: we need to show $\mathbb {P}(\cap _{i \in S} B_i \mid C) \leqslant \prod _{i \in S} \mathbb {P}(B_i \mid C)$ .", "If $\vert S \vert \leqslant 1$ , then there is equality.", "Otherwise let $i_1, i_2 \in S$ .", "Now $B_{i_1} \cap B_{i_2}$ is increasing in $V_{i_1} \cup V_{i_2}$ and non-increasing in all other $V_i$ .", "By induction, $\mathbb {P}(\cap _{i \in S} B_i \mid C) \leqslant \mathbb {P}(B_{i_1} \cap B_{i_2} \mid C) \cdot \prod _{i \in S \setminus \lbrace i_1, i_2\rbrace } \mathbb {P}(B_i \mid C) \leqslant \prod _{i \in S} \mathbb {P}(B_i \mid C).$ By symmetry of the colours, $\mathbb {P}(B_i \mid C) = \mathbb {P}(B_i \mid D)$ for all $i$ and also $\mathbb {P}(\cap _{i \in S} B_i \mid C) = \mathbb {P}(\cap _{i \in S} B_i \mid D)$ for any set of indices $S$ .", "In particular, the $B_i$ are negatively correlated on the event $D$ .", "Applying Chernoffneg to $X_1, \cdots , X_\ell $ gives the result.", "Let $S$ be a set of vertices of $G$ with $7 \cdot 40^r \leqslant \vert S \vert 
\\leqslant k^{1/2}$ .", "In a uniformly random $k$ -colouring of $V(G)$ , with failure probability at most $\\exp (-{S} \\cdot 7^{-2r})$ , fewer than $3^{-r} {S}$ vertices of $S$ are bad.", "We first show that with high probability the number of distinct colours on $S$ is at least $(1 - 6^{-r}){S}$ .", "Indeed, the probability that a fixed vertex does not have a unique colour is at most ${S}/k$ .", "In particular, the probability that fewer than $(1 - 6^{-r}){S}$ different colours appear on $S$ is at most $\\binom{{S}}{6^{-r} {S}} \\biggl (\\frac{{S}}{k}\\biggr )^{6^{-r} {S}} \\leqslant (6^r e)^{6^{-r} {S}} \\biggl (\\frac{{S}}{k}\\biggr )^{6^{-r} {S}} = \\biggl (\\frac{6^r e {S}}{k}\\biggr )^{6^{-r} {S}} \\leqslant \\biggl (\\frac{6^r e}{{S}}\\biggr )^{6^{-r} {S}}.$ If at least $(1 - 6^{-r}){S}$ different colours appear on $S$ , then there is a subset $S^{\\prime } \\subset S$ of size at least $(1 - 6^{-r}){S}$ where the vertices are all given different colours.", "Fix such an $S^{\\prime }$ and let $X$ be the number of bad vertices in $S^{\\prime }$ .", "Note that if $X < 6^{-r} {S^{\\prime }}$ , then ${S}$ contains fewer than $6^{-r} {S} + 6^{-r} {S^{\\prime }} \\leqslant 3^{-r} {S}$ bad vertices.", "Let $D$ be the event that all the vertices of $S^{\\prime }$ get different colours.", "By Chernoffforbad, $\\mathbb {P}(X \\geqslant {S^{\\prime }} \\cdot 6^{-r} \\mid D)\\leqslant \\mathbb {P}(X \\geqslant {S^{\\prime }} \\cdot 49^{-r} + {S^{\\prime }} \\cdot 6^{-r} /\\sqrt{2} \\mid D)\\leqslant \\exp (-{S^{\\prime }} \\cdot 6^{-2r}).$ Hence, with failure probability at most $(6^re/{S})^{6^{-r} {S}} + \\exp (-{S^{\\prime }} \\cdot 6^{-2r}) \\leqslant 2 \\exp (-{S^{\\prime }} \\cdot 6^{-2r}) \\leqslant \\exp (-{S} \\cdot 7^{-2r}),$ fewer than $3^{-r} {S}$ vertices of $S$ are bad." 
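The heuristic underlying both regimes, that in a uniformly random colouring with roughly $49(\Delta/(d+1))^{1/r}$ colours each vertex is bad with probability at most $49^{-r}$, can be checked empirically; the following minimal Python sketch runs a Monte Carlo estimate on a small random 3-uniform hypergraph, with toy parameters chosen for speed rather than taken from the proof.

```python
import random
from itertools import combinations

def random_hypergraph(n, m, edge_size, seed=0):
    """m distinct random edges of the given size on vertex set {0, ..., n-1}."""
    rng = random.Random(seed)
    return [frozenset(e) for e in rng.sample(list(combinations(range(n), edge_size)), m)]

def average_fraction_bad(edges, n, k, d, trials=200, seed=1):
    """Average fraction of vertices whose monochromatic degree exceeds d."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        col = [rng.randrange(k) for _ in range(n)]
        mono = [0] * n
        for e in edges:
            if len({col[v] for v in e}) == 1:
                for v in e:
                    mono[v] += 1
        total += sum(1 for v in range(n) if mono[v] > d) / n
    return total / trials

n, m, r, d = 30, 120, 2, 0                      # toy 3-uniform instance
edges = random_hypergraph(n, m, r + 1)
Delta = max(sum(1 for e in edges if v in e) for v in range(n))
k = max(2, round(49 * (Delta / (d + 1)) ** (1 / r)))
print(Delta, k, average_fraction_bad(edges, n, k, d))   # compare with the 49^{-r} heuristic
```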
], [ "When $k$ is small: $k \leqslant \Delta ^{1/(6r)}$", "We need a simple max cut lemma.", "[Max cut] Let $G$ be a hypergraph whose edges have size at most $r + 1$ and let $\ell $ be a positive integer.", "There is a partition $V_1 \cup \cdots \cup V_{\ell }$ of $V(G)$ such that, for every vertex $x \in V_i$ , the number of edges containing $x$ and at least one more vertex from $V_i$ is at most $r \deg (x)/\ell $ .", "Throughout the proof, vertices $u, v, x$ are distinct.", "Choose a partition $V_{1} \cup \cdots \cup V_{\ell }$ of $V(G)$ into $\ell $ parts that minimises $\sum _{i} \sum _{u, v \in V_{i}} \deg (u, v).$ Fix a vertex $x$ and suppose it is in some part $V_{a}$ .", "By minimality, for all $i$ , $\sum _{u \in V_a} \deg (u, x) \leqslant \sum _{u \in V_{i}} \deg (u, x),$ or else we could decrease this sum by moving $x$ to $V_i$ .", "But $\sum _{i} \sum _{u \in V_{i}} \deg (u, x) = \sum _{u \in V(G)} \deg (u, x) \leqslant r \deg (x),$ and so $\sum _{u \in V_a} \deg (u, x) \leqslant r \deg (x)/\ell $ .", "This last sum is at least the number of edges containing $x$ and at least one more vertex from $V_a$ .", "Given a large vertex-set $S$ we aim to show that, with high probability, a small proportion of its vertices are bad.", "We use maxcut to split $S$ into parts so that very few edges have two vertices in the same part.", "Consider an arbitrary part $P$ .", "We will show that, with high probability, a small proportion of the vertices in $P$ are bad.", "We do this by first revealing the random $k$ -colouring on $V(G) - P$ .", "Since $k$ is small we get strong concentration on the distribution of colours on $V(G) - P$ .", "We then reveal the colouring on $P$ and use this concentration to show that it is unlikely that $P$ has a high proportion of bad vertices.", "Suppose $\Delta \geqslant 50^{50r^2}$ , $k \leqslant \Delta ^{1/(6r)}$ and let $S$ be a set of at least $(3k)^{3r} \Delta ^{1/(6r)}$ vertices of $G$ .", "With failure probability at most $\Delta ^{-6}$ , in a uniformly random $k$ -colouring of $V(G)$ , fewer than $3^{-r} \vert S \vert $ vertices of $S$ are bad.", "It will be helpful to partition $S$ into sets $P$ such that not too many edges meet $P$ in more than one vertex.", "We therefore apply the max cut lemma, maxcut, to $G$ with $\ell = r k^r$ , and restrict the resulting partition to $S$ .", "We obtain a partition $\mathcal {P}$ of $S$ into $r k^r$ parts such that, for every vertex $x \in S$ , the number of edges containing $x$ and at least one more vertex from $x$ 's part is at most $\deg (x)/k^r$ .", "We say a part $P \in \mathcal {P}$ is big if $\vert P \vert \geqslant \vert S \vert /(50 r (3k)^r)$ and is small otherwise.", "There are $r k^r$ parts in $\mathcal {P}$ , so the number of vertices of $S$ in small parts is at most $\vert S \vert /(50 r (3k)^r) \cdot r k^r = 0.02 \cdot 3^{-r} \vert S \vert $ .", "Hence, if $3^{-r} \vert S \vert $ vertices of $S$ are bad, then at least a $0.98 \cdot 3^{-r}$ proportion of the vertices in big parts are bad so some big part $P$ has at least $0.98 \cdot 3^{-r} \vert P \vert $ bad vertices.", "We now focus on a big part $P \in \mathcal {P}$ and show that, with failure probability at most $\Delta ^{-8}$ , at most $0.98 \cdot 3^{-r} \vert P \vert $ vertices of $P$ are bad.", "For each vertex $x \in P$ , let $G^{\prime }_{x}$ be the $r$ -uniform graph on $V(G) - P$ , whose edges are those $e$ with $e \cup \lbrace x\rbrace \in E(G)$ (that is, $G^{\prime }_{x}$ is the link graph of $x$ restricted to $V(G) - P$ ).", "Define the $r$ 
-uniform auxiliary (multi)hypergraph $H_P$ to have vertex set $V(G) - P$ and edge set $E(H_P) = \bigcup _{x \in P} E(G^{\prime }_{x}),$ where edges are counted with multiplicity.", "Let $\phi $ be a uniformly random $k$ -colouring of $V(G)$ and $\phi ^{\prime }$ be the restriction of $\phi $ to $V(G) - P$ .", "Reveal $\phi ^{\prime }$ and let $X$ be the number of monochromatic edges of $H_P$ , again counted with multiplicity.", "We now apply McDiarmid's inequality to show that $X$ concentrates.", "First note that $e(H_P) \leqslant \vert P \vert \cdot \Delta $ and $\mathbb {E}(X) = e(H_P) k^{-(r - 1)} \leqslant \vert P \vert \cdot \Delta k^{-(r - 1)}$ .", "For a vertex $v \in V(H_P)$ , changing $\phi ^{\prime }(v)$ changes the value of $X$ by at most $\deg _{H_P}(v)$ .", "Now, $\sum _{v} \deg _{H_P}(v)^{2} \leqslant \Delta \sum _{v} \deg _{H_P}(v) = r \Delta e(H_P) \leqslant r \Delta ^2 \vert P \vert .$ By McDiarmid’s inequality (McDiarmid), $\mathbb {P}\Bigl (X \geqslant \frac{1.1 \cdot \Delta \vert P \vert }{k^{r - 1}}\Bigr )\leqslant \mathbb {P}\Bigl (X \geqslant \mathbb {E}(X) + \frac{0.1 \cdot \Delta \vert P \vert }{k^{r - 1}}\Bigr ) \leqslant \exp \Bigl (- \frac{\vert P \vert }{50 r k^{2(r - 1)}}\Bigr ) \leqslant \exp \Bigl (- \frac{\vert S \vert }{2500 r^2 \cdot 3^r \cdot k^{3r - 2}}\Bigr ) \leqslant \exp \bigl (- k^2 \Delta ^{1/(6r)}/(2500 r^2)\bigr ) \leqslant \Delta ^{-8}/2.$ For a vertex $x \in P$ , say a colour is $x$ -unhelpful if there are more than $(49^r - 1) \Delta /k^r$ monochromatic edges of $G^{\prime }_x$ of that colour.", "Say $x$ is unhelpful if there are more than $0.45 \cdot 3^{-r} k$ $x$ -unhelpful colours.", "Note that if $x$ is unhelpful, then the number of monochromatic edges in $G^{\prime }_x$ is greater than $0.45 (49^r - 1) \cdot \Delta \cdot 3^{-r} k^{-(r - 1)}$ .", "Hence, if more than $0.48 \cdot 3^{-r} \cdot \vert P \vert $ vertices of $P$ are unhelpful, then the number of monochromatic edges in $H_P$ is greater than $1.1 \cdot \Delta \vert P \vert /k^{r - 1}$ .", "We have just shown this occurs with probability less than $\Delta ^{-8}/2$ .", "Hence, with failure probability at most $\Delta ^{-8}/2$ , at least $(1 - 0.48 \cdot 3^{-r})\vert P \vert $ vertices of $P$ are helpful.", "Suppose that at least $(1 - 0.48 \cdot 3^{-r})\vert P \vert $ vertices of $P$ are helpful; call the set of helpful vertices $P^{\prime }$ .", "Now reveal $\phi $ on $P$ .", "For each vertex $x \in P^{\prime }$ , the probability that $x$ is given an $x$ -unhelpful colour is less than $0.45 \cdot 3^{-r}$ .", "Let $Y$ be the number of $x \in P^{\prime }$ coloured with an $x$ -unhelpful colour.", "For different $x \in P^{\prime }$ , these events are independent (we have already revealed $\phi $ on $V(G) - P$ ) and so we may couple $Y$ with a random variable $Z \sim \operatorname{Bin}(\vert P^{\prime } \vert , 0.45 \cdot 3^{-r})$ so that $Y \leqslant Z$ .", "Hence, by the Chernoff bound (Chernoff), $\mathbb {P}(Y \geqslant 0.5 \cdot 3^{-r} \vert P^{\prime } \vert )\leqslant \mathbb {P}(Z \geqslant 0.5 \cdot 3^{-r} \vert P^{\prime } \vert ) \leqslant \mathbb {P}(Z \geqslant 1.1 \cdot \mathbb {E}(Z)) \leqslant \exp (-0.45 \cdot 3^{-r} \vert P^{\prime } \vert /300) \leqslant \exp \bigl (- k^{2r} \Delta ^{1/(6r)}/(6000r)\bigr ) \leqslant \Delta ^{-8}/2.$ Hence, with failure probability at most $\Delta ^{-8}/2 + \Delta ^{-8}/2 = \Delta ^{-8}$ , at least $(1 - 0.5 \cdot 3^{-r}) \vert P^{\prime } \vert \geqslant (1 - 0.98\cdot 3^{-r}) \vert P \vert $ vertices $x$ of $P$ are coloured with an $x$ -helpful 
colour.", "We now show that if a vertex $x$ is given an $x$ -helpful colour, then $x$ will be a good vertex (for $\phi $ ).", "Indeed, there are at most $\deg (x)/k^r \leqslant \Delta /k^r$ edges of $G$ containing $x$ that have at least one more vertex in $P$ and, as $x$ is given an $x$ -helpful colour, there are at most $(49^r - 1)\Delta /k^r$ other monochromatic edges containing $x$ .", "Hence, with failure probability at most $\Delta ^{-8}$ , at least $(1 - 0.98 \cdot 3^{-r}) \vert P \vert $ vertices of $P$ are good, that is, at most $0.98 \cdot 3^{-r} \vert P \vert $ vertices of $P$ are bad.", "Finally, taking a union bound over the big parts shows that the probability some big part $P$ has at least $0.98 \cdot 3^{-r} \vert P \vert $ bad vertices is at most $r k^r \Delta ^{-8} \leqslant r \Delta ^{-8 + 1/6} \leqslant \Delta ^{-6}$ , as required." ], [ "Proof of EachVertexTerrible", "To prove EachVertexTerrible we use the sunflower decompositions given by sunflower to show that if a vertex is terrible, then some reasonably large set of vertices $S$ must have at least a $3^{-r}$ proportion of its vertices being bad.", "Slemmalargek and Slemmasmallk show that this is unlikely.", "[Proof of EachVertexTerrible] Fix a vertex $v$ of $G$ and consider the link graph $G_v$ (which is an $r$ -uniform hypergraph).", "Recall that an edge of $G_v$ is bad if all its vertices are bad and is good otherwise.", "If $v$ is terrible, then at least $2^{-r} \cdot \Delta $ edges of $G_v$ are bad.", "First suppose that $k \geqslant \Delta ^{1/(6r)}$ .", "By sunflower, there are edge-disjoint subgraphs $G_{1}, \cdots , G_{s}$ of $G_v$ each of which is a sunflower with exactly ${\Delta ^{1/(12r)}}$ petals and such that $e(G_v - E(G_1 \cup \cdots \cup G_s)) < r^r \cdot \Delta ^{1/12} \leqslant 6^{-r} \Delta $ .", "Let $G^{\prime } = G_1 \cup \cdots \cup G_s$ .", "If $v$ is terrible, then the number of bad edges in $G^{\prime }$ is at least $\bigl (2^{-r} - 6^{-r}\bigr ) \Delta \geqslant 3^{-r} \Delta \geqslant 3^{-r} e(G^{\prime }).$ Hence, if $v$ is terrible, then there is some $i$ for which at least $3^{-r} e(G_i)$ edges of $G_i$ are bad.", "Pick one vertex from each petal of $G_i$ and call the resulting set $S$ .", "Note that $\vert S \vert = {\Delta ^{1/(12r)}} \leqslant k^{1/2}$ .", "If at least $3^{-r} e(G_i)$ edges of $G_i$ are bad, then at least $3^{-r} \vert S \vert $ vertices of $S$ are bad.", "By Slemmalargek, this happens with probability at most $\exp (-\vert S \vert /7^{2r}) \leqslant \exp (-\Delta ^{1/(12r)}/7^{2r}) \leqslant \Delta ^{-6}.$ Taking a union bound over the $G_{i}$ shows that $v$ is terrible with probability at most $\Delta ^{-5}$ .", "Now suppose that $k \leqslant \Delta ^{1/(6r)}$ .", "By sunflower, there are edge-disjoint subgraphs $G_{1}, \cdots , G_{s}$ of $G_v$ each of which is a sunflower with at least $\Delta ^{1/r}/(6r)$ petals and such that $e(G_v - E(G_{1} \cup \cdots \cup G_{s})) < 6^{-r} \Delta $ .", "Let $G^{\prime } = G_{1} \cup \cdots \cup G_{s}$ .", "If $v$ is terrible, then the number of bad edges in $G^{\prime }$ is at least $\bigl (2^{-r} - 6^{-r}\bigr )\Delta \geqslant 3^{-r} \Delta \geqslant 3^{-r} e(G^{\prime }).$ Hence, if $v$ is terrible, then there is some $i$ for which at least $3^{-r} e(G_i)$ edges of $G_i$ are bad.", "Pick one vertex from each petal of $G_i$ and call the resulting set $S$ : note that $\vert S \vert \geqslant \Delta ^{1/r}/(6r) \geqslant 3^{3r} \Delta ^{2/(3r)} \geqslant (3k)^{3r} \Delta ^{1/(6r)}$ .", "If at least $3^{-r} e(G_i)$ edges of 
$G_i$ are bad, then at least $3^{-r} \vert S \vert $ vertices of $S$ are bad.", "By Slemmasmallk, this happens with probability at most $\Delta ^{-6}$ .", "Taking a union bound over the $G_{i}$ shows that $v$ is terrible with probability at most $\Delta ^{-5}$ ." ], [ "Open problems", "As noted in the introduction, Erdős and Lovász proved that every $(r + 1)$ -uniform hypergraph $G$ with maximum degree at most $\Delta $ has chromatic number $\chi (G) = \mathcal {O}(\Delta ^{1/r})$ .", "[15] improved this to $\mathcal {O}((\Delta /\log \Delta )^{1/r})$ when $G$ is a linear hypergraph and there have been similar improvements [8], [9], [22] when $G$ satisfies other sparsity conditions (such as being triangle-free; a triangle in a hypergraph consists of edges $e, f, g$ and vertices $u, v, w$ such that $u, v \in e$ and $v, w \in f$ and $w, u \in g$ and $\lbrace u, v, w\rbrace \cap e \cap f \cap g = \varnothing $ .", ").", "It would be interesting to know whether logarithmic improvements occur for defective colourings of sparse hypergraphs.", "[14] showed that there exist $(r + 1)$ -uniform linear hypergraphs $G$ with maximum degree $\Delta $ and $\chi (G) = \Omega ((\Delta /\log \Delta )^{1/r})$ .", "Consider a $d$ -defective $k$ -colouring of $G$ (where $d \geqslant 2$ ).", "Each colour class induces a linear $(r + 1)$ -uniform hypergraph with maximum degree $d$ and so is $\mathcal {O}((d/\log d)^{1/r})$ -colourable.", "In particular, $k = \Omega \Bigl (\bigl (\tfrac{\Delta }{\log \Delta } \cdot \tfrac{\log d}{d}\bigr )^{1/r}\Bigr ).$ Is this tight: is it true that every $(r + 1)$ -uniform linear hypergraph with maximum degree $\Delta $ is $k$ -colourable with defect $d\geqslant 2$ , where $k = \mathcal {O}\Bigl (\bigl (\tfrac{\Delta }{\log \Delta } \cdot \tfrac{\log d}{d}\bigr )^{1/r}\Bigr )?$" ] ]
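The max cut lemma used in the small-$k$ regime is constructive: its proof is a local search in which a vertex is moved whenever this strictly decreases the number of within-part co-occurrences, so the process terminates. The following minimal Python sketch (on a hypothetical toy hypergraph) implements that local search; it is an illustration of the argument rather than an algorithm taken from the paper.

```python
from itertools import combinations

def within_part_pairs(edges, part_of):
    """The quantity minimised in the max cut lemma: same-part pairs summed over edges."""
    return sum(1 for e in edges for u, v in combinations(sorted(e), 2)
               if part_of[u] == part_of[v])

def max_cut_partition(edges, vertices, ell):
    """Local search: move a single vertex whenever this strictly decreases the objective."""
    part_of = {v: i % ell for i, v in enumerate(vertices)}   # arbitrary starting partition
    improved = True
    while improved:
        improved = False
        for x in vertices:
            # cost[p] = number of (edge, partner) pairs with x in the edge and the partner in part p
            cost = [0] * ell
            for e in edges:
                if x in e:
                    for u in e:
                        if u != x:
                            cost[part_of[u]] += 1
            best = min(range(ell), key=cost.__getitem__)
            if cost[best] < cost[part_of[x]]:
                part_of[x] = best
                improved = True
    return part_of

edges = [frozenset(e) for e in [(0, 1, 2), (0, 1, 3), (0, 2, 4), (1, 3, 4), (2, 3, 4)]]
vertices = sorted(set().union(*edges))
parts = max_cut_partition(edges, vertices, ell=2)
print(parts, within_part_pairs(edges, parts))
```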
2207.10514
[ [ "Sumsets of sequences in abelian groups and flags in field extensions" ], [ "Abstract For a finite abelian group $G$ with subsets $A$ and $B$, the sumset $AB$ is $\\{ab \\mid a\\in A, b \\in B\\}$.", "A fundamental problem in additive combinatorics is to find a lower bound for the cardinality of $AB$ in terms of the cardinalities of $A$ and $B$.", "This article addresses the analogous problem for sequences in abelian groups and flags in field extensions.", "For a positive integer $n$, let $[n]$ denote the set $\\{0,\\dots,n-1\\}$.", "To a finite abelian group $G$ of cardinality $n$ and an ordering $G = \\{1=v_0,\\dots,v_{n-1}\\}$, associate the function $T \\colon [n] \\times [n] \\rightarrow [n]$ defined by \\[ T(i,j) = \\min\\big\\{k \\in [n] \\mid \\{v_0,\\dots,v_i\\}\\{v_0,\\dots,v_j\\} \\subseteq \\{v_0,\\dots,v_k\\}\\big\\}.", "\\] Under the natural partial ordering, what functions $T$ are minimal as $\\{1=v_0,\\dots,v_{n-1}\\}$ ranges across orderings of finite abelian groups of cardinality $n$?", "We also ask the analogous question for degree $n$ field extensions.", "We explicitly classify all minimal $T$ when $n < 18$, $n$ is a prime power, or $n$ is a product of $2$ distinct primes.", "When $n$ is not as above, we explicitly construct orderings of abelian groups whose associated function $T$ is not contained in the above classification.", "We also associate to orderings a polyhedron encoding the data of $T$." ], [ "Introduction", "Let $G$ be an abelian group, written multiplicatively, and let $A,B\\subseteq G$ be finite subsets.", "The sumset $AB$ is $\\lbrace ab \\; \\mid \\; a\\in A, b \\in B\\rbrace $ .", "A fundamental problem in additive combinatorics is to find a lower bound for the cardinality of $AB$ in terms of the cardinalities of $A$ and $B$ .", "For a finite set $S$ , let $\\vert S \\vert $ denote its cardinality.", "Let $\\operatorname{Stab}(AB) = \\lbrace g \\in G \\mid gAB = AB\\rbrace $ .", "In 1961, Kneser obtained the following lower bound.", "Theorem 1.1 (Kneser ) For all abelian groups $G$ and finite subsets $A,B\\subseteq G$ , we have $\\vert AB \\vert \\ge \\vert A \\vert + \\vert B \\vert - \\vert \\operatorname{Stab}(AB) \\vert .$ If $G$ is a cyclic group of prime order $p$ , then kneserorig specializes to the Cauchy–Davenport theorem, which states that every pair of nonempty subsets $A, B\\subseteq G$ satisfies $\\vert AB \\vert \\ge \\min \\lbrace p, \\vert A \\vert + \\vert B \\vert -1\\rbrace $ .", "Bachoc, Serra, and Zémor generalized kneserorig to field extensions.", "Theorem 1.2 (Bachoc, Serra, Zémor , Theorem 2) For all field extensions $L/K$ and nonempty $K$ -subvector spaces $A,B \\subseteq L$ of positive finite dimension, we have $\\dim _KAB \\ge \\dim _KA + \\dim _KB - \\dim _K\\operatorname{Stab}(AB).$ In this article, we instead study the sumsets of sequences in abelian groups and flags in field extensions." ], [ "Setup", "For a positive integer $n$ , let $[n]$ denote the set $\\lbrace 0,\\dots ,n-1\\rbrace $ ." ], [ "Flags of abelian groups", "Let $G$ be any finite abelian group, written multiplicatively, of cardinality $n$ .", "A flag of $G$ is an indexed set $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ of subsets of $G$ such that $\\lbrace 1\\rbrace = F_0 \\subset F_1 \\subset \\dots \\subset F_{n-1} = G$ and $\\vert F_i \\vert = i + 1$ for all $i \\in [n]$ ." 
], [ "Flags of field extensions", "Let $L/K$ be any degree $n$ field extension.", "A flag of $L$ over $K$ is an indexed set $\mathcal {F}= \lbrace F_i\rbrace _{i \in [n]}$ of $K$ -subvector spaces of $L$ such that $K = F_0 \subset F_1 \subset \dots \subset F_{n-1} = L$ and $\dim _K F_i = i + 1$ for all $i \in [n]$ ." ], [ "The function $T_{\mathcal {F}}$", "Let $\mathcal {F}= \lbrace F_i\rbrace _{i \in [n]}$ be a flag of an abelian group or a field extension.", "Associate to $\mathcal {F}$ the function $T_{\mathcal {F}} \colon [n]\times [n] \longrightarrow [n]$ defined by $T_{\mathcal {F}}(i,j) = \min \lbrace k \in [n] \mid F_iF_j \subseteq F_k\rbrace .$ We call the associated function $T_{\mathcal {F}}$ the flag type of $\mathcal {F}$ .", "Which functions $T_{\mathcal {F}}$ arise from groups and field extensions?", "The function $T_{\mathcal {F}}$ satisfies the following conditions: for all $i,j \in [n]$ , we have $T_{\mathcal {F}}(i,j) = T_{\mathcal {F}}(j,i)$ ; $T_{\mathcal {F}}(i,0) = i$ ; and if $i < n-1$ , we have $T_{\mathcal {F}}(i,j) \le T_{\mathcal {F}}(i+1,j)$ .", "Motivated by the functions $T_{\mathcal {F}}$ , we make the following definition.", "Definition A flag type of degree $n$ is a function $T \colon [n] \times [n] \rightarrow [n]$ such that for any $i,j \in [n]$ , we have: $T(i,j) = T(j,i)$ ; $T(i,0) = i$ ; and $T(i,j) \le T(i+1,j)$ if $i < n-1$ .", "Definition 1.3 We say a flag type is realizable if it arises from a flag of an abelian group or field extension.", "In this article, let $T$ denote a flag type.", "Is every flag type realizable?", "The answer is no: for any integers $i \ge 0$ and $j > 0$ , let $(i \operatorname{\%}j)$ denote the remainder when dividing $i$ by $j$ .", "overflowlemma implies that if $T(i,j) < i + j$ , then there exists an integer $1 < k < n$ such that $k \mid n$ and $(i \operatorname{\%}k) + (j \operatorname{\%}k) \ne (i + j) \operatorname{\%}k.$ There are also other constraints on flag types; see unrealizableflagtype for more information.", "Moreover, say $T$ is realizable for groups if there exists a finite abelian group with a flag realizing $T$ .", "Similarly, say $T$ is realizable for field extensions if there exists a field extension with a flag realizing $T$ .", "Every flag type that is realizable for groups is realizable for fields (groupfieldlemma).", "In this article, we aim to describe the set of realizable flag types."
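The defining minimum in the flag type $T_{\mathcal{F}}$ can be computed directly from an ordering of a finite abelian group, which is convenient for experimenting with small cases; the following minimal Python sketch computes $T_{\mathcal{F}}$ for the flag $F_i = \{v_0,\dots,v_i\}$ of $\mathbb{Z}/6$ (written additively, with an arbitrary illustrative ordering) and verifies the three conditions in the definition of a flag type.

```python
def flag_type(order, op):
    """T(i, j) = min{k : F_i F_j is contained in F_k} for the flag F_i = {v_0, ..., v_i}."""
    n = len(order)
    index = {v: i for i, v in enumerate(order)}
    # since the F_k are nested, the minimum equals the largest index of a product v_a v_b
    return [[max(index[op(order[a], order[b])]
                 for a in range(i + 1) for b in range(j + 1)) for j in range(n)]
            for i in range(n)]

def is_flag_type(T):
    """Check symmetry, T(i, 0) = i, and monotonicity in each argument."""
    n = len(T)
    return (all(T[i][j] == T[j][i] for i in range(n) for j in range(n))
            and all(T[i][0] == i for i in range(n))
            and all(T[i][j] <= T[i + 1][j] for i in range(n - 1) for j in range(n)))

n = 6
order = [0, 3, 2, 1, 4, 5]                     # an ordering of Z/6 starting at the identity
T = flag_type(order, lambda a, b: (a + b) % n)
print(is_flag_type(T))                         # True
for row in T:
    print(row)
```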
], [ "Some fundamental flag types", "We now construct some flag types that play a fundamental role in what follows.", "Definition 1.4 A tower type is a $t$ -tuple of integers $\mathfrak {T}= (n_1,\dots ,n_t) \in \mathbb {Z}_{>1}^t$ for some $t \ge 1$ .", "We say $t$ is the length of the tower type and $\prod _{i = 1}^tn_i$ is the degree of the tower type.", "In this article, $\mathfrak {T}$ will denote a tower type of length $t$ and degree $n$ .", "Mixed radix notation will be useful to express conditions concisely.", "Definition 1.5 Choose a tower type $\mathfrak {T}= (n_1,\dots ,n_t)$ and $i \in [n]$ .", "Writing $i$ in mixed radix notation with respect to $\mathfrak {T}$ means writing $i = i_1 + i_2 n_1 + i_3 (n_1n_2) + \dots + i_t(n_1\dots n_{t-1})$ where $i_s$ is an integer such that $0 \le i_s < n_s$ for $1\le s \le t$ .", "Note that the integers $i_s$ are uniquely determined.", "Fix a tower type $\mathfrak {T}= (n_1,\dots ,n_t)$ .", "For any positive integer $i$ , let $C_i$ denote the cyclic group of order $i$ , written multiplicatively.", "Define the finite abelian group $G = C_{n_1} \times \dots \times C_{n_t}$ .", "For $1 \le i \le t$ , let $e_i \in G$ be a generator of the $i$ -th component $C_{n_i}$ .", "Construct a sequence $\lbrace v_0,\dots ,v_{n-1}\rbrace $ of elements of $G$ as follows.", "Write $i = i_1 + i_2 (n_1) + i_3 (n_1 n_2) + \dots + i_t(n_1\dots n_{t-1})$ in mixed radix notation with respect to $(n_1,\dots ,n_t)$ and set $v_i := e_1^{i_1}\dots e_t^{i_t}$ .", "The sequence $\lbrace v_0,\dots ,v_{n-1}\rbrace $ is essentially lexicographic; it is $\bigg \lbrace 1, e_1, e_1^2,\dots ,e_1^{n_1-1}, e_2, e_1e_2, e_1^2e_2,\dots ,\prod _{s=1}^{t}e_s^{n_s-1} \bigg \rbrace .$ Define the flag $\mathcal {F}(\mathfrak {T}) = \lbrace F_i\rbrace _{i \in [n]}$ by $F_i = \lbrace v_0,\dots ,v_i\rbrace $ .", "Definition 1.6 Let $T(\mathfrak {T})$ be the flag type of $\mathcal {F}(\mathfrak {T})$ .", "We may describe the flag types $T(n_1,\dots ,n_t)$ even more explicitly; see explicitflagtype.", "The flag types $T(n_1,\dots ,n_t)$ arise from flags of abelian groups.", "We will show that, in an appropriate sense, they are often minimal among various subsets of realizable flag types."
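For the lexicographic flags just constructed, $T(\mathfrak{T})$ can be computed either directly from the definition or, as described later in the article, via mixed radix notation (additions that do not overflow satisfy $T(\mathfrak{T})(i,j) = i+j$); the following minimal, self-contained Python sketch builds the ordering $v_i = e_1^{i_1}\cdots e_t^{i_t}$ for the toy tower type $(2,3)$ and compares the two computations.

```python
def mixed_radix(i, ns):
    """Digits (i_1, ..., i_t) of i with respect to the tower type ns = (n_1, ..., n_t)."""
    digits = []
    for m in ns:
        digits.append(i % m)
        i //= m
    return tuple(digits)

def overflows(i, j, ns):
    """The addition i + j overflows modulo ns when some digit sum reaches n_s."""
    return any(a + b >= m for a, b, m in zip(mixed_radix(i, ns), mixed_radix(j, ns), ns))

def lexicographic_flag_type(ns):
    """T(ns) computed directly from the flag F_i = {v_0, ..., v_i} of C_{n_1} x ... x C_{n_t}."""
    n = 1
    for m in ns:
        n *= m
    order = [mixed_radix(i, ns) for i in range(n)]          # exponent vector of v_i
    index = {v: i for i, v in enumerate(order)}
    add = lambda a, b: tuple((x + y) % m for x, y, m in zip(a, b, ns))
    return [[max(index[add(order[a], order[b])]
                 for a in range(i + 1) for b in range(j + 1)) for j in range(n)]
            for i in range(n)]

ns, n = (2, 3), 6
T = lexicographic_flag_type(ns)
# T(ns)(i, j) = i + j exactly when i + j < n and the addition does not overflow modulo ns
print(all((T[i][j] == i + j) == (i + j < n and not overflows(i, j, ns))
          for i in range(n) for j in range(n)))             # True for this toy example
```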
], [ "Minimality and realizability of $T(\mathfrak {T})$", "Say a flag type $T$ is realizable for a finite abelian group $G$ if there exists a flag $\mathcal {F}$ of $G$ such that $T = T_{\mathcal {F}}$ .", "The following theorem is proved in realizabilitysection.", "Theorem For any finite abelian group $G$ and tower type $\mathfrak {T}= (n_1,\dots ,n_t)$ , the flag type $T(\mathfrak {T})$ is realizable for $G$ if and only if $G$ has a filtration $\lbrace 1\rbrace = G_0 \subset G_1 \subset \dots \subset G_t = G$ such that $G_i/G_{i-1}$ is cyclic of order $n_i$ for all $1\le i \le t$ .", "For a field $K$ , say $T$ is realizable over $K$ if there exists a field extension of $K$ with a flag $\mathcal {F}$ such that $T = T_{\mathcal {F}}$ .", "Say $T$ is realizable for a field extension $L/K$ if there exists a flag $\mathcal {F}$ of $L$ over $K$ such that $T = T_{\mathcal {F}}$ .", "The following theorem is proved in realizabilitysection.", "Theorem For any degree $n$ field extension $L/K$ and tower type $\mathfrak {T}= (n_1,\dots ,n_t)$ , the flag type $T(\mathfrak {T})$ is realizable for $L/K$ if and only if $L/K$ has a tower of subfields $K = L_0 \subset L_1 \subset \dots \subset L_t = L$ where $n_i = [L_i \colon L_{i-1}]$ and the field extension $L_i/L_{i-1}$ is simple for all $1\le i\le t$ .", "We define a partial ordering $\le $ on the set of flag types.", "For any two flag types $T$ and $T^{\prime }$ , say $T \le T^{\prime }$ if $T(i,j) \le T^{\prime }(i,j)$ for all $i,j \in [n]$ .", "This partial ordering allows us to give a good notion of minimality in many cases.", "Definition 1.7 We say: a flag type is minimal if it is minimal among realizable flag types; a flag type $T$ is minimal for finite abelian groups if $T$ is minimal among flag types that are realizable for finite abelian groups; for a finite abelian group $G$ , say $T$ is minimal for $G$ if $T$ is minimal among flag types that are realizable for $G$ ; for a field $K$ , a flag type $T$ is minimal over $K$ if $T$ is minimal among flag types that are realizable over $K$ ; and for a degree $n$ field extension $L/K$ , say $T$ is minimal for $L/K$ if $T$ is minimal among flag types that are realizable for $L/K$ .", "Definition 1.8 We say a tower type $\mathfrak {T}= (n_1,\dots ,n_t)$ is prime if $n_1,\dots ,n_t$ are all prime.", "The following theorem is proved in realizabilitysection.", "Theorem For a tower type $\mathfrak {T}= (n_1,\dots ,n_t)$ , the flag type $T(\mathfrak {T})$ is minimal if and only if $\mathfrak {T}$ is prime.", "The following theorem is proved in completesect and notcompletesect.", "Theorem The set $\lbrace T(\mathfrak {T}) \; \mid \; \mathfrak {T}\text{ is prime and has degree } n\rbrace $ is the set of minimal flag types of degree $n$ if and only if $n = p^k$ , with $p$ prime and $k \ge 1$ ; $n = pq$ , with $p$ and $q$ distinct primes; or $n = 12$ ."
], [ "Tower types of flags", "To a flag $\\mathcal {F}$ of a finite abelian group $G$ , we can associate a tower of subgroups $\\lbrace 1\\rbrace = G_0 \\subset G_1 \\subset \\dots \\subset G_t = G $ such a subgroup is in the tower if and only if it is generated by $F_i$ for some $i \\in [n]$ .", "The tower type of $\\mathcal {F}$ is $\\bigg (\\frac{\\operatorname{\\vert }G_1 \\operatorname{\\vert }}{\\operatorname{\\vert }G_0 \\operatorname{\\vert }},\\dots , \\frac{\\operatorname{\\vert }G_t \\operatorname{\\vert }}{\\operatorname{\\vert }G_{t-1} \\operatorname{\\vert }}\\bigg )$ Similarly, to a flag $\\mathcal {F}$ of a field extension $L/K$ , we can associate a tower of field extensions $K = L_0 \\subset L_1 \\subset \\dots \\subset L_t = L$ such a field is in the tower if and only if it is generated by $F_i$ for some $i \\in [n]$ .", "The tower type of $\\mathcal {F}$ is $\\big ([L_1 \\colon L_0],\\dots , [L_t \\colon L_{t-1}]\\big ).$ theoremlenstrathm Suppose $\\mathfrak {T}= (n_1,\\dots ,n_t)$ is as follows: $n < 8$ ; $n = 8$ and $\\mathfrak {T}\\ne (8)$ ; $n$ is prime; $\\mathfrak {T}= (p,\\dots ,p)$ for a prime $p$ ; $\\mathfrak {T}= (2,p)$ for a prime $p$ ; or $\\mathfrak {T}= (3,p)$ for a prime $p$ ; Then $T(\\mathfrak {T})$ is the unique flag type that is minimal among flags of tower type $\\mathfrak {T}$ ." ], [ "Polyhedra associated to flag types", "We will study flag types using polyhedra.", "Let $T$ be a flag type.", "definitionptdefn Let $P_{T}$ be the set of $\\mathbf {x} = (x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1}$ such that $0 \\le x_1 \\le \\dots \\le x_{n-1}$ and $x_{T(i,j)} \\le x_i + x_j$ for all $1 \\le i,j < n$ .", "The set of $P_T$ has a natural poset structure given by inclusion.", "The definition of $P_T$ implies that $P_T \\subseteq P_{T^{\\prime }}$ if and only if $T^{\\prime } \\le T$ .", "The following is proved in polysection.", "lemmainjectionlemma The map from flag types to polyhedra sending $T$ to $P_T$ is an injection.", "So, the poset of polyhedra $P_T$ is canonically dual to the poset of flag types.", "definitionlenpdefn Let $\\operatorname{Len}(\\mathfrak {T})$ be the polyhedron associated to the flag type $T(\\mathfrak {T})$ .", "We have a variant of minimalitycompletethm for polyhedra.", "theoremminimalitycompletethmbetter We have $\\bigcup _T P_T = \\bigcup _{\\mathfrak {T}}\\operatorname{Len}(\\mathfrak {T})$ as $T$ ranges across realizable flag types if and only if $n$ is of the following form: $n = p^k$ , with $p$ prime and $k \\ge 1$ ; $n = pq$ , with $p$ and $q$ distinct primes; or $n = 12$ .", "The following is a corollary of lenstrathm.", "corollarylenstrathmcorol Suppose $\\mathfrak {T}= (n_1,\\dots ,n_t)$ is as follows: $n < 8$ ; $n = 8$ and $\\mathfrak {T}\\ne (8)$ ; $n$ is prime; $\\mathfrak {T}= (p,\\dots ,p)$ for a prime $p$ ; $\\mathfrak {T}= (2,p)$ for a prime $p$ ; or $\\mathfrak {T}= (3,p)$ for a prime $p$ .", "Then $\\bigcup _{\\mathcal {F}} P_{T_{\\mathcal {F}}} = \\operatorname{Len}(\\mathfrak {T})$ as $\\mathcal {F}$ ranges across all flags with tower type $\\mathfrak {T}$ .", "We explicitly describe the polyhedron $P_T$ in terms of the flag type $T$ .", "definitioncornerdef For $0 < i,j < n$ , say $(i,j)$ is a corner of $T$ if $T(i-1,j) < T(i,j)$ and $T(i,j-1) < T(i,j)$ .", "The corners of $T$ describe facets of $P_T$ , as proved in polysection.", "theoremcornerfacetthm $P_T$ is an unbounded polyhedron of dimension $n-1$ .", "For $i,j,k \\in [n]$ , the inequality $x_{k} \\le x_i + x_j$ defines a facet of $P_T$ if and 
only if $(i,j)$ is a corner and $T(i,j)=k$ .", "This explicit description of $P_T$ specializes to the polyhedron $\operatorname{Len}(n_1,\dots ,n_t)$ .", "To state how, we will first need the following definition.", "definitionoverflowdefn Fix a tower type $\mathfrak {T}=(n_1,\dots ,n_t)$ and integers $0 \le i, j \le i + j < n$ .", "We say the addition $i+j$ does not overflow modulo $\mathfrak {T}$ if, writing $i$ , $j$ , and $k=i+j$ in mixed radix notation with respect to $\mathfrak {T}$ as $i = i_1 + i_2 n_1 + i_3 (n_1n_2) + \dots + i_t(n_1\dots n_{t-1})$ $j = j_1 + j_2 n_1 + j_3 (n_1n_2) + \dots + j_t(n_1\dots n_{t-1})$ $k = k_1 + k_2 n_1 + k_3 (n_1n_2) + \dots + k_t(n_1\dots n_{t-1})$ we have $i_s + j_s = k_s$ for all $1 \le s \le t$ .", "Otherwise, we say the addition $i+j$ overflows modulo $\mathfrak {T}$.", "The following is proved in polysection.", "corollaryexplicitflagtype For any tower type $\mathfrak {T}$ and $0 \le i,j,i+j < n$ , the following are equivalent: $i + j$ does not overflow modulo $\mathfrak {T}$ ; $(i,j)$ is a corner of $T(\mathfrak {T})$ ; and $T(\mathfrak {T})(i,j) = i+j$ .", "All corners $(i,j)$ of $T(\mathfrak {T})$ satisfy $T(\mathfrak {T})(i,j) = i + j$ .", "Is it true that $T(i,j) = i+j$ for any corner $(i,j)$ of a realizable flag type $T$ ?", "We prove the following theorem in cornerstructuresec.", "theoremcornersineq For a flag $\mathcal {F}$ and every corner $(i,j)$ of $T_{\mathcal {F}}$ , we have $T_{\mathcal {F}}(i,j) \ge i + j$ .", "The reverse inequality is not true in general; there exist realizable flag types $T$ with a corner $(i,j)$ such that $T(i,j) > i + j$ .", "We prove the theorem below in cornerstructuresec.", "theoremcornersstrongineq There exists a minimal flag type $T$ of degree 18 such that $P_T \setminus \big ( P_{T(2,3,3)} \cup P_{T(3,2,3)} \cup P_{T(3,3,2)}\big ) \ne \emptyset $ and $T$ has a corner $(i,j)$ such that $T(i,j) > i + j$ ." ], [ "Acknowledgments", "I am extremely grateful to Hendrik Lenstra for the many invaluable ideas, conversations, and corrections throughout the course of this project.", "I also thank Manjul Bhargava for suggesting the questions that led to this paper and for providing invaluable advice and encouragement throughout our research.", "Thank you as well to Jacob Tsimerman, Arul Shankar, Gilles Zémor, and Noga Alon for illuminating conversations.", "The author was supported by the NSF Graduate Research Fellowship."
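The overflow condition, the corners of a flag type, and membership in $P_T$ can all be checked mechanically. The sketch below is ours and purely illustrative; it follows the paper's convention $[n] = \lbrace 0,\dots ,n-1\rbrace $ , treats $x_0$ as $0$ , and implements overflowdefn, cornerdef, and ptdefn directly.

```python
def mixed_radix(i, tower):
    digits = []
    for n_s in tower:
        digits.append(i % n_s)
        i //= n_s
    return tuple(digits)

def overflows(i, j, tower):
    """True iff the addition i + j overflows modulo the tower type."""
    di, dj, dk = (mixed_radix(m, tower) for m in (i, j, i + j))
    return any(a + b != c for a, b, c in zip(di, dj, dk))

def corners(T):
    """Corners (i, j), 0 < i, j < n, of a flag type T given as an n x n matrix."""
    n = len(T)
    return [(i, j) for i in range(1, n) for j in range(1, n)
            if T[i - 1][j] < T[i][j] and T[i][j - 1] < T[i][j]]

def in_P_T(T, x, tol=1e-9):
    """Membership in P_T; x = (x_1, ..., x_{n-1}), with x_0 taken to be 0."""
    n = len(T)
    x = (0.0,) + tuple(x)
    chain = all(x[i] <= x[i + 1] + tol for i in range(n - 1))
    ineqs = all(x[T[i][j]] <= x[i] + x[j] + tol
                for i in range(1, n) for j in range(1, n))
    return chain and ineqs

if __name__ == "__main__":
    # By explicitflagtype, the non-overflowing pairs are exactly the corners of T(2,3).
    print([(i, j) for i in range(1, 6) for j in range(1, 6)
           if i + j < 6 and not overflows(i, j, (2, 3))])
```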
], [ "The polyhedron associated to a flag type $T$", "Let $T$ be any flag type of degree $n$ .", "In this section, we will prove cornerfacetthm.", "* Proposition 2.1 The set $P_T$ is an unbounded polyhedron of dimension $n-1$ .", "Because $P_T$ is a nonempty (it contains the origin) and is the intersection of homogeneous linear inequalities, it is an unbounded polyhedron.", "We will show that $P_T$ has dimension $n-1$ by producing $n-1$ linearly independent points in $P_T$ .", "For $i \\in [n-1]$ , define $\\mathbf {x}^i = (x_1^i,\\dots ,x_{n-1}^i) \\in \\mathbb {R}^{n-1}$ by $x_1^i = \\dots = x_i^i &= 1 \\\\x_{i+1}^i = \\dots = x_{n-1}^i &= 2$ It is easy to verify that $\\mathbf {x}^i \\in P_T$ .", "The $(n-1)\\times (n-1)$ matrix whose rows consist of the $(n-1)$ -tuples $x^0,\\dots ,x^{n-2}$ has rank $(n-1)$ ; this can be seen via elementary row and column operations.", "* The first claim follows from dimprop.", "We now prove the second claim.", "Suppose $(i,j)$ is not a corner of $T$ .", "Then, there exists a corner $(i^{\\prime },j^{\\prime })$ with $i^{\\prime } \\le i$ , $j^{\\prime } \\le j$ and $T(i^{\\prime },j^{\\prime }) = T(i,j)$ .", "We will show that $x_{T(i,j)} \\le x_i + x_j$ does not define a facet of $P_T$ by showing that $P_T \\cap \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\; \\vert \\; x_{T(i,j)} = x_{i^{\\prime }} + x_{j^{\\prime }}\\rbrace $ strictly contains $P_T \\cap \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\; \\vert \\; x_{T(i,j)} = x_{i} + x_{j}\\rbrace $ .", "In other words, we will produce $\\mathbf {x} \\in P_T$ with $x_{T(i,j)} = x_{i^{\\prime }} + x_{j^{\\prime }} \\ne x_i + x_j$ .", "If $i^{\\prime } \\ne j^{\\prime }$ , set $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ where $x_1, \\dots , x_{i^{\\prime }} &= 2 \\\\x_{i^{\\prime }+1}, \\dots , x_{j^{\\prime }} &= 3 \\\\x_{j^{\\prime }+1}, \\dots , x_{T(i,j) - 1} &= 4 \\\\x_{T(i,j)}, \\dots , x_{n-1} &= 5.$ If $i^{\\prime } = j^{\\prime }$ , set $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ where $x_1, \\dots , x_{i^{\\prime }} &= 2 \\\\x_{i^{\\prime }+1}, \\dots , x_{T(i,j) - 1} &= 3 \\\\x_{T(i,j)}, \\dots , x_{n-1} &= 4.$ A computation verifies that $x_{T(i,j)} = x_{i^{\\prime }} + x_{j^{\\prime }} \\ne x_i + x_j$ , and $\\mathbf {x} \\in P_{T_{\\mathcal {F}}}$ .", "Therefore, $P_T$ is the set of $\\mathbf {x} = (x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1}$ such that $0 \\le x_1 \\le \\dots \\le x_{n-1}$ , and $x_{T(i,j)} \\le x_i + x_j$ for all corners $(i,j)$ of $T$ .", "Now, suppose $(i,j)$ is a corner of $T$ .", "To show that $x_{T(i,j)} \\le x_i + x_j$ defines a facet of $P_T$ , it suffices to produce a point $\\mathbf {x} = (x_1,\\dots ,x_{n-1})\\in \\mathbb {R}^{n-1}$ such that $0 \\le x_1 \\le \\dots \\le x_{n-1}$ , and $x_{T(i,j)} > x_i + x_j$ and $x_{T(i^{\\prime },j^{\\prime })} \\le x_{i^{\\prime }} + x_{j^{\\prime }}$ for any other corner $(i^{\\prime },j^{\\prime })$ .", "Set $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ where If $i \\ne j$ , set $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ where $x_1, \\dots , x_{i} &= 2 \\\\x_{i+1}, \\dots , x_{j} &= 3 \\\\x_{j+1}, \\dots , x_{T(i,j) - 1} &= 4 \\\\x_{T(i,j)}, \\dots , x_{n-1} &= 6.$ If $i = j$ , set $\\mathbf {x} = (x_1,\\dots ,x_{n-1})$ where $x_1, \\dots , x_{i} &= 2 \\\\x_{i+1}, \\dots , x_{T(i,j) - 1} &= 3 \\\\x_{T(i,j)}, \\dots , x_{n-1} &= 5.$ A computation verifies that $\\mathbf {x}$ satisfies the necessary conditions.", "We obtain the following lemma.", "* A flag type is determined by its corners, and a polyhedra is determined by its facets.", 
"Apply cornerfacetthm to complete the proof.", "* This follows from the definition of $T(\\mathfrak {T})$ .", "We may compare flag types using corners.", "Lemma 2.2 For any two flag types $T$ and $T^{\\prime }$ such that $T \\lnot \\ge T^{\\prime }$ , there exists a corner $(i,j)$ of $T^{\\prime }$ such that $T(i,j) < T^{\\prime }(i,j)$ .", "Because $T \\lnot \\ge T^{\\prime }$ , we have $P_{T} \\lnot \\subseteq P_{T^{\\prime }}$ .", "Thus, there exists integers $1 \\le i,j < n$ such that $P_{T^{\\prime }} \\subset \\lbrace \\mathbf {x} = (x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1} \\; \\mid \\; x_{T^{\\prime }(i,j)} \\le x_i + x_j\\rbrace ,$ $P_{T^{\\prime }} \\lnot \\subset \\lbrace \\mathbf {x} = (x_1,\\dots ,x_{n-1}) \\in \\mathbb {R}^{n-1} \\; \\mid \\; x_{T^{\\prime }(i,j)} \\le x_i + x_j\\rbrace ,$ and the linear inequality $ x_{T^{\\prime }(i,j)} \\le x_i + x_j$ defines a facet of $P_{T^{\\prime }}$ .", "Thus cornerfacetthm, there exists a corner $(i,j)$ of $T^{\\prime }$ such that $P_T \\cap \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\; \\mid \\; x_{T^{\\prime }(i,j)} > x_i + x_j\\rbrace \\ne \\emptyset $ .", "Because $P_T \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\; \\mid \\; x_{T(i,j)} \\le x_i + x_j\\rbrace $ , we have $T(i,j) < T^{\\prime }(i,j)$ ." ], [ "Realizability and minimality of $T(\\mathfrak {T})$", "For every flag $\\mathcal {F}$ of a finite abelian group, there exists a sequence $\\lbrace v_0,\\dots ,v_{n-1}\\rbrace $ such that $F_i = \\lbrace v_0,\\dots ,v_i\\rbrace $ .", "Similarly, for a flag $\\mathcal {F}$ of an field extension $L/K$ , there exists a sequence $\\lbrace v_0,\\dots ,v_{n-1}\\rbrace $ such that $F_i = K\\langle v_0,\\dots ,v_i\\rangle $ .", "Lemma 3.1 If $T_{\\mathcal {F}}(i,j) = j$ then $F_i \\subseteq \\operatorname{Stab}(F_j)$ .", "If $T_{\\mathcal {F}}(i,j) = j$ then $F_j \\subseteq F_iF_j \\subseteq F_j$ so $F_iF_j = F_j$ .", "Lemma 3.2 If $\\mathcal {F}$ is a flag of a finite abelian group and $(i,j)$ is a corner of $T_{\\mathcal {F}}$ , then $v_iv_j = v_{T_{\\mathcal {F}}(i,j)}$ .", "If $\\mathcal {F}$ is a flag of a field extension and $(i,j)$ is a corner of $T_{\\mathcal {F}}$ , then $v_iv_j \\in F_{T_{\\mathcal {F}}(i,j)}\\setminus F_{T_{\\mathcal {F}}(i,j)-1}$ .", "Suppose $\\mathcal {F}$ is a flag of an abelian group.", "We have that $T_{\\mathcal {F}}(i,j) = \\max \\lbrace k \\in [n] \\mid v_k \\in F_iF_j\\rbrace $ .", "Thus, $v_{T_{\\mathcal {F}}(i,j)} \\in F_iF_j\\setminus (F_{i-1}F_j \\cup F_iF_{j-1}) = \\lbrace v_iv_j\\rbrace $ so $v_{T_{\\mathcal {F}}(i,j)} = v_iv_j$ .", "The proof in the case of field extensions is entirely analogous.", "* First suppose $G$ has such a filtration and let $e_i \\in G_i$ be a lift of a generator of $G_i/G_{i-1}$ for $i = 1,\\dots ,t$ .", "Write $i = i_1 + i_2(n_1) + i_3(n_1n_2) + \\dots + i_t(n_1\\dots n_{t-1})$ in mixed radix notation, and define a sequence $\\lbrace v_0,\\dots ,v_{n-1}\\rbrace $ by $v_i = e_1^{i_1}\\dots e_t^{i_t}$ for $i \\in [n]$ .", "Define a flag $\\mathcal {F}= \\lbrace F_i\\rbrace _{i\\in [n]}$ by $F_i = \\lbrace v_0,\\dots ,v_i\\rbrace $ .", "Observe that $T_{\\mathcal {F}}$ equals $T(\\mathfrak {T})$ .", "Now suppose that $\\mathcal {F}$ is a flag such that $T_{\\mathcal {F}} = T(n_1,\\dots ,n_t)$ , corresponding to a sequence $\\lbrace v_0,\\dots ,v_{n-1}\\rbrace $ .", "Now, for all $i = 1,\\dots ,t$ , we have $T_{\\mathcal {F}}(n_1\\dots n_i - 1, n_1\\dots n_i - 1) = n_1\\dots n_i - 1$ , so the $F_{n_1\\dots n_i - 1}$ must be a group; call it $G_i$ 
.", "So, we have a filtration $\\lbrace 1\\rbrace = G_0 \\subset G_1 \\subset \\dots \\subset G_t = G$ such that $\\operatorname{\\vert }G_i/G_{i-1} \\operatorname{\\vert }= n_i$ for all $1 \\le i \\le t$ .", "Now for $1 \\le i \\le t$ let $w_{i} = v_{n_1\\dots n_{i-1}}$ ; we claim $w_i$ is a cyclic generator of $G_i/G_{i-1}$ .", "If $n_i = 2$ , this is trivial so assume $n_i \\ne 2$ .", "For $1 \\le j < n_i$ , the addition $(n_1\\dots n_{i-1}) + (j-1)(n_1\\dots n_{i-1})$ does not overflow modulo $(n_1,\\dots ,n_t)$ so $(n_1\\dots n_{i-1}, (j-1)n_1\\dots n_{i-1})$ is a corner of $T_{\\mathcal {F}}$ by cornerfacetthm.", "Hence by cornerreadinglemma, we see that $v_{jn_1\\dots n_{i-1}} = v_{n_1\\dots n_{i-1}}v_{(j-1)n_1\\dots n_{i-1}}.$ By induction, $v_{jn_1\\dots n_{i-1}} = w_i^j.$ Therefore $w_i^j$ is distinct for $1 \\le j < n_i$ so $w_i$ is a cyclic generator of the group $G_i/G_{i-1}$ of cardinality $n_i$ .", "* The proof is entirely analogous to the proof of realizabilityforgroupsthm.", "* Let $\\mathfrak {T}= (n_1,\\dots ,n_t)$ be a tower type where $n_i = m_1m_2$ for $m_1$ and $m_2$ positive integers.", "Then $\\operatorname{Len}(\\mathfrak {T})$ is strictly contained in $\\operatorname{Len}(n_1,\\dots ,n_{i-1},m_1,m_2,n_{i+1},\\dots ,n_t)$ by cornerfacetthm.", "Thus, $T(n_1,\\dots ,n_{i-1},m_1,m_2,n_{i+1},\\dots ,n_t) \\le T(\\mathfrak {T}).$ So if $\\mathfrak {T}$ is not prime, the flag type $T(\\mathfrak {T})$ not minimal.", "We will now prove that if $\\mathfrak {T}$ is prime, the flag type $T(\\mathfrak {T})$ is minimal.", "We give the proof for finite abelian groups; the proof for field extensions is entirely analogous.", "Write $\\mathfrak {T}= (p_1,\\dots ,p_t)$ .", "Choose a flag $\\mathcal {F}$ of $G$ such that $T_{\\mathcal {F}} \\le T(\\mathfrak {T})$ .", "We will prove that $T_{\\mathcal {F}} = T(\\mathfrak {T})$ .", "Let $\\lbrace v_0,\\dots ,v_{n-1}\\rbrace $ be a sequence of elements of $G$ such that $F_i = \\lbrace v_0,\\dots ,v_i\\rbrace $ .", "For all $1 \\le i \\le t$ , we have $p_1\\dots p_i - 1 \\le T_{\\mathcal {F}}(p_1\\dots p_i - 1, p_1\\dots p_i - 1) \\le T(\\mathfrak {T})(p_1\\dots p_i - 1, p_1\\dots p_i - 1) = p_1\\dots p_i - 1.$ Thus, $p_1\\dots p_i - 1 = T_{\\mathcal {F}}(p_1\\dots p_i - 1, p_1\\dots p_i - 1)$ .", "Therefore the set $\\lbrace v_0,\\dots ,v_{p_1\\dots p_i - 1}\\rbrace $ form a subgroup of $G$ ; call this subgroup $G_i$ .", "Moreover, let $w_i = v_{n_1\\dots n_{i-1}}$ .", "Write $k \\in [n]$ in mixed radix notation with respect to $(p_1,\\dots ,p_t)$ as $k = k_1 + k_2(p_1) + \\dots + k_t(p_1\\dots p_t).$ We will inductively show that $v_k = w_1^{k_1}\\dots w_t^{k_t}$ , which implies that $T_{\\mathcal {F}} = T(\\mathfrak {T})$ .", "For $k = 0$ or $k = p_1\\dots p_i$ for $1\\le i < t$ , this is obvious from the definition.", "For any other $k \\in [n]$ there exists a corner $(i,j)$ of $T(\\mathfrak {T})$ with $T(\\mathfrak {T})(i,j) = i+j = k$ by explicitflagtype.", "Because $T_{\\mathcal {F}} \\le T(\\mathfrak {T})$ and $\\operatorname{\\vert }F_i F_j \\operatorname{\\vert }= i+j$ by induction, we must have $v_k = v_iv_j$ ." 
], [ "Some useful lemmas", "The following lemmas will be necessary to prove our main theorems.", "Recall the following definition.", "* Definition 4.1 For any integer $1 < m < n$ such that $m \\mid n$ , we say the addition $i + j$ overflows modulo $m$ if the addition $i + j$ overflows modulo $(m,n/m)$ .", "Otherwise, we say the addition $i + j$ does not overflow modulo $m$.", "Recall that for a subset $S$ of a finite abelian group, $\\operatorname{\\vert }S \\operatorname{\\vert }$ denotes the cardinality $S$ .", "For a $K$ -vector space $S$ in a field extension $L/K$ , let $\\operatorname{\\vert }S \\operatorname{\\vert }$ denote $\\dim _K S$ .", "Lemma 4.2 (, Theorem 2) Let $G$ be an abelian group of cardinality $n$ .", "Suppose we have $i,j \\in [n]$ and two subsets $I$ and $J$ of cardinality $i + 1$ and $j + 1$ respectively.", "Suppose $\\operatorname{\\vert }IJ \\operatorname{\\vert }\\le i+j$ .", "Set $m = \\vert \\operatorname{Stab}(F_i F_j) \\vert $ and write $i$ and $j$ in mixed radix notation with respect to $(m, n/m)$ as $i &= i_1 + i_2 m \\\\j &= j_1 + j_2 m.$ Then $m > 1$ , the addition $i + j$ overflows modulo $m$ , $\\operatorname{\\vert }\\operatorname{Stab}(IJ)I\\operatorname{\\vert }= (i_2 + 1)m,$ $\\operatorname{\\vert }\\operatorname{Stab}(IJ)J \\operatorname{\\vert }= (j_2 + 1)m,$ $\\operatorname{\\vert }IJ \\operatorname{\\vert }= (i_2 + j_2 + 1)m.$ Also, if $L/K$ is a degree $n$ field extension and $I,J \\subseteq L$ are $K$ -subvector spaces of dimension $i + 1$ and $j + 1$ respectively such that $\\dim _K IJ \\le i+j$ , then the same result holds, where here $m = \\dim _k \\operatorname{Stab}(IJ)$ .", "We prove the lemma for groups.", "The proof for vector spaces is entirely analogous if one substitutes zemorthm for kneserorig.", "Observe that $IJ = (\\operatorname{Stab}(IJ)I)(\\operatorname{Stab}(IJ)J)$ .", "Apply kneserorig to obtain that $i_1 + i_2 m + j_1 + j_2 m &= i+j \\\\&\\ge \\operatorname{\\vert }IJ \\operatorname{\\vert }\\\\&= \\operatorname{\\vert }(\\operatorname{Stab}(IJ)I)(\\operatorname{Stab}(IJ)J) \\operatorname{\\vert }\\\\&\\ge \\operatorname{\\vert }\\operatorname{Stab}(IJ)I \\operatorname{\\vert }+ \\operatorname{\\vert }\\operatorname{Stab}(IJ)J \\operatorname{\\vert }- m \\\\&\\ge (i_2 + 1)m + (j_2 + 1)m - m \\\\&\\ge i_2 m + j_2 m + m$ and thus $i_1 + j_1 \\ge m$ , so the addition $i + j$ overflows modulo $m$ .", "Then $\\operatorname{Stab}(IJ)I$ , $\\operatorname{Stab}(IJ)J$ and $IJ$ are unions of cosets of the group $\\operatorname{Stab}(IJ)$ .", "Now because $i_1 + j_1 < 2m$ , we must have $\\operatorname{\\vert }\\operatorname{Stab}(IJ)I \\operatorname{\\vert }= (i_2 + 1)m$ , $\\operatorname{\\vert }\\operatorname{Stab}(IJ)J \\operatorname{\\vert }= (j_2 + 1)m$ , and $\\operatorname{\\vert }IJ \\operatorname{\\vert }= (i_2 + j_2 + 1)m$ .", "Proposition 4.3 There exists flag types that are not realizable.", "Let $n = 6$ and let $T$ be a flag type such that $T(1,1) = 1$ and $T(2,2) = 2$ .", "It is not difficult to verify that such a flag type exists.", "Suppose $T$ is realized by a flag $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ of a finite abelian group; then $F_1$ must be a subgroup of order 2 because $T(1,1) = 1$ , and $F_3$ must be a subgroup of order 3 because $T(2,2) = 2$ .", "However, $F_2 \\subseteq F_3$ , which is a contradiction.", "An analogous argument shows that $T$ is not realized by a flag $\\mathcal {F}$ of a field extension.", "Lemma 4.4 Let $G = C_{q_1}\\times \\dots \\times C_{q_k}$ be the finite abelian group 
of order $n$ where $C_{q_i}$ is the cyclic group of order $q_i$ and $q_i$ is a prime power for all $i$ .", "Let $L/K$ be a degree $n$ field extension containing subextensions $L_i/K$ such that the extension $L_i/K$ has a primitive element $\alpha _i$ with $\alpha _i^{q_i} \in K$ , we have $\deg (L_i/K) = q_i$ , and $\prod _i L_i = L$ .", "Then every flag type realizable for $G$ is realizable for $L/K$ .", "For each $i$ , let $e_i$ be a generator of $C_{q_i}$ .", "For any flag $\mathcal {F}= \lbrace F_i\rbrace _{i \in [n]}$ of $G$ , there is a sequence $\lbrace 1=v_0,\dots ,v_{n-1}\rbrace $ where $F_i = \lbrace v_0,\dots ,v_i\rbrace $ .", "For $i \in [n]$ , let $i_1,\dots ,i_k$ be integers such that $v_i = e_1^{i_1}\dots e_k^{i_k}$ .", "Let $\alpha _i$ be a primitive element of $L_i/K$ such that $\alpha _i^{q_i} \in K$ .", "Let $v_i^{\prime } = \alpha _1^{i_1}\dots \alpha _k^{i_k}$ and let $\mathcal {F}^{\prime } = \lbrace F^{\prime }_i\rbrace _{i \in [n]}$ be the flag of $L/K$ defined by $F_i^{\prime } = K\langle v_0^{\prime },\dots ,v_i^{\prime }\rangle $ .", "Then a computation shows that $T_{\mathcal {F}} = T_{\mathcal {F}^{\prime }}$ ." ], [ "Proof that the flag types $T(\mathfrak {T})$ for prime $\mathfrak {T}$ form the set of minimal flag types for certain $n$", "Our purpose in the following two sections will be to prove the following two theorems.", "* Combine completethm, notcompletethm, and minimality.", "* Combine completethm and notcompletethm.", "We prove completethm in completesect and we prove notcompletethm in notcompletesect.", "For simplicity of exposition, we assume in this section that all flags are flags of abelian groups; the proofs for field extensions are entirely analogous.", "Simply replace the word “cardinality” with “dimension”, the word “group” with “field”, the word “order” (of an element in the group) with “degree” (of an element in the field extension), and the phrase “union of cosets” with “vector space”.", "Moreover, replace any use of kneserorig with zemorthm.", "Theorem 5.1 Suppose $n$ is of the following form: $n = p^k$ , with $p$ prime and $k \ge 1$ ; $n = pq$ , with $p$ and $q$ distinct primes; or $n = 12$ .", "Then, for every realizable flag type $T$ , there exists a flag type $T(p_1,\dots ,p_t)$ such that $T \ge T(p_1,\dots ,p_t)$ .", "We separate into the following cases: for $n = p^k$ for $p$ prime and $k \ge 1$ , apply primepower; for $n = 2p$ for $p$ an odd prime, apply 2prefined; for $n = pq$ for $p$ and $q$ distinct odd primes, apply largepqcase; and for $n = 12$ , apply 12.", "The rest of this section is devoted to proving 2prefined, largepqcase, primepower, and 12.", "Remark 5.2 For a tower type $\mathfrak {T}= (n_1,\dots ,n_t)$ , we denote the flag type $T(\mathfrak {T}) = T((n_1,\dots ,n_t))$ by $T(n_1,\dots ,n_t)$ .", "When $t = 2$ , we caution the reader to not confuse $T(n_1,n_2)$ with evaluating some flag type $T$ at the numbers $n_1$ and $n_2$ .", "Proposition 5.3 Suppose $n = 2p$ for $p$ an odd prime.", "For every realizable flag type $T$ , we have $T \ge T(2,p)$ or $T \ge T(p,2)$ .", "Assume for the sake of contradiction that there exists a flag $\mathcal {F}$ such that $T_{\mathcal {F}} \lnot \ge T(p,2)$ and $T_{\mathcal {F}} \lnot \ge T(2,p)$ .", "By testflagineqlemma, there exist integers $0 < i_2 \le j_2 < i_2 + j_2 < 2p$ such that $i_2 + j_2$ does not overflow modulo $(p,2)$ and $T_{\mathcal {F}}(i_2,
j_2) < i_2 + j_2$ .", "By overflowlemma we must have that the addition $i_2 + j_2$ overflows modulo 2.", "Moreover, $\operatorname{\vert }\operatorname{Stab}(F_{i_2}F_{j_2})F_{i_2}\operatorname{\vert }= \operatorname{\vert }F_{i_2} \operatorname{\vert }.$ Hence $F_{i_2} = \operatorname{Stab}(F_{i_2}F_{j_2})F_{i_2}$ , so $F_{i_2}$ is a union of cosets of the group $\operatorname{Stab}(F_{i_2}F_{j_2})$ of order 2.", "Similarly, because $T_{\mathcal {F}} \lnot \ge T(2,p)$ , there must exist integers $0 < i_p \le j_p < i_p + j_p < 2p$ such that $i_p + j_p$ does not overflow modulo $(2,p)$ and $T_{\mathcal {F}}(i_p, j_p) < i_p + j_p$ .", "Again, by overflowlemma we must have that $i_p + j_p$ overflows modulo $p$ and $\operatorname{Stab}(F_{i_p}F_{j_p})$ is a group of order $p$ .", "Because $i_p + j_p \ge p$ , we have $j_p \ge \frac{p+1}{2}$ .", "Because $F_{i_2}$ is a union of cosets of a group of order 2, it cannot be contained in a group of order $p$ .", "Therefore, we must have $F_{j_p} \subseteq F_{i_2}$ and hence $i_2 + 1 &= \operatorname{\vert }F_{i_2} \operatorname{\vert }\\&= \operatorname{\vert }\operatorname{Stab}(F_{i_2}F_{j_2})F_{i_2} \operatorname{\vert }\\&\ge \operatorname{\vert }\operatorname{Stab}(F_{i_2}F_{j_2})F_{j_p} \operatorname{\vert }\\&= 2\big (\frac{p+1}{2} + 1\big ) = p+3.$ Thus $i_2 + j_2 \ge 2p$ , which is a contradiction.", "Proposition 5.4 Suppose $n = pq$ for $p$ and $q$ distinct odd primes.", "Then for every realizable flag type $T$ , we have $T \ge T(p,q)$ or $T \ge T(q,p)$ .", "Assume for the sake of contradiction that there exists a flag $\mathcal {F}$ such that $T_{\mathcal {F}} \lnot \ge T(p,q)$ and $T_{\mathcal {F}} \lnot \ge T(q,p)$ .", "By testflagineqlemma, there exist integers $0 < i_q \le j_q < i_q + j_q < pq$ such that $i_q + j_q$ does not overflow modulo $(p,q)$ , but $T_{\mathcal {F}}(i_q, j_q) < i_q + j_q$ .", "By overflowlemma we must have that $i_q + j_q$ overflows modulo $q$ and $K_q = \operatorname{Stab}(F_{i_q} F_{j_q})$ is a group of order $q$ .", "Similarly, because $T_{\mathcal {F}} \lnot \ge T(q,p)$ , there must exist integers $0 < i_p \le j_p < i_p + j_p < pq$ such that $i_p + j_p$ does not overflow modulo $(q,p)$ , but $T_{\mathcal {F}}(i_p, j_p) < i_p + j_p$ .", "By overflowlemma we must have that $i_p + j_p$ overflows modulo $p$ and $K_p = \operatorname{Stab}(F_{i_p}F_{j_p})$ is a group of order $p$ .", "If $i_q \le i_p$ and $j_q \le j_p$ then $K_q \subseteq F_{i_q} F_{j_q} \subseteq F_{i_p} F_{j_p},$ and because $F_{i_p}F_{j_p}$ is a union of cosets of $K_p$ , we have $G = K_pK_q \subseteq F_{i_p}F_{j_p},$ which is a contradiction.", "Similarly, it is not possible for $i_p \le i_q$ and $j_p \le j_q$ .", "Hence without loss of generality we may suppose $i_p \le i_q \le j_q \le j_p$ .", "Write $i_p$ and $j_p$ in mixed radix notation with respect to $(p,q)$ and write $i_q$ and $j_q$ in mixed radix notation with respect to $(q,p)$ as $i_p &= i_{1,p}p + i_{2,p} \\j_p &= j_{1,p}p + j_{2,p} \\i_q &= i_{1,q}q + i_{2,q} \\j_q &= j_{1,q}q + j_{2,q}.$ By overflowlemma, we have that $\operatorname{\vert }K_p F_{i_p}\operatorname{\vert }&= \operatorname{\vert }K_p\operatorname{\vert }(i_{1,p}+1) \\\operatorname{\vert }K_p F_{j_p}\operatorname{\vert }&= \operatorname{\vert }K_p\operatorname{\vert }(j_{1,p}+1) \\\operatorname{\vert }K_q F_{i_q}\operatorname{\vert }&= \operatorname{\vert }K_q\operatorname{\vert }(i_{1,q}+1) \\\operatorname{\vert }K_q
F_{j_q}\\operatorname{\\vert }&= \\operatorname{\\vert }K_q\\operatorname{\\vert }(j_{1,q}+1).$ Note that $i_q + 1 = \\operatorname{\\vert }F_{i_q}\\operatorname{\\vert }\\le \\frac{\\operatorname{\\vert }K_q F_{i_q}\\operatorname{\\vert }}{\\operatorname{\\vert }K_q \\operatorname{\\vert }}\\frac{\\operatorname{\\vert }K_p F_{i_q}\\operatorname{\\vert }}{\\operatorname{\\vert }K_p \\operatorname{\\vert }}\\le (i_{1,q}+1)\\frac{\\operatorname{\\vert }K_p F_{j_p}\\operatorname{\\vert }}{\\operatorname{\\vert }K_p \\operatorname{\\vert }}= (i_{1,q}+1)(j_{1,p}+1)$ $j_q + 1= \\operatorname{\\vert }F_{j_q}\\operatorname{\\vert }\\le \\frac{\\operatorname{\\vert }K_q F_{j_q}\\operatorname{\\vert }}{\\operatorname{\\vert }K_q \\operatorname{\\vert }}\\frac{\\operatorname{\\vert }K_p F_{j_q}\\operatorname{\\vert }}{\\operatorname{\\vert }K_p \\operatorname{\\vert }} \\le (j_{1,q}+1)\\frac{\\operatorname{\\vert }K_p F_{j_p}\\operatorname{\\vert }}{\\operatorname{\\vert }K_p \\operatorname{\\vert }}= (j_{1,q}+1)(j_{1,p}+1).$ Thus $i_q + j_q &< (i_{1,q}+1)(j_{1,p}+1) + (j_{1,q}+1)(j_{1,p}+1) \\\\&= (i_{1,q}+j_{1,q}+2)(j_{1,p}+1) \\\\&\\le p(j_{1,p}+1) \\\\&\\le j_p.$ Because $i_q + j_q \\le j_p$ , we have $K_q \\subseteq F_{i_q + j_q - 1} \\subseteq F_{j_p}$ and because $F_{i_p}F_{j_p}$ union of $K_p$ -cosets, we have $K = K_qK_p \\subseteq F_{i_p}F_{j_p},$ which is a contradiction.", "Proposition 5.5 Suppose $n = p^k$ for $p$ and odd prime and $k \\ge 1$ .", "Then for every realizable flag type $T$ , we have $T \\ge T(p,\\dots ,p)$ .", "Suppose there exists a realizable flag type $T$ such that $T_{\\mathcal {F}} \\lnot \\ge T(p,\\dots ,p)$ .", "By testflagineqlemma, there exists integers $0 < i \\le j < n$ such that $i + j$ does not overflow modulo $(p,\\dots ,p)$ and $T(i,j) < i+j$ .", "By overflowlemma, $i + j$ must overflow over some positive integer $m$ such that $m \\mid p^k$ , which is a contradiction.", "Proposition 5.6 Suppose $n = 12$ .", "Then for every realizable flag type $T$ , we have $T \\ge T(3,2,2)$ or $T \\ge T(2,3,2)$ or $T \\ge T(2,2,3)$ .", "Assume for the sake of contradiction that there exists a flag $\\mathcal {F}$ such that $T_{\\mathcal {F}} \\lnot \\ge T(3,2,2)$ , $T_{\\mathcal {F}} \\lnot \\ge T(2,3,2)$ , and $T_{\\mathcal {F}} \\lnot \\ge T(2,2,3)$ .", "By testflagineqlemma, there exists $i_1$ , $i_2$ , $i_3$ , $j_1$ , $j_2$ , $j_3$ such that $0 < i_1 \\le j_1 &< i_1 + j_1 < 12 \\\\0 < i_2 \\le j_2 &< i_2 + j_2 < 12 \\\\0 < i_3 \\le j_3 &< i_3 + j_3 < 12,$ and $i_1 + j_1$ (resp.", "$i_2 + j_2$ , $i_3 + j_3$ ) does not overflow modulo $(3,2,2)$ (resp.", "$(2,3,2)$ , $(2,2,3)$ ), and $T_{\\mathcal {F}}(i_1 + j_1) &> i_1 + j_1 \\\\T_{\\mathcal {F}}(i_2 + j_2) &> i_2 + j_2 \\\\T_{\\mathcal {F}}(i_3 + j_3) &> i_3 + j_3.$ By overflowlemma we have that $K_1 = \\operatorname{Stab}(F_{i_1}F_{j_1})$ $K_2 = \\operatorname{Stab}(F_{i_2}F_{j_2})$ $K_3 = \\operatorname{Stab}(F_{i_3} F_{j_3})$ are a nontrivial proper subgroups; let $m_1 = \\vert K_1 \\operatorname{\\vert }$ , let $m_2 = \\operatorname{\\vert }K_2 \\operatorname{\\vert }$ , and let $m_3 = \\operatorname{\\vert }K_3 \\operatorname{\\vert }$ .", "The list of possible triples $(i_1,j_1,m_1)$ for which $0 < i_1 \\le j_1 < i_1 + j_1 < 12$ , the addition $i_1 + j_1$ overflows modulo $m_1$ , we have $1 < m_1 < 12$ , we have $m_1 \\mid 12$ , and the addition $i_1 + j_1$ does not overflow modulo $(3,2,2)$ is $\\lbrace (1,1,2), (1,3,2), (1,3,4), (1,7,2), (1,7,4), (1,9,2), (2,3,4), (2,6,4), (3,6,4), (3,7,2), (3,7,4)\\rbrace .$ The 
list of possible triples $(i_2,j_2,m_2)$ for which $0 < i_2 \le j_2 < i_2 + j_2 < 12$ , the addition $i_2 + j_2$ overflows modulo $m_2$ , we have $1 < m_2 < 12$ , we have $m_2 \mid 12$ , and the addition $i_2 + j_2$ does not overflow modulo $(2,3,2)$ is $\lbrace (1,2,3), (1,8,3), (2,2,3), (2,2,4), (2,3,4), (2,6,4), (2,7,3), (2,7,4), (2,8,3), (3,6,4)\rbrace .$ The list of possible triples $(i_3,j_3,m_3)$ for which $0 < i_3 \le j_3 < i_3 + j_3 < 12$ , the addition $i_3 + j_3$ overflows modulo $m_3$ , we have $1 < m_3 < 12$ , we have $m_3 \mid 12$ , and the addition $i_3 + j_3$ does not overflow modulo $(2,2,3)$ is $\lbrace (1,2,3), (1,8,3), (2,4,3), (2,4,6), (2,5,3), (2,5,6), (2,8,3), (3,4,6), (4,4,6), (4,5,3), (4,5,6)\rbrace .$ We will show that every combination of integers $(i_1,j_1,m_1)$ , $(i_2,j_2,m_2)$ , and $(i_3,j_3,m_3)$ from the lists above gives rise to a contradiction.", "Because $i_1 < m_1$ , by overflowlemma, we have $K_1F_{i_1} = K_1$ , so $F_{i_1} \subseteq K_{1}$ .", "Write $F_1 = \lbrace 1, v_1\rbrace $ .", "Then $\operatorname{ord}(v_1) \mid m_1 \mid 4$ .", "If $\operatorname{ord}(v_1) = 4$ then as $\operatorname{ord}(v_1) \mid m_1 \mid 4$ , we have that $m_1 = 4$ .", "If $i_3 < m_3$ then $F_{i_3} \subseteq K_{3}$ and thus $\operatorname{ord}(v_1) \mid m_{3} \mid 6$ , which is a contradiction.", "Hence $i_3 > m_3$ so we must have $m_3 = 3$ and $i_3 = 4$ and $j_3 = 5$ .", "Moreover, by overflowlemma, $\operatorname{\vert }K_{3}F_5\operatorname{\vert }= 2 \operatorname{\vert }K_3\operatorname{\vert }$ .", "Because $m_1=4$ and $m_3=3$ are coprime, we have $F_{i_1} \subseteq K_{1}$ and $i_1+1 = \operatorname{\vert }F_{i_1} \operatorname{\vert }= \frac{\operatorname{\vert }K_3 F_{i_1}\operatorname{\vert }}{\operatorname{\vert }K_3 \operatorname{\vert }} \le \frac{\operatorname{\vert }K_3 F_{5}\operatorname{\vert }}{\operatorname{\vert }K_3 \operatorname{\vert }} = 2,$ which is a contradiction.", "Thus $\operatorname{ord}(v_1) = 2$ , and therefore without loss of generality we may suppose $i_1 = j_1 = 1$ and $m_1 = 2$ , as $\operatorname{ord}(v_1) = 2$ implies that $F_1F_1 = F_1.$ Notice that $i_2 < m_2$ , so $F_{i_2} \subseteq K_{2}$ and thus $2 = \operatorname{ord}(v_1) \mid m_2$ ; hence $m_2 = 4$ .", "If $m_3 = 3$ , then if $i_3 < m_3$ we have $F_{i_3} \subseteq K_{3}$ and thus $\operatorname{ord}(v_1) \mid m_{3} = 3$ , which is a contradiction.", "Thus $i_3 > m_3$ , so $i_3 = 4$ and $j_3 = 5$ .", "By overflowlemma, $\operatorname{\vert }K_{3}F_5\operatorname{\vert }= 2\operatorname{\vert }K_3\operatorname{\vert }$ ; thus $F_5 = K_{3}F_1$ is a group of order 6.", "However, $K_{3}F_1 &= \operatorname{Stab}(F_5) \\&\subseteq \operatorname{Stab}(F_4F_5) \\&= K_3,$ which is a contradiction.", "Hence, $m_3 = 6$ .", "Because $i_2 < m_2$ and $i_3 < m_3$ and $2 \le i_2,i_3$ , we have $F_2 \subseteq K_{2} \cap K_{3}$ , which is a contradiction, because $\operatorname{\vert }K_{2} \cap K_{3} \operatorname{\vert }\le \gcd (m_2,m_3) = \gcd (4,6) = 2.$" ], [ "Proof that the flag types $T(p_1,\dots ,p_t)$ do not form the set of minimal flag types for certain $n$", "Theorem 6.1 Suppose $n$ is not of the following form: $n = p^k$ , with $p$ prime and $k \ge 1$ ; $n = pq$ , with $p$ and $q$ distinct primes; or $n = 12$ .", "Then there exists a realizable flag type $T$ such that $P_T \lnot \subseteq \bigcup _{(p_1,\dots
,p_t)}P_{T(p_1,\\dots ,p_t)}.$ By refconjcounterexextensionlemma, proving the statement of notcompletethm for degree $n$ implies the statement of notcompletethm for degree $nt$ for any positive integer $t$ .", "Therefore, it suffices to prove the theorem when: $n = p^2q$ for two distinct odd primes $p$ and $q$ with $p < q$ , in which case p2q provides a proof; $n = pqr$ for three primes $p$ , $q$ , and $r$ with $p < q \\le r$ , in which case pqr provides a proof; or $n = 4p$ for a prime $p \\ne 2,3$ , in which case 4p provides a proof.", "The rest of this section is dedicated to proving refconjcounterexextensionlemma, p2q, pqr, and 4p.", "Lemma 6.2 Let $n,m \\in \\mathbb {Z}_{>1}$ be such that $m \\mid n$ .", "If there exists a realizable flag type $T$ of degree $m$ such that $P_T \\lnot \\subseteq \\bigcup _{(p_1,\\dots ,p_t)}P_{T(p_1,\\dots ,p_t)}$ where $(p_1,\\dots ,p_t)$ range across all tuples with prime entries such that $p_1\\dots p_t = m$ , then there exists a realizable flag type $T^{\\prime }$ of degree $n$ such that $P_{T^{\\prime }} \\lnot \\subseteq \\bigcup _{(p^{\\prime }_1,\\dots ,p^{\\prime }_t)}P_{T(p^{\\prime }_1,\\dots ,p^{\\prime }_t)}$ where $(p^{\\prime }_1,\\dots ,p^{\\prime }_t)$ range across all tuples with prime entries such that $p^{\\prime }_1\\dots p^{\\prime }_t = n$ .", "The polyhedron $P_T$ has dimension $m-1$ .", "Therefore, the set $S = \\bigg (P_T \\setminus \\bigcup _{(p_1,\\dots ,p_t)}P_{T(p_1,\\dots ,p_t)}\\bigg ) \\cap \\lbrace \\mathbf {x} \\in \\mathbb {R}^{m-1} \\; \\vert \\; x_k \\ne x_{i} + x_j \\; \\forall \\; 1 \\le i,j,k < m \\rbrace $ is nonempty.", "Let $G$ be a finite abelian group of cardinality $m$ with a flag $\\mathcal {F}$ realizing $T$ .", "Let $\\lbrace v_0,\\dots ,v_{m-1}\\rbrace $ be a sequence of elements in $G$ such that $F_i = \\lbrace v_0,\\dots ,v_{i}\\rbrace $ .", "Let $G^{\\prime } = G \\times C_p$ , where $C_p$ is a cyclic group of order $p$ .", "Let $e$ be a generator of $C_p$ .", "Define the sequence $\\lbrace 1 = v^{\\prime }_0,\\dots ,v^{\\prime }_{n-1} \\rbrace $ by $v^{\\prime }_i = v_{i_2}e^{i_1}$ for $i = i_1 + i_2 p$ in mixed radix notation with respect to $(p,m)$ .", "Define a flag $\\mathcal {F}^{\\prime } = \\lbrace F^{\\prime }_i\\rbrace _{i \\in [n]}$ of $G^{\\prime }$ by $F^{\\prime }_i = \\lbrace v^{\\prime }_0,\\dots ,v^{\\prime }_i\\rbrace $ .", "Choose $\\mathbf {x} = (x_1,\\dots ,x_{m-1}) \\in S$ .", "Let $\\epsilon = \\min _{1 \\le i < m-1}\\lbrace x_{i+1} - x_i\\rbrace $ and note that $\\epsilon \\ne 0$ .", "Define the point $\\mathbf {x}^{\\prime } = (x_1^{\\prime },\\dots ,x_{n-1}^{\\prime }) \\in \\mathbb {R}^{n-1}$ by $x_i^{\\prime } = \\epsilon \\frac{i_1}{2p} + x_{i_2}.$ We will show that $\\mathbf {x}^{\\prime } \\in P_{T^{\\prime }}$ .", "We first show that $0\\le x^{\\prime }_1\\le \\dots \\le x^{\\prime }_{mp-1}$ .", "For $1 \\le i < n-1$ , write $i = i_1 + i_2 p$ in mixed radix notation with respect to $(p,m)$ .", "If $i_1 \\ne p-1$ , then $x_{i+1}^{\\prime } = \\epsilon \\frac{i_1 + 1}{2p} + x_{i_2} \\ge \\epsilon \\frac{i_1}{2p} + x_{i_2} = x_i^{\\prime }.$ If $i_1 = p-1$ then $x_{i+1}^{\\prime } = x_{i_2+1} \\ge \\epsilon + x_{i_2} \\ge \\epsilon \\frac{i_1}{2p} + x_{i_2} = x_i^{\\prime }.$ For any $1 \\le i,j,k < n$ , we now show that $x^{\\prime }_{T_{\\mathcal {F}^{\\prime }}(i,j)} \\le x^{\\prime }_i + x^{\\prime }_j$ .", "Let $k = T_{\\mathcal {F}^{\\prime }}(i,j)$ .", "Write $i &= i_1 + i_2 p \\\\j &= j_1 + j_2 p \\\\k &= k_1 + k_2 p$ in mixed radix notation with respect to 
$(p,m)$ .", "Then we have $k_1 \\le i_1+j_1$ so $x_{k_2}\\le x_{i_2}+x_{j_2}$ .", "Then $x^{\\prime }_k &= \\epsilon \\frac{k_1}{2p} + x_{k_2} \\\\&\\le \\epsilon \\frac{k_1}{2p} + x_{k_2}\\\\&\\le \\epsilon \\bigg (\\frac{i_1}{2p} + \\frac{j_1}{2p}\\bigg ) + x_{i_2} + x_{j_2} \\\\&\\le \\epsilon \\frac{i_1}{2p} + x_{i_2} + \\epsilon \\frac{j_1}{2p} + x_{j_2} \\\\&\\le x^{\\prime }_i + x^{\\prime }_j.$ We will now prove that $\\mathbf {x}^{\\prime } \\notin \\bigcup _{(p^{\\prime }_1,\\dots ,p^{\\prime }_t)}P_{T(p^{\\prime }_1,\\dots ,p^{\\prime }_t)}$ where $(p_1,\\dots ,p_t)$ ranges across all tuples with prime entires such that $\\prod _i p^{\\prime }_i = n$ .", "Fix such a tuple $(p^{\\prime }_1,\\dots ,p^{\\prime }_t)$ .", "If $p^{\\prime }_1 \\ne p$ then $x_1^{\\prime }+x_{p-1}^{\\prime } = \\frac{\\epsilon }{2} < x_1 = x^{\\prime }_p,$ so $\\mathbf {x}^{\\prime } \\notin P_{T(p_1,\\dots ,p_t)}$ because $1 + (p-1)$ does not overflow modulo $(p_1,\\dots ,p_t)$ and $x_p^{\\prime } > x^{\\prime }_1 + x^{\\prime }_{p-1}$ .", "If $p^{\\prime }_1 = p$ then $p^{\\prime }_2\\dots p^{\\prime }_t = m$ .", "Because $\\mathbf {x} \\notin P_{T(p_2^{\\prime },\\dots ,p_t^{\\prime })}$ , we can choose $1 \\le i \\le j < i+j < m$ such that $i+j$ does not overflow modulo $(p^{\\prime }_2,\\dots ,p^{\\prime }_t)$ and $x_{i+j} > x_i + x_j$ .", "Note that $pi + pj$ does not overflow modulo $(p^{\\prime }_1,\\dots ,p^{\\prime }_t)$ and $x_{pi+pj}^{\\prime } = x_{i+j} > x_{i} + x_{j} = x_{pi}^{\\prime } + x_{pj}^{\\prime }.$ Thus $\\mathbf {x}^{\\prime } \\notin P_{T(p^{\\prime }_1,\\dots ,p^{\\prime }_t)}$ .", "Proposition 6.3 Let $p$ and $q$ be two distinct odd primes with $p < q$ and let $n = p^2q$ .", "Then there exists a realizable flag type $T$ such that $P_T \\lnot \\subseteq P_{T(p,p,q)} \\cup P_{T(p,q,p)} \\cup P_{T(q,p,p)}.$ Let $G = C_q \\times C_p \\times C_q$ and let $e_i$ be a generator of the $i$ -th component of $G$ .", "Define a sequence $\\lbrace 1=v_0,\\dots ,v_{n - 1}\\rbrace $ as follows.", "For $0 \\le i < q$ , set $v_i = e_1^{i}$ .", "For $q \\le i < n$ and $1 \\le i^{\\prime } < n$ , write $i^{\\prime } = i^{\\prime }_1 + i^{\\prime }_2p + i_3^{\\prime }pq$ in mixed radix notation with respect to $(p,q,p)$ .", "Inductively define $v_i$ as follows.", "Choose $i^{\\prime }$ minimal such that $e_2^{i^{\\prime }_1}e_1^{i^{\\prime }_2}e_3^{i^{\\prime }_3} \\notin \\lbrace v_0,\\dots ,v_{i-1}\\rbrace $ , and set $v_i = e_2^{i^{\\prime }_1}e_1^{i^{\\prime }_2}e_3^{i^{\\prime }_3}$ .", "Observe that for $i \\ge pq$ , $e_2^{i^{\\prime }_1}e_1^{i^{\\prime }_2}e_3^{i^{\\prime }_3} = e_2^{i_1}e_1^{i_2}e_3^{i_3}$ .", "Define a flag $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ by $F_i = \\lbrace v_0,\\dots ,v_i\\rbrace $ .", "Because $F_1 &= \\lbrace 1, e_1 \\rbrace \\\\F_{q-1} &= \\langle e_1 \\rangle \\\\F_{pq+1} &= \\langle e_1,e_2 \\rangle \\cup \\lbrace e_3, e_2e_3 \\rbrace \\\\F_{pq+p-1} &= \\langle e_1,e_2 \\rangle \\cup \\langle e_2 \\rangle \\lbrace e_3\\rbrace = \\langle e_2\\rangle ( \\langle e_1\\rangle \\cup \\lbrace e_3\\rbrace ) \\\\F_{2pq+p-1} &= \\langle e_1, e_2\\rangle \\lbrace 1, e_3\\rbrace \\cup \\langle e_2\\rangle \\lbrace e_3^2\\rbrace ,$ we have $F_1 F_{q-1} = F_{q-1}$ and $F_{pq+1}F_{pq+p-1} &= (\\langle e_1, e_2 \\rangle \\cup \\lbrace e_3, e_2e_3 \\rbrace )(\\langle e_2 \\rangle (\\langle e_1 \\rangle \\cup \\lbrace e_3\\rbrace ))\\\\&= \\langle e_1, e_2\\rangle \\lbrace 1, e_3\\rbrace \\cup \\langle e_2 \\rangle \\lbrace e_3^2\\rbrace \\\\&= F_{2pq+p-1}.$ We 
claim there exists $\mathbf {x} \in P_{T_{\mathcal {F}}}$ such that $x_q > x_1 + x_{q-1}$ and $x_{2pq+p} > x_{pq + 1} + x_{pq + p - 1}$ .", "In particular, for $1 \le i < q$ set $x_i = \frac{i}{2q}$ .", "For $q \le i < pq$ set $x_i = 1$ .", "For $pq \le i < p^2q$ write $i = i_1 + i_2p + i_3pq$ in mixed radix notation with respect to $(p,q,p)$ and set $x_i = i_1 \frac{1}{4pq} + i_2 \frac{1}{2q} + i_3$ .", "We first show that $0 \le x_1 \le \dots \le x_{n-1}$ .", "If $1\le i < q-1$ , then $x_{i+1} = \frac{i+1}{2q} \ge \frac{i}{2q} = x_i.$ Note also that $x_q = 1 > \frac{q-1}{2q} = x_{q-1}$ .", "If $q\le i < pq-1$ , then $x_{i+1} = 1 = x_i.$ If $pq\le i < n-1$ then write $i+1 = (i+1)_1 + (i+1)_2p + (i+1)_3pq$ in mixed radix notation with respect to $(p,q,p)$ as well.", "If $i_1 = p-1$ and $i_2 = q-1$ , then $(i+1)_1 = 0$ and $(i+1)_2 = 0$ and $(i+1)_3 = i_3 + 1$ .", "Then $x_{i+1} &= (i+1)_1 \frac{1}{4pq} + (i+1)_2 \frac{1}{2q} + (i+1)_3 \\&= (i_3 + 1) \\& > i_1 \frac{1}{4pq} + i_2 \frac{1}{2q} + i_3 \\&= x_i.$ Instead if $i_1 = p-1$ and $i_2 \ne q-1$ then $(i+1)_1 = 0$ and $(i+1)_2 = i_2 + 1$ and $(i+1)_3 = i_3$ .", "Then $x_{i+1} &= (i+1)_1 \frac{1}{4pq} + (i+1)_2 \frac{1}{2q} + (i+1)_3 \\&= (i_2+1) \frac{1}{2q} + i_3 \\&> i_1 \frac{1}{4pq} + i_2 \frac{1}{2q} + i_3 \\&= x_i.$ Finally, if $i_1 \ne p-1$ then $(i+1)_1 = i_1+1$ and $(i+1)_2 = i_2$ and $(i+1)_3 = i_3$ .", "Then $x_{i+1} &= (i+1)_1 \frac{1}{4pq} + (i+1)_2 \frac{1}{2q} + (i+1)_3 \\&= (i_1+1) \frac{1}{4pq} + i_2\frac{1}{2q} + i_3 \\&> i_1 \frac{1}{4pq} + i_2 \frac{1}{2q} + i_3 \\&= x_i.$ Therefore, $0 \le x_1 \le \dots \le x_{n-1}$ .", "We now show that for all $1\le i,j < n$ , we have $x_{T_{\mathcal {F}}(i,j)} \le x_i + x_j$ .", "Let $k = T_{\mathcal {F}}(i,j)$ .", "Case 1: $1 \le i < q$ and $1 \le j < q$.", "Then $k = \min (q-1, i+j)$ .", "Thus $x_k = \frac{k}{2q} \le \frac{i}{2q} + \frac{j}{2q} = x_i + x_j.$ Case 2: $1 \le i < q$ and $q \le j < pq$.", "Then as $F_{pq-1} = \langle e_1,e_2 \rangle $ , we have $j \le k < pq$ .", "Thus $x_k = 1 \le \frac{i}{2q} + 1 = x_i + x_j.$ Case 3: $1 \le i < q$ and $pq \le j < n$.", "Write $j = j_1 + j_2p + j_3pq$ in mixed radix notation with respect to $(p,q,p)$ .", "Then $j \le k \le j_1 + \min (q-1,i+j_2)p + j_3pq$ .", "Then $x_k &\le x_{j_1 + \min (q-1,i+j_2)p + j_3pq} \\&= j_1 \frac{1}{4pq} + \min (q-1,i+j_2) \frac{1}{2q} + j_3 \\&\le \frac{i}{2q} + j_1 \frac{1}{4pq} + j_2 \frac{1}{2q} + j_3 \\&= x_i + x_j.$ Case 4: $q \le i < pq$ and $q \le j < pq$.", "Then as $F_{pq-1} = \langle e_1,e_2 \rangle $ , we have $j \le k < pq$ .", "Thus $x_k = 1 \le 1 + 1 = x_i + x_j.$ Case 5: $q \le i < pq$ and $pq \le j < n$.", "Write $j = j_1 + j_2p + j_3pq$ in mixed radix notation with respect to $(p,q,p)$ .", "Because $F_{pq-1} = \langle e_1,e_2\rangle $ , we have $j \le k \le (p-1) + (q-1)p + j_3pq$ .", "Then $x_k &\le x_{(p-1) + (q-1)p + j_3pq} \\& = (p-1) \frac{1}{4pq} + (q-1) \frac{1}{2q} + j_3 \\& \le 1 + j_3 \\&\le 1 + j_1 \frac{1}{4pq} + j_2 \frac{1}{2q} + j_3 \\&= x_i + x_j.$ Case 6: $pq \le i < n$.", "Write $i &= i_1 + i_2p + i_3pq \\j &= j_1 + j_2p + j_3pq$ in mixed radix notation with respect to $(p,q,p)$ .", "Recall that for $i \ge pq$ , we have $v_i = e_2^{i_1}e_1^{i_2}e_3^{i_3}$ .", "Thus, $j \le k \le \min (p-1,i_1+j_1) + \min (q-1,i_2+j_2)p + (i_3 + j_3)pq$ .", "Then $x_k &\le x_{\min (p-1,i_1+j_1) + \min (q-1,i_2+j_2)p + (i_3 + j_3)pq} \\& = \min (p-1,i_1+j_1) \frac{1}{4pq} + \min
(q-1,i_2+j_2)\\frac{1}{2q} + (i_3 + j_3) \\\\&\\le i_1 \\frac{1}{4pq} + i_2 \\frac{1}{2q} + i_3 + j_1 \\frac{1}{4pq} + j_2 \\frac{1}{2q} + j_3 \\\\&=x_i + x_j.$ Thus, for all $1 \\le i,j < n$ , we have if $x_{T(i,j)} \\le x_i + x_j$ .", "Moreover, we have that $x_q = 1 > \\frac{1}{2q} + \\frac{q-1}{2q} = x_1 + x_{q-1}$ and $x_{2pq+p} = \\frac{1}{2q} + 2 > \\bigg (\\frac{1}{4pq} + 1 \\bigg ) + \\bigg (\\frac{p-1}{4pq} + 1 \\bigg ) = x_{pq+1} + x_{pq + p - 1}.$ Moreover $P_{T(p,p,q)} \\cup P_{T(p,q,p)} \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{p^2q-1} \\mid x_{q} \\le x_1 + x_{q-1} \\rbrace $ and $P_{T(q,p,p)} \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{p^2q-1} \\mid x_{2pq+p} \\le x_{pq+1} + x_{pq + p - 1}\\rbrace .$ However, $\\mathbf {x} \\notin P_{T(p,p,q)} \\cup P_{T(p,q,p)} \\cup P_{T(q,p,p)},$ which completes our proof.", "Lemma 6.4 Let $q$ be an odd prime.", "For $a \\in \\mathbb {Z}/q\\mathbb {Z}$ , we have $a\\bigg \\lbrace \\frac{q+1}{2},\\dots ,q-1\\bigg \\rbrace = \\bigg \\lbrace \\frac{q+1}{2},\\dots ,q-1\\bigg \\rbrace \\ (\\mathrm {mod}\\ q)$ if and only if $a \\equiv 1 \\ (\\mathrm {mod}\\ q)$ .", "The statement $a\\bigg \\lbrace \\frac{q+1}{2},\\dots ,q-1\\bigg \\rbrace = \\bigg \\lbrace \\frac{q+1}{2},\\dots ,q-1\\bigg \\rbrace \\ (\\mathrm {mod}\\ q)$ is equivalent to the statement $a\\bigg \\lbrace 1,\\dots ,\\frac{q-1}{2}\\bigg \\rbrace = \\bigg \\lbrace 1,\\dots ,\\frac{q-1}{2}\\bigg \\rbrace \\ (\\mathrm {mod}\\ q).$ If $q = 3$ , it is clear that $a \\equiv 1 \\ (\\mathrm {mod}\\ q)$ ; assume $q \\ne 3$ , and hence $q \\ge 5$ .", "Then as $a(q-1) \\equiv -a \\in \\lbrace \\frac{q+1}{2},\\dots ,q-1\\rbrace \\ (\\mathrm {mod}\\ q)$ , we must have $a \\in \\lbrace 1,\\dots ,\\frac{q-1}{2}\\rbrace \\ (\\mathrm {mod}\\ q)$ .", "If $a \\lnot \\equiv 1 \\ (\\mathrm {mod}\\ q)$ , then there exists $b \\in \\lbrace 1,\\dots ,\\frac{q-1}{2}\\rbrace \\ (\\mathrm {mod}\\ q)$ such that $ab \\in \\lbrace \\frac{q+1}{2},\\dots ,q-1\\rbrace \\ (\\mathrm {mod}\\ q)$ , which is a contradiction.", "Lemma 6.5 Let $p$ , $q$ , and $r$ be odd prime numbers such that $p < q \\le r$ .", "There exists an integer $m$ such that $q \\le m \\le \\lfloor qr/2 \\rfloor ,$ the addition $pm + pm$ overflows modulo $q$ , and the addition $m + m$ does not overflow modulo $q$ or modulo $r$ .", "If $q = r$ then let $m = q + \\bigg \\lfloor \\frac{q}{p} \\bigg \\rfloor .$ Note that $m\\operatorname{\\%}q = \\lfloor q/p \\rfloor $ , as $0 \\le \\lfloor q/p \\rfloor < q$ .", "As $0 \\le 2 \\lfloor \\frac{q}{p} \\rfloor < 2q/p \\le q$ , we have $(2m)\\operatorname{\\%}q = 2\\lfloor q/p \\rfloor $ and thus $m\\operatorname{\\%}q + m\\operatorname{\\%}q = (2m)\\operatorname{\\%}q$ , so $m + m$ does not overflow modulo $q$ .", "On the other hand, $pm = pq + p\\lfloor q/p\\rfloor $ and $0 \\le p\\lfloor q/p\\rfloor < q$ , so $(pm)\\operatorname{\\%}q = p\\lfloor q/p\\rfloor $ .", "Because $p < q$ , we have $q\\operatorname{\\%}p \\le q/2$ .", "Therefore, $(pm)\\operatorname{\\%}q + (pm)\\operatorname{\\%}q &= 2p\\bigg \\lfloor \\frac{q}{p} \\bigg \\rfloor \\\\&= 2p\\bigg (\\frac{q}{p}-\\frac{q\\operatorname{\\%}p}{p}\\bigg ) \\\\&\\ge 2p\\bigg (\\frac{q}{p}-\\frac{q}{2p}\\bigg ) \\\\&= q,$ so the addition $pm + pm$ overflows modulo $q$ .", "Because $p^{-1} \\ne 1 \\ (\\mathrm {mod}\\ q)$ , by helperhelperlemma the set $\\bigg \\lbrace 1,\\dots ,\\frac{q-1}{2}\\bigg \\rbrace \\bigcap p^{-1}\\bigg \\lbrace \\frac{q+1}{2},\\dots ,q-1\\bigg \\rbrace \\ (\\mathrm {mod}\\ q)$ is nonempty.", "Choose an 
element $\\ell \\in \\mathbb {Z}/q\\mathbb {Z}$ contained in the set above.", "Observe that $q(r-1)/2 + (q-1)/2 = \\bigg \\lfloor \\frac{qr}{2} \\bigg \\rfloor .$ and let $q \\le \\ell _1,\\dots , \\ell _{\\frac{r+1}{2}}$ be the lifts of $\\ell $ to $[q, \\lfloor qr/2 \\rfloor ]$ .", "Because $q \\ne r$ , the lifts $\\ell _1,\\dots , \\ell _{\\frac{r+1}{2}}$ all have distinct values modulo $r$ by the Chinese remainder theorem.", "Thus, there exists $\\ell _k$ such that $\\ell _k \\in \\lbrace 0,\\dots , \\frac{r-1}{2}\\rbrace \\ (\\mathrm {mod}\\ r)$ .", "Set $m = \\ell _k$ .", "To see that the addition $pm + pm$ overflows modulo $q$ , notice that $m \\in p^{-1}\\lbrace \\frac{q+1}{2},\\dots ,q-1\\rbrace \\ (\\mathrm {mod}\\ q)$ , so $(pm)\\operatorname{\\%}q \\ge \\frac{q+1}{2}$ , and hence $(pm)\\operatorname{\\%}q + (pm)\\operatorname{\\%}q \\ge q+1.$ To see that the addition $m + m$ does not overflow modulo $q$ or $r$ , observe that $m \\in \\lbrace 1,\\dots ,\\frac{q-1}{2}\\rbrace \\ (\\mathrm {mod}\\ q)$ , hence $m\\operatorname{\\%}q + m\\operatorname{\\%}q < q.$ Similarly, since $m \\in \\lbrace 1,\\dots ,\\frac{r-1}{2}\\rbrace \\ (\\mathrm {mod}\\ r)$ , we have $m\\operatorname{\\%}r + m\\operatorname{\\%}r < r.$ Proposition 6.6 Let $n = pqr$ for primes $p$ , $q$ , and $r$ with $p < q \\le r$ .", "Then there exists a realizable flag type $T$ such that $P_T \\lnot \\subseteq P_{T(p,q,r)} \\cup P_{T(p,r,q)} \\cup P_{T(q,p,r)}\\cup P_{T(q,r,p)} \\cup P_{T(r,p,q)} \\cup P_{T(r,q,p)}.$ Let $G = C_p \\times C_q \\times C_r$ and let $e_1, e_2, e_3$ be generators of each of the components.", "Define a sequence ${1=v_0,\\dots ,v_{n-1}}$ as follows.", "For $0 \\le i < p$ , set $v_i = e_1^i$ .", "For $p \\le i < n$ and $1 \\le i^{\\prime } < n$ , write $i^{\\prime } = i^{\\prime }_1 + i^{\\prime }_2q + i_3^{\\prime }pq$ in mixed radix notation with respect to $(q,p,r)$ .", "Inductively define $v_i$ as follows.", "Choose $i^{\\prime }$ minimal such that $e_2^{i^{\\prime }_1}e_1^{i^{\\prime }_2}e_3^{i^{\\prime }_3} \\notin \\lbrace v_0,\\dots ,v_{i-1}\\rbrace $ .", "Set $v_i = e_2^{i^{\\prime }_1}e_1^{i^{\\prime }_2}e_3^{i^{\\prime }_3}$ .", "Observe that for $i \\ge pq$ , we have $i = i^{\\prime }$ .", "Define a flag $\\mathcal {F}= \\lbrace F_i\\rbrace _{i \\in [n]}$ by $F_i = \\lbrace v_0,\\dots ,v_i\\rbrace $ .", "By pqrhelperlemma, there existss an integer $m$ such that $q \\le m \\le \\lfloor qr/2 \\rfloor ,$ the addition $pm + pm$ overflows modulo $q$ , and the addition $m + m$ does not overflow modulo $q$ or modulo $r$ .", "Moreover, $2pm \\le 2p \\lfloor qr/2 \\rfloor < pqr.$ Write $pm = (pm)_1 + (pm)_2 q + (pm)_3 pq$ in mixed radix notation with respect to $(q,p,r)$ .", "Then we have $F_1 &= \\lbrace 1, e_1 \\rbrace \\\\F_{p-1} &= \\langle e_1 \\rangle \\\\F_{pm} &= \\lbrace e_2^{i_1}e_1^{i_2}e_3^{i_3} \\; \\mid \\; i_1 + i_2q + i_3pq \\le pm, \\; 0 \\le i_1 < q, \\; 0 \\le i_2 < p, \\; 0 \\le i_3\\rbrace \\\\F_{2pm-1} &= \\lbrace e_2^{i_1}e_1^{i_2}e_3^{i_3} \\; \\mid \\; i_1 + i_2q + i_3pq \\le 2pm-1, \\; 0 \\le i_1 < q, \\; 0 \\le i_2 < p, \\; 0 \\le i_3\\rbrace .$ We have $F_1F_{p-1} = F_{p-1}.$ Moreover, because the addition $pm + pm$ overflows modulo $q$ , we have $(pm)_1 + (pm)_1 \\ge q$ .", "We have that $F_{pm}F_{pm} \\subseteq \\lbrace e_2^{i_1}e_1^{i_2}e_3^{i_3} \\; \\mid \\; &i_1 + i_2q + i_3pq \\le (q-1) + \\min (p-1,2(pm)_2) q + 2(pm)_3 pq, \\\\&\\; 0 \\le i_1 < q, \\; 0 \\le i_2 < p, \\; 0 \\le i_3\\rbrace .$ Because $i_1 + i_2q + i_3pq \\le (q-1) + \\min (p-1,2(pm)_2) 
q + 2(pm)_3 pq \\le 2pm-1$ , we have $F_{pm}F_{pm} \\subseteq F_{2pm-1}.$ We claim there exists $\\mathbf {x} \\in P_{T_{\\mathcal {F}}}$ such that $x_p > x_1 + x_{p-1}$ and $x_{2pm} > 2x_{pm}$ .", "For $1 \\le i < p$ set $x_i = \\frac{i}{2p}$ .", "For $p \\le i < pq$ set $x_i = 1$ .", "For $pq \\le i < pqr$ write $i = i_1 + i_2q + i_3pq$ in mixed radix notation with respect to $(q,p,r)$ and set $x_i = i_1 \\frac{1}{4pq} + i_2 \\frac{1}{2p} + i_3$ .", "We first show that $0 \\le x_1 \\le \\dots \\le x_{n-1}$ .", "If $1 \\le i < p-1$ , then $x_{i + 1} = \\frac{i+1}{2p} \\ge \\frac{i}{2p} = x_i.$ Note also that $x_p = 1 \\ge \\frac{p-1}{2p} = x_{p-1}.$ If $p \\le i < pq-1$ , $x_{i+1} = 1 = x_i.$ If $pq \\le i < n-1$ then write $i+1 = (i+1)_1 + (i+1)_2q + (i+1)pq$ in mixed radix notation with respect to $(q,p,r)$ .", "If $i_1=q-1$ and $i_2 = p-1$ then $(i+1)_1 = (i+1)_2 = 0$ and $(i+1)_3 = i_3 + 1$ .", "Then $x_{i+1} &= (i+1)_1 \\frac{1}{4pq} + (i+1)_2 \\frac{1}{2p} + (i+1)_3 \\\\&= i_3 + 1 \\\\&> i_1 \\frac{1}{4pq} + i_2 \\frac{1}{2p} + i_3 \\\\&= x_{i}.$ Instead if $i_1=q-1$ and $i_2 \\ne p-1$ then $(i+1)_1 = 0$ and $(i+1)_2=i_2+1$ and $(i+1)_3 = i_3$ .", "Then $x_{i+1} &= (i+1)_1 \\frac{1}{4pq} + (i+1)_2 \\frac{1}{2p} + (i+1)_3 \\\\&= (i_2+1) \\frac{1}{2p} + i_3 \\\\&> i_1 \\frac{1}{4pq} + i_2 \\frac{1}{2p} + i_3 \\\\&= x_{i}.$ Finally, if $i \\ne q-1$ then $(i+1)_1 = i_1+1$ and $(i+1)_2=i_2$ and $(i+1)_3 = i_3$ .", "Then $x_{i+1} &= (i+1)_1 \\frac{1}{4pq} + (i+1)_2 \\frac{1}{2p} + (i+1)_3 \\\\&= (i_1+1) \\frac{1}{4pq} + i_2 \\frac{1}{2p} + i_3 \\\\&> i_1 \\frac{1}{4pq} + i_2 \\frac{1}{2p} + i_3 \\\\&= x_{i}.$ Therefore, $0 \\le x_1 \\le \\dots \\le x_{n-1}$ .", "We now show that for all $1 \\le i,j < n$ , we have $x_{T_{\\mathcal {F}}(i,j)} \\le x_i + x_j$ .", "Case 1: $1 \\le i < p$ and $1 \\le j < p$.", "Then $k = \\min (p-1, i+j)$ .", "Then $x_k = \\frac{k}{2p} \\le \\frac{i}{2p} + \\frac{j}{2p} = x_i + x_j.$ Case 2: $1 \\le i < p$ and $p \\le j < pq$.", "Then as $F_{pq-1} = \\langle e_1,e_2\\rangle $ , we have $j < k < pq$ .", "Thus $x_k = 1 \\le \\frac{i}{2p} + 1 = x_i + x_j.$ Case 3: $1 \\le i < p$ and $pq \\le j < n$.", "Write $j = j_1 + j_2q + j_3pq$ in mixed radix notation with respect to $(q,p,r)$ .", "Then $j < k \\le j_1 + \\min (p-1,i+j_2)q + j_3pq$ .", "Then $x_k &\\le x_{j_1 + \\min (p-1,i+j_2)q + j_3pq} \\\\&= j_1 \\frac{1}{4pq} + \\min (p-1,i+j_2) \\frac{1}{2p} + j_3 \\\\&\\le \\frac{i}{2p} + j_1 \\frac{1}{4pq} + j_2 \\frac{1}{2p} + j_3 \\\\&= x_i + x_j.$ Case 4: $p \\le i < pq$ and $p \\le j < pq$.", "Then as $F_{pq-1} = \\langle e_1,e_2\\rangle $ , we have $j < k < pq$ .", "Thus $x_k = 1 \\le \\frac{i}{2q} + 1 = x_i + x_j.$ Case 5: $p \\le i < pq$ and $pq \\le j < n$.", "Write $j = j_1 + j_2q + j_3pq$ in mixed radix notation with respect to $(q,p,r)$ .", "Because $F_{pq-1} = \\langle e_1,e_2\\rangle $ we have $j < k \\le (q-1) + (p-1)q + j_3pq$ .", "Then $x_k &\\le x_{(q-1) + (p-1)q + j_3pq} \\\\& = (q-1) \\frac{1}{4pq} + (p-1) \\frac{1}{2p} + j_3 \\\\& \\le 1 + j_3 \\\\&= 1 + j_1 \\frac{1}{4pq} + j_2 \\frac{1}{2p} + j_3 \\\\&= x_i + x_j.$ Case 6: $pq \\le i < n$.", "Write $i &= i_1 + i_2q + i_3pq \\\\j &= j_1 + j_2q + j_3pq$ in mixed radix notation with respect to $(q,p,r)$ .", "Recall that for $i \\ge pq$ , we have $v_i = e_2^{i_1}e_1^{i_2}e_3^{i_3}$ .", "Thus, $j < k \\le \\min (q-1,i_1+j_1) + \\min (p-1,i_2+j_2)q + (i_3 + j_3)pq$ .", "Then $x_k &\\le x_{\\min (q-1,i_1+j_1) + \\min (p-1,i_2+j_2)p + (i_3 + j_3)pq} \\\\& = \\min (q-1,i_1+j_1) \\frac{1}{4pq} + \\min 
(p-1,i_2+j_2)\\frac{1}{2p} + (i_3 + j_3) \\\\&\\le (i_1 \\frac{1}{4pq} + i_2 \\frac{1}{2p} + i_3) + (j_1 \\frac{1}{4pq} + j_2 \\frac{1}{2p} + j_3) \\\\&=x_i + x_j.$ Thus, for any integers $1 \\le i,j < n$ , we have $x_{T_{\\mathcal {F}}(i,j)} \\le x_i + x_j$ .", "Moreover, we have that $x_p = 1 > \\frac{1}{2p} + \\frac{p-1}{2p} = x_1 + x_{p-1}.$ Write $pm &= (pm)_1 + (pm)_2q + (pm)_3pq \\\\2pm &= (2pm)_1 + (2pm)_2q + (2pm)_3pq$ in mixed radix notation with respect to $(q,p,r)$ and recall that $pm + pm$ overflows modulo $q$ .", "Therefore, either $(2pm)_3 = 2(pm)_3$ and $(2pm)_2 = 2(pm)_2 + 1$ , or $(2pm)_3 = 2(pm)_3 + 1$ .", "If $(2pm)_3 = 2(pm)_3$ and $(2pm)_2 > 2(pm)_2+1$ then $x_{2pm} &= (2pm)_1 \\frac{1}{4pq} + (2pm)_2\\frac{1}{2p} + (2pm)_3 \\\\&\\ge (2(pm)_2+1)\\frac{1}{2p} + 2(pm)_3 \\\\&> 2(pm)_1\\frac{1}{4pq} + 2(pm)_2\\frac{1}{2p} + 2(pm)_3 \\\\&= 2x_{pm}.$ Otherwise, if $(2pm)_3 = 2(pm)_3 + 1$ , then $x_{2pm} &= (2pm)_1 \\frac{1}{4pq} + (2pm)_2\\frac{1}{2p} + (2pm)_3 \\\\&\\ge 2(pm)_3 + 1 \\\\&> 2(pm)_1\\frac{1}{4pq} + 2(pm)_2\\frac{1}{2p} + 2(pm)_3 \\\\&= 2x_{pm}.$ Further note that $P_{T(q,p,r)} \\cup P_{T(q,r,p)} \\cup P_{T(r,q,p)} \\cup P_{T(r,p,q)} \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\mid x_{p} \\le x_1 + x_{p-1} \\rbrace $ and $P_{T(p,q,r)} \\cup P_{T(p,r,q)} \\subseteq \\lbrace \\mathbf {x} \\in \\mathbb {R}^{n-1} \\mid x_{2pm} \\le 2x_{pm}\\rbrace .$ We have $x_p > x_1 + x_{p-1}$ and $x_{2pm} > 2x_{pm}$ , and thus our proof is complete.", "Proposition 6.7 Let $n = 4p$ for $p$ a prime not equal to 2 or 3.", "Then there exists a realizable flag type $T$ such that $P_{T} \\lnot \\subseteq P_{T(2,2,p)} \\cup P_{T(2,p,2)} \\cup P_{T(p,2,2)}.$ Let $G = C_p \\times C_2 \\times C_2$ and let $e_1,e_2,e_3$ be generators of each of the components.", "Define a sequence $\\lbrace v_0,\\dots ,v_{4p-1}\\rbrace $ as follows.", "Set $v_0 &= 1 \\\\v_1 &= e_1 \\\\v_2 &= e_2 \\\\v_3 &= e_2 e_1 \\\\v_4 &= e_1^2 \\\\v_5 &= e_2 e_1^2 \\\\v_6 &= e_1^3 \\\\v_7 &= e_2e_1^3.$ For $8 \\le i < n$ and $1 \\le i^{\\prime } < n$ , write and $i^{\\prime } = i^{\\prime }_1 + i^{\\prime }_2p + i_3^{\\prime }2p$ in mixed radix notation with respect to $(p,2,2)$ .", "Inductively define $v_i$ as follows.", "Choose $i^{\\prime }$ minimal such that $e_1^{i^{\\prime }_1}e_2^{i^{\\prime }_2}e_3^{i^{\\prime }_3} \\notin \\lbrace v_0,\\dots ,v_{i-1}\\rbrace $ .", "Set $v_i = e_1^{i^{\\prime }_1}e_2^{i^{\\prime }_2}e_3^{i^{\\prime }_3}$ .", "Observe that for $i \\ge 2p$ , we have $i = i^{\\prime }$ .", "Note that $F_1 &= \\lbrace 1, e_1 \\rbrace \\\\F_3 &= \\langle e_2\\rangle \\lbrace 1,e_1 \\rbrace \\\\F_5 &= \\langle e_2\\rangle \\lbrace 1, e_1, e_1^2 \\rbrace \\\\F_7 &= \\langle e_2\\rangle \\lbrace 1, e_1, e_1^2, e_1^3 \\rbrace \\\\F_{3p-1} &= \\langle e_1 \\rangle \\lbrace 1, e_2, e_3 \\rbrace .$ Therefore $F_1F_{3p-1} = (\\lbrace 1, e_1 \\rbrace )( \\langle e_1\\rangle \\lbrace 1, e_2, e_3 \\rbrace ) = \\langle e_1\\rangle \\lbrace 1, e_2, e_3 \\rbrace = F_{3p-1}$ $F_3F_3 = (\\langle e_2 \\rangle \\lbrace 1,e_1 \\rbrace )(\\langle e_2 \\rangle \\lbrace 1,e_1 \\rbrace ) = \\langle e_2 \\rangle \\lbrace 1, e_1, e_1^2 \\rbrace = F_5$ $F_3F_5 = (\\langle e_2 \\rangle \\lbrace 1,e_1 \\rbrace )(\\langle e_2 \\rangle \\lbrace 1, e_1, e_1^2 \\rbrace ) = \\langle e_2 \\rangle \\lbrace 1, e_1, e_1^2,e_1^3 \\rbrace = F_7.$ Suppose $p=5$ .", "Then let $\\mathbf {x} \\in \\mathbb {R}^{19}$ be as follows: $x_1 &= 1 \\\\x_2,x &= 1.4 \\\\x_4,x_5 &= 2 \\\\x_6,x_7 &= 3 \\\\x_8,\\dots ,x_{14} &= 4 
\\\\x_{15},\\dots ,x_{19} &= 5.1.$ We claim that $\\mathbf {x} \\in P_{T_{\\mathcal {F}}}$ and $x_8 > x_5 + x_3$ and $x_{15} > x_1 + x_{14}$ .", "It is clear that $0 \\le x_1 \\le \\dots \\le x_{n-1}$ .", "We now show that for all $1\\le i,j < n$ , we have $x_{T_{\\mathcal {F}}(i,j)} \\le x_i + x_j$ .", "Let $k = T_{\\mathcal {F}}(i,j)$ .", "Case 1: $i = 1$.", "If $j = 1$ then $k = 4$ and $x_4 = 2 \\le 2x_1.$ Observe that if $j = 2,3$ then $k = 4,5$ , and $x_k = 2 \\le 1.4 + 1 \\le x_1 + x_j.$ If $j = 4,5$ , then $k = 6,7$ , and $x_k = 3 \\le 2 + 1 = x_1 + x_j.$ If $j = 6,7$ , then $k \\le 10$ , and $x_k \\le 4 \\le 3 + 1 = x_1 + x_j$ .", "If $8 \\le j < 15$ , then as $v_1 = e_1$ and $F_{14} = \\langle e_1 \\rangle \\lbrace 1, e_2, e_3 \\rbrace $ , we have that $j < k < 14$ .", "Thus $x_k \\le 4 \\le 4 + 1 = x_1 + x_j.$ If $15\\le j < n$ then $x_k \\le 5.1 \\le 5.1 + 1 \\le x_1 + x_j.$ Case 2: $i = 2,3$.", "If $j = 2,3$ , then $k = 4,5$ so $x_k = 2 \\le 1.4 + 1.4 = x_i + x_j.$ If $j = 4,5$ then $k = 6,7$ so $x_k = 3 \\le 2 + 1.4 = x_i + x_j.$ If $j = 6,7$ then $k < 15$ so $x_k \\le 4 \\le 3 + 1.4 \\le x_i + x_j.$ If $8 \\le j < n$ then $x_k \\le 5.1 \\le 4 + 1.4 \\le x_i + x_j.$ Case 3: $i = 4,5$ If $j < 10$ then because $F_9 = \\langle e_1,e_2 \\rangle $ we have $k < 10$ .", "Thus $x_k \\le 4 \\le 2 + 2 \\le x_i + x_j.$ If $10 \\le j < n$ then $x_k \\le 5.1 \\le 4 + 2 \\le x_i + x_j.$ Case 4: $i \\ge 6$.", "Then $x_k \\le 5.1 \\le 3 + 3 \\le x_i + x_j.$ Thus $\\mathbf {x} \\in P_{T_{\\mathcal {F}}}$ .", "One can see explicitly that $x_8 > x_5 + x_3$ and $x_{15} > x_1 + x_{14}$ .", "Because $P_{T(2,2,5)} \\cup P_{T(2,5,2)} \\subset \\lbrace \\mathbf {x} \\in \\mathbb {R}^{19} \\mid x_{15} \\le x_1 + x_{14} \\rbrace $ and $P_{T(5,2,2)} \\subset \\lbrace \\mathbf {x} \\in \\mathbb {R}^{19} \\mid x_8 \\le x_5 + x_3 \\rbrace ,$ our proof is complete for $p = 5$ .", "If $p \\ne 5$ , let $\\mathbf {x} \\in \\mathbb {R}^{4p-1}$ be as follows.", "$x_1 &= 1 \\\\x_2, x_3 &= 1.4 \\\\x_4, x_5 &= 2 \\\\x_6, \\dots , x_{3p-1} &= 2.9 \\\\x_{3p}, \\dots , x_{4p-1} &= 4.$ We claim that $\\mathbf {x} \\in P_{T_{\\mathcal {F}}}$ and $x_6 > x_3 + x_3$ and $x_{3p} > x_1 + x_{3p-1}$ .", "It is clear that $0\\le x_1 \\le \\dots \\le x_{n-1}$ .", "We now show that for all integers $1 \\le i,j < n$ , we have $x_{T(i,j)} \\le x_i + x_j$ .", "Let $k = T(i,j)$ .", "Case 1: $i = 1$.", "If $j = 1$ , then $k = 4$ and $x_4 = 2 \\le 2x_1.$ Observe that if $j = 2,3$ , then $k = 4,5$ , and $x_k \\le x_5 = 2 \\le 1.4 + 1 \\le x_1 + x_j.$ If $j = 4,5$ , then $k = 6,7$ , and $x_k \\le x_5 = 2.9 \\le 2 + 1 \\le x_1 + x_j.$ If $6 \\le j < 3p$ , then as $v_1 = e_1$ and $F_{3p-1} = \\langle e_1 \\rangle \\lbrace 1, e_2, e_3 \\rbrace $ , we have $j < k < 3p$ .", "Thus $x_k = 2.9 \\le 2.9 + 1 = x_j + x_1.$ If $j \\ge 3p$ , then $x_k = 4 \\le 4 + 1 \\le x_1 + x_j.$ Case 2: $i = 2,3$.", "If $j = 2,3$ , then $k \\le 5$ and $x_k = 2 \\le 1.4 + 1.4 = x_i + x_j.$ If $j = 4,5$ then $k \\le 7$ and $x_k \\le 2.9 \\le 2 + 1.4 = x_i + x_j.$ If $j \\ge 6$ , then $x_k \\le 4 \\le 2.9 + 1.4 = x_i + x_j.$ Case 3: $i \\ge 4$.", "Then we have $x_k \\le 4 \\le 2 + 2 \\le x_i + x_j.$ Therefore, for any integers $1 < i,j < n$ , we have $x_{T_{\\mathcal {F}}(i,j)} \\le x_i + x_j$ , so $x \\in P_{T_{\\mathcal {F}}}$ .", "Moreover, it is clear that $x_6 > x_3 + x_3$ and $x_{3p} > x_1 + x_{3p-1}$ .", "Again, we have $P_{T(2,2,p)} \\cup P_{T(2,p,2)} \\subset \\lbrace \\mathbf {x} \\in \\mathbb {R}^{4p-1} \\mid x_{3p} \\le x_1 + x_{3p-1} \\rbrace $ and 
because $p \\ge 7$ , we have $P_{T(p,2,2)} \\subset \\lbrace \\mathbf {x} \\in \\mathbb {R}^{4p-1} \\mid x_6 \\le x_3 + x_3 \\rbrace ,$ and thus our proof is complete." ], [ "Tower types", "Our aim in this section will be to prove the following theorem.", "* If $n = p^k$ is a prime power, minimalitycompletethm proves that $T(p,\\dots ,p)$ is the unique flag type that is minimal among flags of tower type $\\mathfrak {T}$ .", "If $n = 4$ and $\\mathfrak {T}= (4)$ apply LC4.", "If $n = 6$ , apply LC6.", "Every composite $n < 8$ is either prime, 4, or 6 the theorem is proven for $n < 8$ .", "If $n = 8$ , apply casesfor8corol.", "If $n = 2p$ for an odd prime $p$ and $\\mathfrak {T}= (2,p)$ then apply LC2p.", "Similarly, if $n = 3p$ for a prime $p$ and $\\mathfrak {T}= (3,p)$ then apply LC3p.", "We will need the following lemma.", "Lemma 7.1 Let $\\mathcal {F}$ be a flag of tower type $\\mathfrak {T}= (n_1,\\dots ,n_t)$ .", "Suppose we have $i,j,i+j \\in [n]$ such that $i + j$ does not overflow modulo $\\mathfrak {T}$ and $T_{\\mathcal {F}}(i,j) < i + j$ .", "Then there exists an integer $1 < m < n$ such that $m \\mid n$ and $i + j$ overflows modulo $m$ and $m < i$ .", "We prove the lemma when $\\mathcal {F}$ is a flag of an abelian group.", "The proof in the case of field extensions is identical.", "If $\\mathcal {F}$ is a flag of a group, let $m = \\operatorname{\\vert }\\operatorname{Stab}(F_iF_j)\\operatorname{\\vert }$ .", "Clearly $m \\mid n$ .", "By overflowlemma, we have $1 < m < n$ .", "Write $i$ , $j$ , and $i + j$ in mixed radix notation with respect to $(m, n/m)$ as $i &= i_1 + i_2 m \\\\j &= j_1 + j_2 m \\\\i+j &= (i+j)_1 + (i+j)_2 m.$ By overflowlemma, $\\operatorname{\\vert }\\operatorname{Stab}(F_iF_j)F_i \\operatorname{\\vert }&= (i_2 + 1)m \\\\\\operatorname{\\vert }\\operatorname{Stab}(F_iF_j)F_j \\operatorname{\\vert }&= (j_2 + 1)m \\\\\\operatorname{\\vert }F_iF_j \\operatorname{\\vert }&= (i_2 + j_2 + 1)m.$ Because $i + j$ overflows modulo $m$ , we have $i \\ne m$ .", "Assume for the sake of contradiction that $i < m$ ; because $\\operatorname{\\vert }\\operatorname{Stab}(F_iF_j)F_i \\operatorname{\\vert }= m$ , we have that $F_i$ is contained in the group $\\operatorname{Stab}(F_iF_j)$ .", "Therefore the group generated by $F_i$ is contained in the group $\\operatorname{Stab}(F_iF_j)$ .", "The group generated by $F_i$ has cardinality $n_1\\dots n_k$ for some $1 \\le k \\le t$ ; thus $n_1 \\dots n_k \\mid m$ .", "Because $i + j$ overflows modulo $m$ and does not overflow modulo $\\mathfrak {T}= (n_1,\\dots ,n_t)$ , we have $n_1\\dots n_k \\ne m$ .", "Write $i$ , $j$ , and $i+j$ in mixed radix notation with respect to $(n_1\\dots n_k, m/(n_1\\dots n_k), n/m)$ as $i &= i^{\\prime }_1 + i^{\\prime }_2 (n_1\\dots n_k) + i^{\\prime }_3 m \\\\j &= j^{\\prime }_1 + j^{\\prime }_2 (n_1\\dots n_k) + j^{\\prime }_3 m \\\\i+j &= (i+j)^{\\prime }_1 + (i+j)^{\\prime }_2 (n_1\\dots n_k) + (i+j)^{\\prime }_3 m.$ Because the addition $i + j$ does not overflow modulo $\\mathfrak {T}$ , we must have $i^{\\prime }_1 + j^{\\prime }_1 = (i+j)^{\\prime }_1$ .", "Because $\\langle F_i \\rangle = n_1\\dots n_k$ , we have $i < n_1\\dots n_k$ , and thus $i_2^{\\prime } = 0$ so $i^{\\prime }_2 + j^{\\prime }_2 = (i+j)^{\\prime }_2$ .", "Because $i^{\\prime }_1 + i^{\\prime }_2 (n_1\\dots n_k) = i_1$ , $j^{\\prime }_1 + j^{\\prime }_2 (n_1\\dots n_k) = j_1$ , and $(i+j)^{\\prime }_1 + (i+j)^{\\prime }_2 (n_1\\dots n_k) = (i+j)_1$ , we have that $i_1 + j_1 = (i+j)_1$ ; this is a contradiction, as the 
addition $i + j$ overflows modulo $m$ .", "The rest of this section is dedicated to proving LC4, LC6, casesfor8corol, LC2p, and LC3p.", "Corollary 7.2 Let $i = 1$ or $i = 2$ and choose $j$ such that $i \\le j < i + j < n$ .", "Let $\\mathcal {F}$ be a flag with tower type $\\mathfrak {T}$ .", "If $T_{\\mathcal {F}}(i,j) < i+j$ then $i + j$ overflows modulo $\\mathfrak {T}$ .", "If $i + j$ does not overflow modulo $\\mathfrak {T}$ and $T_{\\mathcal {F}}(i,j) < i+j$ apply mleqilemma and observe that there must exist an integer $1 < m < i \\le 2$ , which is a contradiction.", "Corollary 7.3 If $i + j = n-1$ then for any flag $\\mathcal {F}$ we have $T_{\\mathcal {F}}(i,j) = n-1$ .", "Observe that $i+j$ does not overflow modulo $m$ for any $m \\mid n$ , and apply overflowlemma.", "Corollary 7.4 We have that $T(4)$ is the unique flag type minimal among flags of tower type $\\mathfrak {T}= (4)$ .", "Suppose for the sake of contradiction that $\\mathcal {F}$ be a flag type of tower type $\\mathfrak {T}= (4)$ such that $T_{\\mathcal {F}} \\lnot \\ge T(4)$ .", "By testflagineqlemma, there exists a corner $(i,j)$ of $T(4)$ such that $T_{\\mathcal {F}}(i,j) < i+j$ .", "Without loss of generality, we may assume $i \\le j$ ; this implies that $i = 1$ .", "By LCfori12, this implies $i+j$ overflows modulo $\\mathfrak {T}= (4)$ , which is a contradiction.", "Corollary 7.5 We have that $T(6)$ is the unique flag type minimal among flags of tower type $\\mathfrak {T}= (6)$ .", "Suppose for the sake of contradiction that $\\mathcal {F}$ be a flag type of tower type $\\mathfrak {T}= (6)$ such that $T_{\\mathcal {F}} \\lnot \\ge T(6)$ .", "By testflagineqlemma, there exists $(i,j)$ such that $T_{\\mathcal {F}}(i,j) < i+j$ .", "Without loss of generality, we may assume $i \\le j$ .", "If $i + j = 5$ , then nminus1corollary implies that $T_{\\mathcal {F}}(i,j) = i+j$ , which is a contradiction.", "Therefore $i = 1$ or $i = 2$ .", "By LCfori12, this implies $i+j$ overflows modulo $\\mathfrak {T}= (6)$ , which is a contradiction.", "Corollary 7.6 We have that $T(2,4)$ is the unique flag type minimal among flags of tower type $\\mathfrak {T}= (2,4)$ .", "We have that $T(4,2)$ is the unique flag type minimal among flags of tower type $\\mathfrak {T}= (4,2)$ .", "Suppose for the sake of contradiction that $\\mathcal {F}$ be a flag type of tower type $\\mathfrak {T}= (2,4)$ such that $T_{\\mathcal {F}} \\lnot \\ge T(2,4)$ .", "By testflagineqlemma, there exists $(i,j)$ of $T(2,4)$ such that $T_{\\mathcal {F}}(i,j) < i+j$ and $i+j$ does not overflow modulo $(2,4)$ .", "If $i + j = 7$ , then nminus1corollary implies that $T_{\\mathcal {F}}(i,j) = i+j$ , which is a contradiction.", "Therefore $i = 1$ or $i = 2$ .", "By LCfori12, this implies $i+j$ overflows modulo $\\mathfrak {T}= (2,4)$ , which is a contradiction.", "The proof for $\\mathfrak {T}= (4,2)$ is identical.", "Corollary 7.7 For any prime $p$ , we have that $T(2,p)$ is the unique flag type minimal among flags of tower type $\\mathfrak {T}= (2,p)$ .", "Suppose for the sake of contradiction that $\\mathcal {F}$ be a flag type of tower type $\\mathfrak {T}= (2,p)$ such that $T_{\\mathcal {F}} \\lnot \\ge T(2,p)$ .", "By testflagineqlemma, there exists $(i,j)$ of $T(2,p)$ such that $T_{\\mathcal {F}}(i,j) < i+j$ and $i+j$ does not overflow modulo $(2,p)$ .", "By overflowlemma, we have that $i + j$ overflows modulo $p$ .", "Without loss of generality we may suppose $i \\le j$ .", "By mleqilemma, we have $p < i$ ; thus $i + j > 2p$ , which is a contradiction.", 
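The corollaries in this section repeatedly test whether an addition i + j "overflows" modulo a divisor m, or modulo the whole tower type, by writing the summands in mixed radix notation. The following small Python sketch (ours, not part of the original argument) illustrates this bookkeeping under the reading, consistent with how the notion is used in the proofs above, that an addition overflows exactly when some digit-wise sum produces a carry; overflow modulo a single divisor m then corresponds to a carry in the low digit of the decomposition with respect to (m, n/m). The helper names are hypothetical.

```python
# Illustrative sketch (not from the paper): mixed radix notation with respect to
# a tower type (n_1, ..., n_t), and the digit-wise "overflow" check used above.

def mixed_radix_digits(i, radices):
    """Write i = i_1 + i_2*n_1 + i_3*n_1*n_2 + ... with respect to (n_1, ..., n_t)."""
    digits = []
    for n in radices:
        digits.append(i % n)
        i //= n
    return digits  # least-significant digit first

def overflows(i, j, radices):
    """True if adding i and j produces a carry in some digit, i.e. the digit-wise
    sums i_k + j_k do not all equal the corresponding digits of i + j."""
    di, dj, ds = (mixed_radix_digits(x, radices) for x in (i, j, i + j))
    return any(a + b != c for a, b, c in zip(di, dj, ds))

# Example with n = 2p for p = 5 and tower type (2, p):
# i = 3 = 1 + 1*2 and j = 5 = 1 + 2*2, so i + j = 8 = 0 + 4*2 overflows modulo 2.
print(mixed_radix_digits(3, (2, 5)), mixed_radix_digits(5, (2, 5)))
print(overflows(3, 5, (2, 5)))   # True
print(overflows(2, 4, (2, 5)))   # False: 2 + 4 adds digit-wise without a carry
```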
"Corollary 7.8 For any prime $p$ , we have that $T(3,p)$ is the unique flag type minimal among flags of tower type $\\mathfrak {T}= (3,p)$ .", "Suppose for the sake of contradiction that $\\mathcal {F}$ be a flag type of tower type $\\mathfrak {T}= (3,p)$ such that $T_{\\mathcal {F}} \\lnot \\ge T(3,p)$ .", "By testflagineqlemma, there exists $(i,j)$ of $T(3,p)$ such that $T_{\\mathcal {F}}(i,j) < i+j$ and $i+j$ does not overflow modulo $(3,p)$ .", "By overflowlemma, we have that $i + j$ overflows modulo $p$ .", "Without loss of generality we may suppose $i \\le j$ .", "By mleqilemma, we have $p < i$ .", "Because $p < i \\le j$ and $i + j$ overflows modulo $p$ , we have $i + j \\ge 3p$ , which is a contradiction." ], [ "The structure of corners", "* We prove the theorem for flag types realizable for groups; the proof for flag types realizable by field extensions is identical.", "Let $(i,j)$ be a corner of $T$ and suppose for the sake of contradiction that $T(i,j) < i + j$ .", "$F_iF_{j-1} = F_iF_j$ which would imply that $(i,j)$ is not a corner.", "To this end, set $M = \\operatorname{Stab}(F_iF_j)$ and $m = \\operatorname{\\vert }\\operatorname{Stab}(M) \\operatorname{\\vert }$ .", "Note that $m > 1$ .", "Write $i$ and $j$ in mixed radix notation with respect to $(m,n/m)$ as $i = \\alpha _1 + \\alpha _2m$ and $j = \\beta _1 + \\beta _2m$ and note that $\\alpha _1 + \\beta _1 > m$ (because $k \\ne i + j$ ).", "Thus, $\\beta _1 \\ne 1$ .", "Note that $m^{-1} \\operatorname{\\vert }MF_{j-1}\\operatorname{\\vert }\\ge \\lceil j/m \\rceil = \\beta _2 + 1$ and $m^{-1} \\operatorname{\\vert }MF_{j} \\operatorname{\\vert }= \\beta _2 + 1$ , so $MF_{j-1} = MF_j.$ Note that $\\operatorname{Stab}(MF_{i}MF_{j-1}) = M$ .", "Define $M^{\\prime } = \\operatorname{Stab}(F_{i}F_{j-1})$ and set $m^{\\prime } = \\operatorname{\\vert }M^{\\prime } \\operatorname{\\vert }$ .", "Note that because $\\operatorname{Stab}(F_{i}F_{j-1}) \\subseteq \\operatorname{Stab}(MF_{i}MF_{j-1}),$ we have $M^{\\prime } \\subseteq M$ and thus $m^{\\prime } \\mid m$ .", "First suppose $\\alpha _1 + \\beta _1 - 1 - m < m^{\\prime }$ .", "By kneserorig, we have $\\operatorname{\\vert }F_{i}F_{j-1}\\operatorname{\\vert }= (\\alpha _1 + \\beta _1 + 1)m$ and thus $F_{i}F_{j-1} = F_{i}F_{j}.$ Now suppose $\\alpha _1 + \\beta _1 - 1 - m \\ge m^{\\prime }$ .", "If $M^{\\prime } \\ne M$ , then $\\operatorname{\\vert }F_{i}F_{j-1}\\operatorname{\\vert }> \\operatorname{\\vert }F_{i} F_{j}\\operatorname{\\vert },$ which is a contradiction.", "Thus we must have $M^{\\prime } = M$ so by kneserorig, we have $\\operatorname{\\vert }F_{i}F_{j-1}\\operatorname{\\vert }= (\\alpha _1 + \\beta _1 + 1)m$ and thus $F_{i}F_{j-1} = F_{i}F_{j}$ .", "Thus, we have a contradiction so $k \\ge i + j$ .", "Lemma 8.1 There exists flag type $T$ , realizable for groups and field extensions, and a point $\\mathbf {x} \\in P_T$ such that $x_2 > 2x_1$ and $x_{15} > x_8 + x_7$ .", "Choose the $\\mathbf {x}$ given in the proof of pqr for $n = 18$ .", "The following lemma is stated for flags of field extensions; the analogous statement for abelian groups is true, and has an identical proof.", "Lemma 8.2 Let $\\mathcal {F}$ be any flag of a degree 18 field extension $L/K$ such that $T_{\\mathcal {F}}(1,1) = 1$ and $T_{\\mathcal {F}}(7,8) \\le 14$ .", "Then there exists $\\alpha ,\\beta ,\\gamma \\in L$ such that $[K(\\alpha ) \\colon K] = 2$ ; $[K(\\beta ) \\colon K] = 3$ ; $K(\\alpha , \\beta , \\gamma ) = L$ ; $F_1 = K(\\alpha )$ ; $F_8 = K(\\beta )\\langle 1, 
\\alpha , \\gamma \\rangle $ ; $K(\\beta )F_7 = K(\\beta )\\langle 1, \\alpha , \\gamma \\rangle $ ; and $F_{14} = K(\\beta )\\langle 1, \\alpha , \\gamma , \\alpha \\gamma , \\gamma ^2\\rangle $ .", "Let $\\lbrace 1=v_0,\\dots ,v_{n-1}\\rbrace $ be a $K$ -basis of $L$ such that $F_i = K\\langle v_0,\\dots ,v_i\\rangle $ .", "Let $\\alpha v_1$ .", "As $v_1^2 \\in K\\langle 1,v_1 \\rangle $ and $\\alpha \\notin K$ , we have that $\\deg (\\alpha ) = 2$ .", "overflowlemma implies that $[\\operatorname{Stab}(F_7F_8) \\colon K] = 3$ .", "Let $\\beta $ be any primitive element of the cubic extension $\\operatorname{Stab}(F_7F_8)$ ; such an element exists because any extension of prime degree is simple.", "Moreover overflowlemma implies that $K(\\beta )F_8 = F_8$ .", "Let $\\gamma \\in F_8 \\setminus K(\\alpha ,\\beta )$ .", "Thus, $K(\\alpha , \\beta , \\gamma ) = L$ .", "Thus, $F_8 = K(\\beta )\\langle 1, \\alpha , \\gamma \\rangle $ .", "overflowlemma implies that $K(\\beta ) F_7 = K(\\beta )\\langle 1, \\alpha , \\gamma \\rangle $ .", "By overflowlemma, $F_7 F_8 = F_{14}$ so $F_{14} = K(\\beta )\\langle 1, \\alpha , \\gamma , \\alpha \\gamma , \\gamma ^2\\rangle $ .", "* We will prove that there is such a $T$ that is minimal for field extensions; the analogous statement for abelian groups is true, and has an identical proof.", "The two statements combined prove the theorem.", "By existsapt18lemma, there exists some flag type $T$ that is realizable for field exensions and $\\mathbf {x}\\in P_T$ such that $x_2 > 2x_1$ and $x_{15} > x_8 + x_7$ .", "Let $\\mathcal {F}$ be a flag of a field extension $L/K$ realizing $T$ ; then $T_{\\mathcal {F}}(1,1)=1$ and $T_{\\mathcal {F}}(7,8) \\le 14$ .", "Apply strucof18lemma to obtain $\\alpha ,\\beta ,\\gamma \\in L$ such that $[K(\\alpha ) \\colon K] = 2$ ; $[K(\\beta ) \\colon K] = 3$ ; $K(\\alpha , \\beta , \\gamma ) = L$ ; $F_1 = K(\\alpha )$ ; $F_8 = K(\\beta )\\langle 1, \\alpha , \\gamma \\rangle $ ; $K(\\beta )F_7 = K(\\beta )\\langle 1, \\alpha , \\gamma \\rangle $ ; and $F_{14} = K(\\beta )\\langle 1, \\alpha , \\gamma , \\alpha \\gamma , \\gamma ^2\\rangle $ .", "We will prove that there exists $j \\le 12$ and $k \\ge 15$ such that $T(1,j) = k$ .", "Suppose for the sake of contradiction that the claim is false.", "Then $K(\\alpha )F_{12} \\subseteq F_{14} = K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma ,\\gamma ^2\\rangle .$ Write $\\alpha ^{-1} = b\\alpha + c$ for $b,c \\in K$ and $b \\ne 0$ .", "Then, $F_{12} &\\subseteq K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma ,\\gamma ^2\\rangle \\cap \\alpha ^{-1}K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma ,\\gamma ^2\\rangle \\\\&= K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma ,\\gamma ^2\\rangle \\cap K(\\beta )\\langle \\alpha ^{-1},1,\\alpha ^{-1}\\gamma ,\\gamma ,\\alpha ^{-1}\\gamma ^2\\rangle \\\\&= K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma ,\\gamma ^2\\rangle \\cap K(\\beta )\\langle b\\alpha +c,1,b\\alpha \\gamma + c\\gamma ,\\gamma ,b\\alpha \\gamma ^2 + c\\gamma ^2\\rangle \\\\&= K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma ,\\gamma ^2\\rangle \\cap K(\\beta )\\langle 1, \\alpha ,\\gamma ,\\alpha \\gamma , b\\alpha \\gamma ^2 + c\\gamma ^2\\rangle \\\\&= K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma \\rangle ,$ but $\\dim _{K}F_{12} = 13$ and $\\dim _{K}K(\\beta )\\langle 1,\\alpha ,\\gamma ,\\alpha \\gamma \\rangle = 12$ , which is a contradiction.", "kneserarticle author=Kneser, Martin, title = Abschätzungen der 
asymptotischen Dichte von Summenmengen, journal = Matematika, year = 1961, volume = 5, issue = 3, pages = 17–44, zemorarticle author=Bachoc, Christine, author=Serra, Oriol, author=Zémor, Gilles, title = Revisiting Kneser's theorem for field extensions, journal = Combinatorica, year = 2018, volume = 38, number = 4, pages = 759–777," ] ]
2207.10507
[ [ "Autonomous Vehicles in 5G and Beyond: A Survey" ], [ "Abstract Fifth Generation (5G) technology is an emerging and rapidly adopted technology that is being utilized in many of the novel applications requiring highly reliable low-latency communications.", "It has the capability to provide greater coverage and better access, and it is best suited for high-density networks.", "Given all these benefits, 5G could clearly be used to satisfy the requirements of autonomous vehicles.", "Automated driving vehicles and systems are developed with the promise of providing a comfortable, safe and efficient drive while reducing the risk to life.", "However, there have recently been fatalities due to these autonomous vehicles and systems.", "This is due to the lack of a robust state of the art, which has to be improved further.", "With the advent of 5G technology and the rise of autonomous vehicles (AVs), road safety is going to become more secure with fewer human errors.", "However, the integration of 5G and AVs is still at an infant stage, with several research challenges that need to be addressed.", "This survey starts with a discussion of the current advancements in AVs, automation levels, enabling technologies and 5G requirements.", "Then, we focus on the emerging techniques required for integrating 5G technology with AVs, the impact of 5G and B5G technologies on AVs, and security concerns in AVs.", "The paper also provides a comprehensive survey of recent developments in terms of standardisation activities on 5G autonomous vehicle technology and current projects.", "The article is finally concluded with lessons learnt, future research directions and challenges." ], [ "Introduction", "With the emergence of new technologies such as high-speed networks, decentralised storage platforms, edge computing and others, driving a car with little or no human intervention has become possible.", "Autonomous vehicles (AVs), also known as driverless cars or autonomous cars, are vehicles that operate without direct driver input, and the driver is not expected to monitor the roadway constantly [1], [2].", "With better safety measures [3] and improved energy efficiency resulting in lower environmental impact [4], AVs seem to be a promising technology.", "For this very reason, major car manufacturers are expanding their fleets with AVs [1].", "For example, the major car manufacturer Mercedes-Benz (https://www.bbc.com/news/business-56332388) recently announced its intention to launch autonomous driving technology in its S-Class model.", "Similarly, Tesla (https://www.tesla.com/en_CA/autopilot, accessed Nov. 09, 2021) 
has already come up with advanced hardware and software technology to make driving completely autonomous.", "However, to make driving completely autonomous, high-speed networks such as 5G or beyond-5G technologies will play a key role.", "With the emergence of fifth-generation (5G) technology, which can offer speeds up to 10 Gbps with an incredibly low latency of 1 ms (for everyday cellular users), the realisation of AVs has been made possible.", "Initiated by the ITU in 2015, 5G new radio (NR) is a new cellular communication standard with the potential to support massive delay-sensitive applications through three key use-cases.", "These three use-cases are enhanced mobile broadband (eMBB), massive machine-type communications (mMTC) and URLLC.", "Ultra-reliable low-latency communication (URLLC) [5] is designed to support services for delay-sensitive applications, such as remote surgery and autonomous driving, which require very low bit-error rates.", "Similarly, the focus of eMBB is to provide more bandwidth and achieve gigabit-level speeds to support applications such as virtual reality (VR).", "Finally, the focus of mMTC is to improve connectivity between billions of devices transmitting very short packets, something that earlier cellular systems, designed around human-type communication, did not support adequately [6].", "As the integration of 5G technology with other existing technologies will open doors to thousands of other use-cases, it is important to understand how its integration with autonomous driving will be beneficial.", "To the best of our knowledge, this is the first comprehensive review paper that addresses several aspects of integrating 5G technology in AVs.", "Table REF presents a comparative analysis of this work with the existing studies.", "The contributions of this work are summarised as follows: We provide a comprehensive overview of AVs.", "Different technical aspects for the successful integration of AVs with 5G technology are enumerated.", "A state-of-the-art review on integrating AVs with 5G is conducted.", "Security concerns in AVs are identified and explored.", "Key projects and standardisation activities on 5G autonomous vehicles are highlighted.", "Future research challenges and directions are identified and explored.", "Table: Summary of Related Surveys on 5G-based Autonomous Vehicles", "The remaining paper is organised as follows: Section II highlights the features of AVs and provides a theoretical framework to achieve inter-vehicular connectivity using 5G technology.", "Section III discusses the technical requirements for AVs to realise their full potential and how 5G fulfils those requirements.", "Section IV explores the impact of other technologies on AVs along with 5G.", "Security concerns in AVs are discussed in Section V. Section VI highlights the key research projects and standardisation activities.", "Future research directions are discussed in Section VII.", "Section VIII finally concludes the article." 
], [ "Introduction to Autonomous and Connected Vehicles", "The recent era is witnessing a rapid growth in the urban life and this growth has brought the need for high mobility and drastic changes in the transportation technologies.", "The Autonomous and Connected Vehicles (ACVs) plays one among the major components in the innovative solutions for promoting Intelligent Transportation System (ITS) [13].", "ITS is one in which numerous vehicles communicate with one another with the utilization of a communication infrastructure for exchanging critical information like traffic, congestion and road conditions.", "This section discusses about the background of ACVs, architecture, key enabling technologies and various requirements for Beyond 5G (B5G)." ], [ "Evolution of Autonomous and Connected Vehicles", "ACVs are characterised by two major properties namely capability of automation and connectivity [14].", "This property portrays them as unique when compared to the conventional vehicles that are connected with each other.", "The concept of self-driving cars are not completely new.", "The experiments on the self-driving cars started early during 1930s and the first modern control system played a major role.", "The modern cruise control system which was developed in 1948 paved the motivation for the development and evolution of autonomous vehicle.", "In 1966, mechanical anti-lock braking was installed in a car which was under standard production.", "Next came the electronic cruise control in the year 1968.", "In 1987, BMW and Bosch invented the electronic stability system.", "Later in 1995, Mitsubishi Diamante introduced the adaptive cruise control which was laser based.", "Nissan introduced lane departure warning system in 2001.", "In 2003, a pre-crash mitigation system was tested by Toyota Harrier.", "Later came many significant developments like traffic sign recognition (2009), Google car (2010), Mercedes S-class (2013), autopilot (2015) in Tesla cars, and intelligent speed adaptation (2022).", "A comparative analysis of ACVs with all other vehicles starting from conventional vehicles are illustrated in the Table REF .", "These ACVs are expected to have a high impact on the urban society and in future smart city applications.", "The connectivity among the autonomous vehicles are realized using the technology called vehicular networks or vehicular ad-hoc networks [15].", "Before we explore the features of ACVs, it is worth while to discuss briefly the concept of CVs and their limitations which motivated the researchers to focus on ACVs.", "CVs are designed to support better connectivity by utilizing various levels of communication which include Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) [16].", "Since the communication levels are heterogeneous, various issues arise on scalability aspects and coverage area.", "An ACV is an autonomous vehicle which has connectivity with all the levels of communication channels and retrieves information from vehicle to everything (V2X) communication level [17] to provide a complete field view and hence optimize the autonomous driving of other road users.", "Due to this characteristic, apart from providing better safety and improved performance than autonomous vehicle, ACVs also improve the throughput of traffic and economy of fuel by optimizing the route and cooperative driving." 
], [ "Features of Autonomous and Connected Vehicles", "The salient features of ACVs which paves a way for fully autonomous transportation system are as discussed below.", "Management and Organization: The fundamental feature of ACV is the organization and management.", "The basic intention is to free the human from keeping track of the schedule of maintenance related operations and to provide a platform which is available 24/7 with peak performance.", "ACVs would assist the vehicles in adjusting the operations when software failures occur and adapt to the dynamic environment.", "ACVs will help in monitoring the self usage and updates of software.", "Apart from this, ACVs are expected to detect the malicious activity of a specific autonomous vehicle and isolate the same away from the network.", "The ACVs also provide an alternative solution in case of malicious activity to the passengers in a very least time.", "Optimal Configuration: For optimal functioning of the ITS, the complex system has to be configured error free.", "But since the system is a large scale configuration, it is error prone and time consuming.", "The feature expects the ACVs to adapt and accommodate all the third party components like transport policies, traffic control authorities and road users' policies.", "Optimized Resource Allocation: The ACVs have the capability to tune the performance and behaviour of the system using numerous factors as illustrated in the Table REF .", "For instance, this feature can support the system to allocate the resource based on the mobility of the vehicle.", "This also assists in tuning of performance parameters.", "Self-Protection: This feature is oriented towards self-protection and assists in providing security to the ACVs from malicious attacks.", "The hardware as well as software are protected from various attacks.", "Once a failure occurs, the system should be having the capability to mitigate the rigorous effect and hence avoid the failure of the entire network of systems.", "Table: Comparative Analysis —From Conventional Vehicles to Autonomous Connected Vehicles." 
], [ "Levels of Automation", "As per the guidelines provided by the Society of Automotive Engineers (SAE) [18], there are six levels of automation for driving vehicles autonomously, as illustrated in the Fig.", "REF .", "The levels are defined based on the degree of automation involved.", "Level 0: This level is considered the no-automation level, since almost all the tasks and operations involved in driving a vehicle are handled and controlled by the human driver.", "Decisions regarding driving and road usage are made through human intervention.", "Level 1: This level automates only selected driving functions and assists the human driver to a limited extent.", "Examples of Level 1 automation include lateral or longitudinal control of motion.", "Level 2: This level automates the driving task by combining two or more controls.", "Examples include adaptive cruise control combined with lane keeping assistance.", "Level 3: This is a kind of automation which facilitates conditional driving.", "The system drives the vehicle for part of the trip, allowing the driver to perform other activities, but the driver is expected to resume control when requested.", "Level 4: The automated system takes care of the entire driving mechanism, including monitoring the current environment to detect dynamic changes and controlling the motion according to those changes.", "The system also allows the driver to take control of the entire system in critical situations.", "Level 5: This is the highest level of automation, where the system controls the driving mechanism and monitors the current environment and the nearby surroundings through a concept called cooperative driving.", "There is no human intervention at any point.", "All failures and dynamic decisions are also handled by the system throughout the trip.", "Figure: Levels of Automation in Autonomous Vehicles.", "Table: Factors for Optimized Resource Allocation." 
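To keep the six SAE levels described above easy to compare at a glance, the following minimal Python sketch restates them as a lookup table; the structure and field names are ours and only summarise the descriptions given in this subsection.

```python
# Compact restatement of the six SAE automation levels described above (a sketch;
# the field names are ours and simply mirror the prose summaries).
SAE_LEVELS = {
    0: {"name": "No automation",          "system_drives": "no",
        "monitors_environment": "driver"},
    1: {"name": "Driver assistance",      "system_drives": "one control (lateral or longitudinal)",
        "monitors_environment": "driver"},
    2: {"name": "Partial automation",     "system_drives": "two or more combined controls",
        "monitors_environment": "driver"},
    3: {"name": "Conditional automation", "system_drives": "yes, for part of the trip",
        "monitors_environment": "system", "driver_fallback": "on request"},
    4: {"name": "High automation",        "system_drives": "yes, including dynamic changes",
        "monitors_environment": "system", "driver_fallback": "critical situations only"},
    5: {"name": "Full automation",        "system_drives": "yes, with cooperative driving",
        "monitors_environment": "system", "driver_fallback": "none"},
}

for level, info in SAE_LEVELS.items():
    print(f"Level {level}: {info['name']} (system drives: {info['system_drives']})")
```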
], [ "Architecture", "ACVs refer to an integrated domain of various technologies which help in achieving the inter-vehicular connectivity for providing various services in applications like traffic safety, assistance on roadsides, driving efficiency, remote monitoring, traffic congestion avoidance, maintenance, and system failures.", "To achieve these, the ACVs are configured with wide range of sensors deployed onboard which communicate among them through Controller Area Network (CAN) bus, infrastructures related to communication and other vehicles.", "ACV is a blooming technology that is focused by both academicians and industrialists.", "The motto for moving towards this technology is to provide better safety for road users, better flow of traffic, reduction in fuel consumption and cost of travel.", "The basic infrastructure required for implementation and deployment of the ACVs to provide ITS is illustrated in the Fig.", "REF .", "As per the illustration, the entire network could be grouped into two major type of nodes namely vehicles which have on-board sensor units and the other communication based infrastructures along the roadsides.", "The channels which help in the communication within the networks are Vehicle-to-Vehicle (V2V), Infrastructure-to-Infrastructure (I2I), Vehicle-to-Infrastructure (V2I), Vehicle-to-People (V2P) and Vehicle-to-Everything (V2X).", "The vehicles' on-board units comprise of sensors which help in detecting the objects as well as obstacles within their dedicated range.", "The frequently used sensors, their range of sensing and possible applications in which they are used are tabulated in the Table REF .", "Figure: Illustration of Autonomous Connected Vehicles, Infrastructure, Environment and Communication.Table: On-Board Sensor Types, Range and Usage The general architecture of ACVs can be illustrated as in Fig.", "REF .", "The architecture consists of three layers which are grouped based on the functionalities performed by the ACV.", "The first and foremost layer is the perception layer which is responsible for gathering the raw information from the environment with the help of sensors mounted over the vehicle.", "The sensor data collected is utilized and utilizing sensor fusion techniques the local and global location parameters are calculated by the perception layer and a map of the environment is generated[21].", "The next layer is the planning/processing layer which plays a major role in determining the global route that suits best as per the current position and the requested destination.", "For determining the best possible route, the layer utilizes the remote data of road and traffic.", "Based on the environment map generated by the perception layer, the trajectory planning and tracking is computed[22].", "The final layer is the control layer which concentrates on providing appropriate commands to control the various actuators of the ACV like steering wheel, gas pedal and brake pedal [23].", "Apart form these major functionalities, the perception layer also shares the perceived information of the environment with the other users of the road and hence the planning layer supports co-operative driving [24] with the help of the inter-connectivity along with the other road users.", "Decision making is the major functionality of the planning layer where decisions with respect to control of servo motor as well as control of the actuator is taken.", "For such high level and critical decisions, the connectivity among the vehicles, among the infrastructure and 
the road users plays a major role which increases the complexity in deployment of the ACVs." ], [ "Key enabling technologies", "The current traditional methodologies which are proved in real-time may not be suitable for implementing and deploying the various salient features of ACVs.", "But, recently there are many innovative technologies and approaches in the field of sensors, cloud computing and artificial intelligence which can help in developing and designing intelligent ACVs that can show various benefits.", "This subsection discusses about the recent technologies that could support the deployment of ACVs in real-world." ], [ "Sensing Environment", "Vehicular networks are those which focus mainly on sensor networks of either single type or homogeneous sensors whereas ACVs focus on various sensors which are heterogeneous in nature.", "The frequently used sensors for ACVs are broadly classified into three types[19] as illustrated in the Fig.", "REF .", "The detection sensors are those which are usually mounted on the vehicle for identifying the various features in and around the environment.", "These sensors can also help in the process of monitoring the working as well as inner condition of the vehicle.", "Ambient sensors are those which generally monitor the environment and gather all the sensitive legacy data.", "The gathered data is transferred to the corresponding authorities by these sensors.", "Back-scatter sensors are specially built for usage in all kinds of objects and helps in providing better perception of the outer world including trespassers, bicyclists, etc." ], [ "Accessing Data", "The ACVs generate a large amount of data in the form of signals which are emitted by the sensors mounted in, on and around the vehicles.", "The generated data need to be accessed by the authorised authorities, other vehicles in the network, other infrastructures in the vehicular environment for the purpose of taking decisions at the right time.", "For the purpose of storing temporarily or archiving and hence accessing, very high end resources as well as servers are required.", "Recently, the cloud computing technology has solved the data access challenge by providing virtual resources whenever required and is provided as a service [25].", "This would allow the ACVs to communicate among themselves, infrastructures, things, and everything in the environment with better performance.", "Apart from cloud based technology, recently there are other key technologies like fog, edge and roof which also support in accessing the huge vehicular data.", "Also, few vehicular fog nodes are developed recently which function based on fog computing technology.", "These nodes provide better information about the environment as well as helps in inter as well as intra-vehicular communication.", "These gathered data can be processed further by the fog nodes and analytic can be performed on the crowd sourced data from the ACVs.", "There are various algorithms for optimizing the performance of the fog nodes further." 
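As a concrete illustration of the three-layer architecture described above (a perception layer that fuses sensor data into an environment map, a planning layer that computes the global route and local trajectory, and a control layer that drives the actuators), the following minimal Python sketch shows one pass through such a pipeline. The class and method names are ours and the bodies are placeholders, not an actual ACV software stack.

```python
# Minimal sketch of the perception / planning / control layering described above.

class PerceptionLayer:
    def build_environment_map(self, sensor_readings):
        # Fuse raw on-board sensor data (camera, radar, lidar, ...) into position
        # estimates and an environment map, which could also be shared with other
        # road users for cooperative driving.
        return {"ego_pose": sensor_readings.get("gps"),
                "obstacles": sensor_readings.get("detections", [])}

class PlanningLayer:
    def plan(self, env_map, destination, remote_road_traffic_data=None):
        # Choose a global route using remote road/traffic data, then compute and
        # track a local trajectory against the environment map.
        return {"route": [env_map["ego_pose"], destination], "trajectory": []}

class ControlLayer:
    def actuate(self, plan):
        # Translate the planned trajectory into steering, throttle and brake commands.
        return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}

# One tick of the pipeline:
perception, planning, control = PerceptionLayer(), PlanningLayer(), ControlLayer()
env = perception.build_environment_map({"gps": (0.0, 0.0), "detections": []})
commands = control.actuate(planning.plan(env, destination=(100.0, 25.0)))
print(commands)
```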
], [ "Vehicular Communication", "Almost all functionalities of the ACVs depend on the data perceived by the various heterogeneous sensors and on its reliable reception.", "To improve performance, the communication links have to support data rates on the order of gigabits per second.", "To achieve high performance and better communication, millimeter-wave (mmWave) technology is being utilized for ACVs.", "It can be used for V2V, V2I and intra-vehicular communications as well.", "The mmWave V2V links help vehicles gather raw information from, and share it with, the neighbouring vehicles in a particular environment.", "The mmWave V2I links are utilized by ACVs in road-safety applications.", "These links help in gathering the data sensed by the vehicles and forwarding it to the cloud resource for archiving or decision making.", "The higher-data-rate mmWave links are utilized by the ACVs for downloading real-time maps and streams of the dynamic environment.", "Various communication bands can be utilized by ACVs for mmWave operation.", "To list a few: licensed 5G bands such as 28 GHz, the unlicensed 60 GHz band, and automotive radar bands such as 24 GHz and 76 GHz." ], [ "Security", "Security plays a major role in any type of vehicular communication.", "This is a very critical challenge to be handled when the environment includes both legacy and autonomous vehicles.", "In such scenarios, if security is not handled properly, there can be chaos [17].", "To handle vehicular communications while maintaining security and privacy, physical layer security (PLS) has recently been designed as an alternative to cryptography.", "Numerous studies have shown that PLS can be utilized to improve secrecy performance and to employ jamming techniques between the source and destination [26].", "PLS can also be employed to restrict vehicles from broadcasting false data.", "Recently, blockchain techniques have been used in many applications to build a trust model for ACVs.", "These trust models are usually built by providing some additional policies and certificates.", "Figure: General architecture of autonomous connected vehicles", "Figure: Intelligent sensors in autonomous and connected vehicles" ], [ "5G and Beyond Requirements for Autonomous Vehicular Communication", "5G is an emerging platform developed not only to support existing applications but also to leverage upcoming novel applications that require communication at very low latency.", "This technology provides building blocks for supporting existing traditional platforms such as 2G, 3G, 4G, and WiFi.", "Apart from this, it also provides greater connectivity and coverage for handling higher network density.", "It is well suited for vehicular communication.", "This section discusses the various building blocks required for using 5G and beyond techniques for ACVs." 
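The gigabit-per-second data rates mentioned for mmWave links in the Vehicular Communication subsection can be sanity-checked with the Shannon capacity formula C = B log2(1 + SNR). The short calculation below is only illustrative: the 400 MHz carrier bandwidth and the 10 dB SNR are assumed values, not figures taken from this survey.

```python
# Back-of-the-envelope check of the "gigabits per second" claim for mmWave links.
# The 400 MHz bandwidth and 10 dB SNR are illustrative assumptions only.
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

capacity = shannon_capacity_bps(bandwidth_hz=400e6, snr_db=10)
print(f"{capacity / 1e9:.2f} Gbit/s")  # roughly 1.4 Gbit/s before any MIMO gains
```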
], [ "Proximity Service", "The most important feature of 5G communication is the Proximity Service (ProSe).", "ProSe is a service that aims at providing vehicles with awareness of the devices, infrastructure and other objects in the environment, along with locality information.", "The most significant feature of ProSe is that it provides spontaneous communication and interactions within a certain locality.", "For example, ProSe is well suited for discovering moving vehicles on the road.", "Apart from this, it can be used as a communication service for scenarios like public safety." ], [ "Multi-access Edge Computing", "The key feature of 5G-enabled vehicular communication is the low latency (up to 100 ms) utilized for safety measures and the ultra-low latency (as low as 1 ms) utilized in ACVs.", "One of the methodologies for achieving such low latency is to move the basic and core functionalities towards the user or the customer.", "This is considered to be the edge.", "Multi-access Edge Computing (MEC) plays a significant role in bringing all kinds of services to the concerned network locations.", "Every edge node includes a multi-vendor environment which helps in hosting most of the mobile edge applications.", "These services can be achieved through a technology called network functions virtualization (NFV).", "There are various kinds of access technologies, such as WiFi, 802.11p and 5G, which can be used by the vehicles for their communication.", "Though MEC is well suited for communication among ACVs, it is less critical for Traffic Information Systems (TIS), where the latency requirement is flexible.", "The reason is that TIS do not demand any hard and fast rule on latency.", "Hence, even a small improvement in latency by using MEC can give a better user experience." 
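To make the latency argument for MEC more concrete, the sketch below compares a simple one-way latency budget for an edge deployment against a distant cloud, and checks it against the 100 ms and 1 ms targets quoted above. All per-segment values are illustrative assumptions.

```python
# Illustrative latency-budget comparison between an edge (MEC) deployment and a
# remote cloud; the per-segment numbers are assumptions for illustration only.

TARGETS_MS = {"safety": 100.0, "ultra_low_latency": 1.0}

def end_to_end_ms(radio_ms, transport_ms, processing_ms):
    # One-way budget: radio access + transport network + server processing.
    return radio_ms + transport_ms + processing_ms

edge_ms  = end_to_end_ms(radio_ms=1.0, transport_ms=1.0,  processing_ms=2.0)   # MEC host near the RAN
cloud_ms = end_to_end_ms(radio_ms=1.0, transport_ms=40.0, processing_ms=5.0)   # distant data centre

for name, total in (("MEC", edge_ms), ("cloud", cloud_ms)):
    verdict = {target: total <= budget for target, budget in TARGETS_MS.items()}
    print(f"{name}: {total:.1f} ms -> {verdict}")
```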
], [ "Network Slicing", "As discussed in the previous sections, 5G supports all the existing access technologies by providing a wider range.", "Since there are numerous networks utilizing different access technologies, managing all of them under a single roof named 5G is the key challenge.", "Network slicing helps in solving this challenge by separating the networks logically.", "For instance, in case of ACVs the networks can be sliced based on the applications and requirements.", "To name a few, safety applications, infotainment applications, mission critical system applications.", "Each application can be logically separated as a network slice.", "Safety applications require low latency but reliable message transmission.", "Infotainment applications aim at high bandwidth.", "Mission critical systems require very fast and spontaneous information exchange especially during disasters.", "Network slicing also helps in maintaining the integrity as well as security in vehicular networks.", "In recent years, both the academia and industry have shown great interest in the development of autonomous driving, which will liberate drivers physically and mentally, greatly improve traffic safety and energy efficiency, as well as make better use of public resources.", "Universities and research groups are actively involved in autonomous driving competitions and technical challenges.", "Some of the universities, automobile companies and Internet auto companies have joined the ranks of autonomous vehicle manufacturing and research.", "The development of light detection and ranging, Radar, camera, and other advanced sensor technologies inaugurated a new era in autonomous driving.", "However, due to the intrinsic limitations of these sensors, autonomous vehicles are prone to making erroneous decisions and causing serious disasters.", "Hence, this section, provides an overview of technical aspects of autonomous vehicles as illustrated in Table REF and also discusses the networking and communication technologies that can improve autonomous vehicle's perception and planning capabilities as well as realizing better vehicle control.", "The summary of this section is shown in Table REF ." 
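The slice-per-application idea described in the Network Slicing subsection can be summarised as a mapping from logical slices to the guarantees they are dimensioned for, as in the following sketch. The slice names follow the examples above (safety, infotainment, mission-critical), while the numeric values and the selection helper are illustrative assumptions.

```python
# Sketch of the slice-per-application idea: each logical slice declares the
# service guarantees it is dimensioned for. All values are illustrative.

SLICES = {
    "safety":           {"latency_ms": 10,  "reliability": 0.99999, "bandwidth_mbps": 10},
    "infotainment":     {"latency_ms": 100, "reliability": 0.99,    "bandwidth_mbps": 500},
    "mission_critical": {"latency_ms": 5,   "reliability": 0.99999, "bandwidth_mbps": 50},
}

def pick_slice(app_requirements):
    """Return the first slice whose guarantees cover the application's needs."""
    for name, s in SLICES.items():
        if (s["latency_ms"] <= app_requirements["latency_ms"]
                and s["reliability"] >= app_requirements["reliability"]
                and s["bandwidth_mbps"] >= app_requirements["bandwidth_mbps"]):
            return name
    return None

print(pick_slice({"latency_ms": 20, "reliability": 0.999, "bandwidth_mbps": 5}))  # -> "safety"
```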
], [ "Introduction", "AVs are going to revolutionize the future of road transportation.", "In the future, ordinary vehicles will be replaced by smart vehicles that are capable of making decisions, choosing the shortest path and planning the optimal travel route.", "In order to achieve these objectives, the latest advancements in communication technologies such as 5G can be used in local perception for controlling short-range vehicles with respect to safety, traffic control and energy management parameters.", "More functions are being developed to move from the control of individual driving functions towards a fully autonomous system.", "Navigation is one of the key tasks in an AV, which automatically calculates the route from source to destination.", "The road-network data should be available to the vehicle so that the route to the destination point can be designed in advance.", "Autonomous navigation includes path planning, obstacle detection, obstacle avoidance and finding the optimal path for safe navigation, as shown in Fig.", "REF .", "Figure: AV data acquisition, data processing, path planning and vehicle control" ], [ "Existing Challenges/Limitations", "Autonomous navigation depends on technologies such as localization, planning and control.", "Maintaining reliable localization along the planned path is a basic requirement in AVs.", "Autonomous robots such as UAVs and UGVs have gained importance in AV research, as they provide more stable and robust systems.", "Specifically, in UAV applications, an initial trajectory is provided to the system for initial guidance, and the navigation is then guided based on the waypoints of the trajectory.", "Localization can be classified into GPS-based and Laser Range Finder (LRF)-based approaches.", "Even though GPS-based systems have proved good for outdoor environments, their signals are unreliable in indoor or dense urban environments.", "LRF, in contrast, performs better in both indoor and outdoor applications.", "In unstructured and complex environments, the LRF readings are the key factor for planning a path that minimizes localization errors without exposing the vehicle to the risk of failure.", "In [27], the authors proposed path planning for AVs based on a Localizability Constraint (LC) using the LRF sensor model of the vehicle.", "The LC is maintained throughout the path planning, and it also reduces the overall localization error.", "Finally, the paths planned with and without the LC, as well as the influence of the LRF in the model, are compared and simulated.", "The proposed method ensures effective localization performance and rich environmental information when compared to existing techniques.", "Navigation and path planning under uncertainty is a very tedious task for AVs such as UAVs, drones and self-driving cars [28].", "Probabilistic methods can be used for optimization and modeling problems to capture the uncertainty prevailing in the dynamic environment.", "Rather than relying on traditional time-consuming approaches such as Monte-Carlo methods, a probabilistic decision engine is designed to provide AV navigation under an uncertain dynamic environment.", "The probabilistic decision engine can be implemented in real-time FPGA hardware.", "Navigation algorithms can be combined with motion control algorithms and other path planning algorithms.", "The authors in [29] proposed a generalized and stable Robust Model-Predictive Control (R-MPC) technique to design the optimal path or trajectory by solving a convex quadratic program.", "This technique implements the constraints and detects the obstacles within the 
bounded uncertainty.", "It works well in two scenarios: a military UAV flying over a target, and an assistive care robot safely navigating through a cluttered home.", "Recently, demand for UAVs has grown strongly in commercial and military applications.", "A vision-based technique using optical flow is proposed in [30] for obstacle detection and collision avoidance for autonomous navigation in quadrotor UAVs.", "For the initial path, map-based offline path planning is developed, followed by trajectory points for flight guidance.", "An onboard camera is used during navigation to analyze monocular images for environment perception.", "Vision-based techniques can be divided into three categories: the first relies on monocular visual cues, which depend on the properties of the acquired image sequence and classify each image as obstacle or non-obstacle.", "The second method, the stereo-vision-based approach, uses multiple cameras for depth computation and stereo matching.", "The third approach uses motion parallax combined with the camera motion.", "Autonomous navigation for the marine environment is highly needed, as marine vehicles often fail in their missions due to the harsh environment.", "Deep reinforcement learning [31] can be used for autonomous navigation in Unmanned Surface Vehicles (USVs).", "Long short-term memory networks are used to remember the ocean environment values.", "Researchers are developing technologies to move from semi-autonomous driving to fully autonomous driving.", "These AV technologies should support realistic road scenarios, including complex driving situations.", "They should also have an effective warning system to avoid collisions and accidents.", "So, high-speed navigation paths must be planned for both structured and unstructured roads.", "Many studies address structured roads, whereas for unstructured roads a free-form navigation method, such as a motion planning algorithm, is required to produce a global path towards the destination.", "Planning for a structured road implicitly provides the preferred path, whereas planning on an unstructured road is a less constrained, more complex and computationally expensive problem.", "To find a motion primitive, the authors in [32] used a graph-based search algorithm and a discrete kinematic vehicle model to avoid collisions and accidents.", "Decisions in an AV are taken based on the planner, which creates collision-free waypoints to reach the destination.", "This planner module is capable of finding an optimal solution by reducing the computational cost and the distance travelled by the vehicle.", "The planner module in navigation is divided into a global and a local planner.", "The global planner considers prior knowledge of the environment and static obstacles, whereas the local planner adjusts the path by considering dynamic obstacles.", "The authors in [33] developed the Time Elastic Bands (TEB) method for local planning and the Dijkstra algorithm for global planning.", "They collect information from all the tasks in the navigation process and provide these solutions as input to the global and local planner algorithms.", "The two main components in the planner are sensor analytics and the path finder.", "The former combines the uncertainty of all sensors and evaluates the positioning and performance for a given location and time.", "The path finder utilizes the sensor performance and defines the optimal and shortest path [34] to achieve safety on the road.", "AV navigation is highly influenced by uncertainties in data, which leads to a high risk of 
accidents and could create life-threatening situations.", "These uncertainties could come from static sources which do not vary over time, such as an AV driving through a tunnel.", "Others come from dynamic sources such as weather conditions, which vary with location, time and severity.", "For example, the quality and noise of an image depend on the amount and intensity of rain.", "These sensor uncertainties must be considered while designing the global planner so that the plan can be modified accordingly.", "The authors in [35] designed a fuzzy-logic-based uncertainty indicator and formulated a cost function for selecting the optimal routes with the lowest sensor uncertainty, thus ensuring safe navigation on the road." ], [ "How B5G help (with Related work)", "The devices used for receiving signals in AVs are sensors, radar, lidar and cameras.", "Some of the challenges faced by these devices are discussed below.", "Camera devices are not able to detect objects when the climatic conditions are poor.", "In radar, the main problem is to differentiate object types because of its longer wavelength.", "Lidar is useful for detecting the objects surrounding the vehicle, but the laser beams do not provide accurate results in bad weather conditions such as fog and snow.", "Also, it is expensive compared to radar and cameras.", "The next challenge in AVs is constructing maps based on the inputs received from lidars and cameras, which is a difficult and time-intensive process.", "With respect to the safety parameter, the main issue is the inability to predict agent behaviour.", "It is very hard for an AV to sense the behaviour of other objects and infrastructure on the road; in particular, predicting human error and behaviour is a tedious task.", "There are also legal issues, such as who is going to take responsibility in case of an accident.", "So the basic challenges in AVs without using 5G are summarized as follows:", "Detecting objects in poor climatic conditions.", "Differentiating the types of objects.", "Inaccurate results in fog or snow.", "Constructing maps from the signals received from devices, which is a time-consuming task.", "Inability to predict agent behaviour on roads.", "Legal issues related to accidents." ], [ "Summary", "The main challenge in autonomous navigation is detecting and localizing the obstacles at a specific time and adjusting the path accordingly to avoid accidents and navigate to the destination safely.", "It also requires efficient, fast and dynamic object detection and path planning algorithms to achieve safe and accident-free navigation of vehicles on the road." 
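As a toy illustration of the planning ideas surveyed above, the sketch below runs a Dijkstra-style global planner over an occupancy grid, in the spirit of the global planner of [33], with each cell's traversal cost inflated by a sensor-uncertainty term, loosely following the uncertainty-aware route cost of [35]. The grid, weights and uncertainty values are illustrative assumptions, not data from the cited works.

```python
# Toy uncertainty-aware global planner: Dijkstra over a grid where per-cell
# traversal cost = distance + penalty proportional to sensor uncertainty.
import heapq

def plan(grid_uncertainty, start, goal, w_uncertainty=5.0):
    rows, cols = len(grid_uncertainty), len(grid_uncertainty[0])
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid_uncertainty[nr][nc] is not None:
                step = 1.0 + w_uncertainty * grid_uncertainty[nr][nc]  # distance + uncertainty penalty
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node in prev:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

# None marks an obstacle cell; other values are per-cell sensor uncertainty in [0, 1].
grid = [[0.0, 0.0, 0.9],
        [None, 0.0, 0.9],
        [0.0, 0.0, 0.1]]
print(plan(grid, start=(0, 0), goal=(2, 2)))  # route avoids the high-uncertainty column
```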
], [ "Object detection/ Collision Avoidance", "Autonomous vehicles generate massive amounts of data, such as video streaming, sensor data, object detection, and lane condition.", "Providing cloud services to autonomous vehicles is considered as a major challenge in terms of latency and security.", "In [36], the authors of proposed a software-defined networking architecture at the network edge for 5G-enabled vehicular networks in order to overcome the latency problem.", "Later, the study presented a computer vision application to perform object detection and lane detection for autonomous vehicles.", "Fig.", "REF depicts object detection in autonomous vehicles.", "The lane detection algorithm performs operations such as vehicle position and path detection with high accuracy.", "Gaussian filter techniques are used to reduce granularity in order to estimate the correct lane with greater accuracy.", "The object detection algorithm detects the object; however, the authors used a limited set of traffic signals and objects in this work.", "The authors used Haar feature-based cascade classifiers for object classification and detection.", "The experimental results show that the proposed 5G-enabled vehicular network has a round trip time of 49.2 ms to transfer a message from the vehicle to the server.", "However, based on our observations, the authors did not demonstrate an accuracy rate in detecting the object and lane.", "Localization is one of the most important aspects of autonomous vehicles for avoiding collisions and ensuring safe navigation.", "Although GPS provides an accurate position, its accuracy is limited to 10 metres higher or lower.", "The precise position of autonomous vehicles must be no more than 5 metres.", "The use of 5G communication technology for cooperative localization (CL) provides an accurate vehicle position.", "The work in [37] introduces a CL approach for multi-modal fusion between autonomous vehicles.", "The proposed model's main contribution was the introduction of a Laplacian Graph Processing framework, in which all vehicles act as vertices in a graph and communication paths act as edges.", "Using the Laplacian Graph Processing framework, the experimental results show that the proposed model has a faster response time and a higher GPS accuracy rate.", "The Intelligent Transportation System (ITS) is a combination of manual and self-driving vehicles.", "The ITS faces some challenges in autonomous vehicles, particularly in object detection and accurate path prediction.", "These obstacles raise safety and traffic concerns.", "Furthermore, autonomous vehicles share information with signals and act adaptively based on the situation, whereas humans in manual vehicles act more appropriately based on the situation.", "To address the issues associated with the mixture of manual and autonomous vehicles, the authors in [38] proposed a deep learning model in a 5G enabled ITS.", "During this process, a natural-driving dataset and a driving trajectory dataset are fed into long short term memory networks.", "The softmax function computes the probability matrix of each lane change intention.", "The final lane change intention probability is then assessed in the decision layer at an accuracy rate of 85%.", "To detect nearby circumstances, AVs rely primarily on radar sensors, as well as light detection and ranging sensors.", "However, the reliability of such high-end sensors is limited over longer distances or when a vehicle enters an area with low visibility.", "One alternative 
solution is to enhance data exchange between vehicles by using roadside equipment.", "However, there are some risks associated with data sharing, such as when a malicious vehicle intentionally sends fake data in order to manipulate the receivers or when faulty sensors communicate incorrect data.", "The AVs will be trapped if they trust the data provided by the source vehicle, causing them to switch lanes or accelerate faster.", "As a result, there may be significant risks to human life.", "Because the decision-making process of autonomous vehicles is highly dependent on shared data and sensors, it is essential that the vehicle detect and filter out incorrect information.", "Motivated by such challenges, [39] introduced a novel approach to support a host vehicle in verifying the motion behavior of a target vehicle and then the truthfulness of sharing data in cooperative vehicular communications.", "Initially, at the host vehicle, the detection system recreates the motion behavior of the target vehicle by extracting the positioning information from the V2V received messages.", "Furthermore, the next states of that vehicle are predicted based on the unscented Kalman filter.", "Unlike prior studies, the checkpoints of the predicted trajectory in the update stage are periodically corrected with a new reliable measurement source, namely 5G V2V multi-array beamforming localization.", "If there is any inconsistency between the estimated position and the corresponding reported one from V2V, the target vehicle will be classified as an abnormal one." ], [ "Summary", "Autonomous vehicles generate massive amounts of data 5G-enabled vehicular networks helps to overcome the latency problem.", "The lane detection algorithm performs operations such as vehicle position and path detection with high accuracy.", "The use of 5G communication technology for localization provides an accurate vehicle position with optimal latency and reduces the communication overhead.", "Some of the risks associated with data sharing, as the malicious vehicle intentionally sends fake data to manipulate the receivers.", "The AVs will be trapped if they trust the data provided by the source vehicle, causing them to switch lanes or accelerate faster.", "As a result, there may be significant risks to human life.", "Because the decision-making process of autonomous vehicles is highly dependent on shared data and sensors, it is critical that the vehicle detect and filter out incorrect information as quickly as possible." ], [ "Introduction", "URLLC is specifically designed to handle the stringent requirements on reliability and latency of critical packet transmission for autonomous driving in connected vehicles.", "In one-way, highway vehicular network performance is enhanced by using the joint resource allocation of the eMBB and URLLC traffics.In AV, vehicles are interconnected with each other to transfer information.", "To achieve these transfer of information among vehicles and infrastructure present on the road, these connections need to be dynamic with ultra-low latency and ultra high reliability." 
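The reliability figures that follow in the next subsection can be made concrete with a short, purely illustrative calculation: under the common URLLC diversity idea of duplicating a packet over independent transmissions (one of the building blocks surveyed in [41]), the residual failure probability shrinks geometrically. The per-link error rate below is an assumed value, not a figure from the cited works, and the independence of the copies is itself an idealization.

```python
# Illustrative only: how packet duplication (diversity) pushes reliability toward
# the URLLC target discussed in this section. The per-link error rate is an
# assumed value, not a measurement from any of the surveyed works.
TARGET_FAILURE = 1e-5        # URLLC target: success probability of 1 - 10^-5
per_link_error = 1e-2        # assumed probability that a single transmission fails

copies = 1
while per_link_error ** copies > TARGET_FAILURE:
    copies += 1             # add one more independent copy of the packet

print(f"independent copies needed: {copies}")
print(f"residual failure probability: {per_link_error ** copies:.1e}")
```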
], [ "Existing Challenges/Limitations", "Ultra Reliable Low Latency Communications (URLLC) provides various services to the applications like Autonomous vehicles and Industrial IoT.", "It requires strict latency with 99 percentage of reliability.", "It is used in the applications that requires low transmission latency of 1 ms and ultra-high reliability of 1-10-5 success probability [40].", "URLLC in AV needs a target latency of 1 millisecond and end-to-end security with 99 percent of reliability.", "Thus type of communications will more useful for autonomous driving to share and receive information both with neighboring vehicles and road infrastructure.", "In fully automated with no human intervention system, based on the information received the vehicles has to perform these tasks like automated overtaking of vehicles, collision avoidance, smart decisions, prioritizing the task like giving more importance to ambulance vehicles.", "All these task requires the high level of reliability and low latency which is provided only by URLLC.", "However, onboard processing or cloud computing is not sufficient to store the massive amount of data generated from high resolution sensors and cameras.", "Inorder to provide better safety and reliability than the human driving, processing real time traffic must be within a latency of 100ms.", "Storage resources in onboard processing are very limited.", "For example, GPU required for low latency have high power consumption, further it requires cooling to satisfy the thermal constraints which would significantly degrades the fuel efficiency of the vehicle.", "Also, SSD storage device is also not a good option as it will be filled within hours to store the sensor and device data.", "Onboard processing can be convenient for the communication between passenger and vehicle but not for the V2V and V2I.", "Cloud computing is also not sufficient to provide low latencies as in Internet of Vehicles there will be communication delay in between the server and the client." 
], [ "How B5G help (with Related work)", "[41] paper discusses about the various building blocks of URLLC in wireless communication systems for supporting in framing, access topology and use of diversity.", "The authors in [42]have addressed the problem of resource scheduling in URLLC using puncturing approach.", "The time domain is divided into equally spaced time slot with duration of 1 ms and each time slot is further divided into minislots in order to achieve the latency requirements of URLLC.", "The URLLC traffic received in each time slot is placed immediately to transmit in the next mini slot using the puncturing method in eMBB transmissions.", "The main objective of the user is to increase the eMBB users while maintaining the URLLC constraints.", "The Cumulative Distribution Function (CDF) is used to change the resource allocation problem into an optimization problem with a deterministic constraint.", "In [43], to satisfy the strict latency constraints each time slot is divided into minislots and currently received URLLC transmission is promptly allocated in the next immediate minislot by puncturing the on-going eMBB traffic.", "The reliability of URLLC is ensured by deploying guard zone around the vehicle receivers and even eMBB transmission is also prohibited inside the guard zone.", "Association probability of the vehicle receivers for URLLC is captured and finally the coverage of vehicle-to-vehicle links and Vehicle-to- Infrastructure are analysed.", "Some of the solutions to overcome the challenges faced using URLLC in AV are discussed here.", "Storage and computing resources can be deployed at the wireless network edge which includes edge caching, computing and AI using Mobile Edge Computing.", "MEC combined with Baseband Unit (BBU) servers installed in radio access points along the roadside or base stations.", "Next, reliability and redundancy must be achieved in all the levels such as application, transmission, software and networking.", "A cloud-native BBU server is used to virtualize these levels in wireless radio network that ensures both reliability and redundancy and makes a robust system.", "Finally, Network slicing can provide different slice to satisfy different requirement, functions and configuration.", "One slice can provide low security and low reliability for mMTC services whereas another slice can provide high security and reliability like for URLLC." 
], [ "Summary", "uRLLC is a key technology to provide communication between AV and 5G network.", "Real-time decision has to be taken in fraction of seconds to have safe and accident-free drive.", "This is achieved by low latency feature of 5G network that enables vehicles to receive information at turbo speed.", "So, faster communication, real-time connectivity and low latency are essential factors for increasing growth in 5G enabled AV" ], [ "Introduction", "Machine type communications (MTC) play a key role in 5G systems.", "MTC can be classified into massive Machine Type Communication (mMTC) and ultra-reliable Machine Type Communication (uMTC).", "mMTC provides connectivity to a large number of devices in which traffic profile is typically a minimum amount of data.", "It achieves minimum latency and high throughput but the main concern is about the optimal power utilisation of these connected devices.", "mMTC provides wireless connectivity to billions of devices with low complexity and low-power type of devices.", "Some of the use cases are automated industries, remote surgeries and smart metering.", "On the other hand uMTC is about connecting adequate wireless link for network services which are widely used in V2X and industrial control applications.", "The main features of mMTC are as follows.", "Small packets are transmitted from devices with bytes and it connects large number of devices per cell.", "Sporadic user activity, uplink transmission and low user data rates of around 10 kb/s per user.", "Finally, optimal power usage and long battery life is also achieved by using mMTC." ], [ "Existing Challenges/Limitations", "The existing communication technologies [44] focus only on some specific applications.", "They are not able to meet the latency and reliability requirements of applications such as connected cars, automated vehicles and industrial automation.", "The traditional technologies evolved from Long-Term Evolution (LTE) cannot be applied to IoT devices.", "So, these technologies are not able to optimise or handle IoT specific features like sporadic transmission, optimization of power and uplink centric transmission and so on.", "These factors motivated to shift from existing technologies to mMTC.", "The application of 5G technology in Various communication like V2V, V2I and V2X communication are inevitable as shown in Fig.", "REF .", "V2V communication generates the network by connecting different vehicles using mesh topology[45].", "Based upon the number of hops, communication can be classified as single-hop (SIVC) which is used for short range application and multi-hop vehicular communication (MIVC) [46] used in traffic monitoring.", "Vehicle interacting with infrastructure helps to detect traffic lights lane markers and parking meters [47].", "V2X communication interact with all the entities such as roadside, grid, devices and pedestrians[48].", "This type of communication prevents road accidents and to alert the passengers before accident [49]" ], [ "How B5G help (with Related work)", "In [50],authors have proposed the Hybrid Hovering Position Selection (HHPS) algorithm to determine the hovering positions of Unmanned Aerial Vehicles(UAV) which minimizes the power consumptions of machine type communication devices.", "Inorder to optimize the latency of communication devices, cuckoo search algorithm was proposed for trajectory planning.", "Furthermore, energy consumption and throughput of UAV is optimized.", "Based on the priorities given, efficiency of data and computing 
services is also optimized.", "Bockelmann et al.", "[44] have presented a 5G system concept developed in the FP7 METIS project, in which the use cases well suited to high data rates are summarized by the term 'Extreme Mobile Broadband' (xMBB).", "The overhead of the message exchange required before the transmission of the data payload affects the energy efficiency of MTC devices.", "Hence, less frequent and shorter transmissions preserve energy.", "Therefore, MAC protocols coupled with physical-layer approaches enable devices with long battery lives.", "In [51], orthogonal frequency division multiplexing with index modulation is proposed to mitigate the inter-carrier interference caused by asynchronous transmission in uncoordinated mMTC networks.", "Data transmission is performed through the indices of the active subcarriers.", "A subcarrier mapping scheme, called the inner subcarrier activation scheme, is proposed to further reduce the interference from adjacent users in asynchronous systems.", "In [52], the authors proposed a 5G network design based on Non-Orthogonal Multiple Access (NOMA) to meet the latency and reliability requirements of V2X communication.", "One of the critical challenges for next-generation wireless communication is to satisfy the high demand of mMTC systems, which perform random transmissions between the base station and the machine users.", "The lack of coordination with the base station therefore causes inter-carrier interference.", "Researchers face a significant challenge in providing services to massive numbers of asynchronous machine users.", "The activation ratio and the sub-block size can be optimized to cluster users and resolve conflicting users according to their requirements.", "Autonomous vehicles in practice face various challenges such as real-time data analytics, software heterogeneity, validation, verification and latency.", "To address these challenges, researchers across the globe are trying to provide solutions using 5G-based testbeds." ], [ "Summary", "V2X and 5G connectivity help an AV to perceive objects and obstacles around corners and beyond its line of sight.", "Connectivity between cars and infrastructure provides look-ahead awareness among vehicles, for example automatically reducing speed in slow-moving traffic areas.", "By the time the vehicle reaches the signal, the traffic may already have cleared, thereby reducing the waiting time at the traffic signal.", "'Traffic Light Information' is a classic V2I case study initiated by Audi in Europe.", "All of these scenarios require extremely fast connectivity.", "This can be achieved by a 5G network that enables fast processing and proactive decision making.", "To summarize, AVs can become a reality with the application of 5G technologies." ], [ "Introduction", "The eMBB service is used in particular to enhance the Quality of Experience (QoE) in terms of bandwidth for in-vehicle applications."
], [ "Existing Challenges/Limitations", "One of the significant way in 5G to deliver wireless broadband to previously unreached areas is through the technology Fixed Wireless Access (FWA) network.", "Globally, FWA has gained its importance in developed and developing countries and is expected to expand exponentially from 2018-2025.", "This FWA creates a platform for eMBB for wide coverage using higher-spectrum bands.", "Some of the use cases of eMBB are as follows: Broadband everywhere: FWA technology can provide wide coverage globally with minimum speed of 50 Mbps.", "Public transportation: Broadband access used in high-speed trains and public transport systems Hot spots: Enhancing broadband coverage in densely populated areas and in high rise buildings Large-scale events: Enabling high speed of broadband data where thousands of people are gathered in one place for any kind of big event.", "Smart offices: Delivering high-bandwidth of data connections to thousands of users even in the environment with heavy data traffic.", "Enhanced Multimedia: Provides high quality video streaming and real-time content over wide coverage areas." ], [ "How B5G help (with Related work)", "In [42], authors have addressed the problem of resource scheduling problem in eMBB traffics.", "Initially the resource blocks are assigned to the beginning of each time slot based on the channel state and the previous average data rate upto the current time slot of each eMBB users.", "Two dimensional Hopfield Neural Network and the energy function is used to solve the resource allocation problem.", "Then, chance constraint problem is applied to maximize the eMBB data rates.", "The authors in [53] targets more number of users in data transmission in AV.", "Non-Orthogonal Multiple Access (NOMA) is proposed to optimize the distribution in unicast/multicast scenarios.", "The complexity of the algorithm is measured and compared using Time Division Multiplexing (TDM).", "In order to reduce the complexity of the algorithm, two solutions are proposed.", "First, the numbers of injection levels are reduced and second one is choosing the smart algorithm that selects the optimum injection levels.", "In [43], authors have considered eMBB and URLLC to be the important prerequisites of smart intelligent transportation systems.", "The eMBB traffic is scheduled at the boundary of each time slot for data transmissions.", "During the transmission interval, random arrivals of URLLC traffics are allowed." ], [ "Summary", "URLLC and mMTC works together with eMBB to fulfill the needs in new wireless networks to provide the facilities in applications like healthcare, manufacturing, military and emergency response.", "These three features in 5G provides the solutions for the issues with respect to bandwidth, density and latency in applications with restricted LTE's capabilities like autonomous vehicles, smart city and augmented reality.", "Table: Benefits and challenges of Technical Aspects of Autonomous Vehicles." ], [ "Impact of 5G and B5G technologies on AV", "In this section we discuss the impact of some of the prominent technologies in 5G and B5G on AV." 
], [ "Multi-access Edge Computing (MEC)", "MEC extends the capabilities of cloud computing and Information Technology environment by bringing them closer to the network or to the edge of the network, i.e.", "to the end users.", "By bringing in these capabilities to the network edge, the congestion, latency are reduced and also the applications run faster.", "Real-time analytics for applications such as traffic analysis, big data analytics in social media, smart cities, etc.", "can be realized by MEC [54], [55].", "MEC is a key enabling technology in 5G and B5G technology as MEC enabled B5G networks offer high-bandwidth, low latency and real-time access to the resources of the radio network.", "MEC can help the network operators to support wide range of innovative services and also integration of IoT based applications in B5G [56].", "Enormous amounts of data will be generated from AV.", "This data can be used for recognition of several external and road features such as, pedestrians, traffic signs, lanes, road condition, etc.", "When multiple AVs cooperate for tasks such as collision avoidance and lane merging, data processing from the vehicles at global perspective is required.", "Enormous computing power is required to process several computer vision operations that are essential for AD like extraction of features from images to perform the analytics in real-time.", "Hence, MEC enabled 5G services plays a major role in fast communication, fast processing of complex data in real-time analytics which allows the offloading latency-sensitive and computing-intensive tasks to the edge devices as depicted in Fig.", "REF [57], [36], [58].", "Rest of this section discusses some of the recent literature on MEC enabled B5G for AV.", "Zhou et al.", "[59] proposed a novel MEC enabled 5G architecture for vehicular networks.", "They discuss how the proposed architecture can accommodate V2I and V2V communication with guaranteed low packet delay and high scalability.", "The proposed architecture addressed the issue of mobility management on MEC for AVs.", "The IP-handoff procedure is made transparent and seamless by the application of distributed mobility management.", "The authors used NS3 tool to conduct proof of concept simulation.", "Fabio et al.", "[60] discussed the importance of MEC in 5G-enabled connected cars.", "They have provided several technical aspects and practical use cases on MEC-enabled 5G for V2X.", "In a similar work, Claudia Campolo et al.", "[61] proposed an architecture MEC enabled 5G for V2X application.", "The authors mainly focus on service migration for V2X applications to reduce the duration of non-availability of services provided by the virtualized migrating instance (service downtime).", "Sokratis Barmpounakis  et al.", "[62] proposed a an architecture for 5G systems that uses MEC and NFV to select the computing resources for V2X applications.", "A novel algorithm, VRU-Safe, is proposed by the authors that operates on top of the proposed architecture to identify and predict road hazards like vehicle collisions.", "Coronado et al.", "[63] proposed a novel MEC and NS enabled 5G for ACV.", "The proposed architecture enables features such as object detection and lane tracking to be offloaded to the MEC without affecting the effectiveness of 5G.", "The authors have developed an application named as road data processing to analyze the videos streams from the AVs.", "An ML model is used for lane tracking, detecting traffic and road signs.", "The MEC server will instruct the AV 
with an appropriate command to take the required action.", "Lee et al.", "[64] have proposed simulation as a service (SIMaaS) to offload simulations to the cloud infrastructure, based on computational offloading in a MEC-enabled 5G platform for AVs.", "As a use case, the authors conducted Monte-Carlo simulations using SIMaaS for optimal tollgate selection on a highway for AVs.", "Shie et al.", "[65] proposed an algorithm based on distributed heuristics for 5G-V2X to solve the motion planning problem of AVs travelling in industrial parks.", "The proposed algorithm focuses on avoiding collisions of AVs at intersections and ensuring the safety of pedestrians.", "To achieve these goals, mutual cooperation and seamless communication among the AVs are required.", "The authors employed a 5G uRLLC slice to ensure highly reliable and fast communication, and a distributed heuristic algorithm for the mutual cooperation of the AVs.", "The proposed solution passes the information about pedestrians and vehicles crossing the intersection, downloaded from the MEC, to the vehicle that is closest to the intersection.", "Rasheed et al.", "[66] proposed an application-aware hierarchical offloading scheme for a MEC-5G enabled AV architecture.", "In the proposed architecture, the network is divided into three layers based on application requirements, which results in an efficient network and quick responses that meet those requirements.", "To handle each application at the appropriate layer according to its individual computation and latency requirements, every application is treated independently, with its complexity, computation requirements and data rate deciding the computation layer.", "Huisheng Ma et al.", "[67] proposed a prototype for cooperative-AD-oriented MEC-enabled 5G-V2X.", "The prototype is based on an experimental platform for the next-generation RAN and a cooperative AD vehicle platoon, and a MEC server is used to dynamically provide a high-definition 3D map service.", "Lian et al.", "[68] proposed a scalable MEC-enabled 5G infrastructure for unmanned vehicular systems.", "The proposed system provides high-precision maps through the offloading capacity of MEC-enabled 5G, through which the unmanned vehicles can sense the environment and thereby enable AD.", "Sabella et al.", "[69] have proposed and implemented a MEC-based infotainment service for AVs in a smart road environment over a 5G network.", "MEC reduced the usage of the backbone bandwidth.", "It also reduced latency by serving frequently accessed content closer to the users.", "In addition, the speed of content transfer between the AVs was significantly increased by the 5G network." ], [ "Summary", "Many researchers have used MEC-enabled 5G to successfully address issues such as latency and computational offloading in applications such as object detection, infotainment services and lane tracking.", "However, several issues, such as standardization of protocols, heterogeneity in computing and communication, and privacy and security, still have to be addressed to fully realize the potential of MEC-enabled 5G in AVs."
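The computation-offloading decision that recurs throughout the MEC works summarized above can be sketched in a few lines: compare the onboard execution time of a task with the time to ship its input to the edge, run it there, and return the result. The task size, CPU speeds and link rates below are illustrative assumptions only, not figures from the surveyed papers.

```python
# A minimal sketch of the offloading decision enabled by MEC over 5G:
# run a task locally, or send its input to the edge and run it there.
def local_latency_ms(task_cycles, onboard_hz):
    return 1e3 * task_cycles / onboard_hz

def offload_latency_ms(input_bits, uplink_bps, task_cycles, edge_hz,
                       result_bits, downlink_bps):
    uplink = 1e3 * input_bits / uplink_bps
    compute = 1e3 * task_cycles / edge_hz
    downlink = 1e3 * result_bits / downlink_bps
    return uplink + compute + downlink

# Example: object detection on one camera frame (all numbers assumed)
task = dict(task_cycles=2e9, input_bits=8e6, result_bits=1e4)   # 2 Gcycles, 1 MB frame
local = local_latency_ms(task["task_cycles"], onboard_hz=5e9)
edge = offload_latency_ms(task["input_bits"], uplink_bps=200e6,
                          task_cycles=task["task_cycles"], edge_hz=40e9,
                          result_bits=task["result_bits"], downlink_bps=200e6)
print(f"onboard: {local:6.1f} ms | MEC offload: {edge:6.1f} ms")
print("decision:", "offload to MEC" if edge < local else "run onboard")
```

With the assumed high-rate 5G link and a much faster edge server, offloading wins despite the extra transmission steps, which is the basic argument behind MEC-assisted perception.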
], [ "Introduction", "Network slicing is considered to be an important enabler of softwarized 5G and beyond communication networks [70].", "Network slicing enables multiple end-to-end logical networks on the same physical and virtual network resources [71].", "This supports diverse communication requirements of emerging applications, such as, ACVs, which cannot be satisfied by a generic ”one-fits-all” type of network architecture in pre-5G communication networks [72].", "The network slicing architecture consists of three functional layers.", "They are service instance layer, network slice instance layer, and resource layer, as presented in Fig.", "REF  [71].", "The functionality of these three layers are coordinated by the network slice orchestrator [73].", "Each network slice consists of network functions that are customized to satisfy the communication requirements of different verticals, applications and use-cases.", "For instance, in case of ACVs, a network slice with large bandwidth can be allocated for vehicle infotainment while a network slice with ultra low latency and high reliability can be assigned for autonomous driving, as depicted in Fig.", "REF .", "Figure: Network slicing for ACVs" ], [ "Related Work", "Many research work consideres network slicing as an enabler of ACVs.", "For instance, Campolo et.", "al.", "[74] presented a vision for 5G network slicing to facilitate V2X services by partitioning the resources in vehicular devices, radio access network, and the core network.", "This work emphasizes the importance of network slicing towards guaranteeing the performance of V2X services through dedicated V2X slices.", "In addition, a model for network slicing based vehicular networks is presented in  [75] where dedicated network slices are allocated for autonomous driving and infotainment.", "The autonomous driving slice is concerned on communicating safety messages while the infotainment slice is utilized for video streaming.", "Furthermore, vehicles are assigned to different clusters and slice leaders are allocated for each cluster to facilitate V2V communication and safety information with high reliability and low latency.", "In similar vein, a network slicing based communication model for diverse traffic requirements in vehicular networks is proposed in [76].", "This work also proposes two slices for autonomous driving and infotainment.", "The quality of infotainment slices are improved through V2X links to improve the packet reception rate.", "Moreover, [77] proposes a framework for 5G network slicing to control bandwidth with full-flow-isolation in network slicing for vehicular systems.", "This method incorporates heterogeneous radio access technologies, such as cellular and IoT communication (e.g.", "LoRaWAN) to generate end-to-end network slices.", "The proposed method also relies on distributed MEC and cloud computing to facilitate the dynamic deployment of virtual network functions required for network slices.", "In addition, a MEC-based online approach towards performing tasks such as power allocation, coverage selection and network slice selection of vehicular networks using deep reinforcement learning is proposed in  [78].", "This algorithm is also proved to be resilient against the high mobility in vehiclular networks.", "Furthermore, [79] proposes a method to tailor network slicing for vehicular fog-RANs.", "A smart slice scheduling mechanism formed as a Markov decision process is considered for dynamic resource allocation for network slices while overcoming 
the limitations caused by the dynamic nature of vehicular fog resources and vehicular movement." ], [ "Summary", "Network slicing is yet to mature as a technology for realizing fully functional ACVs.", "The dynamic communication requirements of ACVs, ranging from time-critical communication for safety to high-volume data requirements for vehicle infotainment, can be satisfied through network-slicing-enabled 5G networks.", "However, the dynamic nature of vehicular movement and vehicular resources makes it challenging to deploy network slicing for vehicular networks.", "Several research works have attempted to address this problem through smart resource allocation algorithms.", "Furthermore, utilizing generic slices in various combinations to satisfy the demands of vehicular networks can also be explored.", "These methods should also evolve from simulations into fully fledged, functional technologies.", "In addition, the security and privacy aspects of network-slicing-based vehicular networks should be considered in order to ensure passenger and road safety, data security, and the privacy of ACV users." ], [ "Introduction", "Recent advancements in the telecommunication and automotive industries have paved the way for innovative communication and intelligent sensing.", "This has, in turn, led to the development of next-generation ACVs.", "Nowadays autonomous vehicles communicate wirelessly with other vehicles and pedestrians to share critical information for collision mitigation.", "The vehicles also coordinate with ITS entities over various network infrastructures to gain extra awareness of road hazards and guidance on speed while travelling.", "This clearly leads to a more efficient traffic flow.", "To achieve these goals, secure and low-latency communication is the need of the hour.", "The major challenges faced during deployment are the heterogeneity of radio access technologies, deployment inflexibility, network fragmentation, and inefficient utilization of resources.", "These bottlenecks need to be handled efficiently to realize a sound vehicular networking architecture for ACVs.", "This subsection discusses recent advancements in 5G-enabled SDN vehicular networking architectures, which provide a highly agile infrastructure for rapid and secure communication among ACVs, as illustrated in Fig.", "REF .", "Figure: Impact of 5G enabled SDN on ACVs."
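Before turning to the SDN-specific literature, the slice-per-service idea from the network slicing subsection above can be illustrated with a minimal matching routine that assigns each ACV service to a slice whose guarantees cover its requirements. The slice parameters and the service requirements below are invented for illustration and do not come from the cited works.

```python
# Sketch of the slice-per-service idea: safety traffic lands on a URLLC-like slice,
# infotainment on an eMBB-like slice, background telemetry on an mMTC-like slice.
SLICES = {
    "urllc-driving":     {"max_latency_ms": 5,    "min_reliability": 0.99999, "min_rate_mbps": 10},
    "embb-infotainment": {"max_latency_ms": 100,  "min_reliability": 0.99,    "min_rate_mbps": 200},
    "mmtc-telemetry":    {"max_latency_ms": 1000, "min_reliability": 0.99,    "min_rate_mbps": 0.1},
}

def select_slice(latency_ms, reliability, rate_mbps):
    """Return the first slice whose guarantees satisfy the service requirements."""
    for name, s in SLICES.items():
        if (s["max_latency_ms"] <= latency_ms
                and s["min_reliability"] >= reliability
                and s["min_rate_mbps"] >= rate_mbps):
            return name
    return "no suitable slice"

print(select_slice(latency_ms=10,  reliability=0.99999, rate_mbps=5))    # safety messages
print(select_slice(latency_ms=150, reliability=0.95,    rate_mbps=50))   # video streaming
```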
], [ "Related Work", "In [80], Mahmood et.", "al.", "proposed an architecture named SDHVNet for handling the heterogeneous vehicular network using SDN.", "The architecture primarily consists of two planes namely infrastructure plane and control plane.", "The infrastructure plane handles all the vehicles, vehicle users, road users like pedestrians, infrastructure on the roads like access points, traffic signals, etc.", "The control plane is facilitated by the various application programming interfaces like OpenFlow.", "This plane is responsible for all sorts of network related functionalities like virtualization of vehicular environment, gathering status of SDN switches, dynamic topology and frequency of service request response.", "The control plane includes a cache manager and handover decision manager that is responsible for providing a seamless interconnection among the ACVs and satisfy the safety as well as critical requirements.", "They also preserve the resource utilization by preventing handover failures.", "In [81], Goudarzi et.", "al.", "has formulated a three-tier framework consisting of cloud layer, edge computation layer and device layer for calculating the capacity of elastic processing and dynamic route for the purpose of monitoring the ACVs in their respective environment.", "They have also proposed a resource allocation policy combining SDN and edge computation based on reinforcement learning (RL).", "The proposed methodology improves the performance of the network layer.", "SDN is specifically used for managing the connectivity as a whole and to dynamically handle the computation in the edge devices.", "In [82], Garg et.", "al.", "utilized SDN and formulated a framework for providing end-to-end security as well as privacy in 5G enabled ACVs.", "The major contribution of this framework is that it acts as an optimized network management layer.", "The intrusions in case of high speed ACVs are also identified potentially by this framework.", "The intrusion detection module is designed using a multi-objective evolutionary algorithm and a dimensionality reduction scheme.", "This integrated scheme improves the performance of the intrusion detection module.", "Also the framework consists of a module which authenticates the environment before the commencement of communication.", "SDN is utilized for reducing the latency in a high density traffic [83].", "The ACVs are grouped into a cluster based on the real-time conditions of the road.", "The clustering scheme follows a double head concept for each cluster.", "This enhances the quality of the trunk link thereby increasing the throughput.", "Recently, Garg et.", "al.", "utilized SDN and edge computing for solving the primary challenges of ITS like high level of mobility, minimum latency, high quality of service and other real time services [84].", "The author has proposed a framework named SDN-DMM based on EC, SDN and distributed mobility management (DMM).", "DMM helps in solving the routing issues and hence improves the quality of service in ACVs.", "The framework proposed proved to have an improvement in end-to-end delay of around 15.9 percent.", "This improvement in performance was achieved by utilization of multi-objective evolutionary algorithm while communication between the edge and the cloud.", "This enhanced the overall communication latency and also reduced overheads by better utilization of bandwidths." 
], [ "Summary", "The rapid advancements in autonomous driving would result in adoption of the ACVs in near future.", "This would surely challenge the current ITS and its existing solutions.", "The major focus for facing these challenges should be on providing support for mobility, reducing the latency, increasing connectivity and providing higher intelligence.", "5G enabled SDN could be utilized for handling the communication overheads, reducing the latency, reducing the overall throughput and also reducing the delay hence improving the mobility.", "5G enabled SDN architectures are recently designed by researchers for handling the security concerns like intrusion detection.", "Though researchers have focused on utilizing SDN for finding few solutions for the above mentioned challenges, there are still significant open issues like analysis of energy utilization with respect to communication and data, real-time verification and validation, quality of everything (QoE), reduction in overheads and end-to-end delay." ], [ "5GNR/Physical layer stuff", "To support the potential applications of future intelligent and autonomous vehicles in the coming B5G era, more demanding requirements have to be supported.", "Transmission of extremely large quantity of the data is one of the major requirements in vehicular communications in B5G[9].", "To meet the growing traffic demands, B5G is supposed to provide a transmission rate up to 1 terabit/second.", "There exist many technologies that can improve the efficiency of spectrum for B5G.", "One of the most used technologies among them is massive multiple-input-multiple-output (MIMO), that can have more than 100 antennas.", "To increase the bandwidth of the spectrum, the millimetre wave (MMWAVE) is considered in B5G [85], [86].", "Table: Suitability of various vehicular communication technologies for vehicular communication use-cases Vehicles in the next generation will be equipped with radar, cameras, wireless technologies, GNSS, and other sensors that can support autonomous driving.", "The requirement of line-of sight propagation limits the functionality of the aforementioned sensors.", "By adding the cellular V2X technology in the vehicles, this issue can be overcome.", "The functionalities of sensors can be complemented by the cellular V2X technology by exchanging of the sensory data among the vehicles that can provide situational awareness to the drivers at a higher level.", "Cellular V2X has attracted significant interest from both academia and the industry recently.", "One of the promising technologies that can be used to realize autonomous driving and communications in V2X is 5G New Radio (5GNR) [88].", "From a communication point of view, 5G should support the following three broad categories of services: URLCC, eMBB, and mMTC [89].", "eMBB, that aims to provide an uplink data rate of 10 Gbps and for downlink channels, data rates of at least 20 Gbps, plays a pivotal role for several functionalities in autonomous cars such as several multimedia services, video gaming/conferencing within the car, downloading of high-precision maps, etc [90].", "For constantly sensing and learning the changes in the environment through the sensors that are equipped within the cars/infrastructure by the future driver-less vehicles, mMTC will play a very important role [91].", "High reliability and over the air round trip time, which are crucial for autonomous driving can be realized by URLCC [92].", "The 5G NR physical layer has to deal with diverse data requirements and 
challenging V2X channel conditions: 1) Extremely dynamic mobility, in which both slow-moving vehicles with speeds of up to 60 kilometres/hour and high-speed trains/cars travelling at more than 500 kilometres/hour have to be handled.", "More time-frequency resources are needed for high-mobility communication in the air-interface design to deal with the impairments resulting from multi-path channels and Doppler spread.", "2) Conflicting requirements, such as data services with a wide range, including downloading of high-precision maps, video conferencing, and multimedia entertainment within the car, with diverse quality requirements in terms of latency, reliability and data rate, are difficult to support simultaneously [93].", "To address this issue, the 5G NR frame structure provides flexible configurations to support a wide variety of cellular V2X use cases.", "Orthogonal frequency-division multiplexing is used by 5G NR.", "Channel bandwidth is limited to a maximum of 400 MHz per new radio carrier.", "In the cellular V2X physical layer, channel coding plays a very important role in accommodating several requirements, such as decoding latency, data throughput, mobility, packet length, support for hybrid automatic repeat request, and rate compatibility.", "A common physical infrastructure is expected to be shared by several applications for system resources such as computing, storage and bandwidth in cellular V2X services in 5G NR.", "NS can be used to provide end-to-end data services by slicing the physical-layer resources of the radio access network [94], [95].", "Information on the status of every vehicle, such as its intended route, trajectory, speed and location, can be exchanged with the road infrastructure, pedestrians and other vehicles in the network via Sidelink protocols [96], [97], [98].", "Some of the major enhancements of NR Sidelink to enable 5G use cases for AD are: i) a Sidelink feedback channel is proposed for low latency and high reliability; ii) aggregation of up to 16 carriers is supported; iii) a modulation scheme of up to 256-QAM is proposed to increase the throughput per carrier; and iv) a modified resource scheduling scheme is proposed to reduce the time needed to select resources.", "NR-V2X supports unicast and groupcast communications along with traditional broadcast communication, so that a vehicle can transmit a variety of messages with different QoS requirements.", "As an example, some periodic messages can be transmitted by a vehicle through broadcasting, while some aperiodic messages can be transmitted by groupcast or unicast [99].", "The available resources in NR-V2X can be either exclusive or shared with the cellular users for direct communication among the vehicles.", "Mode-1 and Mode-2, which are the two Sidelink modes, are defined for managing the resources in NR-V2X.", "In Sidelink Mode-1, vehicles are expected to be completely covered by the base stations (BS).", "The vehicles are allocated resources by the BSs through either dynamic or configured scheduling.", "In configured scheduling, resources are allocated based on a pre-defined bitmap, whereas in dynamic scheduling, resources are allocated or reallocated every millisecond depending on the dynamic coverage of the cellular network.", "In Sidelink Mode-2, resources are allocated in a distributed manner without cellular network coverage.", "In Mode-2, there are four sub-modes, 2(a) to 2(d).", "In sub-mode 2(a), every vehicle may select the required
resources in an autonomous manner using a semi-persistent transmission mechanism.", "In 2(b), the most suitable resources for transmission are selected cooperatively, with the vehicles assisting each other.", "In 2(c), the resources are selected by the vehicles based on pre-determined scheduling.", "In 2(d), the sidelink transmissions are scheduled by the vehicles for their neighbouring vehicles [100].", "Several vehicular communication technologies for vehicular communication use-cases are summarized in Table REF .", "Some of the recent works related to physical-layer communication in 5G for AVs are discussed below.", "A location-aware handover for enabling self-organized communication for AVs, using a robotic platform enabled by mmWave in a multi-radio environment, is presented by Lu et al.", "[101].", "The location-aware handover mechanism studied by the authors enhances the robustness of the wireless link for IIoT-based applications.", "A geometry-based positioning algorithm is proposed by the authors to acquire location awareness in this work.", "The obstacles to achieving high transmission rates in V2X networks are the increased demand for data bandwidth and the dynamic nature of the traffic.", "Rasheed et al.", "[102] presented a predictive routing approach for V2X based on SDN-controlled CR to overcome these obstacles.", "The proposed routing supports smart switching between mmWave and THz.", "The V2X network is segregated into several clusters for efficient network management.", "To form the clusters, a stability-aware clustering method is used by the authors.", "The authors achieved minimum transmission delay and high transmission rates for THz and mmWave communications using the following design approaches.", "(1) For real-time and high-resolution prediction, a deep neural network combined with an extended Kalman filter is used to predict the future 3D positions of the vehicles.", "(2) CR-enabled road-side units perform THz band detection.", "For optimal route selection in THz communications, a hybrid fruit fly-genetic algorithm is used.", "(3) A type-2 fuzzy inference system is used for selecting the optimal beam in mmWave-based V2X communications.", "Eric et al.", "[103] investigated communication links based on 5G mmWave for a low-speed AV.", "They studied the effect of the antenna positions on the performance of the links and on the quality of the received signal.", "The authors observed that the communication losses in front of the vehicle towards the road-side infrastructure are almost independent of the half-power beamwidth, and that the increased root-mean-square delay spread plays a major role in the quality of the signal." ], [ "Summary", "5G NR plays a very important role in several AV use cases, such as providing high data rates, throughput and URLLC for coordinated driving, real-time updates on local conditions, trajectory sharing, and sharing of raw data gathered by the vehicles."
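As a rough, heavily simplified caricature of the autonomous Mode-2 (sub-mode 2(a)) resource selection described above, the sketch below has a joining vehicle sense which periodic resources were recently occupied and reserve one of the least-used candidates semi-persistently. The grid size, sensing window and reservation length are assumptions; the real 3GPP procedure is considerably more involved.

```python
import random

# Simplified sensing-based semi-persistent resource selection for NR-V2X Mode-2.
random.seed(7)
NUM_RESOURCES = 20          # selectable time-frequency resources per period
SENSING_WINDOW = 10         # past periods each vehicle overhears
RESERVATION_PERIODS = 5     # how long a selection is kept (semi-persistent)

def select_resource(sensed_history):
    """Pick a resource among the least-occupied ones in the sensing window."""
    usage = [0] * NUM_RESOURCES
    for period in sensed_history[-SENSING_WINDOW:]:
        for r in period:
            usage[r] += 1
    least = min(usage)
    candidates = [r for r, u in enumerate(usage) if u == least]
    return random.choice(candidates)

# Example: a new vehicle joins while three others keep reusing resources 3, 7, 12
history = [[3, 7, 12] for _ in range(SENSING_WINDOW)]
chosen = select_resource(history)
print(f"new vehicle reserves resource {chosen} for {RESERVATION_PERIODS} periods")
assert chosen not in (3, 7, 12)   # the busy resources are avoided
```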
], [ "Introduction", "The ITS can be fully realized when the issues such as high availability, data privacy, and scalability are addressed.", "The traditional ML models are not efficient in dynamic environments such as ITS and Internet of Vehicles (IoV).", "They face challenges such as complexity of the system, performance of the model, privacy preservation and data management.", "In IoV and ITS, even though roadside infrastructure remains constant, vehicles enter and leave the system constantly, which makes the system volatile and complex.", "Traditional ML algorithms find it difficult to handle such volatility and complexity.", "Traditional ML models use static local intelligence making it difficult for them to adapt to the dynamically changing environments, that can hamper their performance.", "Any degradation in the performance of the ML algorithms can affect he human lives.", "The next issue faced by the traditional ML models is the latency and privacy preservation involved when the sensitive data generated from the autonomous vehicles has to be transferred to the cloud for model training.", "Also, data management is very difficult as new vehicles with processing capabilities will be continuously added to the IoV/ITS.", "As the roadside infrastructure has limited resources, efficient storage of the data is a challenge, due to which, the traditional ML methods may not have access to sufficient data for training phase [104].", "The aforementioned issues of ITS/IoV can be effectively handled by FL coupled with B5G.", "B5G connectivity can enable flexible V2X interactions with other vehicles (V2V) and road infrastructure (V2I) in vehicular URLCC.", "FL can solve the issues inITS/IoV such as privacy preservation, real time analytics in complex and dynamic environments, and latency.", "FL enabled B5G can ensure seamless, reliable communication between the autonomous vehicles in ITS/IoV that will enable collaborative driving through sharing of maps, sharing the information about the conditions of the roads, weather, accidents, landslides, etc., without exposing the sensitive information of the vehicles [105], [106], [107].", "Some of the recent works on the FL enabled B5G for ITS/IoV and autonomous vehicles are presented in the rest of this subsection.", "Table: Benefits of AI/FL/Edge AI in Automated Vehicles , , To improve the service quality and the driving experience in IoV, vehicles can share the data among themselves for collaborative analysis.", "The data providers hesitate to participate in the process of data sharing due to several issues such as privacy, security and bandwidth issues.", "The efficiency and reliability of the data sharing has to be enhanced further as the communication in IoV is unreliable and intermittent.", "Lu et al.", "[108] proposed a hybrid blockchain architecture that consists of local directed acyclic graph and permissioned blockchain that is run by the vehicles for reliable and secured data sharing in 5G network.", "To improve the efficiency, the authors have proposed a scheme based on asynchronous FL that uses deep reinforcement learning for learning models on the data at the edge nodes.", "In a similar work, Lu et al.", "[109] have proposed a FL enabled 5G networks for vehicular cyber-physical systems to reduce the leakage of private and sensitive data while sharing the data among the vehicles that leads to preserving the privacy of the passengers and improve their safety.", "The key applications of autonomous vehicles such as collision avoidance and 
autonomous driving, require precise positioning of the vehicles.", "Due to advancements in V2I communication and sensing techniques, vehicles can communicate with nearby landmarks to enable precise positioning.", "This involves sharing data related to the positioning of the sensor-equipped vehicles.", "Collecting the data and training on it centrally is difficult due to the sensitive nature of the trajectory data.", "Continuously sharing and updating the location data of the vehicles also wastes network bandwidth and other resources.", "To address these issues, Kong et al.", "[110] proposed an FL-based vehicular cooperative positioning system, namely FedVCP, that utilizes collaborative edge computing to provide accurate positioning information in a privacy-preserving manner.", "A comparative analysis of AI, FL and Edge AI for AVs in 5GB networks is summarized in Table REF ." ], [ "Summary", "FL has huge potential for solving some of the pressing issues of connected AVs in the B5G era, such as latency, privacy, optimal resource utilization, and customized recommendations/predictions in heterogeneous networks [111].", "However, FL faces several challenges, such as poisoning attacks, high false-alarm rates, and energy efficiency on the low-powered devices used in AVs.", "These issues have to be addressed to realize the full potential of FL-enabled B5G networks for AVs." ], [ "Introduction", "Blockchain is a distributed ledger technology that provides several features such as immutability, transparency and trustability.", "The records in the blockchain are distributed and duplicated across the nodes in the network.", "A blockchain consists of many blocks; each block contains information related to transactions, its own hash, and the previous block's hash [122].", "Whenever a new transaction is executed in the blockchain, a record of that transaction is added to the ledger of every participant.", "The consensus algorithm is an important concept in blockchain: a majority of the nodes in the blockchain have to agree on the present state of the distributed ledger.", "Consensus algorithms ensure that, in order to update or modify the information stored in the blockchain, more than 50% of the nodes have to approve the change.", "This inherent property of the blockchain makes it nearly impossible to tamper with or alter the data, which makes the data in the blockchain immutable.", "Consensus algorithms also ensure that, if a new node is to be added to the network, a majority of the nodes in the blockchain have to approve it [123]."
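A minimal, self-contained sketch of the hash-chained block structure just described (each block stores its transactions, its own hash and the previous block's hash) shows why tampering with a stored record is detectable. Consensus, mining and networking are deliberately omitted, and the example vehicular transactions are invented for illustration.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (excluding its own stored hash)."""
    payload = json.dumps({k: block[k] for k in ("index", "timestamp", "transactions", "prev_hash")},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(chain, transactions):
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def is_valid(chain):
    """Any tampering with a past block breaks the hash links that follow it."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
new_block(chain, [{"from": "AV-17", "event": "accident report", "location": "junction-4"}])
new_block(chain, [{"from": "RSU-3", "event": "congestion alert"}])
print("valid before tampering:", is_valid(chain))
chain[0]["transactions"][0]["event"] = "nothing happened"   # attempt to rewrite history
print("valid after tampering: ", is_valid(chain))
```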
], [ "How it used in AV (with Related work)", "AVs that are interconnected through 5G can share crucial information about the environments, road blockages, traffic conditions, etc.", "with each other.", "However, if a malicious AV enters into the network, it can pose serious security issues to the AVs.", "A malicious AV can post wrong information about the environment, can get access to private information about the vehicles such as its location, path, modify the information shared by other AVs, etc.", "To address security and privacy issues, blockchain can play a major role.", "When a blockchain is integrated with 5g for connected AVs, it can make sure that only trusted vehicles can enter into the network.", "Also, the blockchain ensures that the information stored in it is immutable [124], [125], [126].", "These features of blockchain make it a very important enabling technologies of 5G and beyond for AVs.", "Rest of this subsection discusses about the state-of-the-art on blockchain enabled 5GB for AVs.", "Reebadia et al.", "[127] presented a blockchain based approach to assure the privacy and security of the AVs that automatically sense events and act with necessary responses.", "The events such as accident detection, home transfer, congestion in traffic, etc.", "through nearby AVs or RSUs and will respond by actions such as calling an ambulance, calling the logistics, or informing the traffic department by sharing the location of the event.", "In this work, for communication edge-enabled 5G network is proposed to support huge bandwidth requirements and to process the data in real time.", "The blockchain is used in this work to ensure that the privacy and security of the messages that are shared in the network regarding the incidents/people/AV are preserved and by ensuring the trustworthiness of the AVs that join the network.", "AVs generate event driven messages (EDM) when they detect events such as road sidings or accidents.", "The AVs have to send videos or audio or pictures for proof of the occurrence of those events.", "To ensure the reliability and authenticity of EDMs, Lewis et al.", "[128] have proposed a private blockchain enabled edge nodes for 5G for storing the records of EDMs.", "Gao et al.", "[112] proposed SDN and blockchain enabled 5G network for VANETs.", "Due to its distributed nature, the blockchain is used in this work to avoid single point of failure in VANETs.", "Blockchain ensures trust-based message propagation that can evaluate the data passed by the peers in the connected environment and can also control the doctored or false data shared in VANETs for communication systems based on fog computing and 5G.", "For effective management of the network, SDN is used.", "In a similar work, Xie et al.", "[113] proposed a blockchain based system for addressing trustworthiness, security and privacy issues in 5G integrated with SDN for VANETs.", "In this work, the authors used the proposed framework for securing and preserving the privacy of the videos shared in VANETs by allowing only the trusted vehicles to be entered into the VANET.", "SDN is used o control and network and also enables global information gathering.", "The traditional approach of using costly base stations that are fixed in a particular locations needs a revamp to meet the rise in number of interconnected IoT devices, irregular service and data requests.", "UAVs provide on-demand mobile access points for realization of several scenarios such as smart cities, smart manufacturing, etc.", "MEC enabled B5G 
networks can provide the low latency and high data rates needed to meet several high-end QoS requirements of UAVs.", "Integrating B5G with blockchain ensures secure and decentralized service delivery [114].", "To address several issues, such as the need for high-speed communication, the security of the messages passed between the UAVs/drones, and the trustworthiness of the UAVs joining the network, Aloqaily et al.", "[115] proposed a blockchain-based 5G network.", "To identify intrusive traffic in the network, ML methods are proposed by the authors.", "In a similar work, Gupta et al.", "[116] proposed an architecture based on blockchain-enabled softwarized 5G for the secure and easy management of UAVs.", "In this work, SDN together with blockchain helps to secure the 5G-enabled UAV network against intruders, DoS and DDoS attacks and man-in-the-middle attacks, and to identify anomalies.", "Similarly, Jian et al.", "[117] proposed a blockchain-empowered UAV ad hoc network for the B5G era.", "The authors designed a HELLO packet, a consensus packet, and a topology control packet for secure and reliable communication in the 5G-enabled UAV ad hoc network.", "To ensure trust in the network, the authors used a consensus mechanism based on Byzantine fault tolerance for the secure multi-point relaying of messages.", "Feng et al.", "[118] proposed blockchain-enabled 5G networks for the secure and efficient sharing of data among flying drones.", "In the proposed approach, attribute-based encryption and blockchain are used to secure the data sharing and the instructions sent to the drones.", "Ghimire et al.", "[119] proposed a blockchain-based, software-defined Internet of UAVs for defence applications.", "Software-defined networking is used to provide visibility of the network and to dynamically configure the network parameters for better manageability and better security of the Internet of UAVs.", "To prevent tampering with the command and control operations exchanged by the UAVs, blockchain technology is used in this work.", "UAVs in the network are responsible for creating the blocks in the blockchain and also act as miners to validate each transaction.", "To address the scalability issue of blockchain technology, the authors proposed a sharding-enabled blockchain.", "In this work, shards of lightweight UAVs are used to maintain the required number of miners whenever a miner is damaged or destroyed on the battlefield.", "Some sensitive UAV data, such as flight modes and drone identities, can be communicated to other UAVs using radio frequency (RF) signals in 5G networks.", "Gumaei et al.", "[120] proposed a blockchain-enabled 5G network for the secure and decentralized transmission of RF signals among the drones/UAVs.", "The authors proposed a framework that integrates blockchain with a deep recurrent neural network for 5G-enabled flight mode detection and drone identification.", "Blockchain can also be used for privacy preservation and for securing data during transmission among UAVs connected through 5G in medical applications [129].", "The impact of blockchain technology on several technical aspects of AVs in 5GB is summarized in Table REF ."
], [ "Summary", "UAVs/drones are being used for several applications like crop/soil analysis, road surveillance, monitoring of natural disasters, delivering the product to the customers, etc.", "apart from traditional military and defense applications.", "But there are several challenges that need to be addressed to integrate blockchain in B5G for Avs such as lack of standardization, scalability (handling the ever increasing AVs), etc.", "Table: Benefits and challenges of B5G Technologies in Autonomous Vehicles." ], [ "ZSM(AI/ML)", "AVs in B5G era need extreme range of requirements, such as personalized services with dramatic improvements in customer-experience, ultra-high reliability, massive seemingly infinite capacity, global web-scale reach, support for massive machine-to-machine communication, and imperceptible latency.", "Fully automated network and service management is the need of the hour for delivering the aforementioned services for AVs in B5G era.", "The fully automated futuristic network will be driven by high-level rules and policies.", "This automation in the B5G networks for AVs will be capable of self-optimization, self-monitoring, and elf-configuration of the B5G networks without human intervention.", "Zero touch network & Service Management (ZSM) architecture is proposed by European Conference of Postal and Telecommunications Administrations (ETSI) to achieve full automation of the networks [131], [132].", "ZSM can play a major role in providing seamless, reliable B5G network for AVs." ], [ "Security Concerns in AVs", "Autonomous-based vehicles are vulnerable to several cyber attacks.", "Based on safety features, AVs can be categorised into two categories [133] i.e.", "AV components (including vehicles hardware/software components) and the infrastructure/environment within which vehicle is operating (such as pedestrian movement, traffic signs, road conditions, network-type etc).", "The attackers can jeopardize the safety of passengers by targeting these two components via In-Vehicle communication (through control network bus (CAN)) and Vehicle-to -everything communication (V2X).", "Hence, in this section, we will provide overview of potential attacks in AVs based on CAN and 5G-enabled V2X communication.", "We will also discuss the emerging security and privacy concerns in AVs based on B5G connectivity." 
], [ "Security Concerns via In-Vehicle Communication", "From the security perspective, there are four key requirements for any intelligent transport system (ITS) [134], [135], [136], [137] such as AVs to achieve the security goals.", "One of the requirement is authentication which requires proper mechanism to identify an authorised user (such as driver) and grants access to features of the vehicle based on the privilege.", "Authentication might also include source authentication (making sure data is generated from legitimate sources) and location authentication (ensuring integrity of the received information).", "The second requirement includes availability which ensures that the exchanged/shared data is always available for processing in a real-time.", "The third requirement is data integrity/trust which ensures that the received data is free from any kind of modifications, manipulation etc.", "Finally, the last requirement involves confidentiality/privacy to ensure that the exchanged data is not exposed to unauthorised or malicious users.", "Based on the above-mentioned requirements, the attackers can exploit a specific weakness and compromise a specific security goal.", "Attacker can be anyone such as active/passive or external/insider [138] with different malicious intentions.", "With respect to In-Vehicle communication, the following components are vulnerable to different cyberattacks: Although, AVs come up with several sensors, from the security point of view, the sensors such as Light detection and ranging (LiDAR), image sensors (Camera's) and radio detection and ranging (RADAR) are vulnerable to cyberattacks [12].", "LiDAR sensors are mostly used for obstacle detection by using light as a measure to probe surrounding environment.", "In case of an obstacle detection, emergency autonomous brakes are applied.", "LiDAR sensors are vulnerable to spoofing attacks where an attack can create a counterfeit signal representing some near-by object making AVs to apply emergency braking system, thereby reducing the speed of vehicles which can prove fatal.", "Spoofing-based attacks were successfully demonstrated by [139].", "There is also a possibility of performing denial of service attacks by jamming the functionality of LiDAR sensors.", "This can be done by preventing LiDAR sensors to acquire the legitimate light wave through sending out the light of same wavelength.", "One such attack was demonstrated by the work of [140].", "RADAR-based sensors are also vulnerable to spoofing [141] and jamming attacks [142] as their working principle is same as of LiDAR i.e.", "object detection.", "The main difference is RADAR sensors use radio waves to detect objects instead of laser/LED and has comparatively long object detection range.", "On the otherhand, Camera's consisting of image-based sensors are attached at many places of the AVs to acquire 360 degree view for different purposes such as detection of traffic signs [143], lane-detection [144], etc.", "Although, these sensors can be used to replace LiDAR sensors, however performance of image-based sensors is not good under certain conditions such as rain, fog, etc [145].", "Image-based sensors are also vulnerable to DOS attacks as in case of LiDAR sensors.", "Camera's can be blinded by supplying extra-light as successfully demonstrated by [139].", "Besides, the developments in adversarial machine learning where a perturbed image can be used to trick the underlying artificial-intelligence model to cause incorrect predictions is one more serious 
concern.", "The attackers just need to place perturbed input such as stickers [146]in front of the observing camera to confuse the intelligent system within AVs." ], [ "Controller Area Network", "Controller area network is a central network within AVs with the main purpose of connecting different electronic control unit (ECUs) to each other.", "ECUs are embedded electronic systems used to control different sub-systems or electrical systems in a vehicle (such as brake-control system, engine control system etc), which are necessary requirements for autonomous driving.", "The detailed explanation with respect to working of these modules can be found in the work of [147].", "Different ECUs within AVs communicate through CAN and provides necessary inputs for running the safe autonomous driving system [148].", "The whole CAN is vulnerable to different cyberattacks due to the fact that CAN-based communication protocols have no authentication mechanism with no support for encrypting messages to fulfill the security goal of confidentiality [149].", "The strong authentication mechanism requires large processing power, memory, and bandwidth which is lacking in CAN.", "On an average, CAN supports a data-rate between 33kbps and 500kbps) and has a limited bandwidth [150].", "Following these limitations, the CAN is vulnreable to DOS attack [149], replay attack[150] and Eavesdrop [149]." ], [ "On-board Computer(OBD)", "To get different information from the vehicle such as its emission, speed, and other data, there is a connection known as OBD which can be used for this purpose.", "Modern vehicles do come up with OBD-II standard.", "OBD can be also used to perform firmware update and modify software embedded in control units [151], [12].", "OBD can serve as a gateway to several attacks as OBDs do not encrypt data and have no authentication mechanism.", "There are several third-party devices such as Telia Sense, AutoPi and car-manufacturer based devices that can be used to connect smart-phones or computers to OBD ports for self-diagnostic purposes.", "Through OBD ports, attacks such as man-in-the-middle, code-injection are feasible to compromise certain critical components of the vehicle [152].", "Table: Cyberattacks on Autonomous Vehicles" ], [ "Security Concerns Via V2X technologies", "For AVs to perform different functions such as information exchange between different sensors, vehicles, its surrounding environment (i.e.", "infrastructure, pedestrians etc), a high-bandwidth, low latency and highly reliable communication network is required.", "The ultimate goal of V2X technology is increase traffic efficiency, reduce accidents by enhancing road safety measures and energy savings.", "As progress in this direction is still at its infant stages, there are two key standards i.e.", "IEEE 802.11P[158], [159] and Cellular C2X that has unique characteristics such as standard performance under bad weather conditions and low latency makes them idea to achieve Vehicle-to-Vehicle and Vehicle-to-Infrastructure communication.", "Based on the literature, to achieve the goal of V2X paradigm, communication technologies have been categorised into three main categories i.e.", "short-range, medium-range and long-range [9].", "For short-range communication, bluetooth, ZigBee, Ultra-wide band (UWB) technologies are being considered to achieve the following short-range functionalities within V2X paradigm i.e.", "vehicle localisation, real-time driving assistance, vehicle identification, collision warning and other functions 
such as car-owner identification and detection of the passenger position in the vehicle.", "Similarly, in the medium range, Wi-Fi and DSRC (IEEE 802.11p) are being explored to achieve the functionalities related to traffic safety, traffic management systems and vehicle management.", "Finally, the C-V2X and 5G-NR standards are being explored to achieve long-range objectives within the scope of the V2X paradigm.", "For detailed information on 5G-NR and C-V2X, please refer to the works of [160] and [161], respectively.", "Irrespective of which of the communication technologies discussed above will be used to achieve V2X communication, the security concerns will remain.", "Based on the literature, we have summarised the potential attacks in Table REF." ], [ "Projects and Standardization", "This section highlights the key projects and standardization activities related to 5G autonomous vehicles." ], [ "Research Projects", "Research projects play a vital role in realizing the deployment of 5G autonomous vehicles.", "This section summarises several key global-level research and development projects related to 5G autonomous vehicles.", "5G-DRIVE [162] is a European Union (EU) funded project under the Horizon 2020 (H2020) framework.", "This project is an EU and China collaboration project which mainly focuses on operating the 5G 3.5 GHz band for eMBB scenarios as well as the 3.5 GHz and 5.9 GHz bands for V2X use cases.", "Moreover, the 5G-DRIVE project is researching the use of key 5G technologies such as network slicing, NFV, MEC and 5G New Radio (5G NR) for real-world V2X deployments.", "5G-DRIVE is also focusing on increasing EU-China collaboration on 5G activities related to V2X research.", "5G-DRIVE involves 17 European partners from 11 countries.", "It carried out trials at three locations, i.e., Espoo (Finland), Surrey (United Kingdom) and JRC Ispra (Italy)." ], [ "5G for Connected and Automated Road Mobility in the European UnioN (5G-CARMEN)", "5G-CARMEN [163] is also a European Union (EU) funded project under the Horizon 2020 (H2020) framework.", "It focuses on building trial 5G networks along the Bologna-Munich corridor to support European research activities related to connected and automated mobility.", "5G-CARMEN builds a 600 km long trial site which spreads across three countries.", "The 5G-CARMEN project mainly focuses on 5G NR, C-V2X (Cellular Vehicle-to-Everything), and secure service orchestration.", "This trial platform supports V2I, direct short-range V2V and long-range V2N (Vehicle-to-Network) communication modes.", "Moreover, 5G-CARMEN focuses on four main use cases, i.e., cooperative manoeuvring, situation awareness, video streaming and green driving."
], [ "Fifth Generation Cross-Border Control (5GCroCo)", "5GCroCo [164] is also an European Union (EU) funded project under the Horizon (H2020) framework.", "It focuses on building trial 5G networks along France, Germany and Luxembourg to support the European-level research activities by integrating telecommunication and automotive industries.", "5GCroCo builds two main 5G trial sites across Germany - Luxembourg border and France - Germany border.", "In addition, 5GCroCo develops five small scale test sites at Montlhéry (France), Motorway A9 (Germany), Munich (Germany), AstaZero (Sweden) and Barcelona (Spain) 5GCroCo project mainly focuses on emerging 5G technologies such as MEC, 5G NR V2X and secure service orchestration.", "It also focuses on defining new business models related to 5G autonomous vehicles.", "Finally, 5GCroCo focuses on three main use cases i.e.", "tele-operated driving, High-Definition (HD) mapping and anticipated cooperative collision avoidance." ], [ "5G for cooperative and connected automated MOBIility on X-border corridors (5G-MOBIX)", "5G-MOBIX [165] is another European Union (EU) funded project under the Horizon (H2020) framework.", "It focuses on building trial 5G networks to enable sustainable future for connected and automated vehicle.", "5G-MOBIX contains two cross-boarder corridors at Greece – Turkey and Spain – Portugal boarders.", "In addition, 5G-MOBIX develops several small scale test sites at Espoo (Finland), Jinan (China), Versailles Satory (France), Paris (France), Berlin (Germany), Stuttgart (Germany), Yeonggwang (South Korea) and Eindhoven-Helmond (Amsterdam).", "In addition, 5G-MOBIX builds the International research cooperation between Europe, China and Korea in 5G autonomous vehicles related research activities.", "Finally, 5G-MOBIX focuses on several 5G autonomous vehicles use cases such as cooperative overtake, highway lane merging, truck platooning, valet parking, urban environment driving, road user detection, vehicle remote control, see through, HD map update, media and entertainment." ], [ "ICT Infrastructure for Connected and Automated Road Transport (ICT4CART)", "ICT4CART [166] is an H2020 project got funded by EU.", "It focuses on blending the technological developments in telecommunication, automotive and IT industries to realize the transition towards connected and automated vehicles.", "It focuses on research areas such as hybrid connectivity, flexible network slicing, data management and privacy, network security and localisation.", "ICT4CART project has a cross-boarder corridors at Italy – Austrian boarder and three small scale trail sites at Graz (Austria), Ulm (Germany), and Verona (Italy)." ], [ "Terahertz sensors and networks for next generation smart automotive electronic systems (car2TERA)", "Car2TERA [167] project is an EU H2020 funded project which focuses on in-cabin radar and high speed onboard data communications for autonomous automobiles.", "On this regards, the Car2TERA project focuses on adapting sub-terahertz (150-330 GHz) communication to offer efficient and sufficient bandwidth for high resolution demands of in-vehicle radar communications.", "In addition, short-range, sub-THz frequency radar technology can be useful to improve in-cabin as well as outdoor sensing." 
], [ "Fifth Generation Communication Automotive Research and innovation (5GCAR)", "5GCAR [168] project is an EU H2020 funded project which focuses on 5G C-V2X.", "It has focuses on different aspects of 5G C-V2X such as radio access network, spectrum matters, system architectural options, network orchestration and management security and privacy issues, Edge computing enhancements, multi-connectivity cooperation and possible business models.", "5GCAR focuses on three 5G autonomous vehicles use cases i.e.", "lane merge coordination, cooperative perception for maneuvers of connected vehicles and vulnerable road user protection." ], [ "Other Projects", "Several other projects are listed below which have 5G autonomous vehicles as a minor focus.", "Enhance driver behaviour and Public Acceptance of Connected and Autonomous vehicLes (PAsCAL) Project [169] is focusing on studying the opinions and expectations of general public towards the autonomous vehicles and connected driving technologies.", "Artificial Intelligence based cybersecurity for connected and automated vehicles (CARAMEL) project [170] is developing AI/ML based anti-hacking intrusion detection and prevention systems for automotive industry including autonomous vehicles.", "It considers the security impact of several novel technological directions such as 5G, autopilots, and smart charging.", "Next generation connectivity for enhanced, safe and efficient transport and logistics (5G-Blueprint) project [171] is focusing on 5G based tele-operated cross-border transport and logistics.", "Unmanned Aerial Vehicle Vertical Applications' Trials Leveraging Advanced 5G Facilities (5G!Drones) project[172] is focusing on trial several UAV use-cases covering eMBB, URLLC, and mMTC 5G services." ], [ "Standards Developing Organizations (SDOs)", "Standardization activities are the key to define technological requirements of autonomous vehicles and also define possible 5G technologies to realize these requirements.", "This section summarises the several key global level standardization activities related to the 5G autonomous vehicles." 
], [ "European Commission (EC)", "EC supports the development of autonomous vehicles to realize the Connected and Automated Mobility (CAM) across the EuropeShaping Europe's digital future https://digital-strategy.ec.europa.eu/en/policies/connected-and-automated-mobility.", "This initiative aims on developing a safer, user-friendly, greener and more-efficient European-level transport and mobility systems.", "EC supports the introduction and deployment of autonomous vehicles in different levels such as European-level policy development European-level standard development Providing funding for research and innovation projects Developing EU-level legislation Table: Important standardization efforts by EC related to 5G and B5G Autonomous VehiclesTable: Recent important white papers by 5GAA related to 5G and B5G Autonomous VehiclesTable: Recent important standardization efforts by ESTI related to 5G and B5G Autonomous VehiclesEC also has special interest in unitizing 5G technologies for autonomous vehicles and connected mobility.", "In 2017, EC has initiated the task of building designate 5G cross-border corridors with the support of 29 signatory countries.", "These 5G cross-border corridors can be used to test and demonstrate the EU level automated driving projects5G cross-border corridors https://digital-strategy.ec.europa.eu/en/policies/cross-border-corridors.", "EU H2020 program has funded three 5G cross-border corridor projects (i.e.", "5G-CARMEN [163], 5GCroCo [164] and 5G-MOBIX [165]) to realize this vision.", "In 2016, EC launched a new high level group for the automotive industry called GEAR 2030Commission launches GEAR 2030 to boost competitiveness and growth in the automotive sector https://ec.europa.eu/growth/content/commission-launches-gear-2030-boost-competitiveness-and-growth- automotive-sector-0_en.", "It consist of members such as EU commissioners, representatives from member states and stakeholders from automotive, telecommunications and insurance sectors.", "This GEAR 2030 group is focusing on developing coherent EU level policy, legal and public support framework for autonomous vehicles.", "In addition, EC supports the development of Cooperative Intelligent Transport Systems (C-ITS) which can intelligently and securely share the road users and traffic information Intelligent transport systems Cooperative, connected and automated mobility (CCAM) https://ec.europa.eu/transport/themes/its/c-its_en.", "These information are useful for autonomous vehicles to take more informed decision and efficiently coordinate their actions.", "Under EU funding programs, EU is offer funding for research and innovation projects to utilize and also further developing this C-ITS framework.", "Table REF presents the important standardization efforts by EC." 
], [ "European Automotive - Telecom Alliance (EATA)", "As a result of EU commission's high level round table discussion on CAM and autonomous vehicles, EATA has formed to promote the the development of EU level autonomous vehicles activities[202].", "The vision of EATA is to support the collaboration between stakeholders in automotive and telecommunication sectors to explore and accelerate the deployment of connected and automated driving in across the Europe.", "EATA mainly focuses on addressing regulatory and legislative obstacles on deployment of connected and automated vehicles.", "It proactively involve in initiating dialogue with national, EU-level as well as global level policy makers to eliminate the potential obstacles and define new technical and regulatory measures support connected and automated driving.", "Moreover, it supports research and innovation projects focusing on automated vehicles by mobilizing public funding for trail sites." ], [ "CAR 2 CAR Communication Consortium(C2C-CC) ", "The CAR 2 CAR Communication Consortium (C2C-CC)[203] is an global-level organization which consist of members from road operators, automotive manufactures, IT service providers, telecommunication operators and research organizations.", "It was founded in 2002.", "The vision of C2C-CC is to realize the goal of vision zero, i.e accident free traffic as early as possible.", "To realize this vision, C2C-CC is supporting the development of ultra-reliable, robust and matured safety solutions.", "It also supports the innovations in 5G and wireless technologies with special focus on spectrum efficiency, ad-hoc short-range V2X communications.", "It contributes to the EU and global level standardisation activities to harmonized the development of V2X communication.", "Moreover, C2C-CC contributes to development of the EC's C-ITS framework." ], [ "5G Automotive Association (5GAA)", "5GAA[204] is the one of the leading global level SDOs which is mainly focus on the integration of 5G with automotive industry.", "It has cross-industry members from different sectors such as automotive, IT and telecommunication.", "The 5GAA is working on developing 5G as the ultimate platform to enable C-ITS and V2X.", "5GAA has established seven Working Groups (WGs), i.e.", "WG 1: Use Cases and Technical Requirements WG 2: System Architecture and Solution Development WG 3: Evaluation, Testbeds, and Pilots WG 4: Standards and Spectrum WG 5: Business Models and Go-To-Market Strategies WG 6: Regulatory and Public Affairs WG 7: Security and Privacy These 5GAA WGs develop relevant standards, architectures, frameworks and business cases related to 5G based autonomous vehicles and its applications.", "Table REF presents the important standardization efforts by 5GAA." ], [ "European Telecommunications Standards Institute (ESTI)", "ETSI[205] is one of the world largest telecom SDOs which is focusing of standardization of mobile networks and its services.", "ETSI initiates a Technical Committee (TC) called Intelligent Transportation Systems (ITS) which has a special focus on automated and connected vehiclesAutomotive Intelligent Transport Systems (ITS).", "ITS TC is working under three main themes i.e.", "Cooperative-ITS (C-ITS), automotive radar and anti-collision radar.", "Under these themes, they develop standards related overall automotive communication architecture, system management, communication protocols, security and access layer agnostic protocols.", "Table REF presents the important standardization efforts by ESTI." 
], [ "3rd Generation Partnership Project (3GPP)", "3GPP[206] is another one of the largest telecom SDO which is actually a consortium of seven other telecommunication SDOs.", "3GPP is mainly responsible for developing standards for C-V2X which replaces the Dedicated short-range communications (DSRC) developed in US and C-ITS originated in EuropeV2X https://www.3gpp.org/v2x.", "Initial C-V2X standard was included in the Release 14 3GPP Release 14 https://www.3gpp.org/release-14 which was published in 2017.", "This C-V2X standards are developed to as an decisive step towards enabling autonomous driving with the support of future telecom networks such as 5G and beyond.", "In 2020, NR-V2X as a part of Release 163GPP Release 16 https://www.3gpp.org/release-16 comes as an improvement to support automated driving.", "It supports complex interaction Use cases such as cooperative automated driving." ], [ "International Telecommunication Union - Telecommunication (ITU-T)", "ITU-T[207] is a global level telecom SDO.", "In 2019, ITU-T has launched a Focus Group on AI for autonomous and assisted driving (FG-AI4AD) to support the standardization activities of AI-enabled autonomous and assisted driving systems and servicesFocus Group on AI for autonomous and assisted driving (FG-AI4AD) https://www.itu.int/en/ITU-T/focusgroups/ai4ad/Pages/default.aspx.", "This FG is also focusing on harmonize global-level activities to define a minimal performance threshold for AI-enabled autonomous and assisted driving systems." ], [ "Alliance for Telecommunications Industry Solutions (ATIS)", "ATIS[208] is a leading ICT SDO which is focusing of various technologies such as 5G/6G, intelligent networks, blockchain, IoT, smart Cities and security.", "ATIS has formed a Connected Car-Cybersecurity Ad Hoc Group which studies collaboration opportunities between telecoms and automotive sectors in-terms of cybersecurity.", "Specially, ATIS group analyzes the different types of security threats in connected automated vehicles, and discuss role of future mobile network to prevent these attacks[209]." ], [ "5G Americas", "5G Americas[210] is an SDO based in America which support the advancement 5G and beyond network applications throughout the American countries.", "5G Americas identifies and promotes 5G V2X as an critical technology to enable connected and autonomous vehicles.", "5G V2X can be used to enable critical information exchange among the autonomous vehicles to improve navigation and situation awareness to avoid road accidents.", "In 2018, 5G Americas published a white papers on \"Cellular V2X Communications Towards 5G\" [211] which provides insights on the role of emerging 5G technologies to realize advance V2X communication." ], [ "Next Generation Mobile Networks (NGMN) Alliance", "NGMN[212] is an telecom SDO which is also focusing on 5G and beyond networks.", "In 2016, NGMN formed a V2X task force to evaluate V2X technologies to speedup the deployment of the C-V2X technology.", "This task force has fuel the cooperation between telecom and automotive sectors to develop new policies and business models.", "The task force also studies spectrum management, security and privacy aspects of future ITS.", "In 2018, NGMN published a white paper [213] which presents eight V2X use cases and their technological requirements." 
], [ "Other SDOs", "Several other SDOs are listed below which have 5G autonomous vehicles as a minor focus.", "The European Council for Automotive R&D (EUCAR)[214] is a consortium of vehicle manufacturers which was founded in 1994.", "The EUCAR is focusing on development of strategies and solutions for future challenges in car industry by defining common frameworks and supporting research and innovation activities.", "5G Alliance for Connected Industries and Automation (5G-ACIA)[215] is an global-level SDO mainly focusing on deployment of 5G for Industrial Internet and smart factory applications.", "5G-ACIA is also working on research and trail use-cases of 5G-enabled cloud-controlled Automated Guided Vehicles (AGVs) which can be used in industrial environments5G Alliance for Connected Industries and Automation endorses testbeds for evaluation of industrial 5G use cases https://5g-acia.org/press-releases/5g-alliance-for-connected-industries-and-automation-endorses-testbeds-for- evaluation-of-industrial-5g-use-cases/." ], [ "Lessons Learned and Future Research Directions", "This section discusses the lessons learned and based on these lessons it synthesizes the future research directions that paves the way for the researchers to carry out their research on this domain." ], [ "Lessons Learned", "AV basically relies on V2X technology and 5G communications to fulfill the requirements like high transmission rate, low latency less than 5ms, reliability, and with good response time to communicate with other vehicles and communication with infrastructure present on the road.", "Telecommunication and automobile industry are trying to build an innovative ecosystem by integrating 5G in AV with the recent technologies.", "Recently, 5G Automotive Association (5GAA) a global cross industry released a new roadmap called automotive Cellular V2X technology which received Innovation Award Honour in the ”Vehicle Intelligence and Self-Driving Technology” category at the CES 2019.In navigation and path planning, 5G is used in local perception to control the short range vehicles with respect to safety, traffic control and energy management parameters.", "Accurate positioning of vehicles in AV is achieved in object detection using 5G enabled networks.", "URLLC is designed to handle V2V communication dynamically with ultra-low latency and ultra-high reliability.", "Real-time decision are made faster in 5G enabled AV.", "Integration of 5G with V2X helps to visualize the objects and obstacles more quickly.", "Inorder to provide fast processing and prior decision making, good connectivity is required which is achieved by 5G in AV.", "emBB provides high quality in bandwidth for vehicular communication.", "URLLC, mMTC and eMBB in 5G works together to provide faster connection speeds, higher device capacity and lower latency in AV applications.Some of the benefits of using 5G in AV are driving the vehicles at high speed with high bandwidth, information delivery in minimal time limits due to low latency, notification alerts ahead about any hazardous events and provides self-decision using AV integrated with AI" ], [ "Possible Future Directions", "Some of the challenges in using 5G in AV are basically 5G spectrum are too expensive and can be purchased by spectrum auctions.The cost incurred in extending the existing network to 5G is too high and mapping network with reference to geo locations is a tedious task.", "AV using 5G and beyond can be further extended to perform the following activities Predictive 
maintenance: Drivers can proactively maintain the vehicle to avoid failures during driving.", "In-vehicle sensors monitor the condition of the components present inside the vehicle, like battery life, the fuel pump and the starter motor.", "Data from the cloud combined with AI can predict potential maintenance issues before the occurrence of a failure.", "Advanced infotainment: Customers can be provided with additional information by integrating AI with AR, VR and sensors to have a realistic experience.", "Traffic safety service: 5G communication devices embedded inside the vehicles collect information from the environment and pedestrians, which is stored in the cloud.", "Analytics on these data warn the drivers about hazardous roads and traffic congestion and, in turn, recommend alternative paths.", "The 5G network is considered a key technology for designing driverless vehicles, which is one of the most exciting domains in the near future.", "It has become an important technology in the automobile industry, integrated with the telecom industry to provide the best experience to the customer.", "With the advancements in technologies like V2X and wireless communication, a new generation of driverless vehicles is going to drive the automobile industry globally." ], [ "Lessons Learned", "Apart from large bandwidth requirements, several functionalities of AVs, like object detection, lane detection, collision avoidance, navigation and V2V/V2X/V2I communications, require several features like reduced latency, improved physical-layer infrastructure, and privacy & security for seamless AD.", "The supporting technologies of B5G such as MEC, network slicing, SDN/NFV, 5G NR, blockchain, FL, and ZSM can help in providing these requirements to AVs.", "However, several challenges, such as the generation of labels in real time for training the ML algorithms, the lack of justification of the predictions/recommendations of ML algorithms, the huge dimensionality of the data generated, etc., have to be addressed to realize the full potential of B5G for AVs.", "Also, autonomous vehicles may be connected to multiple networks on the road.", "Due to the mobility of vehicles, it is likely that an AV may move out of the coverage area of its access network and may have to join another network.", "The availability of MEC for multiple access technologies is a significant challenge.", "Another significant challenge is the management of the huge amount of data generated from the AVs that are interconnected in real time [216]."
], [ "Possible Future Directions", "Explainable Artificial Intelligence (XAI) can be adopted in B5G to address for trustability/justification/explainability of the decisions/predictions from ML algorithms in applications such as object detection, lane detection, collision avoidance, etc [217], [218], [219].", "For instance, consider that the AV is taking a human to the destination.", "Suddenly, the AI/ML model gives suggestion to the AV to take another route, which is different than the intended route.", "In these situations, the AV should be in a position to explain/justify the actions it has taken to avoid a particular route or choosing a particular route.", "XAI can help the humans to understand the actions of AVs in these kinds of situations.", "Unsupervised ML models can work on unlabelled data, that can address the challenge of big data generated by AV in 5GB era [220].", "Several dimensionality reduction techniques can be used to extract the most important features from the large volumes of data generated in real time [221].", "Handover problems associated with the AVs registered to one network and requiring to join another network during the mobility can be addressed by open radio access network (Open RAN) platform [222]." ], [ "Lessons Learned", "From the security point of view, as highlighted in Table REF , it can observed that compared to In-Vehicle communication, the vulnerabilities brought by V2X technologies are more.", "V2X technologies will widen the attack surfaces thereby increasing the level of threats for the AVs.", "All the basic security goals i.e.", "confidentiality, integrity, availability and authentication needs to be addressed while incorporating V2X technologies with AVs.", "There is need of effective countermeasures to mitigate the threats arising from three main attack categories i.e.", "spoofing, denial of service, injection and physical attacks." ], [ "Possible Future Directions", "The potential future directions can be categorised based on basic security goals as mentioned above especially with the main focus on V2X technologies.", "A strong and robust authentication mechanisms will be required for AVs to communicate with trusted entities while moving from one place to the other.", "An authentication mechanism with high latency and computing power requirement might provide attackers ample time to carry out spoofing or man-in-the-middle attack.", "Similarly, most of the protocols within AVs do not use encryption, how to secure communication within vehicle and while interacting with other entities outside the vehicles also seems interesting future research direction.", "Also, there will be huge amount of data generated by AVs, storing that data while preserving privacy of users and making that data available to legitimate users is yet another interesting research direction." 
], [ "Lessons Learned", "Today, many research projects and SDO activities are focusing on the development of 5G and B5G networks to support the deployment of autonomous vehicles.", "These activities primarily focus on technical developments of 5G/b5G related technologies such as MEC, NS, 5G NR, ZSM, and AI.", "Significantly, NS and MEC got particular focus as these technologies can play a critical role in deploying autonomous vehicle-related applications and services.", "In addition, there are a significant number of research projects and standardization activities contributing to relevant 5G and B5G technical aspects such as path planning, mobility, service migration, security, and privacy.", "However, it might take a few more years for these standardizations to be fully deployed by different stakeholders such as mobile operators, governments, regulators, manufacturers, and third-party service providers.", "Thus, the continuous cooperation between different working groups (i.e., academia and industry) and SDOs are highly required in the coming years.", "Especially, funding frameworks such as H2020 by European Commission (EC) fuel these activities." ], [ "Possible Future Directions", "In the future, there are two primary questions to be addressed.", "First, how to address the lack of AV-specific standardization in core 5G SDOs?", "Currently, AV is considered just one use case of 5G/B5G networks and still lacks a dedicated focus on AV within the core 5G SDOs.", "To resolve this, more AV stakeholders should contribute to 5G SDOs, and more AV-related sub-groups would be formed within core 5G SDOs to develop dedicated AV-related standards.", "Second, how do we integrate and cooperate between different working groups and SDOs?", "The strong cooperation between AV SDOs and core 5G SDOs would be further encouraged.", "This will resolve some of the issues related to the first issue.", "More joint research and SDOs activities should be planned in the future." ], [ "Emerging and Future Research Directions related to AV", "5G network is considered as a key technology to design a driverless vehicles which is one of the most exciting domain in near future.", "It has become important technology in automobile industry integrated with the telecom industry to provide a best experience to the customer.", "With the advancement in the technologies like V2X and wireless communication, new generation of driverless vehicles is going to drive the automobile industry globally." 
], [ "Quantum Computing", "Quantum computing is the next future generation of automotive technology.", "Electric vehicles are significant part of quantum revolution.", "Automobile manufactures have started taking advantage of quantum computers to solve various automotive problems.", "AI in AV requires large amount of data for analysing and providing optimal response in dynamic situations.", "For detecting real time car locations and designing the optimal path requires high computing power and speed in AI.", "The former feature, high computing power can achieved by quantum computers.", "German automobile company Volkswagen collaborated with D-Wave systems to design and develop traffic routing in Beijing based on quantum computing systems.", "These systems can also solve optimization problems like waiting time, deployment of fleets etc., Volkswagen has also partnered with google to predict the state of traffic to avoid accident and to simulate the behaviour of electrical component and embed AI in driverless cars.", "As the AV are more vulnerable to outside world, security breaches can be solved by quantum security.", "AV requires tremendous computing powers like optimized route planning and change the entire transport systems into smart systems.", "The cars will become smarter by communicating among themselves and outside world.", "More developments are expected in the field of AV integrated with quantum computers to achieve the benefits of computing and processing power." ], [ "Cognitive Cloud", "Cognitive AI and algorithms would help us to simulate human-level performance specifically in level 5 AV.", "Satisfying the level 5 AV is a tedious task as it needs accurate decision making, object detection and localization under uncertain conditions like fog, rain and extreme darkness.", "Cognitive computing enhances the model accuracy to achieve closeness to humanlike performance in object detection and decision making.", "Integration of Cognitive computing in AV leads to improved safety and accuracy.", "Cognitive Internet of Vehicles allows the AV to focus on what, how and where to compute dynamically closer to human brain." ], [ "Conclusion", "In this study, several aspects of AVs such as its features, levels of automation, architecture, key-enabling technologies and requirements for autonomus vehicular communication were discussed.", "Key requirements in terms of latency, security level, privacy, bandwidth, mobility, scalability, availability and reliability for potential AV applications (navigation and path planning, object detection, URLLC, mMTC, eMBB) were also identified.", "Several emerging technologies such as MEC, SDN and others were studied in detail and impact of 5G/B2G on these technologies was discussed.", "We also identified key security concerns in AVs with respect to 5G/B2G technology and highlighted recent standardization efforts by different organisation.", "Finally, several key research challenges and future research directions were also identified and discussed." ] ]
2207.10510
[ [ "Order Determination for Tensor-valued Observations Using Data\n Augmentation" ], [ "Abstract Tensor-valued data benefits greatly from dimension reduction as the reduction in size is exponential in the number of modes.", "To achieve maximal reduction without loss in information, our objective in this work is to give an automated procedure for the optimal selection of the reduced dimensionality.", "Our approach combines a recently proposed data augmentation procedure with the higher-order singular value decomposition (HOSVD) in a tensorially natural way.", "We give theoretical guidelines on how to choose the tuning parameters and further inspect their influence in a simulation study.", "As our primary result, we show that the procedure consistently estimates the true latent dimensions under a noisy tensor model, both at the population and sample levels.", "Additionally, we propose a bootstrap-based alternative to the augmentation estimator.", "Simulations are used to demonstrate the estimation accuracy of the two methods under various settings." ], [ "Tensors and image data", "Tensors offer a particularly convenient framework for the modelling of different types of image data [1], [8].", "For example, a collection of gray-scale images can be viewed as a sample of second-order tensors (i.e., matrices) where the tensor elements correspond to intensities of the pixels.", "A set of colour images can be represented as a sample of third-order tensors where the third mode has dimensionality equal to 3 and collects the colour information through RGB-values of the pixels.", "Still higher-dimensional tensors are obtained if we sample multiple images per subject (possibly in the form of a video).", "The total number of elements in a tensor grows exponentially in its order.", "Therefore, as soon as our images have even a reasonably large resolution, the resulting data set consists of a huge number of variables.", "Consider, for instance, the butterfly data setThe data set is freely available at https://www.kaggle.com/datasets/gpiosenka/butterfly-images40-species that will serve as our running example throughout the paper; see the top row of Figure REF for five particular images in the data set.", "For simplicity, we use in this paper a subsample of the full data, consisting of $n = 882$ RGB images of butterflies of various species with the resolution $224 \\times 224$ .", "As such, the number of variables per image is 150528.", "However, as any two neighbouring pixels in an image are typically highly correlated, the “signal dimension” of the butterfly data, and image data in general, is most likely much smaller than the total number of observed variables.", "Figure: A collection of five images from the butterfly data set (top row) and the corresponding reconstructed images using compressed, 103×111×3103\\times 111\\times 3-dimensional core tensors (bottom row).The standard approach to remove the redundant information and noise from the data is through low-rank decomposition where the images are approximated as products of low-rank tensors [1], [6].", "As with any dimension reduction method, a key problem in this procedure is the selection of the reduced rank/order/dimension (we use all three terms interchangeably in the sequel).", "We want to keep the order small not to incorporate noise in the decomposition but, at the same time, enough components should be included to ensure accurate capturing of the signal.", "As the main contribution of the current work, we develop a method for the automatic 
determination of the order, separately in each mode.", "Reconstructions of the five butterfly images based on the dimensionalities chosen by our method are shown in the bottom row of Figure REF.", "For further details on this example, see Section REF.", "We develop our method under a specific statistical model and show that asymptotically, when the number of images $n$ generated from the model grows without bounds, we are guaranteed to recover the true rank of the data.", "Our main idea is based on the combination of a tensor decomposition known as higher-order singular value decomposition (HOSVD) [3] and a specific method of order determination known as predictor augmentation [13].", "To set up the framework, the next two subsections briefly review the literature pertaining to these subjects and, afterwards, in Section REF we highlight the contributions of this work in comparison to the existing literature.", "In the same section we also discuss the connection to the conference paper [17], of which this work is an extended version." ], [ "Tensor decompositions", "The term tensor decomposition refers to a class of algorithms that approximate an input tensor as a sum or product of tensors/matrices of lower rank.", "The low-rank structure is usually thought to represent the signal/information of the original data tensor, whereas, if we manage to choose the rank appropriately, the difference between the original tensor and its low-rank approximation is pure noise.", "A comprehensive review of tensor decompositions is given by [9] and, for some recent uses of tensor decompositions in the context of image data, see [6], [25], [11].", "Our decomposition of choice in this work is the higher-order singular value decomposition [3].", "In HOSVD, an $m$ th order tensor $\mathcal {A}\in \mathbb {R}^{p_1\times \cdots \times p_m} $ is approximated as $\mathcal {A} \approx \mathcal {B}\times _{i=1}^m\textbf {U}_i$ , a multilinear product of the core tensor $\mathcal {B}$ of a low dimensionality and the matrices $\textbf {U}_1, \dots ,\textbf {U}_m$ having orthonormal columns.", "The matrices $\textbf {U}_i$ are estimated in HOSVD through singular value decompositions of certain flattenings of the tensor $\mathcal {A}$ , in turn allowing the estimation of the core.", "In practice, the optimal dimensions of the core tensor are usually not known a priori and have to be estimated, which is the main objective of the current work.", "We discuss HOSVD more closely in Section  in conjunction with our model of choice.", "In fact, there we use a specific “statistical” version of HOSVD known as $(2\mathrm {D})^2$ PCA [24].", "Besides HOSVD, another classical tensor decomposition is the Tucker decomposition [21], which also seeks an approximation of the previous form but uses a different algorithm for estimating the loading matrices and the core.", "However, in this paper we have chosen to work with HOSVD as it has a closed-form solution, which is what allows studying the theoretical properties of our proposed estimator of the latent dimensionality.", "Finally, we still remark that, arguably, the most typical way of decomposing image data to low-rank components is through standard principal component analysis (PCA) [7].", "However, as standard PCA operates on vector-valued data and not tensors, this approach requires the vectorization of the data, causing them to lose their natural tensor structure.", "In contrast, tensor decompositions retain the row-column structure of image data, making
them the natural choice in the current context." ], [ "Order determination", "The problem of choosing the (in some sense) optimal rank in tensor decompositions, or in dimension reduction in general, is known as order determination.", "The order determination literature can be roughly divided to two categories: (1) methods targeted for specific parametric/semi-parametric models and, (2) more general methods that estimate the rank of a fixed matrix from its noisy estimate (under some regularity conditions).", "Category (1) is by far the more popular one (as general methods are more difficult to come by) and usually accomplishes the estimation by exploiting asymptotic properties of eigenvalues of certain matrices, see, e.g., [18], [15] for PCA and [2], [26] for dimension reduction in the context of regression (sufficient dimension reduction).", "Whereas, methods belonging to category (2) tend to be based on various bootstrapping and related procedures, see [23], [12] and, in particular, [13] whose augmentation procedure serves as a starting point for the current work.", "We note that all of the previous references targeted exclusively vector-valued data and order determination in the context of general tensor-valued data is indeed still rare in the literature.", "Moreover, aside from  [17], to our best knowledge, automated order determination in the context of matrix-variate (gray-scale image) data has been studied only by [20] who use Stein's unbiased risk estimation for the task.", "Also, a simpler problem of selecting the dimension when the amount of retained variance is pre-determined is discussed in [5].", "Thus, there is still much work to do in the order determination of tensor data and the current work is a step towards this direction." ], [ "Relation to previous work", "In this work, we propose an automatic procedure for the selection of the dimensionality in the HOSVD decomposition of tensorial data.", "Our primary contributions in relation to earlier literature are the following.", "(i) To our best knowledge, ours is the first order determination procedure for general tensor-valued data with statistical guarantees.", "(ii) Unlike [13] who originally introduced the augmentation procedure in vector-valued context, we prove the validity of the procedure also on the population level, see Corollary REF .", "(iii) We derive the full limiting distribution for the norms of augmented parts of the noise eigenvectors, see part (ii) of Theorem REF .", "This is in strict contrast to [13] who, in an analogous context (PCA) for vector-valued data, based their work on the weaker claim that the norms are non-negligible in probability.", "This improvement both allows us to get quantitative results concerning the eigenvectors and sheds some light on the interplay of the augmentation procedure with the true dimensionality of the data.", "(iv) To accompany the augmentation estimator, we also propose an alternative estimator of the latent dimensionality based on the “ladle” procedure [12].", "Compared to the conference paper [17], which presented versions of the augmentation and ladle procedures for matrix-valued data and of which the current work is an extended version, we go beyond them in the following respects: (i) We allow for general tensor-valued data, not just matrices.", "(ii) We give estimation guarantees for the augmentation method both on the population and the sample level (unlike [17] who did not consider the theoretical properties of the method at all)." 
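To make the augmentation idea of [13], which serves as our starting point, concrete, the following minimal NumPy sketch applies it to ordinary PCA on simulated vector-valued data: the observations are augmented with r pure-noise coordinates, and the norms of the augmented parts of the sample eigenvectors then separate the signal directions (norms close to zero) from the noise directions (non-negligible norms). This is only a schematic reading of [13]; their actual estimator combines these norms with the eigenvalue scree in a single objective function, and all numerical choices below (dimensions, scales, the noise-variance proxy) are illustrative.

```python
# Schematic sketch of the predictor-augmentation idea of [13] for ordinary PCA;
# dimensions, scales and the noise-variance proxy are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n, p, d, r = 500, 10, 3, 5            # sample size, dimension, true rank, augmentation size
signal = rng.normal(size=(n, d)) @ np.diag([3.0, 2.0, 1.5])
X = signal @ rng.normal(size=(d, p)) + rng.normal(scale=0.5, size=(n, p))

sigma2_hat = np.linalg.eigvalsh(np.cov(X.T)).min()   # crude noise-variance proxy
X_aug = np.hstack([X, rng.normal(scale=np.sqrt(sigma2_hat), size=(n, r))])

evals, evecs = np.linalg.eigh(np.cov(X_aug.T))
order = np.argsort(evals)[::-1]
evecs = evecs[:, order]

# Norms of the augmented parts: near zero for the d signal eigenvectors,
# clearly non-zero for the subsequent noise eigenvectors.
aug_norms = np.linalg.norm(evecs[p:, :], axis=0)
print(np.round(aug_norms[: d + 3], 3))
```

In the tensorial setting developed in the following sections, the same construction is applied mode by mode to the k-flattenings of the centered data tensors rather than to a single data matrix.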
], [ "Organization of the manuscript", "The rest of the paper is organized as follows.", "In Section  we introduce the statistical framework along with HOSVD for the proposed model.", "The proposed augmentation estimator as well as its theoretical properties are discussed in Section , separately on the population and sample levels.", "Section  covers the alternative, bootstrap-based ladle estimator.", "We conclude the paper by evaluating the performances of the proposed estimators in Section  and by discussing possible future work in Section .", "A summary of tensor notation is given in Appendix and the proofs of all technical results are collected in Appendix  in the supplement." ], [ "Model", "We use standard tensor notation throughout the manuscript: the calligraphy font $\\mathcal {A} $ refers to individual tensors, the $k$ -unfolding of a tensor $\\mathcal {A}$ is denoted by $\\mathcal {A}_k$ and the $k$ -mode multiplication of a tensor $\\mathcal {A}$ by a matrix $\\textbf {A}_k$ is denoted by $\\mathcal {A} \\times _k \\textbf {A}_k$ , etc.", "For readers unfamiliar with tensor notation a summary is given in Appendix ; see also [9].", "We start by defining an appropriate statistical framework.", "Let $\\mathcal {X}^1, \\ldots , \\mathcal {X}^n \\in \\mathbb {R}^{p_1\\times \\dots \\times p_m}$ be an observed set of tensors of order $m\\in \\mathbb {N}$ drawn independently from the model $\\mathcal {X}= \\mathcal {M} +\\mathcal {Z} \\times _{k=1}^m \\textbf {U}_k +\\mathcal {E},$ where $\\mathcal {M} \\in \\mathbb {R}^{p_1 \\times \\dots \\times p_m}$ is the mean tensor, $\\textbf {U}_k\\in \\mathbb {R}^{p_k \\times d_k}$ , $k=1,\\dots ,m$ , are unknown mixing matrices with orthonormal columns and $\\mathcal {Z}\\in \\mathbb {R}^{d_1 \\times \\dots \\times d_m}$ is a core tensor of order $m$ with zero mean, finite second moment, $\\mathbb {E}(\\Vert \\mathcal {X} \\Vert _F^2) < \\infty $ , and dimensions $d_k\\le p_k$ , $k=1,\\dots ,m$ .", "Furthermore, the additive noise $\\mathcal {E}$ is assumed to follow a tensor spherical distribution, i.e., $\\mathcal {E}\\times _{k=1}^m\\textbf {V}_k\\sim \\mathcal {E}$ , for all orthogonal matrices $\\textbf {V}_k\\in \\mathbb {R}^{p_k\\times p_k}$ , $k=1,\\dots ,m$ , where the symbol $\\sim $ means “is equal in distribution to”.", "Additionally, we make the technical assumption that for all $k$ -flattenings $\\mathcal {Z}_k\\in \\mathbb {R}^{d_k\\times \\rho _k }$ of $\\mathcal {Z}$ the matrix $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\in \\mathbb {R}^{d_k\\times d_k}$ is positive definite, where $\\rho _k := \\prod _{j \\ne k} p_j$ .", "This assumption is made precisely for the identifiability of the latent dimensions $(d_1, \\ldots , d_m)$ .", "A schematic representation of the model is given in Figure REF for $m=3$ .", "A special case of Model REF for $m=2$ was studied in [17].", "Figure: Schematic representation of Model  for m=3m=3.Our proposed approach is based on a statistical formulation of HOSVD known as $(2\\mathrm {D})^2$ PCA [24].", "For this, consider the $k$ -flattenings of Model REF , for $k=1,\\dots , m$ , $\\mathcal {X}_{k} = \\mathcal {M}_k+\\textbf {U}_k\\mathcal {Z}_k (\\textbf {U}^\\otimes _{-k})^{\\prime }+\\mathcal {E}_k,$ where $\\mathcal {X}_{k},\\, \\mathcal {M}_k,\\,\\mathcal {E}_k\\in \\mathbb {R}^{p_k\\times \\rho _k}$ , $\\mathcal {Z}_k\\in \\mathbb {R}^{d_k\\times \\prod _{i\\ne k}d_i}$ and $\\textbf {U}^\\otimes _{-k} := \\textbf {U}_{k+1}\\otimes \\cdots \\otimes \\textbf {U}_m\\otimes \\textbf 
{U}_1\\otimes \\cdots \\otimes \\textbf {U}_{k-1}$ .", "Then, the HOSVD solution to Model REF is $(\\mathcal {X}-\\mathcal {M})\\times _{k=1}^m\\textbf {V}_k^{\\prime }$ , where the columns of $\\textbf {V}_k\\in \\mathbb {R}^{p_k\\times d_k}$ , $k=1,\\dots , m$ , are the first $d_k$ eigenvectors of the matrix $\\mathbb {E}\\lbrace (\\mathcal {X}_k-\\mathcal {M}_k)(\\mathcal {X}_k-\\mathcal {M}_k)^{\\prime }\\rbrace $ .", "If, for a fixed $k$ , the matrix $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })$ is a diagonal matrix with pair-wise distinct diagonal elements, then $\\textbf {V}_k$ equals the mixing matrix $\\textbf {U}_k$ up to a permutation and sign change of columns.", "However, even if that is not the case, the column space of the estimated matrix $\\textbf {V}_k$ coincides with that of $\\textbf {U}_k$ .", "It is worth mentioning that even though Model REF assumes that the core tensor $\\mathcal {Z}$ is mixed by matrices $\\textbf {U}_1,\\dots ,\\textbf {U}_m$ with orthonormal columns, this assumption is without loss of generality since the singular values and the right singular vectors of a general mixing matrix can be absorbed into the core.", "This, however, has the drawback of making the core tensor identifiable only up to multiplications by invertible matrices from each mode.", "Nevertheless, the latent dimensionalities $d_k$ are identifiable in this case as well, and therefore we tolerate this ambiguity.", "Furthermore, this reveals why the proposed method is also a valid pre-processing step for computationally more involved linear feature extraction procedures.", "Having estimated (in the previous sense) the parameters $\\textbf {U}_k$ , we next develop our augmentation estimator for the latent dimensionality." ], [ "Population-level methodology", "We next describe our basic strategy behind the estimation of the latent dimensions $d_1,\\dots ,d_m$ using data augmentation.", "The approach can be seen as a tensorial extension of the method introduced by [13].", "Let $\\mathcal {X}_k$ be the $k$ -flattening (REF ).", "Then, as $\\textbf {U}^\\otimes _{-k}$ has orthonormal columns, we have $\\mathbb {E}\\lbrace (\\mathcal {X}_k-\\mathcal {M}_k)(\\mathcal {X}_k-\\mathcal {M}_k)^{\\prime }\\rbrace =\\textbf {U}_k\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\textbf {U}_k^{\\prime }+\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime }).$ Lemma 1 Let $\\mathcal {E}\\in \\mathbb {R}^{p_1\\times \\cdots \\times p_m}$ be a random tensor with tensor spherical distribution.", "Then, for $k=1,\\dots ,m$ , $\\displaystyle \\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })=\\sigma _k^2\\textbf {I}_{p_k}$ , for some $\\sigma _k^2>0$ , where $\\mathcal {E}_k$ is the $k$ -flattening of $\\mathcal {E}$ .", "Using Lemma REF we obtain that $\\mathbb {E}\\lbrace (\\mathcal {X}_k-\\mathcal {M}_k)(\\mathcal {X}_k-\\mathcal {M}_k)^{\\prime }\\rbrace =\\textbf {U}_k\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\textbf {U}_k^{\\prime }+\\sigma _k^2\\textbf {I}_{p_k},$ and, since $\\mathrm {rank}(\\textbf {U}_k\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\textbf {U}_k^{\\prime }) = d_k$ , the problem of estimating $d_k$ boils down to the problem of estimating the rank of the matrix $\\mathbb {E}\\lbrace (\\mathcal {X}_k-\\mathcal {M}_k)(\\mathcal {X}_k-\\mathcal {M}_k)^{\\prime }\\rbrace - \\sigma _k^2\\textbf {I}_{p_k}$ .", "A naive way would be to inspect the scree plot of the eigenvalues of the sample estimate of $\\mathbb {E}\\lbrace (\\mathcal 
{X}_k-\\mathcal {M}_k)(\\mathcal {X}_k-\\mathcal {M}_k)^{\\prime } \\rbrace $ and search for an elbow, a point at which the eigenvalues start to even off.", "However, it is often very difficult and subjective to find such a point.", "Therefore, we supplement the scree plot with additional information extracted from the eigenvectors of an appropriately constructed matrix, a task in which we employ the augmentation technique demonstrated in [17], and initially introduced by [13] for vectors.", "More precisely, for fixed $k = 1, \\ldots , m$ and $r_k\\in \\mathbb {N}$ , we define $\\textbf {X}_S\\in \\mathbb {R}^{r_k \\times \\rho _k}$ to be a random matrix with i.i.d.", "entries having the distribution $\\mathcal {N}(0, \\sigma _k^2/\\rho _k)$ , implying that $\\mathbb {E}(\\textbf {X}_S)=\\textbf {0}$ and $\\mathbb {E}(\\textbf {X}_S \\textbf {X}_S^{\\prime })=\\sigma _k^2\\textbf {I}_{r_k}$ .", "We then augment (concatenate) the centered $k$ -flattening of $\\mathcal {X}$ with $\\textbf {X}_S$ to obtain the $(p_k + r_k) \\times \\rho _k$ matrix $\\textbf {X}_k^*=((\\mathcal {X}_k-\\mathcal {M}_k)^{\\prime },\\textbf {X}_S^{\\prime })^{\\prime }$ that satisfies, $\\mathbb {E}\\lbrace \\textbf {X}_k^* (\\textbf {X}_k^*)^{\\prime }\\rbrace =\\begin{pmatrix} \\textbf {U}_k\\mathbb {E}(\\mathcal {Z}_k \\mathcal {Z}_k^{\\prime })\\textbf {U}_k^{\\prime } &\\textbf {0}\\\\\\textbf {0} & \\textbf {0}\\end{pmatrix}+\\sigma _k^2\\textbf {I}_{p_k+r_k}.$ We further define $\\textbf {M}_k^* := \\mathbb {E}\\lbrace \\textbf {X}_k^* (\\textbf {X}_k^*)^{\\prime }\\rbrace -\\sigma _k^2\\textbf {I}_{p_k+r_k}$ .", "Then $\\textbf {M}_k^*$ and $\\mathbb {E}(\\mathcal {Z}_k \\mathcal {Z}_k^{\\prime })$ both have the same rank $d_k$ and also the same positive eigenvalues $\\lambda _{k,1}\\ge \\lambda _{k,2}\\ge \\cdots \\ge \\lambda _{k,d_k}>0$ .", "Let next $\\beta _{k,i}^* = (\\beta _{k,i}^{\\prime }, \\beta _{k,i,S}^{\\prime })^{\\prime }\\in \\mathbb {R}^{p_k+r_k}$ , $i=1,\\dots ,p_k+r_k$ , be any eigenvector of $\\textbf {M}_k^*$ corresponding to its $i$ th largest eigenvalue, where we call the $r_k$ -dimensional subvector $\\beta _{k,i,S}$ its augmented part.", "Then, for $i\\le d_k$ , $\\textbf {M}_k^*\\beta _{k,i}^*=(\\beta _{k,i}^{\\prime }\\textbf {U}_k\\mathbb {E}(\\mathcal {Z}_k \\mathcal {Z}_k^{\\prime })\\textbf {U}_k^{\\prime },\\textbf {0}^{\\prime })^{\\prime }=\\lambda _{k,i}(\\beta _{k,i}^{\\prime },\\beta _{k,i,S}^{\\prime })^{\\prime },$ implying that $\\beta _{k,i,S}=\\textbf {0}$ for $i=1,\\dots d_k$ .", "Not only does the equivalent fail for the eigenvectors belonging to a zero eigenvalue ($i > d_k$ ), but the following theorem shows that for $r_k\\rightarrow \\infty $ , exactly the opposite happens.", "Prior to stating the theorem, let us briefly discuss the form and the arbitrariness of the zero-eigenvalue eigenvectors of $\\textbf {M}^*_k$ .", "Let $\\textbf {B}^*_{k,0} \\in \\mathbb {R}^{(p_k + r_k) \\times (p_k + r_k - d_k)}$ denote a matrix that contains an arbitrary orthonormal basis of the null space of $\\textbf {M}_k^*$ as its columns (the below result is invariant to the exact choice of this basis).", "Then, for $i>d_k$ , any eigenvector $\\beta _{k,i}^*$ lies in the null space of $\\textbf {M}_k^*$ and is thus of the form $\\beta _{k,i}^*=\\textbf {B}_{k,0}^*\\textbf {a}$ for some unit length vector $\\textbf {a}\\in \\mathbb {R}^{p_k+r_k-d_k}$ .", "The following theorem then shows that the norm of the augmented part of a randomly chosen zero-eigenvalue eigenvector follows a 
specific beta distribution.", "Theorem 1 Fix $i=(d_k + 1),\\dots ,(p_k+r_k)$ and let $\\beta _{k,i}^* = (\\beta _{k,i}^{\\prime }, \\beta _{k,i,S}^{\\prime })^{\\prime }\\in \\mathbb {R}^{p_k+r_k}$ be of the form $\\beta _{k,i}^* = \\textbf {B}^*_{k,0} \\textbf {a}$ where $\\textbf {a}$ is drawn uniformly from the unit sphere in $\\mathbb {R}^{p_k + r_k - d_k}$ .", "Then $\\Vert \\beta _{k,i,S}\\Vert ^2\\sim \\mathrm {Beta}\\lbrace r_k/2,(p_k-d_k)/2\\rbrace $ .", "Figure REF illustrates the behaviour of the tail probabilities of the augmented parts of randomly chosen eigenvectors (in the sense of Theorem REF ) belonging to a zero eigenvalue, as a function of $r_k$ .", "Note that, in practice, we would like the sample analogues of the quantities $\\Vert \\beta _{k,i,S}\\Vert ^2$ , $i = d_k + 1, \\ldots , p_k$ to be as large as possible in order to distinguish the transition from signal to noise.", "Based on Figure REF this can be achieved by using large values of $r_k$ .", "However, this matter turns out to be more complicated in the finite-sample case, where increasing $r_k$ with $n$ held fixed might lead to high-dimensional phenomena, see Section .", "Figure: The curves represent the probability that a random variable from the $\\mathrm {Beta}(r/2,(p-d)/2)$ -distribution takes a value larger than $\\varepsilon >0$ , as a function of the parameter $r$ , for various values of $p-d$ and $\\varepsilon $ .", "Given the distributional result in Theorem REF , the following properties of the augmented parts of the null eigenvectors of $\\textbf {M}_k^*$ now straightforwardly follow.", "Corollary 1 Under the conditions of Theorem REF , the following hold.", "(i) Let $\\varepsilon _n$ be any sequence of positive real numbers such that $\\varepsilon _n \\rightarrow 0$ as $n \\rightarrow \\infty $ .", "Then, $\\mathbb {P}(\\Vert \\beta _{k,i,S}\\Vert ^2>\\varepsilon _n) \\rightarrow 1$ as $n \\rightarrow \\infty $ .", "(ii) For fixed $p_k,\\,d_k$ and for every $\\varepsilon >0$ , $\\mathbb {P}(\\Vert \\beta _{k, i, S}\\Vert ^2\\ge 1-\\varepsilon )\\rightarrow 1$ , as $r_k\\rightarrow \\infty $ .", "(iii) In a high-dimensional regime, if $p_k-d_k=o(r_k)$ , then for every $\\varepsilon >0$ , $\\mathbb {P}(\\Vert \\beta _{k, i, S}\\Vert ^2\\ge 1-\\varepsilon )\\rightarrow 1$ , as $r_k\\rightarrow \\infty $ .", "Corollary REF indicates that for $r_k$ large enough, the norms of the augmented parts of the zero-eigenvalue eigenvectors get arbitrarily close to 1, thus explaining (on the population level) the behavior observed in [17], where, when the number of augmentations was increased, the function $\\hat{f}_k$ that captures the information from the eigenvectors by accumulating the norms of their augmented parts acted as a linear function of the dimension $j$ , for $j>d_1$ ; see Figure 5 in [17] for more insight.", "The previous population properties of the augmented parts of the eigenvectors serve as the basis for the construction of the augmentation estimator.", "Note that the successful estimation of the noise variance $\\sigma _k^2$ of the $k$ th flattening is crucial for the above construction, and therefore we discuss it next in more detail."
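The distributional statement of Theorem 1 is easy to check numerically. The following minimal base-R sketch is an illustration only (the chosen values of $p$ , $d$ and $r$ are arbitrary and the code is not taken from the numerical experiments reported later): it draws uniformly distributed unit vectors and compares the empirical quantiles of the squared norm of their last $r$ coordinates with the corresponding Beta quantiles.

```r
## Monte Carlo check of Theorem 1: for a uniformly distributed unit vector in
## R^(p - d + r), the squared norm of its last r coordinates should follow the
## Beta(r/2, (p - d)/2) distribution. The values of p, d and r are arbitrary.
set.seed(1)
p <- 15; d <- 5; r <- 10
sq_norm_aug <- replicate(1e4, {
  a <- rnorm(p - d + r)
  a <- a / sqrt(sum(a^2))            # uniform direction on the unit sphere
  sum(tail(a, r)^2)                  # squared norm of the "augmented" part
})
probs <- seq(0.1, 0.9, by = 0.2)
round(cbind(empirical   = quantile(sq_norm_aug, probs),
            theoretical = qbeta(probs, r / 2, (p - d) / 2)), 3)
```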
], [ "Estimation of the noise variance", "Let first $\\mathcal {X}^1,\\dots ,\\mathcal {X}^n$ be an i.i.d.", "sample from Model REF and let $\\bar{\\mathcal {X}}$ be the corresponding sample mean.", "Furthermore, let $\\hat{\\sigma }_{k,1}^2 \\ge \\cdots \\ge \\hat{\\sigma }_{k,p_k}^2$ be the eigenvalues of $(1/n) \\sum _{i=1}^n (\\mathcal {X}_{k,i}-\\bar{\\mathcal {X}}_k) (\\mathcal {X}_{k,i}-\\bar{\\mathcal {X}}_k)^{\\prime }$ and denote by $\\sigma _{k,1}^2 \\ge \\cdots \\ge \\sigma _{k,p_k}^2$ the eigenvalues of $\\mathbb {E}\\left\\lbrace (\\mathcal {X}_k-\\mathcal {M}_k)(\\mathcal {X}_k-\\mathcal {M}_k)^{\\prime }\\right\\rbrace $ .", "Due to the independence of the signal and the noise, $\\sigma _{k,i}^2 = \\lambda _{k,i}+\\sigma _k^2$ for $i=1,\\dots ,d_k$ , and $\\sigma _{k,i}^2 =\\sigma _k^2$ , for $i=d_k+1,\\dots ,p_k$ , where $\\lambda _{k,1},\\dots ,\\lambda _{k,d_k}$ are the eigenvalues of $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })$ .", "This consideration, together with Corollary REF in Section REF implies how we can justly use $\\hat{\\sigma }^2_{k,d_k+1},\\dots ,\\hat{\\sigma }^2_{k,p_k}$ to construct a consistent estimator of the noise variance $\\sigma _k^2$ of the $k$ th mode.", "However, since it is mostly the case that one wishes to estimate the latent dimensions in all modes, the following discussion allows us to construct a pooled estimator of the noise variance using all modes.", "The matrix $\\mathcal {E}_k$ is left spherical for each $k=1,\\dots m$ , thus making $\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })=\\sigma _k^2\\textbf {I}_{p_k}$ , $\\sigma _k^2>0$ .", "Therefore, $\\sigma _k^2=\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })_{i,i}=\\sum _{j=1}^{\\rho _k}\\mathbb {E}(\\mathcal {E}_{k,(i,j)}^2).$ If we sum the identity (REF ) over all $i=1,\\dots ,p_k$ , we obtain $ p_k\\sigma _k^2=\\sum _{i=1}^{p_k}\\sum _{j=1}^{\\rho _k}\\mathbb {E}(\\mathcal {E}_{k,(i,j)}^2)=\\mathbb {E}\\Vert \\mathcal {E}\\Vert _\\mathrm {F}^2,$ thus implying the relationship $p_1\\sigma _1^2=p_2\\sigma _2^2=\\cdots =p_m\\sigma _m^2,$ between the noise variances of the $k$ -flattenings $\\mathcal {E}_k$ of the noise tensor $\\mathcal {E}$ .", "Define now $S_k :=\\lbrace \\frac{p_i}{p_k}\\sigma _{i,j}^2:i=1,\\dots ,m,\\,j=1,\\dots ,p_i\\rbrace $ to be the set of eigenvalues from all modes in the “scale” of the $k$ th mode.", "Similarly, define $\\hat{S}_k :=\\lbrace \\frac{p_i}{p_k}\\hat{\\sigma }_{i,j}^2:i=1,\\dots ,m,\\,j=1,\\dots ,p_i\\rbrace $ , to be the sample counterpart of $S_k$ .", "Lemma REF in Section REF shows that under certain assumptions on the compressibility of the data, quantiles as well as means of suitable tails of $\\hat{S}_k$ are consistent estimators of $\\sigma _k^2$ .", "Naturally, once the noise variance of the $k$ th mode has been estimated, we obtain estimates $\\hat{\\sigma }_i^2$ , $i\\ne k$ , for the noise variances of the remaining modes simply by scaling $\\hat{\\sigma }_i^2=(p_k/p_i)\\hat{\\sigma }_k^2$ , $i\\ne k$ .", "Remark 1 To further clarify the scaling constants $p_i/p_k$ , $i=1,\\dots ,m$ , used in the estimation of the noise variance in the $k$ th mode, consider a scenario where the entries of $\\mathcal {E}$ are uncorrelated with zero mean and variance $\\delta ^2>0$ .", "Then, for $k=1,\\dots ,m$ , $\\mathrm {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })=\\sum _{i=1}^{\\rho _k}\\delta ^2\\textbf {I}_{p_k}=\\rho _k\\delta ^2\\textbf {I}_{p_k}$ , thus showing that the noise variance accumulates with the number 
of columns." ], [ "Sample-level estimation", "We are now equipped to define the augmentation estimator for estimation of the $k$ th latent dimension $d_k$ .", "Let $\\textbf {X}_{1,S},\\dots ,\\textbf {X}_{n,S}$ be a sample of i.i.d.", "$r_k\\times \\rho _k$ matrices with elements drawn from the standard normal distribution $\\mathcal {N}(0,1)$ .", "We define the augmented $k$ -flattenings of the observations $\\mathcal {X}^1,\\dots ,\\mathcal {X}^n$ as the $ (p_k+r_k)\\times \\rho _k$ matrices $\\textbf {X}_{i,k}^* := ((\\mathcal {X}_{k}^i)^{\\prime },\\hat{\\sigma }_k\\textbf {X}_{i,S}^{\\prime })^{\\prime }$ , $i=1,\\dots ,n$ , where $\\hat{\\sigma }_k^2$ is any consistent estimator of the noise variance $\\sigma _k^2$ , see Lemma REF in Section REF for examples.", "Let further $\\bar{\\textbf {X}}_k^*$ be the sample mean of the obtained augmented sample.", "A sample estimate $\\hat{\\textbf {M}}_k^*$ of the matrix $\\textbf {M}_k^*$ is then $\\hat{\\textbf {M}}_k^*=\\frac{1}{n}\\sum _{i=1}^n(\\textbf {X}_{i,k}^*-\\bar{\\textbf {X}}_k^*)(\\textbf {X}_{i,k}^*-\\bar{\\textbf {X}}_k^*)^{\\prime }-\\hat{\\sigma }_k^2\\textbf {I}_{p_k+r_k},$ whose first $p_k$ eigenvectors we denote in the following by $\\hat{\\beta }_{k,1}^*, \\dots , \\hat{\\beta }_{k,p_k}^*$ .", "Mimicking [13] and [17], we define the normalized scree plot curve, $\\hat{\\Phi }_k:\\lbrace 0,1,\\dots ,p_k\\rbrace \\rightarrow \\mathbb {R},\\quad \\hat{\\Phi }_k(l)=\\hat{\\lambda }_{k,l+1}/\\left(\\sum _{i=1}^{l+1}\\hat{\\lambda }_{k,i}+1\\right),$ where $(\\hat{\\lambda }_{k,1}, \\ldots , \\hat{\\lambda }_{k,p_k}) := (\\hat{\\sigma }_{k,1}^2 - \\hat{\\sigma }_k^2, \\ldots , \\hat{\\sigma }^2_{k,p_k} - \\hat{\\sigma }_k^2)$ , and we take $\\hat{\\lambda }_{k,p_k+1} := 0$ .", "However, as the values $\\hat{\\sigma }_{k,i}^2 - \\hat{\\sigma }_k^2$ are not necessarily non-negative (unlike their population counterparts, the eigenvalues of $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })$ ), we suggest using $\\hat{\\lambda }_{k,i}=\\max \\lbrace \\hat{\\sigma }_{k,i}^2-\\hat{\\sigma }_k^2,0\\rbrace $ , $i = 1, \\ldots , p_k$ , instead.", "In any case, one should proceed with caution as very negative values of $\\hat{\\sigma }_{k,i}^2-\\hat{\\sigma }_{k}^2$ indicate possible overestimation of the noise variance $\\sigma _k^2$ .", "Lemma REF in Section REF lists a number of consistent estimators of noise variance which, however, possibly behave rather differently.", "Thus, in Remark REF we discuss the effect of misestimation of the noise variance for the presented procedure.", "Remark 2 Let $\\textbf {X}_S\\in \\mathbb {R}^{r_k\\times \\rho _k}$ be the augmented submatrix for the $k$ th flattening $\\mathcal {X}_k$ , having independent $\\mathcal {N}(0,\\sigma _S^2/\\rho _k)$ -elements, where $\\sigma ^2_S > 0$ is now understood to be the (fixed) estimated value of $\\sigma _k^2$ .", "Furthermore, $\\textbf {M}_k^*&=\\mathbb {E}\\lbrace (\\textbf {X}_k^*-\\mathbb {E}(\\textbf {X}_k^*)) (\\textbf {X}_k^*-\\mathbb {E}(\\textbf {X}_k^*))^{\\prime }\\rbrace -\\sigma _S^2\\textbf {I}_{p_k+r_k}\\\\&=\\begin{pmatrix} \\textbf {U}_k \\lbrace \\mathrm {E}(\\mathcal {Z}_k \\mathcal {Z}_k^{\\prime })+(\\sigma _k^2-\\sigma _S^2)\\textbf {I}_{p_k} \\rbrace \\textbf {U}_k^{\\prime } &\\textbf {0}\\\\\\textbf {0} & \\textbf {0}\\end{pmatrix},$ and the eigenvalues of $\\textbf {M}_k^*$ are $\\lambda _{k,i}+(\\sigma _k^2-\\sigma _S^2)$ , $i=1,\\dots ,p_k$ , where $\\lambda _{k,i}=0$ for $i>d_k$ , in addition to the $r_k$ zero eigenvalues 
corresponding to the lower right block.", "In practice, since the eigenvalues of $\\textbf {M}_k^*$ serve as the estimators of the eigenvalues of the positive-definite matrix $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })$ , as discussed in Section REF , we replace $\\lambda _{k,i}+(\\sigma _k^2-\\sigma _S^2)$ with $\\max \\lbrace 0,\\lambda _{k,i}+(\\sigma _k^2-\\sigma _S^2)\\rbrace $ , $i=1,\\dots ,p_k$ , to avoid negative values.", "Let now $\\sigma _S^2=\\sigma _k^2+\\delta $ , where $0\\le \\delta <\\lambda _{k,d_k}$ and $\\delta >0$ corresponds to the amount of overestimation of $\\sigma _k^2$ .", "Then, $\\max \\lbrace 0,\\lambda _{k,i}+(\\sigma _k^2-\\sigma _S^2)\\rbrace =\\lambda _{k,i}-\\delta $ , $i=1,\\dots ,d_k$ , and $\\max \\lbrace 0,\\lambda _{k,i}+(\\sigma _k^2-\\sigma _S^2)\\rbrace =0$ , for $i>d_k$ , implying that the thresholding preserves the rank $d_k$ while shifting the nontrivial eigenvalues by $-\\delta $ .", "Thus, Remark REF shows that the method is robust towards slight overestimation of the noise variance, where such behavior is directly related to the thresholding of the eigenvalues of $\\hat{\\textbf {M}}_k^*$ from below by 0.", "The “allowed” amount of overestimation is equal to the smallest non-trivial eigenvalue of $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })$ .", "Though Remark REF explains the effect of the overestimation of the noise variance at the population level, approximation to the phenomenon holds in the sample case.", "Moving back to the estimation of the order $d_k$ , additional information about it can now be obtained by using the eigenvectors of $\\textbf {M}_k^*$ .", "To reduce the effect of randomness in the augmentation, the augmentation procedure is conducted independently $s_k$ times, and we compute the eigenvectors of $\\hat{\\textbf {M}}_k^{*}$ for each replicate.", "For $j=1,\\dots ,s_k$ , we denote by $\\hat{\\beta }_{k,i,S}^j$ the augmented part of the $i$ th eigenvector of the matrix $\\hat{\\textbf {M}}_k^{*j}$ in the $j$ th replicate.", "The eigenvector information is then captured by the function $\\hat{f}_k:\\lbrace 0,1,\\dots ,p_k\\rbrace \\rightarrow \\mathbb {R},\\quad \\hat{f}_k(i)=\\frac{1}{s_k}\\sum _{j=1}^{s_k}\\Vert \\hat{\\beta }_{k,i,S}^{j}\\Vert ^2,$ where $\\hat{\\beta }_{k,0,S}^{j}:=\\textbf {0}$ .", "Finally, we combine the eigenvalue information captured by $\\hat{\\Phi }_k$ and the eigenvector information in $\\hat{f}_k$ to form the final objective function $\\hat{g}_k:\\lbrace 0,1,\\dots ,p_k\\rbrace \\rightarrow \\mathbb {R}$ , $\\hat{g}_k(j)=\\hat{\\Phi }_k(j)+\\sum _{i=0}^j \\hat{f}_k(i) ,$ whose minimizer $\\hat{d}_k$ is taken to be the estimator of the latent dimension $d_k$ .", "This definition of $\\hat{d}_k$ is intuitively clear as, assuming that $d_k > 0$ , for any $i<d_k$ the eigenvalue part $\\hat{\\Phi }_k(i)$ of (REF ) is large, while the eigenvector part $\\hat{f}_k(i)$ is small.", "For $i > d_k$ , the opposite happens and the eigenvalue part is small while the eigenvector part is large, due to Corollary REF in Section REF .", "At the true dimension $i=d_k$ both parts are small, thus implying that the sum curve $\\hat{g}_k$ is (at the population level) minimized precisely at $i = d_k$ .", "In the extreme noise case where $d_k=0$ , the eigenvalue part in (REF ) is always negligible, while again due to Corollary REF , the eigenvector part is always large, except for $i=0$ , in which case it vanishes, causing the minimum to occur at $i=0$ .", "An algorithm for the augmentation 
estimator is given in Algorithm REF and the augmentation process is visualized in Figure REF .", "Algorithm 1: Augmentation estimator for the dimension $d_k$ of the $k$ th mode.", "Input: the row dimension $r_k>0$ and the number of augmented replicates $s_k>0$ .", "Step 1: Calculate $\\hat{\\textbf {M}}_k=\\frac{1}{n}\\sum _{i=1}^n\\mathcal {X}_{k}^i{\\mathcal {X}_{k}^i}^{\\prime }$ , for $k=1,\\dots ,m$ .", "Step 2: Calculate the estimate $\\hat{\\sigma }_k^2$ of the noise variance based on the pooled set of scaled eigenvalues of $\\hat{\\textbf {M}}_k$ , $\\displaystyle \\hat{S}_k=\\lbrace \\frac{p_i}{p_k}\\hat{\\sigma }_{i,j_i}^2:i=1,\\dots ,m,\\,j_i=1,\\dots ,p_i\\rbrace .$", "Step 3: Compute $\\hat{\\lambda }_{k,i} = \\max \\lbrace \\hat{\\sigma }_{k,i}^2-\\hat{\\sigma }_k^2,0\\rbrace $ .", "Step 4: For $i=1,\\dots ,n$ and $j=1,\\dots ,s_k$ , generate an $r_k \\times \\rho _k$ matrix $\\textbf {X}_{i,S}^j$ with entries drawn i.i.d. from $\\mathcal {N}(0, 1)$ and define the augmented $i$ th observation as $\\textbf {X}_i^{*j}={({\\mathcal {X}_{k}^i}^{\\prime }, \\hat{\\sigma }_k {\\textbf {X}_{i,S}^j}^{\\prime })}^{\\prime }.$", "Step 5: For $j=1,\\dots ,s_k$ , compute the eigendecomposition of the $j$ th replicated matrix $\\hat{\\textbf {M}}_k^{*j}=\\frac{1}{n}\\sum _{i=1}^n\\textbf {X}_i^{*j}{\\textbf {X}_i^{*j}}^{\\prime }-\\hat{\\sigma }_k^2\\textbf {I}_{p_k + r_k}$ and let $\\hat{\\beta }_{k,i,S}^{j}$ be the augmented part of the $i$ th eigenvector of $\\hat{\\textbf {M}}_k^{*j}$ .", "Step 6: Compute the objective function $\\displaystyle \\hat{g}_k(j)=\\hat{\\Phi }_k(j)+\\sum _{i=0}^j \\hat{f}_k(i),$ where $\\hat{\\beta }_{k,0,S}^{j}=\\textbf {0}$ and $\\hat{\\lambda }_{k,p_k+1}=0$ .", "Output: Return $\\hat{d}_k=\\mathrm {argmin}\\lbrace \\hat{g}_k(i):\\,i=0,\\dots ,p_k\\rbrace $ .", "Figure: Representation of the augmentation process in the case $m=3$ .", "The parts of the tensors and matrices corresponding to the augmentation have grey background.", "In the figure, Step 1 represents the flattening along mode $k$ and Step 2 the subsequent computation of the scatter in that mode along with the corresponding eigenvalue-eigenvector decomposition."
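For concreteness, a minimal base-R sketch of the above augmentation estimator for a single mode $k$ is given below. It is an illustration only: it is not the routine provided in the tensorBSS package [22], the function names unfold_mode and aug_dim_k and their interface are placeholders, the input X is assumed to be a list of $n$ equally sized arrays, and sigma2_k is assumed to be a consistent estimate of the mode-$k$ noise variance (e.g. a lower quantile of the pooled scaled eigenvalues, as discussed above). The augmented rows are scaled here as in the population-level construction, that is, with entries of variance sigma2_k divided by $\\rho _k$ , so that their expected scatter equals the noise-level identity block.

```r
## Illustrative augmentation estimator for mode k (base-R sketch, not tensorBSS).
## X: list of n equally sized arrays; sigma2_k: estimated mode-k noise variance.
unfold_mode <- function(A, k) {                  # k-flattening, p_k x rho_k
  p <- dim(A)
  matrix(aperm(A, c(k, seq_along(p)[-k])), nrow = p[k])
}

aug_dim_k <- function(X, k, sigma2_k, r_k = 10, s_k = 50) {
  n    <- length(X)
  Xk   <- lapply(X, unfold_mode, k = k)
  Xbar <- Reduce(`+`, Xk) / n
  Xk   <- lapply(Xk, `-`, Xbar)                  # centered k-flattenings
  p_k  <- nrow(Xk[[1]]); rho_k <- ncol(Xk[[1]])

  ## eigenvalue part: thresholded scree values of the mode-k scatter
  M_k     <- Reduce(`+`, lapply(Xk, tcrossprod)) / n
  lam_hat <- pmax(eigen(M_k, symmetric = TRUE)$values - sigma2_k, 0)
  Phi_hat <- c(lam_hat, 0) / (cumsum(c(lam_hat, 0)) + 1)   # Phi_k(0), ..., Phi_k(p_k)

  ## eigenvector part: mean squared norms of the augmented parts over s_k replicates
  f_hat <- numeric(p_k + 1)                      # f_hat[1] corresponds to f_k(0) = 0
  for (b in seq_len(s_k)) {
    Xaug <- lapply(Xk, function(A)
      rbind(A, matrix(rnorm(r_k * rho_k, sd = sqrt(sigma2_k / rho_k)), nrow = r_k)))
    M_star <- Reduce(`+`, lapply(Xaug, tcrossprod)) / n - sigma2_k * diag(p_k + r_k)
    V <- eigen(M_star, symmetric = TRUE)$vectors[, seq_len(p_k), drop = FALSE]
    f_hat[-1] <- f_hat[-1] + colSums(V[p_k + seq_len(r_k), , drop = FALSE]^2) / s_k
  }

  g_hat <- Phi_hat + cumsum(f_hat)               # objective over j = 0, ..., p_k
  which.min(g_hat) - 1                           # estimated latent dimension d_k
}
```

Looping the call over $k=1,\\dots ,m$ gives estimates of all latent dimensions; a single noise variance estimate can be shared across the modes through the scaling relation $p_1\\sigma _1^2=\\cdots =p_m\\sigma _m^2$ derived above.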
], [ "Asymptotic properties of the augmentation estimator", "The following corollary gives the asymptotic behavior of the eigenvalue part $\\hat{\\Phi }_k$ of $\\hat{g}_k$ , justifying its use.", "Corollary 2 Let $\\lambda _{k,i}$ and $\\hat{\\lambda }_{k,i}$ , $i=1,\\dots ,p_k+r_k$ be the eigenvalues of $\\textbf {M}_k^*$ and $\\hat{\\textbf {M}}_k^*$ , respectively.", "Then, $\\hat{\\lambda }_{k,i}\\rightarrow _{P}\\lambda _{k,i}$ for $i=1,\\dots ,p_k+r_k$ , as $n \\rightarrow \\infty $ .", "An implication of Corollary REF is Lemma REF , that lists a number of consistent estimators of the noise variance.", "Lemma 2 Let $\\hat{\\sigma }_{k,q}^2$ , $q\\in (0,1)$ , be the $q$ th sample quantile of $\\hat{S}_k$ and $\\bar{\\sigma }_{k,q}^2$ , $q\\in (0,1)$ , be the mean of those elements of $\\hat{S}_k$ that are smaller than or equal to $\\hat{\\sigma }_{k,q}^2$ .", "i) If $d_1+\\dots +d_m<(1-q)(p_1+\\dots +p_m)$ , then $\\hat{\\sigma }^2_{k,q_1}$ and $\\bar{\\sigma }_{k,q_1}^2$ , for any $q_1\\le q$ , are consistent estimators of $\\sigma ^2_k$ .", "ii) If $d_1+\\dots +d_m<p_1+\\dots +p_m$ , then $\\min \\lbrace \\hat{S}_k \\rbrace $ is a consistent estimator of $\\sigma ^2_k$ .", "The following theorem illustrates the behaviour of the norms of the augmented parts of the eigenvectors on the sample level, under the assumption of normality for the additive noise $\\mathcal {E}$ in Model REF , and shows (i) that for $i\\le d_k$ the norms are negligible in probability and (ii) that this is not the case for the later eigenvectors.", "Assumption 1 The additive noise $\\mathcal {E}$ in Model (REF ) has i.i.d.", "Gaussian entries.", "Theorem 2 Let $\\hat{\\beta }_{k,i}^*=(\\hat{\\beta }_{k,i,1}^{\\prime },\\hat{\\beta }_{k,i,S}^{\\prime })^{\\prime }$ , $i=1,\\dots , p_k+r_k$ be any set of eigenvectors of $\\hat{\\textbf {M}}_k^*$ , where $\\hat{\\beta }_{k,i,S}\\in \\mathbb {R}^{r_k}$ is the augmented part of the $i$ th eigenvector of $\\hat{\\textbf {M}}_k^*$ .", "Then, (i) $\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2=o_P(1)$ , $i\\le d_k$ .", "(ii) If additionally Assumption (REF ) is satisfied, then $ \\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2 \\rightsquigarrow \\mathrm {Beta}\\lbrace r_k/2, (p_k - d_k)/2 \\rbrace $ for $i > d_k$ .", "Corollary REF further illustrates the behaviour of the norms of the augmented parts $\\hat{\\beta }_{k,i,S}$ for $i > d_k$ .", "Corollary 3 Let $\\hat{\\textbf {M}}_k^*$ be as defined above and let $\\hat{\\beta }_{k,i}^* = (\\hat{\\beta }_{k,i,1}^{\\prime }, \\hat{\\beta }_{k,i,S}^{\\prime })^{\\prime }\\in \\mathbb {R}^{p_k+r_k}$ , $i=1,\\dots ,p_k+r_k$ , be an eigenvector of $\\hat{\\textbf {M}}_k^*$ corresponding to its $i$ th eigenvalue.", "Then, under Assumption (REF ), for $i>d_k$ and for every $\\varepsilon >0$ , $\\displaystyle \\lim _{\\varepsilon \\rightarrow 0^+}\\mathbb {P}(\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2>\\varepsilon )\\rightarrow 1$ , as $n\\rightarrow \\infty $ .", "Interestingly, the limiting distribution of $\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2$ , $i>d_k$ , in Theorem REF does not depend directly on the dimension $p_k$ , but rather on the “amount of noise” $p_k-d_k$ in the $k$ th mode.", "This implies that the more noise components there are in the $k$ th mode, the more difficult it is to differentiate between the signal and the noise eigenvectors using the norms of the corresponding augmented parts; see Figure REF for more insight.", "Finally, the following theorem proves the validity of the method, in the sense of the consistency of the 
estimated latent dimensions, under the assumption of normality of the additive noise.", "Theorem 3 Let Assumption (REF ) be satisfied and let $\\hat{d}_k$ be the estimator of the unknown dimension $d_k$ defined in (REF ).", "Then, $\\lim _{n\\rightarrow \\infty }\\mathbb {P}(\\hat{d}_k=d_k)=1, \\quad k=1,\\dots ,m.$" ], [ "Bootstrap-based ladle estimator", "As a competitor to the augmentation strategy presented in Section REF , we introduce a generalization of the bootstrap-based “ladle”-technique for extracting information from the eigenvectors of the variation matrix presented in [12] for vector-valued observations.", "The general idea is to use bootstrap resampling techniques to approximate the variation of the span of the first $k$ eigenvectors of the corresponding sample scatter matrix, where high variation of the span indicates that the chosen eigenvectors belong to the same eigenspace, i.e., that the difference between the corresponding eigenvalues is small, see [23].", "The information obtained from the eigenvectors is then combined with the one from the eigenvalues of the variation matrix, as in the augmentation estimator.", "As in [12], we refer to this composite estimator as the bootstrap ladle estimator, due to a specific ladle shape of the associated plots.", "Namely, a ladle-shaped curve is obtained when the bootstrap ladle estimator is plotted as a function of the unknown dimension.", "Extending the bootstrap ladle estimator of [12] to tensor-valued observations, we obtain estimators for the orders $d_k$ , $k=1,\\dots ,m$ , where the information contained in the eigenvalues is extracted similarly as in Algorithm REF .", "More precisely, for centered, independent realizations $\\lbrace \\mathcal {X}^1,\\dots ,\\mathcal {X}^n\\rbrace $ of a zero-mean tensor from Model (REF ), let $\\hat{\\textbf {M}}_k$ and $\\hat{\\sigma }_{k,i}^2$ , $k=1,\\dots ,m$ , $i=1,\\dots ,p_k$ , be defined as in Algorithm REF , and let $\\hat{\\textbf {B}}_{j,k}$ be a matrix containing any first $j$ eigenvectors of $\\hat{\\textbf {M}}_k$ .", "We define further $\\hat{\\phi }_{k,\\rm {B}}:\\lbrace 0,1,\\dots ,p_k-1\\rbrace \\rightarrow \\mathbb {R},$ with $\\hat{\\phi }_{k,\\rm {B}}(j)=\\hat{\\sigma }_{k,j+1}^2/\\left(\\sum _{i=1}^{p_k-1}\\hat{\\sigma }_{k,i}^2+1\\right),$ observing that the value of $\\hat{\\phi }_{k,\\rm {B}}(j)$ is large for $j<d_k$ and small, but not zero, for $j\\ge d_k$ , for the reasons presented in Section REF .", "As mentioned earlier, the eigenvalue information is in ladle supplemented with information taken from the eigenvectors of $\\hat{\\textbf {M}}_k$ using a bootstrap technique.", "For the $i$ -th centered bootstrap sample, $\\lbrace \\mathcal {X}_{i}^{1*},\\dots ,\\mathcal {X}_{i}^{n*}\\rbrace $ , let $\\textbf {M}_{k}^{i*}=\\frac{1}{n}\\sum _{j=1}^n\\mathcal {X}_{i,k}^{j*}(\\mathcal {X}_{i,k}^{j*})^{\\prime }$ be the scatter matrix of the $k$ -flattening of the $i$ th bootstrap sample.", "Furthermore, let $\\textbf {B}_{j,k}^{i*}=(\\beta _{1,k}^{i*},\\dots ,\\beta _{j,k}^{i*})$ be a matrix of any first $j$ eigenvectors, belonging to the $j$ largest eigenvalues of $\\textbf {M}_k^{i*}$ , $j=1,\\dots , p_k-1$ .", "Define further $\\hat{f}_{k,\\rm {B}}:\\lbrace 0,1,\\dots ,p_k-1\\rbrace \\rightarrow \\mathbb {R}$ , with $\\hat{f}_{k,\\rm {B}}(0):=0$ and $\\hat{f}_{k,\\rm {B}}(j)=\\frac{1}{s_k}\\sum _{i=1}^{s_k}\\left(1-|\\det (\\hat{\\textbf {B}}_{j,k}^{\\prime }\\textbf {B}_{j,k}^{i*})|\\right),\\,\\text{ for }j>0,$ where $s_k$ is the number of bootstrap samples.", 
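Before defining the composite criterion, the bootstrap eigenvector-variation part can be sketched in a few lines of base R; the sketch below is purely illustrative (the function name ladle_f_k and the choice to operate directly on a list of centered $k$ -flattenings are placeholder conventions, not those of [12] or of any package).

```r
## Illustrative sketch of the bootstrap eigenvector variation f_{k,B}.
## Xk: list of n centered k-flattenings (p_k x rho_k matrices); s_k: number of
## bootstrap samples. The returned vector has f[j + 1] ~ f_{k,B}(j), f_{k,B}(0) = 0.
ladle_f_k <- function(Xk, s_k = 200) {
  n    <- length(Xk)
  p_k  <- nrow(Xk[[1]])
  Bhat <- eigen(Reduce(`+`, lapply(Xk, tcrossprod)) / n, symmetric = TRUE)$vectors
  f    <- numeric(p_k)
  for (b in seq_len(s_k)) {
    Xb    <- Xk[sample.int(n, n, replace = TRUE)]          # resample with replacement
    Bboot <- eigen(Reduce(`+`, lapply(Xb, tcrossprod)) / n, symmetric = TRUE)$vectors
    for (j in seq_len(p_k - 1))
      f[j + 1] <- f[j + 1] +
        (1 - abs(det(crossprod(Bhat[, 1:j, drop = FALSE],
                               Bboot[, 1:j, drop = FALSE])))) / s_k
  }
  f
}
```

Large values of this curve at $j$ indicate that the span of the first $j$ eigenvectors is unstable under resampling, i.e. that the corresponding eigenvalue gaps are small; combining it with the eigenvalue part yields the composite ladle criterion defined next.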
"Note that $\\hat{f}_{k,\\rm {B}}(p_k)=0$ , since $\\hat{\\textbf {B}}_{j,k}$ and $\\textbf {B}_{j,k}^{i*}$ both span the same space $\\mathbb {R}^{p_k}$ .", "The bootstrap ladle estimator $\\hat{d}_k$ of $d_k$ is then defined as the minimizer of $\\hat{g}_{k,\\rm {B}}:\\lbrace 0,1,\\dots ,p_k-1\\rbrace \\rightarrow \\mathbb {R}$ , where $\\hat{g}_{k,\\rm {B}}(j)=\\hat{\\phi }_{k,\\rm {B}}(j)+\\hat{f}_{k,\\rm {B}}(j)/\\left(\\sum _{i=1}^{p_k-1}\\hat{f}_{k,\\rm {B}}(i)+1\\right),$ see Algorithm for a detailed algorithm.", "As the value of the objective function $\\hat{g}_{k,\\rm {B}}$ is not defined at $p_k$ , it is assumed that compression is possible in every mode, which is different from the augmentation estimator where compression in at least one mode is compulsory.", "[12] argue further that if $p_k$ is large (e.g., $p_k>10$ ), the normalizing constants for the eigenvalue and eigenvector parts of the objective function should be replaced by $\\sum _{i=1}^{{p_k/\\log (p_k)}}\\hat{\\sigma }^2_{k,i}+1$ and $\\sum _{i=1}^{{p_k/\\log (p_k)}}\\hat{f}_{k,\\rm {B}}(i)+1$ , respectively.", "The reason is that if $p_k$ is very large, the normalization constant of the eigenvector part of $\\hat{g}_{k,\\rm {B}}$ increases and weights down the bootstrap part compared to $\\hat{\\phi }_{k,\\rm {B}}$ .", "Therefore, it is usually beneficial to optimize $\\hat{g}_{k,\\rm {B}}$ only up to some $q_k<p_k$ , and $q_k={p_k/\\log (p_k)}$ seems, according to [12], justifiable in many applications.", "[ht] Bootstrap ladle estimator for the dimension $d_k$ of the $k$ th mode.", "InputInput Set the number of bootstrap samples $s_k>0$ ; Calculate the scatter $\\hat{\\textbf {M}}_{k}$ of the $k$ -flattening of the sample $\\mathcal {X}^1,\\dots , \\mathcal {X}^n$ ; Calculate the eigendecomposition of $\\hat{\\textbf {M}}_{k}$ and denote by $\\hat{\\textbf {B}}_{j,k}$ any matrix of first $j$ eigenvectors of $\\hat{\\textbf {M}}_k$ ; $i\\leftarrow 1$ $s_k$ Sample with repetition the $i$ -th (centered) bootstrap sample $\\lbrace \\mathcal {X}_{i}^{1*},\\dots ,\\mathcal {X}_{i}^{n*}\\rbrace $ ; Calculate the scatter $\\textbf {M}_{k}^{i*}=\\frac{1}{n}\\sum _{j=1}^n\\mathcal {X}_{i,k}^{j*}(\\mathcal {X}_{i,k}^{j*})^{\\prime }$ of the $k$ -flattening of the $i$ th bootstrap sample; Calculate the eigendecomposition of $\\textbf {M}_{k}^{i*}$ and take $\\textbf {B}_{j,k}^{i*}$ to be any matrix of first $j$ eigenvectors of $\\textbf {M}_k^{i*}$ ; $p_k\\le 10$ Set $q_k\\leftarrow p_k-1$ Set $q_k\\leftarrow {p_k/\\log (p_k)}$ The objective function is $ \\hat{g}_{k,\\rm {B}}(j):\\lbrace 0,1,\\dots q_k\\rbrace \\rightarrow \\mathbb {R}$ , with $\\hat{g}_{k,\\rm {B}}(0)=0$ , and, for $j>0$ , $\\displaystyle \\hat{g}_{k,\\rm {B}}(j)=$ $\\hat{\\sigma }_{k,j+1}^2/\\left(\\sum _{i=1}^{q_k}\\hat{\\sigma }_{k,i}^2+1\\right)+\\hat{f}_{k,\\rm {B}}(j)/\\left(\\sum _{i=1}^{q_k}\\hat{f}_{k,\\rm {B}}(i)+1\\right);$ Return $\\hat{d}_k=\\mathrm {argmin}\\lbrace \\hat{g}_{k,\\rm {B}}(j):\\,i=0,\\dots ,q_k\\rbrace $ ; A clear advantage of the bootstrap ladle estimator over the augmentation one is that no noise variance estimation is required and one can use the matrix $\\hat{\\textbf {M}}_k$ directly to draw inference on the order $d_k$ .", "However, as shown in a simulation study in [17], it is computationally more demanding and less accurate and is therefore omitted from the real data analysis in Section REF (but it is still included in the simulations in Section REF ).", "Further detailed intuition on the bootstrap ladle estimator can be 
found in [12]." ], [ "Numerical results", "The analysis of both simulated and real data was conducted using R [16] jointly with the packages ICtest [14], MixMatrix [19] and tensorBSS [22].", "To obtain the augmentation and ladle estimates of the latent dimensions we use Algorithms REF and , respectively, which are available in the package tensorBSS [22]." ], [ "Simulation study", "In the simulation study, the data are generated from the model $\\mathcal {X}=\\mathcal {Z}\\times _{i=1}^3\\textbf {U}_i+\\mathcal {E},$ where $\\mathcal {Z}=\\mathcal {Z}_0\\times _{i=1}^3\\textbf {A}_i$ , $\\mathcal {E}=\\sigma \\mathcal {E}_0\\times _{i=1}^3\\textbf {V}_i$ and $\\mathcal {Z}_0$ has i.i.d.", "$t(3)$ entries, $\\mathcal {E}_0$ has i.i.d.", "$\\mathcal {N}(0,1)$ entries, $\\textbf {A}_i\\in \\mathbb {R}^{d_i\\times d_i}$ and $\\textbf {V}_i\\in \\mathbb {R}^{p_i\\times p_i}$ , $i=1,2,3$ , are full column-rank and orthogonal matrices, respectively, with $d_1 = 3,\\, d_2 = 5,\\, d_3 = 10,\\, p_1 = 5,\\, p_2 = 15,\\, p_3 = 20$ .", "Furthermore, we consider three different values for the noise variance $\\sigma ^2=0.1,\\,0.5,\\,1.$ Thus, $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })=\\tfrac{5}{3}\\textbf {A}_k\\textbf {A}_k^{\\prime }\\prod _{i\\ne k}\\mathrm {tr}(\\textbf {A}_i\\textbf {A}_i^{\\prime }),\\,\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })=\\sigma ^2\\prod _{i\\ne k}p_i\\textbf {I}_{p_k}.$ The eigenvalues of $\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })$ are for mode 1 $\\lbrace $ 5.75, 12.93, 22.99$\\rbrace $ , for mode 2 $\\lbrace $ 5.39, 5.94, 8.41, 9.81, 12.12$\\rbrace $ and for mode 3 $\\lbrace $ 2.74, 3.02, 3.31, 3.62, 3.94, 4.28, 4.63, 4.99, 5.37, 5.76$\\rbrace $ .", "The eigenvalues of $\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })$ , for $k=1,\\,2,\\,3$ and $\\sigma ^2\\in \\lbrace 0.1,0.5,1\\rbrace $ , along with the resulting signal-to-noise ratios (SNR) for Model (REF ) are given in Table REF where we define $\\mathrm {SNR}_k=\\Vert \\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\Vert ^2/\\Vert \\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })\\Vert ^2$ to be the ratio of the total variations of the signal and the noise components in the $k$ th mode.", "Note that for a fixed mode $k=1,\\,2,\\,3$ , all eigenvalues of $\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })$ are equal.", "The table shows that in all settings the SNR is quite small and the simulation settings are quite challenging.", "Table: Eigenvalues of 𝔼(ℰ k ℰ k ' )\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime }) and SNR for Model (), k=1,2,3k=1,\\,2,\\,3.For each of the 3 variance values, we simulate 1000 data sets of size $n=1000$ from Model (REF ), where in each simulated data sample $\\textbf {V}_i$ , $i=1,2,3$ , are randomly generated orthogonal matrices.", "The invertible matrices specifying the covariance structure of the core are of the form $\\textbf {A}_i=\\textbf {W}_i\\textbf {D}_i\\textbf {W}_i^{\\prime }$ , where $\\textbf {W}_i\\in \\mathbb {R}^{d_i\\times d_i}$ , $i=1,\\dots ,3$ , are randomly generated orthogonal matrices and $\\textbf {D}_1\\approx \\rm {diag}($ 1.857, 2.785, 3.714$)$ , $\\textbf {D}_2\\approx \\rm {diag}($ 1.797, 1.887, 2.247, 2.427, 2.696$)$ and $\\textbf {D}_3\\approx 1.282\\,\\rm {diag}($ 1.05,1.1,...,1.45$)$ .", "Furthermore, the mixing matrices $\\textbf {U}_i\\in \\mathbb {R}^{p_i\\times d_i}$ are taken to be the first $d_i$ columns of randomly generated orthogonal matrices in $\\mathbb {R}^{p_i\\times 
p_i}$ , $i=1,2,3$ .", "The augmentation estimator has two tuning parameters, $s_k$ and $r_k$ , for the estimation of the latent dimensions $d_1,\\,d_2,\\,d_3$ .", "Additionally, one also has to choose the used estimator of the noise variance.", "It is natural to choose the number of replications $s_k>0$ to be large to reduce variation in the results.", "Based on the simulation results of [17], we choose $s_k=50$ , $k=1,\\,2,\\,3$ , as that seems a good compromise between stability and computation time and since the choice of $s_k$ was in [17] deemed to be less crucial than the choice of $r_k$ .", "In the vector case, [13] propose to use $r\\approx p_1/5$ which in light of our results at the population level, see Corollary REF , seems insufficiently large.", "This effect is further visualized in Figure REF which also shows how, in practice, the choice of $r_k$ might be guided by the difference between the signal dimension and the full data dimension.", "However, the optimal choice of $r_k$ should be investigated further, most likely in the high-dimensional framework, and is beyond the scope of this paper.", "To cover a wide range of values, we consider in the simulation study for each mode the values $r_k\\in \\lbrace 1,5,10,25,50\\rbrace $ .", "For the estimation of the noise variance in the matrix case, [17] considered different estimators with the “largest” being the median estimator which also turned out to give the best performance.", "To evaluate if this is the case in the current scenario as well, we consider the quantile-based estimators, $\\hat{\\sigma }_{k,q}^2$ (see Lemma REF ), for the quantiles $q\\in \\lbrace 0,0.1,\\dots ,0.6\\rbrace $ .", "As a competitor we use the bootstrap-based ladle estimator described in Algorithm , with the number of bootstrap samples $s_k=200$ and where we ignored the if-condition $p_k \\le 10$ and always used $q_k = p_k-1$ .", "The results of the simulation study are presented in Figure REF , where in each sub-figure, columns correspond to the row-dimensions $r_k$ , $k=1,2,3$ , of the augmentation matrices, while the rows specify the standardized noise variance.", "Figure: Frequencies of estimated latent dimensions in Model  based on 1000 repetitions.", "The true latent dimensions are d 1 =3d_1=3, d 2 =5d_2=5 and d 3 =10d_3=10 and are always marked as grey.", "Note that for ladle (method boot) the estimates are the same for all r k r_k.The results show that, for the smallest value of the noise variance $\\sigma ^2 = 0.1$ , the correct signal dimension is obtained in all modes and for all values of $r_k$ if the quantile index used for the noise variance estimation is not too extreme.", "However, when the SNR is decreased the number of augmented components $r_k$ and the quantile used for the noise variance estimation become of more relevance, especially for the modes with large values of $p_k$ .", "Counter-intuitively to our theory, too large values of $r_k$ and noise variance lead to underestimation of the signal dimension.", "Recall, however, that in these simulations the SNR is in all cases quite low showing that the estimators perform all rather well.", "Nevertheless, we suggest not to use too drastic values for $r_k$ , restricting to values such as $r_k=10$ or $r_k=25$ and using 0.2 or 0.3 as the quantile level for the noise variance estimation.", "The simulations also show that if $p_k$ is small and the SNR is not too low, the ladle works very well, but it also seems to suffer the most from deviations from these optimal settings and then always 
estimates the signal dimension as zero.", "The disagreement between Corollary REF and the simulation results with respect to increasing the number of augmented components, especially when $p_k$ is large, might be due to the fact that our asymptotic results are stated for fixed dimensionality and growing sample size $n$ .", "In practice, however, $n$ is fixed and, therefore, if $p_k + r_k$ is relatively large we might fall into the regime of the Marchenko–Pastur law [4].", "This will be investigated further in future research." ], [ "Example", "To illustrate the augmentation estimator we considered 882 $224 \\times 224$ RGB images of butterflies from various species, mostly taken in natural habitats.", "We assume that the SNR of the data is larger than in the previous simulation and therefore use $s_k=50$ , $r_k=5$ and estimate the noise variance using the 30% quantile.", "Based on these values, Figure REF visualizes the graph of our augmentation objective function $\\hat{g}_k$ in (REF ) for each mode on a logarithmic scale.", "The estimated signal dimension, obtained as the modewise minima, is (103, 111, 3).", "The fact that $d_3$ is estimated to be 3 indicates that there is a lot of information in the data contained in all three colors, a fact that is indeed plausible, since butterflies are greatly characterized by the color of their wings.", "Figure REF illustrates five selected original (top) and reconstructed (bottom) images of butterflies; differences between the two are visible only when zooming in.", "Figure: Logarithmized objective function $\\hat{g}_k$ for the augmentation estimator using $r_k = 25$ , $s_k=20$ , $k=1,2,3$ , calculated for the \\textit {butterfly} data set." ], [ "Discussion", "Data objects which have a natural representation as tensors, like images or video, are increasingly common and often the dimensions of these tensors are huge.", "In such cases it is often assumed that the data contain a lot of noise and that a representation using smaller tensors should be sufficient to capture the information content of the data.", "A difficult key question is then to decide on the dimensions of these smaller signal tensors.", "We considered this problem in the framework of a tensorial principal component analysis, also known as HOSVD, and suggested an automated procedure for the order determination by extending the works of [13], [17].", "The properties of the novel estimator were rigorously derived and its usefulness was demonstrated using simulated and real data.", "We also observed that, for finite $n$ , increasing the number $r_k$ of augmented rows did not fully agree with the theory, which might be related to the Marchenko–Pastur law.", "This will be investigated further in future research by assuming a high-dimensional framework where we will also consider the use of different norms when computing the objective criterion.", "Furthermore, we plan to derive a hypothesis test which would allow inference about the signal tensor dimension."
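Returning to the butterfly example above, reconstructions of the type shown in Figure REF can be obtained, up to implementation details, with the following base-R sketch (illustrative only; the function names are placeholders and the actual reconstruction code used for the figure may differ): the centered observations are projected onto the first $d_k$ eigenvectors of each mode-$k$ scatter and then mapped back.

```r
## Illustrative modewise compression/reconstruction sketch (base R).
## X: list of equally sized arrays; d: vector of target dimensions (d_1, ..., d_m).
unfold_mode <- function(A, k) {
  p <- dim(A); matrix(aperm(A, c(k, seq_along(p)[-k])), nrow = p[k])
}
tmult_mode <- function(A, B, k) {                 # k-mode multiplication A x_k B
  p <- dim(A); perm <- c(k, seq_along(p)[-k])
  aperm(array(B %*% unfold_mode(A, k), c(nrow(B), p[-k])), order(perm))
}
reconstruct <- function(X, d) {
  n    <- length(X)
  Xbar <- Reduce(`+`, X) / n
  Xc   <- lapply(X, `-`, Xbar)
  V <- lapply(seq_along(d), function(k) {
    Mk <- Reduce(`+`, lapply(Xc, function(A) tcrossprod(unfold_mode(A, k)))) / n
    eigen(Mk, symmetric = TRUE)$vectors[, seq_len(d[k]), drop = FALSE]
  })
  lapply(Xc, function(A) {
    for (k in seq_along(d)) A <- tmult_mode(A, tcrossprod(V[[k]]), k)  # project on V_k V_k'
    A + Xbar
  })
}
## e.g. reconstruct(images, d = c(103, 111, 3)) for a list of 224 x 224 x 3 arrays
```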
], [ "Tensor notation", "Let $\\mathcal {A} = (a_{i_1, \\dots , i_m})\\in \\mathbb {R}^{p_1\\times \\cdots \\times p_m}$ be a tensor of order $m$ .", "The $m$ “directions” from which we can look at $\\mathcal {A}$ are called the modes of $\\mathcal {A}$ .", "As discussed, the number of elements in a high-order tensor is in general rather large, and it is therefore often useful to split a tensor into smaller pieces using $k$ -mode vectors.", "For a fixed mode $k$ , a single $k$ -mode vector of a tensor $\\mathcal {A}$ is obtained by letting the $k$ th index of $a_{i_1, \\dots , i_m}$ vary while simultaneously keeping the rest fixed.", "The total number of $k$ -mode vectors is thus $\\rho _k := \\prod _{i\\ne k}p_i$ and each of them is $p_k$ -dimensional.", "The $p_k \\times \\rho _k$ -dimensional matrix $\\mathcal {A}_k$ having all $k$ -mode vectors of $\\mathcal {A}$ as its columns is known as the $k$ -unfolding/flattening/matricization of the tensor $\\mathcal {A}$ .", "The ordering of the $k$ -mode vectors in $\\mathcal {A}_k$ is for our purposes irrelevant as long as it is consistent, and we choose the cyclical ordering as suggested in [3].", "For a matrix $\\textbf {A}_m\\in \\mathbb {R}^{q_k\\times p_k}$ , the $k$ -mode multiplication $\\mathcal {A}\\times _m\\textbf {A}_m$ of a tensor $\\mathcal {A}$ by the matrix $\\textbf {A}_m$ is defined as the tensor of order $p_1\\times \\cdots \\times p_{k-1}\\times q_k\\times p_{k+1}\\times \\cdots \\times p_m$ obtained by pre-multiplying each $k$ -mode vector of $\\mathcal {A}$ by $\\textbf {A}_k$ .", "Often, linear transformations are applied simultaneously from all $m$ modes and we therefore use the notation $\\mathcal {A}\\times _{i=1}^m\\textbf {A}_i:=\\mathcal {A}\\times _1\\textbf {A}_1\\times _2\\cdots \\times _m\\textbf {A}_m$ .", "The flattenings of such linear transformation have a particularly nice form.", "Namely, the $k$ -flattening of $\\mathcal {A}\\times _{i=1}^m\\textbf {A}_i$ is $\\textbf {A}_k \\mathcal {A}_k (\\textbf {A}_{k+1}\\otimes \\cdots \\otimes \\textbf {A}_m\\otimes \\textbf {A}_1\\otimes \\cdots \\otimes \\textbf {A}_{k-1})^{\\prime }$ .", "Relevant to our purposes is also the Frobenius norm $\\Vert \\mathcal {A}\\Vert _\\mathrm {F}$ of a tensor $\\mathcal {A}$ , defined as the square root of the sum of squared elements of $\\mathcal {A}$ .", "For more details on tensor algebra and the corresponding notation see e.g.", "[3], [9]." 
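In R, both operations are conveniently expressed through aperm; the short sketch below is an illustration with placeholder helper names, and its column ordering of the $k$ -mode vectors follows the increasing-mode convention rather than the cyclical one of [3], which, as noted above, is immaterial as long as the choice is used consistently.

```r
## k-unfolding and k-mode multiplication via aperm (illustrative sketch).
unfold_mode <- function(A, k) {                   # A_k: p_k x prod of the other p_i
  p <- dim(A); matrix(aperm(A, c(k, seq_along(p)[-k])), nrow = p[k])
}
tmult_mode <- function(A, B, k) {                 # A x_k B for a q_k x p_k matrix B
  p <- dim(A); perm <- c(k, seq_along(p)[-k])
  aperm(array(B %*% unfold_mode(A, k), c(nrow(B), p[-k])), order(perm))
}

## small checks on a random 4 x 5 x 6 tensor
A <- array(rnorm(4 * 5 * 6), c(4, 5, 6))
dim(unfold_mode(A, 2))                            # 5 x 24: one 2-mode vector per column
all.equal(norm(unfold_mode(A, 2), "F"), sqrt(sum(A^2)))   # Frobenius norm is preserved
B <- matrix(rnorm(3 * 5), 3, 5)
dim(tmult_mode(A, B, 2))                          # 4 x 3 x 6: mode 2 mapped from R^5 to R^3
```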
], [ "Proofs of technical results", "[Proof of Lemma REF ] $\\mathcal {E}$ has spherical distribution, implying that, for any orthogonal matrices $\\textbf {V}_i\\in \\mathbb {R}^{p_i\\times p_i}$ , $i=1,\\dots ,m$ , $\\mathcal {E}\\times _{i=1}^m\\textbf {V}_i\\sim \\mathcal {E}$ .", "Then $\\textbf {V}_k\\mathcal {E}_k(\\textbf {V}_{k+1}\\otimes \\textbf {V}_{k+2}\\otimes \\cdots \\otimes \\textbf {V}_m\\otimes \\textbf {V}_1\\otimes \\textbf {V}_2\\otimes \\cdots \\otimes \\textbf {V}_{k-1})^{\\prime }\\sim \\mathcal {E}_k,$ for all orthogonal matrices $\\textbf {V}_i$ .", "Since the Kronecker product of orthogonal matrices is again an orthogonal matrix [10], then $(\\textbf {V}_{k+1}\\otimes \\textbf {V}_{k+2}\\otimes \\cdots \\otimes \\textbf {V}_m\\otimes \\textbf {V}_1\\otimes \\textbf {V}_2\\otimes \\cdots \\otimes \\textbf {V}_{k-1})$ is a $(\\prod _{i\\ne k}p_i\\times \\prod _{i\\ne k}p_i)$ orthogonal matrix.", "Thus, $\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })=\\textbf {V}_k\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })\\textbf {V}_k^{\\prime }$ , for all $(p_k\\times p_k)$ orthogonal matrices $\\textbf {V}_k$ , further implying that $\\mathbb {E}(\\mathcal {E}_k\\mathcal {E}_k^{\\prime })=\\sigma _k^2\\textbf {I}_{p_k}$ for some $\\sigma _k^2>0$ .", "[Proof of Theorem REF ] Let $\\beta _{}^*=(\\beta _{},\\beta _{S}) = \\textbf {B}^*_{k,0}\\textbf {a} \\in \\mathbb {R}^{p_k+r_k}$ , where $\\textbf {a}=(\\textbf {a}_1^{\\prime },\\textbf {a}_2^{\\prime })^{\\prime }$ follows a uniform distribution on the unit sphere in $\\mathbb {R}^{p_k-d_k+r_k}$ and $\\textbf {a}_1\\in \\mathbb {R}^{p_k-d_k}$ , $\\textbf {a}_2\\in \\mathbb {R}^{r_k}$ .", "Due to the sphericality of $\\textbf {a}$ , we can take $\\textbf {B}_{k,0}^*=\\mathrm {diag}([\\textbf {u}_{d_k+1},\\dots ,\\textbf {u}_{p_k}],\\textbf {I}_{r_k})$ , where $\\lbrace \\textbf {u}_{d_k+1},\\dots ,\\textbf {u}_{p_k}\\rbrace $ is a fixed orthonormal basis of the null space of $\\textbf {U}_k$ .", "Thus, the norm of the augmented part is $\\Vert \\beta _S\\Vert =\\Vert \\textbf {a}_2\\Vert $ .", "This now shows that $\\Vert \\beta _{S}\\Vert ^2$ , as the squared norm of the $r_k$ -dimensional sub-vector of the random vector $\\textbf {a}$ with a uniform distribution on the unit sphere in $\\mathbb {R}^{p_k-d_k+r_k}$ , has the $\\mathrm {Beta}(r_k/2,(p_k-d_k)/2)$ -distribution.", "In the following proofs we adopt the notation that $\\mathcal {X}^1,\\dots ,\\mathcal {X}^n$ is an i.i.d.", "sample from Model REF .", "For simplicity of notation, we assume without loss of generality that the mean $\\mathcal {M}$ of $\\mathcal {X}$ is a zero-tensor.", "Before giving proofs of Corollary REF and Lemma REF , we present first an auxiliary result.", "Lemma 3 Let $\\textbf {M}_k^*$ and $\\hat{\\textbf {M}}_k^*$ be as defined in the main text.", "Then, $\\Vert \\hat{\\textbf {M}}_k^*-\\textbf {M}_k^*\\Vert \\rightarrow _P 0$ .", "[Proof of Lemma REF ] It is straightforward to verify that $\\textbf {M}_k^* &= \\begin{pmatrix}\\textbf {U}_k\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\textbf {U}_k^{\\prime } &\\textbf {0}\\\\\\textbf {0} &\\textbf {0}\\end{pmatrix}, \\\\\\hat{\\textbf {M}}_k^* &= \\frac{1}{n}\\sum _{i=1}^n\\begin{pmatrix}\\mathcal {X}_{k,i}\\mathcal {X}_{k,i}^{\\prime } &\\hat{\\sigma }_k\\mathcal {X}_{k,i}\\textbf {X}_{S,i}^{\\prime }\\\\\\hat{\\sigma }_k\\textbf {X}_{S,i}\\mathcal {X}_{k, i}^{\\prime } & \\hat{\\sigma }_k^2\\textbf {X}_{S,i}\\textbf {X}_{S,i}^{\\prime 
}\\end{pmatrix}-\\hat{\\sigma }_k^2\\textbf {I}_{p_k+r_k}.$ Observe now the first diagonal block of $\\hat{\\textbf {M}}_k^*$ .", "Due to WLLN, continuous mapping theorem and the fact that $\\hat{\\sigma }_k$ is a consistent estimator of $\\sigma _k$ , we have $\\frac{1}{n}\\sum _{i=1}^n\\mathcal {X}_{k,i}\\mathcal {X}_{k,i}^{\\prime }-\\hat{\\sigma }_k^2\\textbf {I}_{p_k}\\rightarrow _P \\mathbb {E}(\\mathcal {X}_k\\mathcal {X}_k^{\\prime })-\\sigma _k^2\\textbf {I}_{p_k}=\\textbf {U}_k\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\textbf {U}_k^{\\prime }$ .", "The convergence of the second diagonal block is proved similarly.", "Furthermore, by assumption, $\\hat{\\sigma }_k^2\\rightarrow _{P}\\sigma _k^2$ and $\\frac{1}{n}\\sum _{i=1}^n\\textbf {X}_{S,i}\\mathcal {X}_{k,i}^{\\prime }\\rightarrow _{P}\\mathbb {E}(\\textbf {X}_S \\mathcal {X}_k^{\\prime })=\\textbf {0}$ , implying the convergence of the off-diagonal blocks.", "Finally, since matrix norms are continuous functions, we obtain that $\\Vert \\hat{\\textbf {M}}_k^*-\\textbf {M}_k^*\\Vert \\rightarrow _P 0$ .", "[Proof of Corollary REF ] $\\hat{\\textbf {M}}_k^*$ , $\\textbf {M}_k^*$ and their difference $\\textbf {R}_{k,n}=\\hat{\\textbf {M}}_k^*-\\textbf {M}_k^*$ are symmetric matrices.", "Weyl's theorem then implies that, for any $i\\in \\lbrace 1,\\dots , p_k+r_k\\rbrace $ , we have $0\\le |\\hat{\\lambda }_{k,i}-\\lambda _{k,i}|\\le \\Vert \\textbf {R}_{k,n}\\Vert _2=o_P(1)$ , where $\\Vert \\textbf {R}_{k,n}\\Vert _2=o_P(1)$ holds due to Lemma REF .", "[Proof of Lemma REF ] For simplicity of notation, we do not write the mean terms $\\mathcal {M}_k$ and $\\bar{\\mathcal {X}}_k$ below.", "Mimicking the proof of Lemma REF it is easy to show that $\\left\\Vert \\frac{1}{n}\\sum _{i=1}^n(\\mathcal {X}_{k,i}{\\mathcal {X}_{k,i}}^{\\prime })-\\mathbb {E}\\left(\\mathcal {X}_{k,i}\\mathcal {X}_{k,i}^{\\prime }\\right)\\right\\Vert ^2=o_P(1)$ , for $k=1,\\dots ,m$ .", "Mimicking further the proof of Corollary REF for $\\mathbb {E}(\\mathcal {X}_{k,i}\\mathcal {X}_{k,i}^{\\prime })$ one can show that the eigenvalues satisfy $\\hat{\\sigma }_{k,i}^2=\\sigma _{k,i}^2+o_P(1)$ , $k=1,\\dots ,m$ .", "We then have that $\\mathbb {E}(\\mathcal {X}_k\\mathcal {X}_k^{\\prime })=\\textbf {U}_k^{\\prime }\\mathbb {E}(\\mathcal {Z}_k\\mathcal {Z}_k^{\\prime })\\textbf {U}_k+\\sigma _k^2\\textbf {I}_{p_k}$ implying that $\\sigma _{k,i}^2=\\lambda _{k,i}+\\sigma _k^2$ , $k=1,\\dots ,m$ .", "Moreover, $\\hat{S}_k =\\lbrace \\frac{p_i}{p_k}\\hat{\\sigma }_{i,j_i}^2:i=1,\\dots ,m,\\,j_i=1,\\dots ,d_i\\rbrace \\cup \\hat{S}_{0k},$ where $\\hat{S}_{0k}=\\lbrace \\frac{p_i}{p_k}\\hat{\\sigma }_{i,j_i}^2:i=1,\\dots ,m,\\,j_i=d_i+1,\\dots ,p_i\\rbrace $ and each element of $\\hat{S}_{0k}$ is due to the upper discussion of form $\\sigma _k^2+o_P(1)$ .", "i) If $d_1+\\dots +d_m<(1-q)(p_1+\\dots +p_m)$ , then both estimators $\\hat{\\sigma }^2_{k,q_1}$ and $\\bar{\\sigma }_{k,q_1}^2$ , for $q_1\\le q$ belong to $\\hat{S}_{0k}$ , which proves the first statement.", "ii) If $d_1+\\cdots +d_m<p_1+\\cdots +p_m$ then the estimator $\\min \\lbrace \\hat{S}_k\\rbrace $ belongs to $\\hat{S}_{0k}$ , proving the second statement.", "In the following, we let $\\hat{\\sigma }_k^2=\\hat{\\sigma }_k^2(\\mathcal {X}^1,\\dots ,\\mathcal {X}^n)$ be a consistent estimator of the error variance in the $k$ th mode.", "[Proof of Theorem REF ] Lemma REF gives that $\\Vert \\hat{\\textbf {M}}_k^*-\\textbf {M}_k^*\\Vert =o_P(1)$ , thus implying that each element of the 
difference of those two matrices is $o_P(1)$ .", "For simplicity, we write $\\hat{\\textbf {M}}_k^*-\\textbf {M}_k^*=o_P(1)$ , where the convergence is element-wise in probability.", "Lemma REF and Corollary REF now imply $\\sum _{i=1}^{d_k}{\\lambda }_{k,i}^2\\hat{\\beta }_{k,i}^*{\\hat{\\beta }_{k,i}^*}{}^{\\prime }-\\sum _{i=1}^{d_k}{\\lambda }_{k,i}^2{\\beta }_{k,i}^*{\\beta _{k,i}^*}^{\\prime }=o_P(1).$ If we multiply (REF ) by $\\hat{\\beta }_{k,1}^*$ from the right-hand side, we obtain $\\hat{\\beta }_{k,1,S}=\\frac{1}{\\lambda _{k,1}^2}\\sum _{i=1}^{d_k}{\\lambda }_{k,i}^2({\\beta _{k,i}^*}^{\\prime }\\hat{\\beta }_{k,1}^*){\\beta }_{k,i,S}+o_P(1)=o_P(1),$ where the last equality holds due to $\\beta _{k,i,S}=\\textbf {0}$ as shown in Section .", "We repeat the same procedure for all $\\hat{\\beta }_{k,i,S}$ , $i=1,\\dots ,d_k$ , thus proving statement (i).", "We next prove statement (ii).", "If we further multiply (REF ) from both sides by any $\\beta _k^*\\in \\mathrm {Ker}(\\textbf {M}_k^*)$ , we obtain $\\sum _{i=1}^{d_k}\\lambda _{k,i}^2({\\hat{\\beta }_{k,i}^*}{}^{\\prime }\\beta _k^*)^2=o_P(1)$ , thus implying that ${\\hat{\\beta }_{k,i}^*}{}^{\\prime }\\beta _{k,j}^*=o_P(1),\\,\\, i=1,\\dots ,d_k,\\,\\, j=d_k+1,\\dots ,p_k + r_k.$ We let $\\hat{\\textbf {M}}^*_{k,0}$ denote the matrix that is otherwise exactly as $\\hat{\\textbf {M}}^*_{k}$ but with every instance of $\\hat{\\sigma }^2_k$ replaced with $\\sigma ^2_k$ and with the sample means $\\bar{\\textbf {X}}^*_k$ replaced with zero matrices.", "That is, in $\\hat{\\textbf {M}}^*_{k,0}$ the augmented part is scaled with the true value $\\sigma _k$ instead of the estimator $\\hat{\\sigma }_k$ and we “center” using the population-level matrix $\\sigma _k^2 \\textbf {I}_{p_k + r_k}$ .", "By Lemma REF and the law of large numbers, we then have $ \\hat{\\textbf {M}}^*_{k} = \\hat{\\textbf {M}}^*_{k,0} + o_p(1) $ .", "Define then the following $(p_k + r_k) \\times (p_k + r_k)$ orthogonal matrix, $\\textbf {W} := \\begin{pmatrix}\\textbf {U}_k & \\textbf {U}_0 & \\textbf {0} \\\\\\textbf {0} & \\textbf {0} & \\textbf {I}_{r}\\end{pmatrix},$ where the $p_k \\times d_k$ matrix $\\textbf {U}_k$ is as in (REF ), and the $p_k \\times (p_k - d_k)$ matrix $\\textbf {U}_0$ is an arbitrary matrix that makes $(\\textbf {U}_k, \\textbf {U}_0)$ orthogonal.", "Block-wise multiplication now shows that $\\textbf {W}^{\\prime } ((\\mathcal {X}_{k,i})^{\\prime },\\sigma _k\\textbf {X}_{S,i}^{\\prime })^{\\prime }$ equals $\\begin{pmatrix}\\mathcal {Z}_{k,i} (\\textbf {U}_{-k}^\\otimes )^{\\prime } + \\textbf {U}_k^{\\prime } \\mathcal {E}_{k,i} \\\\\\textbf {U}_0^{\\prime } \\mathcal {E}_{k,i} \\\\\\sigma _k \\textbf {X}_{S,i}\\end{pmatrix}.$ The final $p_k - d_k + r_k$ rows of the matrix in (REF ) consist solely of mutually independent elements from the distribution $\\mathcal {N}(0, \\sigma ^2_k)$ .", "As these rows are further independent from the first $d_k$ rows, the distribution of the matrix (REF ) is invariant to multiplication from the left with matrices of the form $\\mathrm {diag}(\\textbf {I}_{d_k}, \\textbf {V}^{\\prime })$ where $\\textbf {V}$ is an arbitrary $(p_k - d_k + r_k) \\times (p_k - d_k + r_k)$ orthogonal matrix.", "Consequently, we also have the distributional invariance, $\\textbf {W}^{\\prime } \\hat{\\textbf {M}}^*_{k,0} \\textbf {W} \\sim \\mathrm {diag}(\\textbf {I}_{d_k}, \\textbf {V}^{\\prime }) \\textbf {W}^{\\prime } \\hat{\\textbf {M}}^*_{k,0} \\textbf {W} \\mathrm {diag}(\\textbf {I}_{d_k}, \\textbf {V}),$ 
for any orthogonal $\\textbf {V}$ .", "As the distribution of $\\mathcal {E}_{k,i}$ is absolutely continuous, we have, for large enough $n$ , that the eigenvalues of $\\hat{\\textbf {M}}^*_{k,0}$ are almost surely simple.", "Without loss of generality, we next restrict solely to this event.", "Consequently, the corresponding eigenvectors $\\hat{\\gamma }_{k,j}^*$ , $j = 1, \\ldots , p_k + r_k$ are uniquely defined up to their sign, which we choose arbitrarily.", "Next, for a symmetric random matrix $\\textbf {A}$ with simple eigenvalues, we let $\\gamma _j(\\textbf {A})$ denote its $j$ th eigenvector multiplied with a uniformly random sign (the sign multiplication guarantees that the distribution of $\\gamma _j(\\textbf {A})$ is well-defined even though the eigenvectors themselves are unique only up to sign.", "Applying the map $\\textbf {A} \\mapsto \\gamma _k(\\textbf {A})$ to relation (REF ) gives that $s_1 \\textbf {W}^{\\prime } \\hat{\\gamma }_{k,j}^* \\sim s_2\\mathrm {diag}(\\textbf {I}_{d_k}, \\textbf {V}^{\\prime }) \\textbf {W}^{\\prime } \\hat{\\gamma }_{k,j}^*,$ where $s_1, s_2$ are uniformly random signs.", "Thus, the sub-vector consisting of the final $p_k - d_k + r_k$ elements of $\\textbf {W}^{\\prime } \\hat{\\gamma }_{k,j}^*$ is orthogonally invariant in distribution.", "Using the decomposition $ \\hat{\\gamma }_{k,j}^* = (\\hat{\\gamma }_{k,j,1}, \\hat{\\gamma }_{k,j,S}) $ , when $j > d_k$ this sub-vector writes $ \\hat{\\textbf {m}} := (\\textbf {U}_0^{\\prime } \\hat{\\gamma }_{k,j,1}, \\hat{\\gamma }_{k,j,S})$ .", "By mimicking (REF ), we see that $ \\textbf {U}_k^{\\prime } \\hat{\\gamma }_{k,j,1} = o_p(1) $ , and then the unit length of $\\hat{\\gamma }_{k,j}^*$ guarantees that $\\Vert \\hat{\\textbf {m}} \\Vert = 1 + o_p(1)$ .", "By writing $\\hat{\\textbf {m}} = \\hat{\\textbf {m}}/\\Vert \\hat{\\textbf {m}} \\Vert + (1 - 1/\\Vert \\hat{\\textbf {m}}\\Vert ) \\hat{\\textbf {m}} = \\hat{\\textbf {m}}/\\Vert \\hat{\\textbf {m}} \\Vert + o_p(1),$ we see that the limiting distribution of $\\hat{\\textbf {m}}$ is the uniform distribution on the unit sphere in $\\mathbb {R}^{p_k - d_k + r_k}$ .", "As such, the limiting distribution of $\\Vert \\hat{\\gamma }_{k,j,S} \\Vert ^2$ is $\\mathrm {Beta}\\lbrace r_k/2, (p_k - d_k)/2 \\rbrace $ .", "As $ \\hat{\\textbf {M}}^*_{k} = \\hat{\\textbf {M}}^*_{k,0} + o_p(1) $ and as the extraction of the $j$ th eigenvector is (up to sign) a continuous mapping when the $j$ th eigenvalue is simple, the same limiting distribution is valid also for $\\Vert \\hat{\\beta }_{k,j,S} \\Vert ^2$ , concluding the proof.", "[Proof of Corollary REF ] Let $F_n$ and $F$ be the CDFs of $\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2$ and $\\mathrm {Beta}\\lbrace r_k/2, (p_k - d_k)/2 \\rbrace $ , respectively, and observe that $F$ is continuous at 0 (from the right).", "Furthermore, due to Theorem REF , $\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2\\rightsquigarrow \\mathrm {Beta}\\lbrace r_k/2, (p_k - d_k)/2 \\rbrace $ , $i > d_k$ .", "Observe then that, since $F_n$ is a CDF, it is a bounded and increasing function.", "Thus, for fixed $n\\in \\mathbb {N}$ , $\\lim _{\\varepsilon \\rightarrow 0}F_n(\\varepsilon )$ exists.", "Furthermore, as $\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2$ converges weakly to $\\mathrm {Beta}\\lbrace r_k/2, (p_k - d_k)/2 \\rbrace $ , and $F$ is continuous on $[0,1]$ , then $F_n$ converges uniformly to $F$ on $[0,1]$ .", "Therefore, due to Moore-Osgood theorem the following limits interchange: $\\lim _{n\\rightarrow \\infty }\\lim 
_{\\varepsilon \\rightarrow 0}F_n(\\varepsilon )=\\lim _{\\varepsilon \\rightarrow 0}\\lim _{n\\rightarrow \\infty }F_n(\\varepsilon )=\\lim _{\\varepsilon \\rightarrow 0}F(\\varepsilon )=0,$ where the last equality holds since $F$ is continuous at 0 from the right as the CFD of the $\\mathrm {Beta}\\lbrace r_k/2, (p_k - d_k)/2 \\rbrace $ -distribution.", "[Proof of Theorem REF ] We give a proof for $s=1$ .", "The general case is proven using the same technique.", "For $i<d_k$ , the denominator of $\\hat{\\Phi }_k$ satisfies $1+\\sum _{j=1}^i\\hat{\\lambda }_{k,j}\\rightarrow _P 1+\\sum _{j=1}^i\\lambda _{k,i}>1.$ The continuous mapping theorem ($x\\mapsto 1/x$ is continuous on $[1,\\infty )$ ) then implies that $(1+\\sum _{j=1}^i\\hat{\\lambda }_{k,j})^{-1}\\rightarrow _P (1+\\sum _{j=1}^i\\lambda _{k,j})^{-1}\\in (0,1).$ Since $\\hat{\\lambda }_{k, i+1}\\rightarrow _P\\lambda _{k,i+1}>0$ , then $\\hat{\\Phi }_k(i)\\rightarrow _P \\lambda _{k,i+1}/(1+\\sum _{j=1}^{i+1}\\lambda _{k,j})>0$ .", "This shows then that $\\hat{g}_k(i)$ , $i < d_k$ , is bounded from below by a quantity which converges in probability to something strictly greater than zero.", "For $i>d_k$ , statement $(ii)$ of Corollary REF gives that for any sequence $\\varepsilon _n>0$ such that $\\varepsilon _n \\rightarrow 0$ , we have $\\mathbb {P}(\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2>\\varepsilon _n)\\rightarrow 1$ , as $n\\rightarrow \\infty $ .", "Thus, $\\mathbb {P}(\\hat{g}_k(i)>\\varepsilon _n)\\ge \\mathbb {P}(\\Vert \\hat{\\beta }_{k,i,S}\\Vert ^2>\\varepsilon _n)\\rightarrow 1$ , as $n\\rightarrow \\infty $ .", "For $i=d_k$ , by Theorem REF we have $\\sum _{j=1}^{d_k}\\Vert \\hat{\\beta }_{k,j,S}\\Vert ^2=o_P(1)$ , and by Corollary REF , $\\hat{\\lambda }_{k,d_k+1}=\\lambda _{k,d_k+1}+o_P(1)=o_P(1),$ implying that $\\hat{\\Phi }_k(d_k) \\rightarrow _p 0 $ .", "Thus $\\hat{g}_k(d_k) = o_p(1) $ .", "The above behaviours of $g_k$ in the three different cases $i < d_k$ , $i = d_k$ , $i > d_k$ now together give the desired claim." ] ]
2207.10423
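The limiting Beta law invoked in the proof above rests on the fact that, asymptotically, the sub-vector built from the final p_k - d_k + r_k coordinates of a noise eigenvector is uniformly distributed on the unit sphere, so the squared norm of its last r_k coordinates follows Beta{r_k/2, (p_k - d_k)/2}. The short Monte Carlo sketch below checks exactly this distributional fact; the dimensions (p = 10, d = 3, r = 5) are illustrative assumptions, not values from the paper, and the code only assumes NumPy and SciPy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative (assumed) dimensions: p observed variables, d signal
# directions, r augmented noise coordinates.
p, d, r = 10, 3, 5
m = p - d + r                      # dimension of the sphere in the proof

# Draw vectors uniformly on the unit sphere in R^m by normalizing Gaussians.
z = rng.standard_normal((200_000, m))
u = z / np.linalg.norm(z, axis=1, keepdims=True)

# Squared norm of the last r coordinates -- the analogue of ||beta_hat_{k,i,S}||^2.
t = np.sum(u[:, -r:] ** 2, axis=1)

# Kolmogorov-Smirnov comparison with the claimed Beta(r/2, (p-d)/2) law.
ks = stats.kstest(t, stats.beta(r / 2, (p - d) / 2).cdf)
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")
```

With this many draws the KS test should not reject, illustrating why the empirical CDF of the squared norm converges to the stated Beta CDF.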
[ [ "Statistical analysis of dislocation cells in uniaxially deformed copper\n single crystals" ], [ "Abstract The dislocation microstructure developing during plastic deformation strongly influences the stress-strain properties of crystalline materials.", "The novel method of high resolution electron backscatter diffraction (HR-EBSD) offers a new perspective to study dislocation patterning.", "In this work copper single crystals deformed in uniaxial compression were investigated by HR-EBSD, X-ray line profile analysis, and transmission electron microscopy (TEM).", "With these methods the maps of the internal stress, the Nye tensor, and the geometrically necessary dislocation (GND) density were determined at different load levels.", "In agreement with the composite model long-range internal stress was directly observed in the cell interiors.", "Moreover, it is found from the fractal analysis of the GND maps that the fractal dimension of the cell structure is decreasing with increasing average spatial dislocation density fluctuation.", "It is shown that the evolution of different types of dislocations can be successfully monitored with this scanning electron microscopy based technique." ], [ "Introduction", "It was first observed nearly 60 years ago that dislocations created during the plastic deformation of crystalline materials tend to form different patterns with morphology depending on the mode, temperature and rate of deformation.", "There is an equally longstanding discussion regarding the physical origin of these patterns.", "A large variety of approaches have been proposed to model the instability leading to the spatial variation of the dislocation density, many of which are based upon analogies with pattern formation in other physical systems.", "It has been argued that dislocation patterns can be understood by the tendency toward the minimization of some kind of elastic energy functional (Hansen and Kuhlmann-Wilsdorf [1], Holt [2], Richman and Vinas [3]), but the theories have never been worked out in details.", "Another approach proposed is to model the dislocation patterning as a reaction-diffusion phenomena of the mobile and inmobile dislocation densities (Walgraef and Aifantis [4], Pontes et al. 
[5]).", "The fundamental problem with this approach is that it is completely phenomenological, i.e.", "one can not see how the different terms appearing in the evolution equations are related to the properties of individual dislocations.", "In a recent series of papers [6], [7], [8] a new theoretical approach based on a continuum theory of dislocations, derived from the evolution of individual dislocations, was proposed for modelling the patterning process.", "According to the theory the main source of the instability is the nontrivial mobility of the dislocations caused by the finite flow stress, while the characteristic length scale of the pattern is selected by the “diffusion” like terms appearing in the theory due to dislocation correlation effects.", "Since, however, the theory is developed for a rather idealised 2D dislocation configuration, further experimental and theoretical investigations are needed to create a general comprehensive theory of dislocation patterning.", "One of the most challenging issue is the characterisation and modelling of the self-similar fractal-like dislocation cell structure formed in FCC crystals oriented for ideal multiple slip (for details see the pioneering works of Zaiser and Hähner [9], [10]).", "In this paper we present high resolution electron backscatter diffraction (HR-EBSD), X-ray line profile analysis, and transmission electron microscopy (TEM) investigations on compressed Cu single crystals.", "Since some of the aspects of the applied methods are developed exclusively for the specific requirements of the addressed problem, in the first half of the paper the applied experimental methods are explained in detail.", "In the second half the obtained results are thoroughly discussed.", "In order to study the dislocation cell formation mechanism in FCC materials a high purity copper single crystal was used.", "The initial average dislocation density was confirmed to be $\\mathrm {1.5 \\times 10^{14}}$ m$^{-2}$ by X-ray line profile analysis (the details of the method applied are explained below in Sec.", "REF ).", "For the compression tests prism shaped samples with dimensions of 2.5 $\\times $ 2.5 $\\times $ 5 mm$^3$ were cut with an electrical discharge machine (EDM).", "The orientation of each surface was of (100) type.", "For removing the amorphous layer created by EDM the specimens were etched in a 30% $\\mathrm {HNO_3}$ solution for 10 minutes.", "To reduce the initial dislocation density the samples were heat treated at $600 \\; \\mathrm {^oC}$ for 6 hours in a vacuum furnace.", "According to X-ray line profile measurements the initial dislocation density decreased to about $\\mathrm {2 \\times 10^{13} \\; m^{-2}}$ .", "The samples were compressed on the squared shape surfaces ensuring uniaxial deformation in the [001] direction corresponding to ideal multiple slip.", "The geometry of the compression, EBSD scanning and the activated slip systems are shown in Fig.", "REF .", "Figure: The sample geometry and the active slip planes of the compressed Cu single crystals.", "The compression was applied on the [001] surface, while the EBSD and X-ray line profile measurements were performed on the [010] surface.Six samples were deformed up to different strain levels.", "The resolved shear stress $\\tau ^*$ vs. 
strain $\\varepsilon $ curve of the sample with the highest terminal deformation is shown in Fig.", "REF .", "The black dots on the curve mark the maximum stresses and strain levels of the 6 different samples.", "The red line in the figure shows the hardening rate $=d \\tau /d \\varepsilon $ as a function of strain.", "As it is expected for ideal multiple slip [11] the $(\\varepsilon )$ curve consists of a nearly horizontal (stage II) and a decreasing (stage III) part.", "Figure: Resolved shear stress and hardening rate vs. strain obtained on a compressed Cu single crystal oriented for (001) ideal multiple slip.", "The black dots on the curve mark the stress levels until which the 6 different samples were compressed.From the six specimens prepared three are close to the strain level corresponding to the transition from stage II to stage III.", "As it is seen below this strain region is critical for the statistical properties of the dislocation cell structure developing during the deformation.", "Finally, in order to prepare the samples for TEM and HR-EBSD measurements, electropolishing was applied at 20 V, 1.2 A using Struers D2 electrolyte for 30 seconds." ], [ "X-ray line profile analysis", "X-ray line profile analysis is a well-established method to determine the average dislocation density, the average squared dislocation density and the dislocation polarization from the measured intensity profile.", "In our analysis the “restricted moments” method developed by I. Groma et al.", "[12], [13], [14] was applied.", "In the evaluation of the measured data the asymptotic behavior of the different order restricted moments are analyzed.", "The $k\\mathrm {^{th}}$ order restricted moments are defined as $v_k(q) = \\frac{\\int _{-q}^{q} q^{\\prime k} I(q^{\\prime }) dq^{\\prime }}{\\int _{-\\infty }^{\\infty }I(q^{\\prime })dq^{\\prime }},$ where $I(q^{\\prime })$ is the intensity distribution near to a Bragg peak, in which $q^{\\prime } = 2 (\\sin \\theta - \\sin \\theta _0) / \\lambda $ , $\\lambda $ is the wavelength of the applied X-rays, and $\\theta $ and $\\theta _0$ are the half of the diffraction and Bragg angles, respectively.", "As it is explained in detail in [12] for large enough $q^{\\prime }$ value the asymptotic form of the $\\mathrm {2^{nd}}$ order restricted moment reads as $v_2(q) = 2 \\Lambda \\langle \\rho \\rangle \\ln {\\left(\\frac{q}{q_0}\\right)},$ were $\\langle \\rho \\rangle $ is the average dislocation density, $q_0$ is a parameter determined by the dislocation-dislocation correlation, and $\\Lambda $ is a constant depending on the dislocation Burgers vector $\\vec{b}$ , the line direction $\\vec{l}$ , and the diffraction vector $\\vec{g}$ .", "$\\Lambda $ is commonly written in the form $\\Lambda =\\pi |\\vec{g}|^2|\\vec{b}^2|C/2$ where $C$ is called the contrast factor.", "(For its actual value a detailed deduction and explanation can be found in [12].)", "From the intensity profiles measured the values of $\\Lambda \\langle \\rho \\rangle $ and $q_0$ can be obtained by fitting a straight line on the asymptotic part of the $v_2(q)$ versus $\\ln (q)$ plot.", "Beside the $\\mathrm {2^{nd}}$ order restricted moment for our analysis the $\\mathrm {4^{th}}$ order restricted moment is also important.", "In the asymptotic regime it is [12]: $v_4(q) = \\Lambda \\langle \\rho \\rangle q^2 + 12 \\Lambda ^2 \\langle \\rho ^2 \\rangle \\ln ^2{\\left(\\frac{q}{q_1}\\right)},$ where $\\langle \\rho ^2 \\rangle $ is the average dislocation density fluctuation, and $q_1$ is a 
parameter.", "For the better visualization it is useful to consider the quantity $\\frac{v_4(q)}{q^{2}} = \\Lambda \\langle \\rho \\rangle + 12 \\Lambda ^2 \\langle \\rho ^2 \\rangle \\frac{\\ln ^2{\\left(\\frac{q}{q_1}\\right)}}{q^{2}},$ which asymptotically tends to $\\Lambda \\langle \\rho \\rangle $ .", "The actual values of the parameters $\\Lambda \\langle \\rho \\rangle $ , $\\Lambda \\langle \\rho ^2 \\rangle $ , and $q_1$ can be determined by fitting the form given by Eq.", "(REF ) to the asymptotic regime of the $ v_4(q)/q^2 $ versus $q$ plot.", "An important statistical parameter of the dislocation microstructure developed is the relative dislocation fluctuation defined as $\\sigma = \\sqrt{\\frac{\\langle \\rho ^2 \\rangle - \\langle \\rho \\rangle ^2}{\\langle \\rho \\rangle ^2}}$ that can be determined from the $\\mathrm {4^{th}}$ order restricted moment.", "It should be noted that the measured intensity often contains a background which has to be subtracted before the calculation of the restricted moments.", "Since however, the background has different contribution to the $\\mathrm {2^{nd}}$ and $\\mathrm {4^{th}}$ order restricted moments, determining the average dislocation density from both moments offers a internal checking possibility whether the background level was selected correctly.", "The profile measurements have been performed with a Cu rotating anode Cu X-ray generator at 40 kV and 100 mA with wavelength $\\lambda =0.15406$ nm.", "In order to reduce the instrumental broadening the symmetrical (220) reflection of a Ge monochromator was used.", "The K$\\alpha _2$ component of the Cu radiation was eliminated by an 0.1 mm slit between the source and the Ge crystal.", "The profiles were registered by a linear position sensitive DECTRIS MYTHEN2 R detector with 50 $\\mu $ m spatial resolution and 1280 channels.", "The sample-detector distance was $0.7$ m resulting an angular resolution in the order of $0.004^{\\circ }$ .", "The evaluation method applied is demonstrated on the intensity distribution (Fig.", "REF ) obtained on the sample compressed up to 43.12 MPa resolved shear stress and on the corresponding restricted moments (Fig.", "REF ).", "The different parameters can be determined with an accuracy of less than 5%.", "Figure: The X-ray line profile obtained at g →=(020)\\vec{g}=(020) on the sample compressed up to 43.12 MPa.", "In order to eliminate the effect of the noise the peak intensity should be at least 10 3 -10 4 \\mathrm {10^3-10^4} times higher than the background, and a subsequent background subtraction should be carried out.Figure: The raw data (blue lines) and the fitted restricted moments (orange lines)." 
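As a concrete illustration of the restricted-moment evaluation described above, the following Python sketch computes v_2(q) and v_4(q)/q^2 from a background-subtracted profile and reads off Lambda*<rho> from their asymptotic behaviour, which also demonstrates the internal consistency check between the two moments. The profile used here is a synthetic stand-in with the characteristic ~q'^{-3} tail of dislocation broadening (so that v_2 grows logarithmically and v_4/q^2 tends to a constant); it is not measured data, and all numerical values are purely illustrative assumptions.

```python
import numpy as np

def restricted_moment(qp, intensity, q_grid, k):
    """k-th order restricted moment v_k(q) of a background-subtracted line
    profile I(q'), with the Bragg peak shifted to q' = 0."""
    norm = np.trapz(intensity, qp)
    vk = np.empty_like(q_grid)
    for i, q in enumerate(q_grid):
        m = np.abs(qp) <= q
        vk[i] = np.trapz(qp[m] ** k * intensity[m], qp[m]) / norm
    return vk

# Synthetic stand-in profile with an asymptotic ~q'^-3 tail (illustration only).
qp = np.linspace(-0.5, 0.5, 4001)                 # q' in 1/nm
intensity = 1.0 / (1.0 + (qp / 0.02) ** 2) ** 1.5

q_grid = np.linspace(0.05, 0.45, 50)
v2 = restricted_moment(qp, intensity, q_grid, 2)
v4 = restricted_moment(qp, intensity, q_grid, 4)

# Asymptotics: v2 ~ 2*Lambda*<rho>*ln(q/q0)  and  v4/q^2 -> Lambda*<rho>.
tail = q_grid > 0.25
slope, _ = np.polyfit(np.log(q_grid[tail]), v2[tail], 1)
print("Lambda*<rho> from v2 tail :", slope / 2.0)
print("Lambda*<rho> from v4/q^2  :", np.mean(v4[tail] / q_grid[tail] ** 2))
```

For real data the background must be subtracted first, and a discrepancy between the two printed estimates signals an incorrectly chosen background level, as noted above.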
], [ "HR-EBSD", "EBSD measurements were carried out in a FEI Quanta 3D SEM equipped with an Edax Hikari EBSD detector.", "Diffraction patterns were recorded with 1$\\times $ 1 binning (640 px $\\times $ 480 px resolution) using an electron beam of 20 kV, 16 nA.", "In order to carry out statistical analysis on the collected data a 20 $\\mu $ m $\\times $ 20 $\\mu $ m area was mapped with a step size of 100 nm on each sample.", "The HR-EBSD technique utilizes image cross-correlation on the recorded diffraction patterns [15].", "The local strain tensor components can be determined, and a lower bound estimate of the GND density can be given using the commercially available software.", "The method requires an ideally stress-free diffraction pattern as reference, that is often difficult to obtain experimentally.", "In the absence of such reference, it is noted that the scales should be implemented as relative and not absolute measures.", "Image cross-correlation based HR-EBSD calculations were performed using BLG Vantage CrossCourt v.4 software that provided the components of the elastic distortion ($\\beta _{ij}^{el}$ ) and the stress tensor ($\\sigma _{ij}$ ) and the also the values of the GND density ($\\rho _\\mathrm {GND}$ ).", "From the distortion map the Nye dislocation density tensor $\\alpha _{ij}$ defined as [16] $\\alpha _{ij}=-e_{klj}\\partial _k \\beta ^{el}_{il}$ can be determined where $\\beta _{ij}^{el}$ is the elastic distortion tensor and $e_{ijk}$ is the Levi-Civita symbol.", "Since, however, in the HR-EBSD measurement the distortion tensor is measured directly on the sample surface, only those components of $\\alpha _{ij}$ can be calculated that are independent from the derivation in the direction perpendicular to the sample surface.", "So, in a coordinate system with $z$ axis perpendicular to the sample surface only the ${iz}$ components of the Nye tensor $\\alpha _{iz}=\\partial _y \\beta ^{el}_{ix}-\\partial _x \\beta ^{el}_{iy}$ can be directly determined from a HR-EBSD measurement.", "Since $\\alpha _{ij}=\\sum _t b_i^t l_j^t \\rho ^t $ where the superscript $t$ denotes a given type of dislocation present in the system with Burgers vector $\\vec{b}^t$ , line direction $\\vec{l}^t$ , and dislocation density $\\rho ^t$ , from the measured Nye tensor components, one can make an estimate on the dislocation population in the different slip systems (for details see below).", "Furthermore, to characterize the GND density the scalar quantity $\\rho _\\mathrm {GND}=\\frac{1}{b}\\sqrt{\\alpha _{xz}^2+\\alpha _{yz}^2+\\alpha _{zz}^2} $ was introduced.", "The GND density and the $\\alpha _{iz}$ tensor components were determined using a C++ code developed by some of the Authors [17], [18].", "It was already demonstrated earlier by Groma et al.", "and Wilkinson et al.", "[19], [20], [21], that for a dislocation ensemble of parallel edge dislocations the asymptotic part of the probability distribution of the internal stress $p(\\sigma )$ decays as $p(\\sigma ) \\approx \\frac{b^2 \\mu ^2}{8 \\pi ^2} C \\langle \\rho \\rangle \\frac{1}{\\sigma ^3}$ were $\\mu $ is the shear modulus and $C$ is a “geometrical” constant depending on the type of dislocation similar to the contrast factor in the case of X-ray peaks, and the stress component considered.", "So, like for X-ray line broadening, in the asymptotic regime the second order restricted moment of $p(\\sigma )$ is linear in $\\ln (\\sigma )$ .", "It should be noted, however, that the stress value obtained by HR-EBSD in a given scanning 
point is the average stress on the area illuminated by the incoming electron beam.", "As a result at large enough stress levels the probability distribution $P(\\sigma )$ measured deviates from the inverse cubic decay, as it turns to a much faster decaying regime [17].", "Nevertheless, for most cases one can easily identify a linear regime on the the second order restricted moment of $p(\\sigma )$ versus $\\ln (\\sigma )$ plot (see Fig.", "REF ).", "From the deviation of the inverse cubic decay we can define a characteristic length scale $r_d=\\mu b /\\sigma _d$ where $\\sigma _d$ is the stress level where the probability distribution start to deviate from the inverse cubic regime.", "In the investigations performed $r_d\\approx 75$ nm.", "This means, that dislocation dipoles narrower than $r_d$ are not “seen” by this method.", "So, compared to X-ray line profile analysis HR-EBSD somewhat underestimates the dislocation density (for details see below).", "Since from the HR-EBSD analysis one can obtain 5 independent stress components ($\\sigma _{zz} \\equiv \\sigma _{33}$ is assumed to vanish in the HR-EBSD analysis) a “formal” dislocation density $\\rho ^*_{ij}=b^2 \\mu ^2/ (8\\pi ) C_{ij} \\langle \\rho \\rangle $ can be determined from the stress maps corresponding to different $ij$ stress components, where the parameter $C_{ij}$ is the geometrical constant of the $ij^{\\mathrm {th}}$ stress component.", "Unlike for the X-ray line broadening there is no existing analytical calculation to give the precise value for $C_{ij}$ .", "($C_{ij}$ is calculated only for the shear stress generated by edge dislocations in isotropic materials in the coordinate system defined by the Burgers and line direction vectors of the dislocation [19].", "For the shear stress $C_{12}=\\pi /[2(1-\\nu )^2)]$ where $\\nu $ is the Poisson's ratio.)", "In the results presented we gave only the average of the 5 formal dislocation densities.", "A typical stress map, stress probability distribution, and the corresponding second order restricted moment can be seen in Fig.", "REF .", "Figure: Stress map, stress probability distribution, and the corresponding second order restricted moment obtained on the sample compressed up to 43.12 MPa \\mathrm {43.12 \\; MPa}." ], [ "TEM investigations", "A TEM specimen was fabricated from the bulk copper single crystal deformed up to $\\mathrm {43.12 \\; MPa}$ resolved shear stress with the aim of qualitative comparison of dislocation structures with those obtained from GND density maps.", "The TEM lamella preparation was carried out via a FEI Quanta 3D FEG dual-beam SEM-FIB microscope.", "The initial fabrication process was carried out at 30 kV acceleration voltage and 1,3,5 and 30 nA ion current.", "The finishing consisted of low current (220 pA, 470 pA) and low voltage (2 kV, 5 kV) ion polishing.", "It is noted that to be able to investigate very large dislocation cells unusually large (20 $\\mu $ m $\\times $ 20 $\\mu $ m) specimens were fabricated requiring extra care during the preparation process [22], [23], [24].", "Bright field images of the dislocation network were recorded on a $6 \\times 6$ $\\mathrm {cm^2}$ $\\mathrm {4k\\times 4k}$ CETA 16 CMOS camera with $\\mathrm {14 \\; \\mu m}$ pixel size, controlled by VELOX software in a Titan Themis G2 200 transmission electron microscope operated at 200 kV (see Fig.13)." 
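On a regular HR-EBSD scan grid, the measurable Nye-tensor components and the scalar GND density defined above reduce to finite differences of the elastic-distortion maps. The sketch below shows this computation in Python; the (3, 3, Ny, Nx) array layout, the assignment of array axis 0 to the y direction, and the random toy input are assumptions made only for illustration, and b = 0.2556 nm is the usual Burgers vector length of Cu.

```python
import numpy as np

def nye_and_gnd(beta_el, step, b=0.2556e-9):
    """Measurable Nye-tensor components alpha_iz = d_y beta_ix - d_x beta_iy
    and the scalar GND density from an HR-EBSD elastic-distortion map.

    beta_el : (3, 3, Ny, Nx) array, beta^el_ij at every scan point
              (second index: 0 = x, 1 = y, 2 = z)
    step    : scan step size in metres (100e-9 for the maps in this work)
    b       : Burgers vector length in metres
    """
    alpha_iz = np.empty((3,) + beta_el.shape[2:])
    for i in range(3):
        d_dy_beta_ix = np.gradient(beta_el[i, 0], step, axis=0)  # axis 0 ~ y
        d_dx_beta_iy = np.gradient(beta_el[i, 1], step, axis=1)  # axis 1 ~ x
        alpha_iz[i] = d_dy_beta_ix - d_dx_beta_iy
    rho_gnd = np.sqrt(np.sum(alpha_iz ** 2, axis=0)) / b
    return alpha_iz, rho_gnd

# Toy random distortion field standing in for measured data.
rng = np.random.default_rng(1)
beta_el = 1e-4 * rng.standard_normal((3, 3, 200, 200))
alpha_iz, rho_gnd = nye_and_gnd(beta_el, step=100e-9)
print("mean GND density:", rho_gnd.mean(), "m^-2")
```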
], [ "Fractal analysis", "The dislocation cell structure developing under unidirectional deformation at ideal multiple slip is known to be a “hole” fractal [9].", "Since the GND maps obtained by HR-EBSD measurements allow us to study the dislocation microstructure on a much larger area than one can do with TEM (applied traditionally for microstructure characterization) we performed fractal dimension analysis on the GND maps at different stress levels.", "We have applied two different methods, the “traditional” box counting and the correlation dimension analysis." ], [ "Box-counting algorithm", "A common algorithm to determine the fractal dimension of a set is the well known box-counting algorithm [25].", "In the method we cover the image with a $L$ sized grid, and then count the number of boxes $N$ covering part of the image.", "The fractal dimension $D_B$ is $D_B=\\frac{d\\ln (N)}{d\\ln (L)},$ that is obtained by fitting a straight line to the $\\ln (N)$ versus $\\ln (L)$ plot.", "It is a numerically cheap, fast and fairly precise method." ], [ "Correlation dimension", "One can also measure the geometrical randomness of points through the so-called correlation integral, which may be estimated for large enough systems with the correlation sum [26] $C(\\epsilon ) = \\frac{1}{N(N-1)}\\sum _{i \\ne j}^{N} H \\left(\\epsilon - |r_i - r_j|\\right)$ where $\\epsilon $ is the threshold distance, $N$ is the number of non-zero points, $H$ is the Heaviside step function, $r_i$ and $r_j$ are the coordinates of the set points.", "The correlation integral scales with the threshold distance as [26] $C(\\epsilon ) \\propto \\epsilon ^{D_c},$ where $D_c$ is the correlation dimension.", "One can easily see, that for points on a circle the correlation dimension $D_c = 1$ , for points on a sphere $D_c = 2$ and for points evenly distributed in a sphere $D_c = 3$ .", "For the analysis of 2D embedded geometrical structures one may expect that $1\\le D_c \\le 2$ ." 
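A minimal implementation of the box-counting estimate used here is sketched below in Python/NumPy; the correlation dimension can be estimated analogously from the pairwise-distance correlation sum and is omitted. The binary map is assumed to be already thresholded as described in the image-filtering subsection, and the filled-square input is only a sanity check for which the estimated dimension should be close to 2.

```python
import numpy as np

def box_counting_dimension(binary_map, box_sizes):
    """Box-counting dimension of a 2-D binary map: the slope of
    ln N(L) versus ln(1/L), where N(L) is the number of L x L boxes
    containing at least one foreground pixel."""
    counts = []
    for L in box_sizes:
        ny, nx = binary_map.shape
        trimmed = binary_map[: ny - ny % L, : nx - nx % L]   # tile exactly
        blocks = trimmed.reshape(ny // L, L, nx // L, L)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled map should give a dimension close to 2.
print(box_counting_dimension(np.ones((512, 512), dtype=bool),
                             box_sizes=[2, 4, 8, 16, 32, 64]))
```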
], [ "Image filtering", "The GND maps measured can not be analyzed with the method explained above in a straightforward manner.", "One issue is that the maps are obviously not binary ones so, one has to introduce some threshold value above which we consider the map intensity to be 1 and 0 below.", "The fractal dimension obtained may depend on the threshold value chosen.", "Another problem we face is that the GND map contains numerous random points.", "They may correspond to individual dislocations or narrow dislocation multipoles but certainly they should not be considered during the fractal analysis.", "A simple method for global binarization is the so-called Otsu's method [27], [28].", "(It is analogous to Fischer's Discriminant Analysis [29] method and equivalent to a globally optimized k-means clustering method [30], [31].)", "In the simplest form it returns a binarised intensity map threshold by maximizing the inter-class variance.", "In order to get this threshold value first the probability distribution of the point intensity $p(I)$ is calculated numerically with some appropriate binning level $L$ chosen.", "After this with a threshold level $t$ the histogram is cut into two subhistograms separated by the threshold, and the quantity $\\sigma _{w}^{2}(t)=P_{0}(t)\\sigma _{0}^{2}(t)+P_{1}(t)\\sigma _{1}^{2}(t)$ is calculated, where $P_{0}$ and $P_{1}$ are the probabilities of the two classes separated by $t$ , while $\\sigma _{0}^{2}$ and $\\sigma _{1}^{2}$ are variances of the two classes.", "The threshold for the image binarization is selected by minimizing $\\sigma _w(t)$ .", "Otsu's method performs exceptionally well when the histogram obtained on the image has a bimodal distribution and the background and foreground values are separated by a deep valley.", "However, if the image is corrupted with additive noise or the variation of intensities between background and foreground are large compared to the mean difference, the histogram may degrade.", "One may observe a fluctuating salt-and-pepper like noise on the Nye-tensor component maps (Fig.", "REF ) and GND density maps (Fig.", "REF ).", "This prevents the direct applicability of Otsu's method.", "In order to eliminate this noise, a smoothing window was applied to the measurable Nye-tensor components.", "The maps were convoluted with a circular averaging window of radius $\\mathrm {r=150}$ nm.", "The application of a smoothing window results in a more pronounced dislocation wall structure (Fig.", "REF ).", "Figure: Example for α yz \\mathrm {\\alpha _{yz}} maps and GND density maps obtained with and without smoothing for the sample compressed up to 43.12 MPa.", "a) α yz \\alpha _{yz} map without smoothing, b) with smoothing, c) GND map without smoothing, d) with smoothing.A globally applied binarization method discussed above may ignore those dislocation walls, which may have a lower dislocation density than the thickest dislocation ensembles.", "In order to avoid this problem a multiscale binarization method was developed.", "The area map was subdivided into squared sub-areas and Otsu's method was applied separately for each sub-areas (Fig.", "REF ).", "By repeating this algorithm with areas with different sizes and by adding up the maps binarized with different scales we could obtain a purely bimodal histogram for the image (Fig.", "REF ).", "Those pixels were considered as dislocation walls which had a higher value than the intensity value corresponding to the minimum of the histogram valley.", "This method is a powerful tool 
to obtain not only the global, but also the globally invisible, locally present dislocation walls (see Fig.", "REF ).", "Figure: a) Otsu's binarization method with box size of 1 μ\\mathrm {\\mu }m, b) 5 μ\\mathrm {\\mu }m, c) the added binary maps at all binarization sizes, and d) the map after the final binarization." ], [ "Burgers vector analysis", "As it was discussed above only the $\\alpha _{iz}, \\; i = x,y,z;$ components of the Nye-tensor can be determined from a HR-EBSD measurement without any further assumption regarding the dislocation system (Fig.", "REF ).", "Therefore, according to Eq.", "(REF ) the vector constructed from the available Nye-tensor components $\\vec{B} = (\\alpha _{xz}, \\alpha _{yz}, \\alpha _{zz})$ is $B_i=\\sum _t b_i^t \\rho ^t \\cos (\\vartheta ^t)$ where $\\vartheta ^t$ is the angle between the line direction of the $t^{\\mathrm {th}}$ type of dislocation and the surface normal vector.", "To characterize the type and sign of the dislocation at a given point of the scanned surface the method introduced in [32] is followed, that is, the quantity $a_i = \\cos \\left(\\varphi _i \\right) = \\frac{\\vec{B} \\cdot \\vec{b}_i}{B b_i} $ can be calculated where the index $i$ goes through all the 6 Burgers vectors existing in FCC crystals [32].", "Certainly one cannot determine the relative population of the different type of dislocations from $\\vec{B}$ , but according to the definition given by Eq.", "(REF ) if the $\\rho ^t$ density of one of the Burgers vectors is dominantly larger than the other ones, the absolute values of the corresponding $a_i$ are close to 1.", "Therefore, $a_i$ values can help to describe the type of dislocations at the sample surface.", "To visualize this the product of the $a_i$ and GND maps were calculated for the 6 possible Burgers vectors.", "(Typical results can be seen in Fig.", "REF .)" ], [ "Results and discussion", "As a first step X-ray line profile measurements with $\\lbrace 020\\rbrace $ Bragg reflection were performed on the $(010)$ surface of the undeformed and the 6 samples deformed up to different stress levels.", "According to earlier investigations on deformed Cu single crystals oriented for ideal multiple slip [14] for this reflection $\\Lambda =0.783$ .", "The intensity distributions were analysed with the restricted momentum method explained earlier.", "Both the $2^{\\mathrm {nd}}$ and the $4^{\\mathrm {th}}$ order restricted moments were evaluated.", "As it can be seen in Fig.", "REF the Taylor linear relation between the square root of the dislocation density and the resolved shear stress is fulfilled.", "The small deviation seen at $\\tau ^*=26.5$ MPa and $\\tau ^*=36.11$ MPa stress levels can be attributed to the fact that close to the end of stage II the dislocation population may differ from the one obtained in the investigations presented in [14].", "As a consequence the contrast factor can be slightly different from the one used.", "The results are in agreement with the earlier investigations of Székely et al. 
[33].", "It should be noted, however, that the X-ray detector used in the measurements reported here has a much better signal-to-noise ratio than the one used earlier resulting in an improved accuracy of the current study.", "Figure: The 〈ρ〉-τ * \\sqrt{\\langle \\rho \\rangle } - \\tau ^* relation, where 〈ρ〉\\langle \\rho \\rangle is the average dislocation density measured by X-ray and τ * \\tau ^* is the resolved shear stress.In Fig.", "REF the $v^4(q)/q^2$ restricted moments are plotted for the 7 samples.", "It can be seen even without any curve fitting that the asymptotic part of the curves tend to a constant value that increases monotonically with the applied stress.", "Since the asymptotic value of the $v^4(q)/q^2$ is proportional to the average dislocation density this is in accordance with the results discussed above.", "It is remarkable, however, that the maximum values of the curves normalized with the asymptotic value is not monotonous with the stress.", "It has a clear maximum at 36.11 MPa stress level.", "After performing the fitting of the function given by Eq.", "(REF ) the $\\sigma $ value defined by Eq.", "(REF ) can be determined.", "Figure: The v 4 (q)/q 2 v^4(q)/q^2 versus qq curves at different compression levels.", "The corresponding resolved shear stresses are indicated in the upper right corner.In agreement with the “phenomenological” feature mentioned above the $\\sigma $ versus $\\tau $ curve exhibits a sharp maximum at $\\tau =36.11$ MPa (see Fig.", "REF ) corresponding to the stage II to stage III transition stress level (see Fig.", "REF ).", "The results obtained are in agreement with the ones reported earlier on the same material [33].", "Figure: The σ(τ * )\\sigma (\\tau ^*) function is represented, where σ\\sigma is the average dislocation density fluctuation.As it was suggested earlier by Mughrabi et al.", "[34], [35] the dislocation system can be envisaged as a composite of “hard” dislocation walls with dislocation density of $\\rho _w$ and “soft” cell interiors with dislocation density $\\rho _c$ .", "Within this model $\\langle \\rho \\rangle =f \\rho _w +(1-f) \\rho _c$ and $\\langle \\rho ^2 \\rangle =f \\rho _w^2 +(1-f) \\rho _c^2$ where $f$ is the volume fraction of the cell walls.", "Since according to earlier investigations [34], [35] $f$ is in the order of 0.1, and $\\rho _w$ is an order of magnitude higher than $\\rho _c$ the second term in $ \\langle \\rho ^2 \\rangle $ can be neglected so $\\langle \\rho ^2 \\rangle \\approx f \\rho _w^2$ With this the quantity $\\rho _w^{app}=\\langle \\rho ^2 \\rangle /\\langle \\rho \\rangle $ , that can be determined directly from the X-ray line profile, is $\\rho _w^{app}=\\rho _w \\frac{1}{1+\\frac{(1-f)\\rho _c}{f \\rho _w}}.$ If $\\rho _c$ is small then the “apparent” dislocation density $\\rho _w^{app}\\approx \\rho _w $ .", "According to Fig.", "REF in stage II, $\\rho _w^{app}\\approx \\rho _w $ increases monotonically and at the stage II to III transition stress level it has a maximum.", "In stage III at large enough stress it tends to saturate.", "Figure: The “apparent” dislocation density ρ w app \\rho _w^{app} as a function of the resolved shear stress.Based on the X-ray line profile results it can be concluded that during stage II the dislocation distribution becomes more and more inhomogeneous within the studied deformation range, as dense dislocation walls are formed.", "At a given deformation level, however, the dislocation density in the walls reaches a maximum level, dislocation 
annihilation prevents the further increase of the dislocation density.", "This process is called dynamic recovery [34].", "In stage III new walls and an increase of the dislocation density in the cell interiors is needed to accumulate more dislocations.", "According to Fig.", "REF for large enough stress levels the term $(1-f)\\rho _c/f/\\rho _w$ is in the order of unity.", "As it is seen above X-ray line profile analysis is a rather powerful method to determine some average statistical properties of the dislocation microstructure, but certainly it is not able to say anything about the actual dislocation morphology.", "Beside the traditionally applied TEM [36] the relatively recently developed HR-EBSD method offers a new perspective to directly study the dislocation microstructure.", "A big advantage of the HR-EBSD is that a much larger area can be studied than by TEM.", "Moreover, the sample preparation is much easier.", "Figure REF shows the GND maps obtained on the 6 deformed samples.", "Figure: The GND density maps obtained on samples deformed up to (a) 17.43, (b) 26.5, (c) 36.11, (d) 43.12, (e) 55.04, and (f) 67.22 MPa.At each deformation level a clear cell structure can be seen with increasing volume fraction of the cell walls.", "In Fig.", "REF the maps of the three $\\alpha _{iz}$ components, the GND density, the $\\sigma _{yy}$ stress, and a TEM picture obtained on the sample deformed is plotted.", "Similar picture were obtained at the other stress levels studied.", "Figure: The maps of the (a) α 13 \\alpha _{13}, (b) α 23 \\alpha _{23}, and (c) α 33 \\alpha _{33} components, (d) the GND, the (e) σ 22 \\sigma _{22}, and (f) a TEM picture obtained on the sample deformed up to 43.12 MPa.", "Notice that the scale and the observation site on the TEM picture is different than on the other ones.According to Fig.", "REF , as it was assumed earlier [34], [35], long-range internal stress develops in the cell interiors.", "Unlike X-ray line profile analysis, HR-EBSD is a direct method to determine the local stress state of the sample, so the result obtained is a direct evidence of the presence of long-range internal stresses.", "The dislocation density was also determined from the stress maps by the restricted moment analysis of the internal stress distribution.", "In order to reduce the error the average of the $\\rho ^*_{ij}$ values were calculated for the 5 independent components of the stress tensor.", "The results obtained are plotted in Fig.", "REF .", "As it is seen there is correlation between the $\\rho _{\\sigma }$ average dislocation density obtained from the stress maps and the $\\langle \\rho \\rangle $ density found by the X-ray line profile analysis, but the relation is clearly not linear.", "One can also note, that in some points (for example in the fourth), a higher difference is present.", "This is due to the local nature of the HR-EBSD, but also due to the fact that the measurements were carried out on different samples, not on the same surface and place.", "Due to these reasons, the first point was eliminated from this figure.", "Moreover, the $C_{ij}$ geometrical factor may vary with stress and due to the finite volume illuminated by the electrons we cannot detect the small dislocation dipoles (see above).", "So, the HR-EBSD internal stress analysis is a possible method for the determination of the dislocation density, but the issue requires further investigations to be able to produce dislocation density values with high precision.", "The average GND density $\\rho 
_\\mathrm {GND}$ defined by Eq.", "(REF ) was also determined from the Nye's tensor maps.", "According to Fig.", "REF , as a general trend $\\rho _\\mathrm {GND}$ increases with increasing deformation but due to the large dislocation density fluctuation in order to get more precise GND density values one should perform EBSD measurements on a very large area that was not possible with the setup used.", "Figure: The dislocation densities obtained from the stress probability distribution (black curve) and from the Nye's tensor components (green curve) versus the dislocation density obtained by X-ray line profile analysis.After the image binarization with the method explained above the fractal dimensions of the $\\rho _\\mathrm {GND}$ maps plotted in Fig.", "REF were also determined by both the Hausdorff ($D_H$ ) and the correlation dimension ($D_c$ ) analysis.", "Since the maps were binarized, every subset of the map is taken into account with the same weight, so the Hausdorff dimension is equal to the box counting dimension ($D_H = D_B$ ).", "Figure: On the left image, the correlation dimension D c D_c is presented versus the Hausdorff-dimension D H D_H obtained by box counting.", "On the right figure, the average dislocation density fluctuation is shown in function of the Hausdorff-dimension.It is found that $D_c\\approx 0.95 D_H$ so, the two methods give the same fractal dimension within experimental error.", "This consistent correlation confirms the formation of a special dislocation structure with non-integer (fractal) dimension.", "It is, however, a nontrivial result that the fractal dimension is decreasing with increasing relative dislocation density fluctuation $\\sigma $ .", "(A similar tendency was reported earlier [33].)", "Figure: The ρ GND ·a i ,i=1..6\\rho _\\mathrm {GND}\\cdot a_i, \\ \\ i=1..6 maps obtained on the sample compressed up to 43.12 MPa.In Sec.", "REF , a method was outlined to analyze Burgers vectors based on the projection of vector $\\vec{B}$ to the different possible Burgers vectors [32].", "For the sample deformed up to $43.12$ MPa the different $\\rho _\\mathrm {GND}\\cdot a_i$ maps obtained are plotted in Fig.", "REF .", "Similar behavior is found for the other samples.", "As it is seen some of the walls have a positive (red) or negative (blue) net Burgers vector while the other ones are more dipole like with a positive net Burgers vector on one side and a negative one on the other side.", "This picture somewhat refines the composite model proposed by Mughrabi et al.", "[34], [35].", "In the original form of the model the sources of the long-range stress are the two dislocation walls allocated on the two sides of a cell wall.", "The dislocation walls are formed by the reaction of dislocations in two slip systems resulting a Burgers vector parallel to the surface of the cell wall, i.e., the GND structure imagined is dipole like.", "However, an elongated region with finite length $d$ having a net Burgers vector can also generate long-range internal stress within the connected $d\\times d$ sized area.", "So, the source of the long-range stress are not necessary dipole-like walls as assumed earlier." 
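The rho_GND * a_i maps discussed above can be generated directly from the measurable Nye-tensor components. A possible Python sketch is given below; it assumes the alpha_iz maps are already expressed in the cubic crystal frame and uses the six <110>-type Burgers-vector directions of an FCC lattice, so it illustrates the projection step only and is not the authors' implementation.

```python
import numpy as np

# The six distinct <110>-type Burgers-vector directions of an FCC crystal,
# written as unit vectors in the cubic crystal frame.
FCC_BURGERS = np.array([[1, 1, 0], [1, -1, 0],
                        [1, 0, 1], [1, 0, -1],
                        [0, 1, 1], [0, 1, -1]]) / np.sqrt(2.0)

def burgers_projection_maps(alpha_iz, rho_gnd):
    """rho_GND * a_i maps, where a_i is the cosine of the angle between the
    measurable Nye vector B = (alpha_xz, alpha_yz, alpha_zz) and the i-th
    FCC Burgers-vector direction.

    alpha_iz : (3, Ny, Nx) array of measurable Nye-tensor components
    rho_gnd  : (Ny, Nx) scalar GND density map
    returns  : (Ny, Nx, 6) array of rho_GND * a_i values
    """
    B = np.moveaxis(alpha_iz, 0, -1)                 # (Ny, Nx, 3)
    norm = np.linalg.norm(B, axis=-1, keepdims=True)
    norm[norm == 0] = 1.0                            # avoid division by zero
    a = (B / norm) @ FCC_BURGERS.T                   # (Ny, Nx, 6) cosines
    return rho_gnd[..., None] * a

# Example use with the outputs of the GND computation sketched earlier:
# maps = burgers_projection_maps(alpha_iz, rho_gnd)
```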
], [ "Summary and conclusions", "Copper single crystals oriented for (100) ideal multiple slip were compressed uniaxially up to different stress levels.", "The dislocation microstructure developing in the samples were studied by X-ray line profile analysis and HR-EBSD.", "The main conclusions are: It is shown that HR-EBSD offers a new efficient method to study dislocation microstructure with much less sample preparation effort than TEM conventionally applied; According to X-ray line profile investigations on compressed Cu single oriented for ideal multiple slip the relative dislocation density fluctuation exhibits a sharp maximum at stage II to III transition stress level; The dislocation cell structure is well described by a hole fractal with fractal dimension decreasing monotonically with the relative dislocation density fluctuation; The presence of the long-range internal stress developing in the cell interiors is directly seen by HR-EBSD measurements.", "Moreover, the stress maps can be directly measured.", "Some of the walls have a positive or negative net Burgers vector while the other ones are more dipole-like with positive net Burgers vector on one side and negative one other other side.", "The results obtained can be directly compared to the prediction of the theoretical models, so they can help to inspire and validate them.", "Finally, it is important to emphasize that there are still a lot of issues that should be addressed.", "One very important question is related to size effect, especially the formation of cells in micropillars, that may lead to new interesting results.", "Moreover, it would be important to study in-situ the cell formation process." ], [ "Acknowledgments", "The authors are grateful to Prof. Géza Györgyi for the fruitful discussions on fractal analysis.", "This work has been supported by the National Research, Development and Innovation Office of Hungary (PDI and IG, project No. NKFIH-FK-138975).", "JLL acknowledges the support by VEKOP-2.3.3-15-2016-00002 of the European Structural and Investment Funds." ] ]
2207.10516
[ [ "Noncommutative geometry and MOND" ], [ "Abstract Modified Newtonian dynamics (MOND) is a hypothesized modification of Newton's law of universal gravitation to account for the flat rotation curves in the outer regions of galaxies, thereby eliminating the need for dark matter.", "Although a highly successful model, it is not a self-contained physical theory since it is based entirely on observations.", "It is proposed in this paper that noncommutative geometry, an offshoot of string theory, can account for the flat rotation curves and thereby provide an explanation for MOND.", "This paper extends an earlier heuristic argument by the author." ], [ "Introduction", "It is generally assumed that dark matter in needed to account for the flat galactic rotation curves in the outer regions of galactic halos.", "A well-known alternative is a hypothesis called modified Newtonian dynamics (MOND) due to M. Milgrom [1].", "Although highly successful, it is not a self-contained physical theory, but rather a purely ad hoc variant based entirely on observations.", "The purpose of this paper is to show that noncommutative geometry, an offshoot of string theory, can provide an explanation for the theory.", "This possibility had already been suggested in Ref.", "[2] based on a heuristic argument.", "The dark-matter problem can also be addressed by means of $f(R)$ and other modified gravitational theories.", "See, for example, Ref.", "[3] and references therein." ], [ "Noncommutative geometry and\nthe dark-matter hypothesis", "In this section we take a brief look at noncommutative geometry, as discussed in Refs.", "[4], [5].", "One outcome of string theory is that coordinates may become noncommuting operators on a $D$ -brane [6], [7]; the commutator is $[\\textbf {x}^{\\mu },\\textbf {x}^{\\nu }]=i\\theta ^{\\mu \\nu }$ , where $\\theta ^{\\mu \\nu }$ is an antisymmetric matrix.", "The main idea, discussed in Refs.", "[4], [5], is that noncommutativity replaces point-like structures by smeared objects, thereby eliminating the divergences that normally occur in general relativity.", "The smearing effect can be accomplished in a natural way by means of a Gaussian distribution of minimal length $\\sqrt{\\beta }$ instead of the Dirac delta function [8], [9], [10].", "A simpler, but equivalent, way is proposed in Refs.", "[11], [12]: we assume that in the neighborhood of the origin, the energy density of the static and spherically symmetric and particle-like gravitational source has the form $\\rho _{\\beta }(r)=\\frac{m\\sqrt{\\beta }}{\\pi ^2(r^2+\\beta )^2}.$ The point is that the mass $m$ of the particle is diffused throughout the region of linear dimension $\\sqrt{\\beta }$ due to the uncertainty.", "According to Ref.", "[8], we can keep the standard form of the Einstein field equations in the sense that the Einstein tensor retains its original form, but the stress-energy tensor is modified.", "It follows that the length scales can be macroscopic despite the small value of $\\beta $ .", "As already noted, the gravitational source in Eq.", "(REF ) results in a smeared mass.", "According to Refs.", "[4], [5], the Schwarzschild solution of the field equations involving a smeared source leads to the line element $ds^{2}=-\\left(1-\\frac{2M_{\\beta }(r)}{r}\\right)dt^{2}+\\frac{dr^2}{1-\\frac{2M_{\\beta }(r)}{r}}+r^{2}(d\\theta ^{2}+\\text{sin}^{2}\\theta \\ d\\phi ^{2}).$ The smeared mass is given by $M_{\\beta }(r)=\\int ^r_04\\pi (r^{\\prime })^2\\rho (r^{\\prime })dr^{\\prime }=\\frac{2M}{\\pi 
}\\left(\\text{tan}^{-1}\\frac{r}{\\sqrt{\\beta }}-\\frac{r\\sqrt{\\beta }}{r^2+\\beta }\\right),$ where $M$ is now the total mass of the source.", "Observe that the mass of the particle is zero at the center and rapidly rises to $M$ .", "As a result, from a distance, the smearing is no longer observed and we get an ordinary particle: $\\text{lim}_{\\beta \\rightarrow 0}M_{\\beta }(r)=M$ .", "Returning now to the dark-matter hypothesis, despite its origin in the 1930's, the implications of this hypothesis were not fully understood until the 1970's with the discovery of flat galactic rotation curves, i.e., constant velocities sufficiently far from the galactic center [13].", "This led to the conclusion that matter in a galaxy increases linearly in the outward radial direction.", "To recall the reason for this seemingly strange behavior, suppose $m_1$ is the mass of a star, $v$ its constant velocity, and $m_2$ the mass of everything else in the outer region, i.e., the region characterized by flat rotation curves.", "Multiplying $m_1$ by the centripetal acceleration, we get $m_1\\frac{v^2}{r}=m_1m_2\\frac{G}{r^2},$ where $G$ is Newton's gravitational constant.", "Using geometrized units, $c=G=1$ , we obtain the linear form $m_2=rv^2.$ So $v^2$ is independent of $r$ , the distance from the center of the galaxy.", "The purpose of MOND is to account for this outcome without hypothesizing dark matter." ], [ "Explaining MOND", "Consider next a particle located on the spherical surface $r=r_0$ .", "The density of the smeared particle now becomes $\\rho _s(r)=\\frac{M\\sqrt{\\beta }}{\\pi ^2[(r-r_0)^2+\\beta ]^2},$ valid for any surface $r=r_0$ .", "[If $r_0=0$ , we return to Eq.", "(REF ).]", "Eq.", "(REF ) can also be interpreted as the density of the spherical surface, yielding the smeared mass of the shell in the outward radial direction, the analogue of the smeared mass at the origin.", "Integration then yields $m_0(r)=\\frac{2M}{\\pi }\\left[\\text{tan}^{-1}\\frac{r-r_0}{\\sqrt{\\beta }}-\\frac{(r-r_0)\\sqrt{\\beta }}{(r-r_0)^2+\\beta }\\right].$ In particular, $\\text{lim}_{\\beta \\rightarrow 0}m_0(r)=M$ .", "So while $m_0(r)$ is zero at $r=r_0$ , it rapidly rises to $M$ in the outward radial direction.", "We can readily show that this mass is completely independent of $r_0$ by rewriting Eq.", "(REF ) as $m_0(r)=\\frac{2M}{\\pi }\\left[\\text{tan}^{-1}\\frac{1-\\frac{r_0}{r}}{\\frac{\\sqrt{\\beta }}{r}}-\\frac{(1-\\frac{r_0}{r})\\frac{\\sqrt{\\beta }}{r}}{(1-\\frac{r_0}{r})^2+(\\frac{\\sqrt{\\beta }}{r})^2}\\right].$ Indeed, as $\\frac{\\sqrt{\\beta }}{r}\\rightarrow 0$ , $m_0(r)\\rightarrow M$ for every $r_0$ .", "To retain the smearing, we would normally require that $\\beta >0$ .", "So for $m_0(r)$ to get close to $M$ , $r=r_0$ would have to be sufficiently large.", "In other words, we would have to be in the outer region of the galaxy, i.e., the region characterized by flat rotation curves.", "Now consider the finite sequence $\\lbrace r_i\\rbrace $ of such radii.", "Then the smeared mass of every spherical shell becomes $m_i(r)=\\frac{2M}{\\pi }\\left[\\text{tan}^{-1}\\frac{r-r_i}{\\sqrt{\\beta }}-\\frac{(r-r_i)\\sqrt{\\beta }}{(r-r_i)^2+\\beta }\\right]$ with $\\text{lim}_{\\beta \\rightarrow 0}m_i(r)=M$ for every $i$ .", "To obtain the total mass $M_T$ of the outer region, we can think of $m_i(r)$ as the increase in $M_T$ per unit length in the outward radial direction, making $M$ a dimensionless constant.", "If we denote the thickness of each smeared spherical shell by $\\Delta r$ , 
then $m_i(r)\\Delta r$ becomes the mass of the shell.", "However, we cannot simply integrate $m_i(r)$ over the entire region, since each shell has a different $r_i$ .", "Instead, we proceed as follows: $\\Delta M_T=\\int _{r_i}^{r_i+\\Delta r}m_i(r)\\,dr=\\int _{r_i}^{r_i+\\Delta r}\\frac{2M}{\\pi }\\left[\\text{tan}^{-1}\\frac{r-r_i}{\\sqrt{\\beta }}-\\frac{(r-r_i)\\sqrt{\\beta }}{(r-r_i)^2+\\beta }\\right]dr\\\\=\\frac{2M}{\\pi }\\left[(r-r_i)\\text{tan}^{-1}\\frac{r-r_i}{\\sqrt{\\beta }}\\left.-\\sqrt{\\beta }\\,\\,\\text{ln}\\left[(r-r_i)^2+\\beta \\right]\\right]\\right|_{r_i}^{r_i+\\Delta r} \\\\= \\frac{2M}{\\pi }\\Delta r\\left[\\text{tan}^{-1}\\frac{\\Delta r}{\\sqrt{\\beta }}-\\sqrt{\\beta }\\,\\frac{\\text{ln}\\left[(\\Delta r)^2+\\beta \\right]}{\\Delta r}+\\frac{\\sqrt{\\beta }\\,\\text{ln}\\,\\beta }{\\Delta r}\\right].$ Given that $\\Delta r$ is large compared to $\\beta $ and that $\\text{lim}_{\\beta \\rightarrow 0}\\sqrt{\\beta }\\,\\,\\text{ln}\\,\\beta =0$ , it follows that $\\Delta M_T\\approx M\\Delta r$ for all $r$ in the outer region.", "So in this region, all shells of thickness $\\Delta r$ have approximately the same mass $\\Delta M_T$ due to the noncommutative-geometry background.", "Since $\\beta >0$ is fixed, we can now safely let $\\Delta r\\rightarrow 0$ .", "This leads to our main conclusion: from $\\text{lim}_{\\Delta r\\rightarrow 0}\\Delta M_T/\\Delta r=M$ , we obtain the linear form $M_T=Mr.$ By Eq.", "(REF ), $M=v^2$ , showing that $v^2$ is indeed independent of $r$ .", "So the noncommutative-geometry background provides an explanation for MOND in the outer region of the galaxy.", "This region is characterized by extremely low accelerations, also referred to as the deep-MOND regime." ], [ "Conclusions", "The existence of flat rotation curves in the outer regions of galaxies can be accounted for by the presence of dark matter or by the use of modified gravitational theories.", "An example of the latter is M. Milgrom's modified Newtonian dynamics or MOND.", "Although a highly successful model, MOND is a purely ad hoc theory based entirely on observations.", "So it cannot be called a self-contained theory.", "It is proposed in this paper that noncommutative geometry, an offshoot of string theory, can account for the flat rotation curves, thereby providing an explanation for MOND." ] ]
2207.10459
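The shell-integration step of the MOND argument above (the result Delta M_T ~ M * Delta r whenever Delta r is much larger than sqrt(beta)) can be checked numerically as sketched below. The parameter values are arbitrary illustrative choices in dimensionless units, not physical claims.

```python
import numpy as np
from scipy.integrate import quad

def m_shell(r, r_i, M, beta):
    """Smeared radial mass density m_i(r) of a shell located at r = r_i."""
    s = r - r_i
    return (2.0 * M / np.pi) * (np.arctan(s / np.sqrt(beta))
                                - s * np.sqrt(beta) / (s ** 2 + beta))

M, beta = 1.0, 1e-10      # arbitrary units; Delta r is much larger than sqrt(beta)
r_i, dr = 10.0, 0.5

dM_T, _ = quad(m_shell, r_i, r_i + dr, args=(r_i, M, beta))
print("Delta M_T =", dM_T, " vs  M * Delta r =", M * dr)
```

The two printed numbers agree to within a fraction of a percent, which is the numerical counterpart of the conclusion M_T = M r, i.e. a squared velocity v^2 = M that is independent of r.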
[ [ "Nonlinear Model Predictive Control for Quadrupedal Locomotion Using\n Second-Order Sensitivity Analysis" ], [ "Abstract We present a versatile nonlinear model predictive control (NMPC) formulation for quadrupedal locomotion.", "Our formulation jointly optimizes a base trajectory and a set of footholds over a finite time horizon based on simplified dynamics models.", "We leverage second-order sensitivity analysis and a sparse Gauss-Newton (SGN) method to solve the resulting optimal control problems.", "We further describe our ongoing effort to verify our approach through simulation and hardware experiments.", "Finally, we extend our locomotion framework to deal with challenging tasks that comprise gap crossing, movement on stepping stones, and multi-robot control." ], [ "Introduction", "Model predictive control (MPC) is a powerful tool for enabling agile and robust locomotion skills on legged systems.", "Its capability of handling flying phases while rejecting disturbances enhances the maneuverability [1], [2] and mobility of quadruped robots [3].", "Standard MPC implementations are concerned with solving finite-horizon optimal control problems (OCPs) at a real-time rate.", "This process comes with a high computational cost that defies online execution.", "Current existing MPC methods for quadrupedal locomotion tackle this challenge through careful software designs and high-performance, parallel implementations [4], [5], [6].", "In addition, they adopt simplified dynamics models to reduce computational burdens: one common simplification is pre-defining the footholds with heuristics-based methods [1], [2], [7] that can restrict the range of achievable motion and the capability to reject external disturbances.", "In this paper, we present a versatile nonlinear MPC (NMPC) strategy that jointly optimizes a base trajectory and a sequence of stepping locations.", "We describe the system dynamics as a function of a control input vector evolving over a time horizon and a time-invariant set of footholds.", "We solve the resulting OCP using a second-order numerical solver [8] that leverages sensitivity analysis (SA) [9], [10], [11], [12] to compute the exact values of the required derivatives efficiently.", "This approach significantly improves the robustness of the controller while ensuring real-time execution.", "Moreover, our formulation is easily adaptable to various nonlinear models and quadrupedal locomotion scenarios.", "In the following sections, we provide the mathematical formulation of our method.", "Furthermore, we describe two examples based on different nonlinear dynamics models compatible with our framework.", "Finally, we present our preliminary results verifying our approach and applying it to various locomotion control tasks.", "Figure: Our MPC-based locomotion controller in action on a simulated Unitree A1 robot (left) and a real one (right).", "Given kinematically generated references (red curve and yellow circles), our planner generates optimal base trajectory (green curve) and footholds (orange circles) that are dynamically feasible." ], [ "Nonlinear MPC", "In this section, we describe our optimal control framework based on second-order SA.", "Subsequently, we formulate a model-agnostic OCP for quadrupedal locomotion, where the optimization variables include system states and stepping locations.", "Finally, we provide two examples applying our formulation to nonlinear dynamics models; namely, the variable-height inverted pendulum and the single rigid body model." 
], [ "Framework", "We express the discrete-time dynamics of a system through an implicit function $\\mathbf {G}_k(\\mathbf {x}_k,\\, \\mathbf {x}_{k+1}, \\mathbf {u}_k,\\, \\mathbf {p}) = \\mathbf {0}_n \\,,$ where $\\mathbf {x}_k\\in \\mathbb {R}^n$ and $\\mathbf {u}_k\\in \\mathbb {R}^{m}$ denote the system state and control input vectors at time step $k$ , $\\mathbf {G}_k$ is a differentiable function capturing the system evolution at time step $k$ , and $\\mathbf {0}_n\\in \\mathbb {R}^n$ is the $n$ -dimensional zero vector.", "In this formulation, the dynamics further depend on a time-invariant vector of parameters $\\mathbf {p}\\in \\mathbb {R}^p$ : while $\\mathbf {u}_k$ affects the system only at time step $k$ , $\\mathbf {p}$ does so for multiple time steps.", "In our application to locomotion control, $\\mathbf {p}$ represents a set of footholds to be stepped on over a time horizon (see sec:locomotion-control).", "We define the stacked state vector $\\mathbf {X}\\in \\mathbb {R}^{Nn}$ and the stacked input vector $\\mathbf {U}\\in \\mathbb {R}^{Nm+p}$ as follows: $\\mathbf {X}&\\begin{bmatrix}\\mathbf {x}_1^\\top & \\mathbf {x}_2^\\top & \\dots & \\mathbf {x}_N^\\top \\end{bmatrix}^\\top \\,, \\\\\\mathbf {U}&\\begin{bmatrix}\\mathbf {u}_0^\\top & \\mathbf {u}_1^\\top & \\dots & \\mathbf {u}_{N-1}^\\top & \\mathbf {p}^\\top \\end{bmatrix}^\\top \\,,$ where $N$ denotes the time horizon.", "Additionally, given a measurement $\\mathbf {x}_0$ of the current state of the system, we define the stacked dynamics constraint function as $\\mathbf {G}(\\mathbf {X}, \\mathbf {U}) \\begin{bmatrix}\\mathbf {G}_0^\\top & \\mathbf {G}_1^\\top & \\dots & \\mathbf {G}_{N-1}^\\top \\end{bmatrix}^\\top \\,.$ Then, we can define a finite-horizon OCP for the system (REF ) $\\begin{split}\\min _{\\mathbf {X},\\,\\mathbf {U}} \\quad &\\mathcal {J}(\\mathbf {X},\\,\\mathbf {U}) \\, \\\\\\text{s.t.}", "\\quad &\\mathbf {G}(\\mathbf {X}, \\mathbf {U}) = \\mathbf {0}_{Nn} \\,,\\end{split}$ where $\\mathcal {J}(\\mathbf {X},\\,\\mathbf {U})$ is a cost function that depends on the stacked state and input vectors.", "If an explicit function $\\mathbf {g}_k$ such that $\\mathbf {x}_{k+1} = \\mathbf {g}_k(\\mathbf {x}_k,\\,\\mathbf {u}_k,\\,\\mathbf {p})$ is available, we can define $\\mathbf {G}_k \\mathbf {x}_{k+1} - \\mathbf {g}_k(\\mathbf {x}_k,\\,\\mathbf {u}_k,\\,\\mathbf {p})\\,$ to adapt the system dynamics to the form (REF ).", "However, we note that (REF ) is general enough for cases where an explicit form of the dynamics equation does not exist.", "For instance, if the dynamics of a system are defined as: $\\mathbf {x}_{k+1} \\mathbf {x}^\\ast = \\operatornamewithlimits{arg\\,min}_{\\mathbf {x}} E_k(\\mathbf {x}, \\mathbf {x}_{k}, \\mathbf {u}_{k})\\,,$ where $E_k$ is the energy function of the system at time step $k$ , then they cannot be made explicit [11], [12].", "Nevertheless, we can define an implicit function $\\mathbf {G}_k \\frac{\\partial E_k}{ \\partial \\mathbf {x}}$ that is equal to zero for $\\mathbf {x}^\\ast $ that minimizes $E_k$ .", "Under mild assumptions, (REF ) implies that there is a map between $\\mathbf {X}$ and $\\mathbf {U}$ , i.e., $\\mathbf {X}(\\mathbf {U})$ , although the map may not have an analytic form.", "Therefore, we can convert (REF ) into the following unconstrained minimization problem $\\min _{\\mathbf {U}} \\quad \\mathcal {J}(\\mathbf {X}(\\mathbf {U}),\\,\\mathbf {U}) \\,.$ We find the optimal control inputs and parameters $\\mathbf {U}^\\ast $ minimizing the 
cost function of $\\mathbf {U}$ .", "Even if an analytic expression of $\\mathbf {X}(\\mathbf {U})$ does not exist, we can perform such optimization using a second-order method through sensitivity analysis [10], [9], [12].", "SA allows us to compute the exact values of the first and the second derivatives efficiently.", "Firstly, We apply the chain rule to the cost function $\\mathcal {J}(\\mathbf {X}(\\mathbf {U}),\\,\\mathbf {U})$ for the total derivative: $\\frac{\\mathrm {d} \\mathcal {J}}{\\mathrm {d} \\mathbf {U}} = \\frac{\\partial \\mathcal {J}}{\\partial X} \\frac{\\mathrm {d} X}{\\mathrm {d} \\mathbf {U}} + \\frac{\\partial \\mathcal {J}}{\\partial U}\\,.$ The partial derivative terms $\\frac{\\partial \\mathcal {J}}{\\partial X}$ and $\\frac{\\partial \\mathcal {J}}{\\partial U}$ are straightforward to compute.", "Meanwhile, the sensitivity matrix $\\mathbf {S}\\frac{\\mathrm {d} X}{\\mathrm {d} \\mathbf {U}} \\in \\mathbb {R}^{Nn \\times (Nm+p)}$ requires additional steps for an analytic expression.", "According to the implicit function theorem, for a feasible pair $(X, \\mathbf {U})$ that satisfies $\\mathbf {G}(\\mathbf {X}, \\mathbf {U}) = \\mathbf {0}_{Nn}$ , $\\frac{\\mathrm {d} \\mathbf {G}}{\\mathrm {d} \\mathbf {U}} = \\frac{\\partial \\mathbf {G}}{\\partial \\mathbf {X}} \\mathbf {S}+ \\frac{\\partial \\mathbf {G}}{\\partial \\mathbf {U}} \\overset{!", "}{=}\\mathbf {0}_{Nn \\times (Nm+p)}\\,.$ By rearranging the terms, we can express $\\mathbf {S}$ as: $\\mathbf {S}= -\\left(\\frac{\\partial \\mathbf {G}}{\\partial X}\\right)^{-1} \\frac{\\partial \\mathbf {G}}{\\partial U}\\,.$ Eventually, the analytic expression of the first and the second derivatives of the cost function are $\\frac{\\mathrm {d} \\mathcal {J}}{\\mathrm {d} \\mathbf {U}} &= \\frac{\\partial \\mathcal {J}}{\\partial \\mathbf {X}} \\mathbf {S}+ \\frac{\\partial \\mathcal {J}}{\\partial \\mathbf {U}}\\,, \\\\\\frac{\\mathrm {d}^2 \\mathcal {J}}{\\mathrm {d} \\mathbf {U}^2} &= \\left(\\frac{\\mathrm {d}}{\\mathrm {d} U} \\frac{\\partial \\mathcal {J}}{\\partial X} \\right) \\mathbf {S}+ \\frac{\\partial \\mathcal {J}}{\\partial X} \\frac{\\mathrm {d} \\mathbf {S}}{\\mathrm {d} U} + \\frac{\\mathrm {d}}{ U} \\frac{\\partial \\mathcal {J}}{\\partial U} \\\\&\\approx \\mathbf {S}^\\top \\frac{\\partial ^2 \\mathcal {J}}{\\partial X^2} \\mathbf {S}+ \\mathbf {S}^\\top \\frac{\\partial ^2 \\mathcal {J}}{\\partial U \\partial X} + \\frac{\\partial ^2 \\mathcal {J}}{\\partial X \\partial U} \\mathbf {S}+ \\frac{\\partial ^2 \\mathcal {J}}{\\partial U^2} \\,.", "$ For the full derivation of the second derivative, we refer the reader to the technical note by [9].", "We note that the generalized Gauss-Newton approximation () can be employed in place of the Hessian () to reduce the computational cost and to guarantee the semi-positive definiteness of the second derivative for nonlinear least-squares objectives." 
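For illustration only (not part of the original paper), a minimal Python sketch of these formulas is given below. It assembles the sensitivity matrix, the total gradient and the generalized Gauss-Newton Hessian from user-supplied Jacobian and Hessian blocks; all names are hypothetical and dense linear algebra is assumed for simplicity.

```python
import numpy as np

def sa_derivatives(dG_dX, dG_dU, dJ_dX, dJ_dU, d2J_dX2, d2J_dXdU, d2J_dU2):
    """Total gradient and generalized Gauss-Newton Hessian of J(X(U), U).
    dG_dX : (Nn, Nn)    Jacobian of the stacked dynamics w.r.t. X
    dG_dU : (Nn, Nm+p)  Jacobian of the stacked dynamics w.r.t. U
    dJ_dX, dJ_dU : cost gradients (column vectors)
    d2J_* : blocks of the cost Hessian, with d2J_dXdU of shape (Nn, Nm+p)."""
    # sensitivity matrix from the implicit function theorem: S = -(dG/dX)^{-1} dG/dU
    S = -np.linalg.solve(dG_dX, dG_dU)
    # chain rule: dJ/dU = S^T dJ/dX + dJ/dU (column-vector convention)
    grad = S.T @ dJ_dX + dJ_dU
    # generalized Gauss-Newton approximation of d^2J/dU^2
    hess = S.T @ d2J_dX2 @ S + S.T @ d2J_dXdU + d2J_dXdU.T @ S + d2J_dU2
    return grad, hess

# A regularized Newton update on the stacked input vector could then read:
# dU = np.linalg.solve(hess + damping * np.eye(hess.shape[0]), -grad)
```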
], [ "Quadrupedal Locomotion Control", "We formulate an OCP for quadrupedal locomotion using the framework described in sec:framework.", "The main objective is to track a reference base trajectory generated from a user's commands.", "Thus, we define the following cost function on the base positions $\\mathbf {r}_k$ over a time horizon $N$ : $\\mathcal {J}(\\mathbf {X},\\,\\mathbf {U}) & K_1 \\sum _{k=0}^{N} \\Vert (\\mathbf {r}_{k+1} - \\mathbf {r}_k) - (\\mathbf {r}_{k+1}^\\mathrm {ref}- \\mathbf {r}_k^\\mathrm {ref})\\Vert _2^2 \\\\&+ K_2 \\sum _{k=0}^{N} \\Vert h_{k+1} - h_{k+1}^\\mathrm {ref}\\Vert _2^2 \\\\&+ K_3 \\sum _{i=1}^p \\hspace{-5.69046pt}\\sum _{j=i+1}^{\\min {(p, i+3)}} \\Vert (^i - ^j) - (^{\\mathrm {ref},i} - ^{\\mathrm {ref},j})\\Vert _2^2 \\\\&+ \\mathcal {R}_{\\textrm {model}}(\\mathbf {X},\\,\\mathbf {U}) \\,, $ The term (REF ) penalizes base velocity tracking errors, () penalizes base height tracking errors, () regularizes the displacements between adjacent stepping locations, and finally () is a model specific cost term.", "The variable $\\mathbf {r}_k$ is a part of the system state vector $\\mathbf {x}_k$ , and $^i$ is a part of the parameter vector $\\mathbf {p}$ .", "We provide the values of the weighting coefficients $K_i$ for all the cost terms we present in tab:parameters.", "eq:mpc-obj-footstep-regularization regularizes the foothold optimization towards kinematically feasible solutions.", "We determine the reference footholds $^{\\mathrm {ref},i}$ based on a simple impact-to-impact method whereby support feet lie below the corresponding hip in the middle of the stance phase [13].", "We note that the term () only penalizes relative positions between stepping locations, thus making the corresponding support polygons loosely resemble the reference ones [14]." 
], [ "Examples", "We briefly describe two nonlinear systems, namely the variable-height inverted pendulum (IPM) and the single rigid body (SRBM) models, and we show how they can be integrated into our framework.", "When possible, we discretize the continuous dynamics by employing a semi-implicit Euler method; given $\\mathbf {r}_k$ and $\\mathbf {r}_{k-1}$ , we approximate the velocity at time steps $k$ and $k+1$ , respectively, as $\\dot{\\mathbf {r}}_k \\approx (\\mathbf {r}_k - \\mathbf {r}_{k-1}) / \\Delta t$ and $\\dot{\\mathbf {r}}_{k+1} \\approx \\dot{\\mathbf {r}}_k + \\ddot{\\mathbf {r}}_k \\Delta t$ , where $\\ddot{\\mathbf {r}}_k$ can be computed using a model-specific dynamics equation: $\\mathbf {r}_{k+1} \\hspace{-1.70709pt} &\\approx \\mathbf {r}_k + \\dot{\\mathbf {r}}_{k+1} \\Delta t \\nonumber \\\\&\\approx \\mathbf {r}_k + \\dot{\\mathbf {r}}_k \\Delta t + \\ddot{\\mathbf {r}}_k \\Delta t^2 \\nonumber \\\\&\\approx 2 \\mathbf {r}_k - \\mathbf {r}_{k-1} + \\ddot{\\mathbf {r}}_k \\Delta t^2 \\nonumber \\\\&= 2 \\mathbf {r}_k - \\mathbf {r}_{k-1} \\nonumber \\\\& \\qquad + f_\\textrm {model}(\\mathbf {r}_k,\\,\\mathbf {u}_k,\\,^{i_1},\\,^{i_2},\\,\\ldots ,\\,^{i_{|\\sigma _k|}}) \\Delta t^2 \\nonumber \\\\&\\mathbf {g}_{\\textrm {model},k}\\left(\\mathbf {r}_{k-1},\\,\\mathbf {r}_k,\\,\\mathbf {u}_k,\\,\\right) \\,, $ where $\\sigma _k$ denotes the subset of the stance foot positions at time step $k$ , and $^{i_j}\\in \\sigma _k,\\,\\forall j\\in \\lbrace 1,\\,2,\\,\\ldots ,\\,|\\sigma _k|\\rbrace $ .", "The makeup of the control input vector $\\mathbf {u}_k$ depends on the model and we will introduce it in due time.", "In the following subsections, we define an explicit function $f_\\textrm {model}$ for the IPM and the SRBM.", "As mentioned in sec:framework, we define $\\mathbf {G}_k \\mathbf {x}_{k+1} - \\mathbf {g}_k(\\mathbf {x}_k,\\,\\mathbf {u}_k,\\,\\mathbf {p})\\,$ since an explicit expression of the system dynamics exists." 
], [ "Inverted Pendulum Model", "The inverted pendulum model represents a legged robot as a point mass $m$ concentrated at the center of gravity of the system $\\mathbf {r}$ and a massless telescoping rod in contact with a flat ground.", "We assume that the contact point of the rod is at the center of pressure (CoP) of the robot $\\mathbf {p}$ , i.e., the location at which the resultant ground reaction force vector $\\mathbf {f}$ would act if it were considered to have a single point of application [15].", "The CoP always exists inside the support polygon of all stance foot positions $^i\\in \\mathbb {R}^3$ .", "Thus, we can express its position with respect to an inertial reference frame as a convex combination: $\\mathbf {p} = \\sum _{^i\\in \\sigma } w^i \\, ^i,$ where $\\sigma $ is the set of the stance foot positions, and $w^i\\in \\mathbb {R}_{\\ge 0}$ is a non-negative scalar weight corresponding to $^i$ that satisfy $\\sum _i w^i = 1 \\,.$ The equation of motion for the IPM is given as follows: $\\ddot{\\mathbf {r}} &= (\\mathbf {r} - \\sum _{^i\\in \\sigma } w^i \\, ^i) \\frac{\\ddot{h} + \\Vert \\mathbf {g}\\Vert _2}{r_z} + \\mathbf {g} \\nonumber \\\\&f_\\textsc {ipm}(\\mathbf {r},\\, \\mathbf {u},\\,^{i_1},\\, ^{i_2},\\, \\ldots ,\\, ^{i_{|\\sigma |}}) \\,, $ with control input vector $\\mathbf {u}\\left[ \\ddot{h} \\; w^{i_1} \\; w^{i_2} \\; \\ldots \\; w^{i_{|\\sigma |}} \\right]^\\top $ , and parameters $^{i_j}\\in \\sigma ,\\,\\forall j\\in \\lbrace 1,\\,2,\\,\\ldots ,\\,|\\sigma |\\rbrace $ .", "The derivation of eqn:ipeom is available in the related paper [16].", "Furthermore, we define the model specific cost term () for the IPM as follows: $\\mathcal {R}_{\\textsc {IPM}} \\sum _{k=0}^{N-1} \\left( \\frac{K_4}{2} \\Vert 1 - \\sum _i w_k^i \\Vert _2^2 + K_5 \\sum _i \\mathcal {S}_{\\ge 0}( w_k^i ) \\right) \\hspace{-2.84544pt}\\,,$ where $\\mathcal {S}_{\\ge r}\\colon \\mathbb {R}\\rightarrow \\mathbb {R}_{\\ge 0},\\,\\forall r\\in \\mathbb {R}$ is a $\\mathcal {C}^2$ -continuous function following [Bern2017InteractiveDO, eq.", "(8)], namely $\\mathcal {S}_{\\ge r}(x) {\\left\\lbrace \\begin{array}{ll}0 &\\Gamma \\ge \\epsilon \\\\-\\frac{1}{6\\epsilon }\\Gamma ^3 + \\frac{1}{2}\\Gamma ^2 + \\frac{\\epsilon }{2}\\Gamma + \\frac{\\epsilon ^2}{6} &-\\epsilon \\le \\Gamma < \\epsilon \\\\\\Gamma ^2 + \\frac{\\epsilon ^2}{3} &\\Gamma < -\\epsilon \\end{array}\\right.", "}$ with $\\Gamma = x-r$ and $\\epsilon = 0.1$ .", "This term enforces the constraint $\\sum _i w^i = 1 \\,$ , and the non-negativity of the weights as a soft constraint." 
], [ "Single Rigid Body Model", "If the limbs of a robot are lightweight compared to its body, we can neglect their inertial effects and reduce the system to a single rigid body with mass $m$ and body frame moment of inertia ${}^B\\mathbf {I}\\in \\mathbb {R}^{3\\times 3}$ .", "The position $\\mathbf {r}\\in \\mathbb {R}^3$ and unit quaternion $\\mathbf {q}\\in \\mathbb {S}^3$ define the pose of the lumped rigid body.", "${}^B\\omega $ denotes its angular velocity vector expressed in body frame, and $\\mathbf {f}^i\\in \\mathbb {R}^3$ denotes the ground reaction force associated with the stepping location $\\mathbf {s}^i\\in \\sigma $ .", "Then, we can write the dynamics of the system as: $\\ddot{\\mathbf {r}} &= \\frac{1}{m} \\sum _{\\mathbf {s}^i\\in \\sigma } \\mathbf {f}^i + \\mathbf {g} f_{\\textsc {srbm}, \\mathrm {t}}(\\mathbf {u}) \\,, \\\\{}^B\\dot{\\omega } &= {}^B\\mathbf {I}^{-1} \\left[ \\mathbf {R}(\\mathbf {q})^\\top \\sum _{\\mathbf {s}^i\\in \\sigma } \\left( \\mathbf {s}^i - \\mathbf {r}\\right)\\mathbf {f}^i - {}^B\\omega {}^B\\mathbf {I}{}^B\\omega \\right] \\nonumber \\\\&f_{\\textsc {srbm}, \\mathrm {r}}(\\mathbf {r},\\, \\mathbf {q},\\, {}^B\\omega ,\\, \\mathbf {u},\\, ^{i_1},\\, ^{i_2},\\, \\ldots ,\\, ^{i_{|\\sigma |}}) \\,, $ where $\\mathbf {R}(\\mathbf {q})$ is the rotation matrix corresponding to $\\mathbf {q}$ , and $\\mathbf {u}\\left[ \\mathbf {f}^{i_1} \\; \\mathbf {f}^{i_2} \\; \\ldots \\; \\mathbf {f}^{i_{|\\sigma |}} \\right]^\\top $ is the control input vector.", "We employ a semi-implicit Euler method similar to the one outlined in subsec:examples.", "However, to integrate the orientation dynamics, we employ a forward Lie-group Euler method which allows us to preserve the unitary norm constraint of unit quaternions.", "Specifically, we approximate the body frame angular velocity at time step $k$ and $k+1$ , respectively, as ${}^B\\omega _k \\approx 2\\,\\mathfrak {Im}(\\bar{\\mathbf {q}}_{k-1} \\ast \\mathbf {q}_k) / \\Delta t$ and ${}^B\\omega _{k+1} \\approx {}^B\\omega _k + {}^B\\dot{\\omega }_k \\Delta t$ , where $\\mathfrak {Im}(\\mathbf {q})$ extracts the imaginary part of $\\mathbf {q}$ , $\\bar{\\mathbf {q}}$ is the conjugate of $\\mathbf {q}$ , $\\ast $ is the quaternion multiplication operator, and ${}^B\\dot{\\omega }_k$ can be computed using ().", "Then, our integration scheme for unit quaternions translates to $\\mathbf {q}_{k+1} \\approx \\mathbf {q}_k \\ast \\exp \\left({}^B\\omega _{k+1} \\Delta t\\right)$ , where $\\exp \\colon \\mathbb {R}^3\\rightarrow \\mathbb {S}^3$ is a Lie-group exponential function which, for unit quaternions, has the following closed form: $\\exp (\\mathbf {v}) {\\left\\lbrace \\begin{array}{ll}\\cos (\\frac{1}{2}\\Vert \\mathbf {v}\\Vert ) + \\frac{\\mathbf {v}}{\\Vert \\mathbf {v}\\Vert } \\sin (\\frac{1}{2}\\Vert \\mathbf {v}\\Vert ) &\\Vert \\mathbf {v}\\Vert \\ne 0 \\\\1 &\\Vert \\mathbf {v}\\Vert = 0\\end{array}\\right.}", "\\,.$ Using the equations above, we can finally write the SRBM dynamics in the form (REF ) as: $\\begin{bmatrix}\\mathbf {r}_{k+1} \\\\\\mathbf {q}_{k+1}\\end{bmatrix} &\\approx \\begin{bmatrix}2 \\mathbf {r}_k - \\mathbf {r}_{k-1} + f_{\\textsc {srbm}, \\mathrm {t}}(\\mathbf {u}_k) \\Delta t^2 \\\\\\mathbf {q}_k \\ast \\exp ({}^B\\omega _{k+1} \\Delta t )\\end{bmatrix} \\nonumber \\\\&\\mathbf {g}_{\\textsc {srbm},k}(\\mathbf {r}_{k-1},\\, \\mathbf {r}_k,\\, \\mathbf {q}_{k-1},\\, \\mathbf {q}_k,\\, \\mathbf {u}_k,\\, ) \\,.", "$ Following (), we can define a cost term for the SRBM 
penalizing deviations from a reference orientation trajectory $\\mathbf {q}_k^\\mathrm {ref}$ [18] and imposing non-negative vertical components of the GRFsFor our preliminary results, we do not include friction cone soft constraints to (REF ) to keep our implementation simple.", ": $\\mathcal {R}_{\\textsc {SRBM}} \\sum _{k=0}^{N-1} \\left( K_6 \\left( 1-\\left| \\mathbf {q}_k^\\top \\mathbf {q}^{\\mathrm {ref}}_k \\right| \\right) + K_7 \\sum _i \\mathcal {S}_{\\ge 0}( \\mathbf {f}_{z, k}^i ) \\right) \\hspace{-2.84544pt}\\,.$" ], [ "Experiments", "We present a series of simulation and hardware experiments we conducted to verify the efficacy of our approach.", "The results we discuss in this section were attained using the IPM described in sec:inverted-pendulum.", "The footage of the experiments is available in the supplementary videoThe video is available in https://youtu.be/BrJSRlAJaX4., along with preliminary results achieved with the SRBM.", "In all our experiments, the optimal base trajectories output by the MPC scheme were tracked by a quadratic programming-based whole-body controller [10].", "Firstly, we tested the robustness of our controller for locomotion on flat terrains using the Unitree A1 robot.", "We hindered the robot while it was trotting in place, as portrayed in the snapshots in fig:disturbance.", "In our tests, the robot was able to withstand unexpected disturbances and successfully recover its stability.", "We demonstrate in the accompanying video how the foothold optimization improves the capability of the system to resist large lateral pushes.", "Figure: We pushed (left) and disturbed the Unitree A1 robot by putting a plate under its feet (right) while it was performing a trot gait.We show the versatility of our foothold optimization approach to adapt a gait to different terrain types, namely a gap crossing and a stepping stones scenarios.", "The former setting consists of a sequence of rifts with different widths; the latter comprises a grid of stepping stones distant 20 from each other the robot must step on – see fig:gc-ss.", "We add the following terms to the objective function (REF ) to model each gap and stepping stone, respectively: $\\mathcal {L}_\\mathrm {gc}() &K_8 \\sum _{i=1}^p \\mathcal {S}_{\\ge g}(| s_x^i - g_x |) \\,, \\\\\\mathcal {L}_\\mathrm {ss}() &K_9 \\sum _{i=1}^p -\\exp {-\\frac{1}{2} \\frac{\\Vert ^i - \\mathbf {t}\\Vert _2^2}{K_{10}^2}} \\,, $ where $g$ and $g_x$ are the gap half width and x-position, respectively, $\\mathbf {t}$ is the stepping stone location, and $K_9$ and $K_{10}$ are tuning parameters.", "To model the stepping stones in (), we employ a negative Gaussian function centered at the corresponding positions; in this way, we incentivize nearby stepping locations to converge towards the closest footholds.", "As shown in the supplementary video, these simple penalty terms are sufficient to ensure that the associated constraints are almost never violated.", "The occasional missteps may be avoided through a careful tuning of (REF ) and (), or by designing some fallback control strategies.", "Figure: Snapshots of simulation experiments for the gap crossing (left) and stepping stone (right) scenarios with the Aliengo robot.", "The red spheres depict the reference stepping locations, while the yellow and orange trajectories are the outputs of our MPC controller.", "The resulting footstep placements deviate considerably from the corresponding references and allow the robot to avoid 32 wide gaps (left, in light red) and step on 
isolated footholds (right, in green).Figure: Two Laikago quadruped robots controlled using a centralized MPC strategy in simulation.", "The robot on the left is executing a walking gait, whereas the one on the right is performing a flying trot.", "The reference trajectories are denoted by blue spheres, and the optimized ones are represented by green spheres: the latter deviate significantly from the former to prevent the robots from colliding.Finally, we extend our framework so that the state vector contains the positions of two robots, and we couple the solutions for the two subsystems by adding the following collision avoidance term to the objective function: $\\mathcal {L}_\\mathrm {ca}(\\mathbf {X}) K_{11} \\sum _{k=0}^N\\mathcal {S}_{\\ge 1}(\\Vert \\mathbf {r}_k^a - \\mathbf {r}_k^b\\Vert _2) \\,,$ where $\\mathbf {r}_k^a$ and $\\mathbf {r}_k^b$ are the states of the two robots at time step $k$ , respectively.", "This cost term ensures that the robots keep a distance of at least 1 from each other – see fig:ipm-pair.", "As shown in the supplementary video, our NMPC framework is able to control the multi-robot system in real time by solving a single OCP." ], [ "Conclusion and Future Work", "Our NMPC scheme facilitates the implementation of robust controllers for various quadrupedal locomotion tasks.", "We can easily integrate different nonlinear dynamics models into our framework.", "We leave a complete demonstration with different models and more comprehensive analysis for future work.", "Our immediate next step is to verify our formulation of the SRBM and test it on hardware.", "Furthermore, we intend to compare our method to other state-of-the-art nonlinear control frameworks." ] ]
2207.10465
[ [ "Evaporation of a Kerr-black-bounce by emission of scalar particles" ], [ "Abstract We study a regular rotating black hole evaporating under the Hawking emission of a single scalar field.", "The black hole is described by the Kerr-black-bounce metric with a nearly extremal regularizing parameter $\\ell=0.99r_+$.", "We compare the results with a Kerr black hole evaporating under the same conditions.", "Firstly, we compute the gray-body factors and show that the Kerr-black-bounce evolves towards a non-Schwartzchild-like asymptotic state with $a_* \\sim 0.47$, differently from a Kerr black hole whose asymptotic spin would be $a_* \\sim 0.555$.", "We show that this result depends on the combined contributions of the differences in the gray-body factors and the surface gravity affected by the regularizing parameter.", "We also discuss how the surface gravity affects the temperature and the primary emissivity and decreases those quantities with respect to the Kerr black hole.", "Consequently, the regular black hole has a longer lifetime.", "Finally, we briefly comment on the possibility of investigating the beyond-the-horizon structure of a black hole exploiting its Hawking emission." ], [ "Introduction", "General Relativity (GR) has been tested for more than one century providing outstanding results in describing the Solar System and the Universe.", "Despite GR's successes, its lack in addressing many open problems remains and propels the idea that it may not be the ultimate theory of gravity.", "The origin of the cosmic acceleration and the nature of the dark contents of the Universe have been extensively studied using modified theories of gravity [1].", "In recent years, the detection of Gravitational Waves (GW) from Black Hole (BH) coalescence [2] by the LIGO/Virgo Collaboration and the direct observation of the BH shadows at the center of the Milky Way [3] and M87 [4] by the Event Horizon Telescope (EHT), provided a new test bench capable of probing GR robustness in a strong-field regime [5], [6], [7].", "The existence of singularities, namely portions of spacetime with an infinite curvature, is a hint that the classical framework of GR should break down or at least, be incomplete, at this high-energy scale.", "It is well accepted the idea that singularities just revile our lack of knowledge in the high energy regime and the related problem may be cured by a quantum theory of gravity.", "Unfortunately, a theory of quantum gravity is not yet developed.", "Nevertheless, it is still possible to gain intuition by postulating the existence of regularized spacetime inspired by quantum gravity arguments and study whether these new metrics give rise to new signatures or modify preexisting characteristics.", "Since the 90s these motivations lead the research of regularized metrics mimicking the behavior of BH solutions [8].", "Furthermore, in light of the new available high-energy regime tests, the field gained even more traction, and are regularly announced many studies about quasi-normal modes, superradiant regimes, instabilities, etc.", "[9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23].", "An interesting regular metric was proposed in [24] and further analyzed in [25].", "This spacetime configuration, known as black-bounce, allows to interpolate between the standard and regularized Schwarzschild BH and the Morris-Thorne traversable wormhole by introducing an additional parameter, $\\ell $ .", "The black-bounce metric caused a fervent activity leading to 
many studies of its characteristics [26], [27], [28], [29], [30], [31], [32] and was recently extended in order to account for rotation [33], and afterwards rotation and charge [34].", "The Kerr-black-bounce and Kerr-Newman-black-bounce have also gained the attention of many studies [35], [36], [37], [38], [39], [40].", "The main motivation of this paper is to further enlarge the analysis of the Kerr-black-bounce characteristics by considering its dynamical evolution due to the Hawking evaporation driven by a single scalar field.", "Such characteristics are certainly irrelevant for BHs of the size we measure today but may become a powerful and handy tool in light of the possible future measurement of primordial BHs.", "This paper is organized as follows.", "Section II contains a brief review of the Kerr-black-bounce.", "Section III shows the equation governing the scalar perturbation of the Kerr-black-bounce metric, the evolution of the metric under a single scalar emission, and the numerical method used for calculating the Gray-Body Factors (GBFs).", "In section IV the results are presented.", "Section V provides a summary in which future perspectives are considered." ], [ "Kerr-black-bounce metric", "In this section we briefly review the Kerr-black-bounce metric [33]: $ds^2=- \left(1- \frac{2M \sqrt{ \tilde{r}^2 + \ell ^2}}{\Sigma }\right) dt^2 + \frac{\Sigma }{\Delta } d\tilde{r}^2 + \Sigma d\theta ^2 +\frac{A\sin ^2 \theta }{\Sigma } d\phi ^2 - \frac{4Ma\sqrt{\tilde{r}^2+ \ell ^2}\sin ^2\theta }{\Sigma }dt d\phi ,$ where $M$ , $a$ , and $\ell $ denote the mass, the spin, and the regularizing parameter of the metric, while $\Sigma = \tilde{r}^2 +\ell ^2+a^2 \cos ^2 \theta , \;\;\;\; \;\;\Delta = \tilde{r}^2 +\ell ^2 +a^2 - 2M\sqrt{\tilde{r}^2 + \ell ^2} , \;\;\;\; \;\; A=( \tilde{r}^2 +\ell ^2+a^2)^2 - \Delta a ^2 \sin ^2 \theta .$ This is a generalization of the static and spherically symmetric metric proposed by Simpson-Visser [24], [25], [41].", "It is a stationary, axially symmetric metric in which the positive parameter $a<M$ describes the angular momentum of the black-bounce.", "This line element was recently further extended in order to describe a charged spacetime [34].", "When the positive regularizing parameter $\ell \rightarrow 0$ , the Kerr-black-bounce metric reduces to the singular Kerr solution, while for $\ell \ne 0$ the spacetime is regular and possesses a wormhole throat at $\tilde{r}=0$ .", "A coordinate singularity interpreted as an event horizon is present when $\Delta =0$ , or, equivalently, when $\tilde{r}_\pm = \sqrt{r^2_\pm -\ell ^2},$ where $r_\pm = M \pm \sqrt{M^2-a^2}$ .", "Depending on the values of the regularizing parameter $\ell $ , the metric (REF ) describes a wormhole for $\ell >r_+$ , for which no coordinate singularities are present on the manifold.", "If $\ell <r_+$ the metric (REF ) describes a BH which has one or two coordinate singularities depending on whether $r_- < \ell <r_+$ or $\ell <r_-$ , respectively.", "Finally, when $\ell =r_+$ the throat and the event horizon coincide.", "To better visualize this interplay, it is convenient to define a new radial coordinate $r=\sqrt{\tilde{r}^2+ \ell ^2}$ and pass from an extrinsic description of the manifold to an intrinsic one.", "It is easy to notice that $r\ne 0$ for all $\ell \ne 0$ .", "In particular, the minimum value of $r$ corresponds to the minimal radius of the throat.", "The coordinate
$r$ measures the distance from the center of the object.", "Given this new coordinate, the metric reads $ds^2=- \left(1- \frac{2Mr}{\Sigma }\right) dt^2 + \frac{\Sigma }{\delta \Delta } dr^2 + \Sigma d\theta ^2 +\frac{A\sin ^2 \theta }{\Sigma } d\phi ^2 - \frac{4Mar\sin ^2\theta }{\Sigma }dt d\phi ,$ and $\Sigma =r^2 +a^2 \cos ^2 \theta , \;\;\;\; \;\;\Delta = r^2 +a^2 - 2Mr , \;\;\;\; \;\; A=(r^2+a^2)^2 - \Delta a ^2 \sin ^2 \theta , \;\;\;\; \;\; \delta =1-\frac{\ell ^2}{r^2} .$ If $\ell \ne 0$ , the curvature singularity at $r=0$ is always prevented by the wormhole throat.", "When $\ell >r_+$ the wormhole throat is located at a larger radial value with respect to the coordinate singularity of the event horizon.", "In this way, the presence of the horizon is prevented by the regular finite surface of the wormhole throat.", "If $0\ne \ell <r_+$ , the throat of the wormhole is enclosed in the event horizon and the metric describes a BH.", "In the following part of this paper we focus on regular BHs avoiding coordinate singularities and inner horizons.", "The absence of the inner horizon is a desirable feature since it might avoid the problems related to mass inflation.", "Moreover, this choice allows a nearly maximal value of $\ell $ for which the metric (REF ) mostly differs from the Kerr BH and still describes a BH.", "It has to be noticed that the metric (REF ) or, equivalently, (REF ), is inspired by the reasonable quantum gravity argument of avoiding singularities and other pathologies, and it is not a vacuum solution of GR." ], [ "Scalar perturbations and evolution", "In this section we derive the equation describing the massless scalar perturbation of the metric (REF ) and discuss the appropriate boundary conditions.", "The massless Klein-Gordon equation $\nabla ^{\mu } \nabla _{\mu } \Phi =0$ in curved spacetime reads $\frac{1}{\sqrt{-g}} \partial _{\mu } (\sqrt{-g} g^{\mu \nu } \partial _{\nu }) \Phi =0.$ Taking into account the decomposition $\Phi = R_{l m}(r) S_{l m}(\theta ) e^{i m \phi } e^{-i \omega t }$ where $\omega $ is the perturbation frequency and $m$ is the azimuthal quantum number, (REF ) separates into an angular equation $\frac{1}{\sin \theta } \frac{d}{d\theta }\left(\sin \theta \frac{d}{d\theta } S_{l m}\right)+ \left(a^2 \omega ^2 \cos ^2 \theta + A_{l m} - \frac{m^2}{\sin ^2 \theta }\right)S_{l m}=0,$ which is the spheroidal harmonics equation with eigenvalues $A_{l m}$ , and a radial equation $\sqrt{\delta } \frac{d}{dr} \left( \sqrt{\delta } \Delta \frac{d R_{l m}}{dr} \right) + \left( \frac{K^2}{\Delta } + 2 a m \omega - a^2 \omega ^2 - A_{l m } \right)R_{l m}=0,$ where $K=(r^2 + a^2) \omega - a m$ .", "The angular equation (REF ) is the spin-less case of the well-studied spin-weighted spheroidal harmonics equation [42], [43], [44].", "A high-order expansion in $a \omega $ provides approximations of the eigenvalues $A_{l m}$ up to any desired accuracy.", "For our purposes, it is worth studying the radial equation (REF ) in two limits, near the horizon and at spatial infinity.", "If the regularizing parameter satisfies $\ell < r_+$ and the Kerr-black-bounce metric (REF ) describes a regular BH, then the near-horizon solution reads [35], $R(r) \sim (r-r_+)^{\pm i \sigma }, \;\; \;\;\;\; \;\;\sigma = \frac{a m - 2 M \omega r_+}{\gamma (r_+-r_-)},\;\; \;\;\;\; \;\; \gamma = \sqrt{1-\frac{\ell
^2}{r_+^2}},$ while the far away solution simply reads $R(r)\\sim \\frac{1}{r} e^{\\pm i \\omega r}.$ To study a BH described by (REF ),evolving by the solely emission of scalar particles due to Hawking radiation, it is necessary to set up a scattering-like problem and take into account in-going and out-going boundary condition at infinity, while on the event horizon one must consider pure absorption.", "This asymptotic solutions and the conservation of energy fluxes, both at the horizon and at infinity, allow to calculate the GBF or transmission coefficient, defined as $T= \\frac{\\frac{dE_{hole}}{dt}}{\\frac{dE_{in}}{dt}}.$ The GBFs depend on the modes, and, at a constant $\\ell $ , are functions of both the BH spin parameter and frequency of the perturbation, $T= { T ^l_m }(a , \\omega )$ .", "The GBFs emerge as a consequences of a geometrical potential in equation (REF ) which, acting as a barrier, partially shields the Hawking radiation from being totally emitted.", "This way the radiation emerging from the BH is not the one of a black body.", "The field quanta posses energy and spin and their emission comes at the expense of both BH mass and angular momentum.", "Following the path outlined in [45], the rates of mass and angular momentum loss are called $f$ and $g$ , respectively, and they read $\\begin{pmatrix}f\\\\g\\end{pmatrix}=\\sum _{i,l,m}\\frac{1}{2\\pi } \\int _0^{\\infty }dx \\frac{T_{i,l,m}}{e^{2\\pi k/\\kappa }- 1}\\begin{pmatrix}x\\\\ma_*^{-1}\\end{pmatrix},$ where the sum is taken over all particle species $i$ , and $l$ , $m$ are the usual angular momentum quantum numbers.", "Here $x=\\omega M$ , $k=\\omega -m\\Omega $ and $ \\kappa =\\sqrt{\\frac{r_H^2}{r_H^2 + \\ell ^2}}\\sqrt{1-a_*^2}/2r_+$ is the surface gravity of the BH [8], [34].", "Since the choice of analyzing a singular Kerr BH $\\ell =0$ and the regular Kerr-black-bounce BH having $\\ell =0.99 r_+$ , the pre-factor $\\sqrt{\\frac{r_H^2}{r_H^2 + \\ell ^2}}$ takes the values of 1 or $\\sqrt{1-0.99^2}$ , respectively.", "To determine whether a BH spins up or down during its evolution it is necessary to calculate the mass to angular momentum loss rates.", "For this reason, one defines $h=\\frac{g}{f}-2.$ A root of the function $h$ , $\\tilde{a}_*$ , for which $h^{\\prime }(\\tilde{a}_*)>0$ , represents a stable state towards which the BH evolves while evaporating.", "To investigate the temporal evolution of angular momentum and mass it is followed the path outlined in [45], [46], [47] and later in [48], [49] defining $y=-\\ln {a},$ $z=-\\ln {M/M_i},$ and $\\tau =-M^{-3}_i t,$ where $M_i$ is the initial mass of the BH.", "The evolution is then fully determined by the differential equations $\\frac{d}{dy} z=\\frac{1}{h},$ $\\frac{d}{dy} \\tau =\\frac{e^{-3z(y)}}{h f},$ and the reasonable choice of the initial conditions $z(t=0)=0$ and $\\tau (t=0)=0$ .", "To estimate the primary spectrum of scalar particles it is used the well known formula [45], [46], [50], [51], [52], [53], [54], [55], [56], [57], [58], [59], [60]: $\\frac{d^2N}{dt dE}=\\frac{1}{2\\pi }\\sum _{l,m}\\frac{T_{l,m}(\\omega )}{e^{2\\pi k/\\kappa }-1}~.$" ], [ "Numerical method", "An explicit analytical calculation of the GBFs is possible only under stringent approximations and numerical methods are usually required to evaluate them.", "It is implemented a code based on the so called shooting method which has been applied to solve similar problems, for example, in [61], [62] and allows the calculation of the GBFs with good accuracy.", "The first step 
is to rewrite Eq.", "(REF ) in terms of the re-scaled coordinate $x=\\frac{r-r_+}{r_+},$ having so: $\\delta x^2(x+\\tau )^2 \\partial _x^2 R(x) +2x(x+\\tau )\\left( \\frac{1}{2}(2x+\\tau \\delta + \\frac{x(x+\\tau )}{x+1}(1-\\delta )) \\right) \\partial _x R(x) + V(\\omega ,x)R(x)=0,$ where $V(\\omega ,x)=\\mathcal {K}^2-x(x+\\tau )(A_{l m }+a^2 \\omega ^2-2am\\omega ),$ with $\\tau =\\frac{r_+ - r_-}{r_+}$ , $\\mathcal {K}=\\varpi +x(x+2)\\bar{\\omega }$ , $\\varpi = (2-\\tau )(\\bar{\\omega }-m \\bar{\\Omega }_+)$ , where $\\bar{\\omega }=r_+ \\omega $ , $\\bar{\\Omega }_+ = r_+ {\\Omega _+}$ and $\\Omega _+=\\frac{a}{2Mr_+}$ .", "Setting purely in-going boundary condition near the horizon, the solutions of Eq.", "(REF ) can be expressed in the form of the Taylor expansion [61], [62] of the form $R(x)= x^{- i \\varpi /(\\gamma \\tau )} \\sum _{n=0}^\\infty a_n x^n.$ The coefficients $a_n$ can be determined by substituting (REF ) in (REF ) and resolving iteratively the algebraic equation.", "The near horizon solution is used to set the boundary conditions and numerically integrate the radial equation up to large distances, where the general form of the solution takes the form: $R(x) \\rightarrow \\frac{ Y^{l m }_{in}}{r_+} \\frac{e^{-i \\bar{\\omega }x}}{x} + \\frac{ Y^{l m }_{out}}{r_+} \\frac{e^{i \\bar{\\omega }x}}{x}.$ It is than possible to extract the coefficient $Y^{l m}_{in}(\\omega )$ in order to evaluate the GBF.", "The normalization of the scattering problem is set by requiring $a_0=1$ , this way GBFs read $&T^{l m}(\\omega )= | Y^{l m}_{in}(\\omega )|^{-2}.$ With this method we computed the GBFs of a scalar perturbation on a regular BH described by the Kerr-black-bounce metric having a nearly extremal regularizing parameter ($\\ell =0.99 r_+$ ).", "Different values for the spin parameter of the BH spanning from $a_*=0$ to $a_*=0.99$ are considered and the GBFs calculated the up to $l=4$ ." 
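As a rough illustration of the procedure described above (not the authors' code), the following Python sketch integrates the rescaled radial equation with purely in-going boundary conditions and extracts $Y^{lm}_{in}$. It keeps only the leading term of the near-horizon expansion ($a_0=1$) and approximates the spheroidal eigenvalue by its leading order $A_{lm}\approx l(l+1)$; both are simplifications of this sketch, and the numerical parameters `x0` and `x_inf` are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def greybody_factor(omega, a, l, m, ell, M=1.0, x0=1e-4, x_inf=400.0):
    """Shooting-method sketch for the scalar GBF of a Kerr-black-bounce in the
    rescaled coordinate x = (r - r_+)/r_+ (see the equations above)."""
    rp, rm = M + np.sqrt(M**2 - a**2), M - np.sqrt(M**2 - a**2)
    tau, wbar = (rp - rm) / rp, rp * omega
    varpi = (2.0 - tau) * (wbar - m * a / (2.0 * M))   # with r_+ Omega_+ = a/(2M)
    gamma = np.sqrt(1.0 - (ell / rp)**2)
    A_lm = l * (l + 1)                                 # leading-order eigenvalue (assumption)

    def rhs(x, yv):
        R, dR = yv[0] + 1j * yv[1], yv[2] + 1j * yv[3]
        delta = 1.0 - (ell / (rp * (1.0 + x)))**2
        K = varpi + x * (x + 2.0) * wbar
        V = K**2 - x * (x + tau) * (A_lm + (a * omega)**2 - 2.0 * a * m * omega)
        Bc = x * (x + tau) * (2.0 * x + tau * delta
                              + x * (x + tau) * (1.0 - delta) / (x + 1.0))
        d2R = -(Bc * dR + V * R) / (delta * x**2 * (x + tau)**2)
        return [dR.real, dR.imag, d2R.real, d2R.imag]

    # purely in-going condition R ~ x^{-i varpi/(gamma tau)} near the horizon
    p = -1j * varpi / (gamma * tau)
    R0, dR0 = x0**p, p * x0**(p - 1.0)
    sol = solve_ivp(rhs, (x0, x_inf), [R0.real, R0.imag, dR0.real, dR0.imag],
                    rtol=1e-8, atol=1e-10)

    # match to R ~ (Y_in/r_+) e^{-i wbar x}/x + (Y_out/r_+) e^{+i wbar x}/x at x = x_inf
    x = sol.t[-1]
    R, dR = sol.y[0, -1] + 1j * sol.y[1, -1], sol.y[2, -1] + 1j * sol.y[3, -1]
    u_in, u_out = np.exp(-1j * wbar * x) / x, np.exp(1j * wbar * x) / x
    A = np.array([[u_in, u_out],
                  [u_in * (-1j * wbar - 1.0 / x), u_out * (1j * wbar - 1.0 / x)]])
    c_in, _ = np.linalg.solve(A, np.array([R, dR]))
    return 1.0 / abs(rp * c_in)**2        # T^{lm}(omega) = |Y_in|^{-2} with a_0 = 1
```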
], [ "Results", "Let's compare the scalar perturbations of the Kerr BH and the ones of the nearly extremal Kerr-black-bounce BH.", "Those BHs share many characteristics such as the presence of a superradiant regime and a non-null asymptotic value of the spin parameter $a_*$ .", "Nevertheless, for the two different metrics, the phenomenology changes and it is of great interest to analyze those differences.", "The GBFs of the modes $l=m=0$ are identical (as shown in Fig.REF for the non-rotating cases) and this equality is independent of the considered BH spin.", "The GBFs of the Kerr-black-bounce BH show a common behavior for the modes with $l\\ne 0$ .", "When they are compared with the Kerr BH ones, they grow faster for frequencies lower then the main GBFs flex point, contrary, grow slower for higher frequencies (as shown in Fig.REF for the l=1, m=-1 mode).", "Also, this behavior is independent from the spin of the BH.", "The scalar perturbation of both metrics shows superradiant amplification if $\\omega <m\\Omega $ .", "When this condition is met, both the GBFs have negative values, which are interpreted as a wave amplification.", "Fig.REF displays the comparison of the GBFs for the $l=m=1$ modes at $a_* =0.99$ highlighting the superradiant regime.", "The Kerr-black-bounce GBFs show a less intense amplification and the distribution of the superradiant regime peaks at lower frequencies.", "Also, the shape of the GBFs in the superradiant regime is different, being more symmetric than in the singular case.", "This result agrees with the tendency shown in the recent paper [35], in which is reported that increasing the parameter $\\ell $ causes a decrease in the superradiant amplification factor.", "Those are common features of all the superradiant modes.", "However, it has to be noticed that with increasing the azimuthal quantum number, the superradiant peak of the Kerr-black-bounce BH GBFs becomes smaller and smaller with respect to its singular counterpart.", "Functions $f$ , and $g$ , are calculated through (REF ).", "The two BHs show differen values of those functions.", "This dwell in the just described GBFs differences and in a different surface gravity (REF ), which plays a crucial role in the Bose-Einstein statistic factor integrand of (REF ) selecting lower frequency if $\\ell \\ne 0$ .", "For these reasons, the Kerr-black-bounce BH functions $f$ and $g$ are orders of magnitude smaller if compared with the singular case.", "Fig.", "REF reports a comparison of those two cases.", "For the same reasons the functions $h$ are also different.", "It is shown in Fig.REF that the root of the Kerr BH is located at $\\tilde{a}_{*}=0.555$ , while the one of the Kerr-black-bounce is at $\\tilde{a}_{*}=0.47$ .", "If the natal spin of both BHs is smaller than the respective root of $h$ , the dominant emission mode is l=0.", "In this case the evaporation due to a single scalar field will cause both BHs to lose mass at a faster rate with respect to angular momentum.", "As a result, the evaporating BH will increase its value of $a_*$ up to the respective asymptotic value $\\tilde{a}_{*}$ .", "Vice versa, the evolution of high spinning BHs is dominated by higher l modes decreasing the angular momentum of the BH and driving it towards its asymptotic values.", "The regularizing parameter influencing the surface gravity plays a significant role in the dynamic evolution of the regular BH, which is much slower with respect to its singular counterpart.", "The lifetime of an isolated Kerr BH emitting only one 
scalar particle and having natal mass and spin of $M_{i}=10^{11}$ kg and $a_{*i}=0.01$ is $\sim 2.34 \times 10^{16}$ s, while a nearly extremal Kerr-black-bounce BH with the same initial conditions has a lifetime of $\sim 4.37 \times 10^{20}$ s. Fig.REF reports mass, spin parameter, and temperature as a function of time for such BHs.", "It is also interesting to consider two BHs with the same lifespan and analyze their evolution.", "It is worth noticing that the time evolution of the spin parameters is different and that the Kerr-black-bounce spin grows at a faster rate for most of the evolution, as reported in Fig.REF .", "Given its slower dynamical evolution, it is not surprising that the intensity peak of the primary emission of the regular BH is lower than that of a Kerr BH having the same mass and spin.", "This situation is reported in Fig.REF (a), where masses of $M=3.5 \times 10^{10}$ kg and spin values of $a_*=0, 0.9, 0.99$ are considered.", "This plot shows a reduction in the number of emitted scalars as well as a reduction in the energy at which they are emitted, in line with the previous comments.", "Finally, Fig.REF (b) shows the primary emissivity for equal temperatures, namely $301.93$ , $183.35$ and $74.67$ MeV for $a_*=0, 0.9, 0.99$ , respectively.", "One may compare Fig.REF with Fig.2 of [63], which describes the primary emission of a Kerr BH for different field spins.", "Fig.2 of [63] highlights how the rotation of a Kerr BH enhances the emission of particles with nonzero spin and decreases the emission of scalar particles.", "This is no longer valid for the Kerr-black-bounce BH.", "In fact, its scalar emissivity peaks at higher values for spin parameters close to extremality." ], [ "Conclusions", "In this paper, we studied the evolution, under the emission of scalar radiation via the Hawking process, of a rotating regular black hole described by the Kerr-black-bounce metric.", "The study is performed in the case of a nearly extremal value of the regularizing parameter ($\ell = 0.99 r_+$ ).", "The differences in the dynamics of the evaporation of such a BH and a Kerr BH are outlined.", "Namely, the regime of negative transmission coefficients, the asymptotic value of $a_*$ , the emissivity, and the lifetime are discussed and compared.", "In a standard evolution scenario, BHs clearly do not evaporate through the sole emission of a scalar field.", "Nevertheless, some scenarios involving the conspicuous presence of scalar particles, such as the string axiverse [60], [64], may display a similar evolution.", "In the limit of many axion-like particles, the emission of scalar particles dominates the evolution, which results similar, up to a normalization, to the single-scalar case.", "In any case, the main lesson of this toy model points towards a possible investigation of beyond-the-horizon features by analyzing the Hawking radiation.", "For example, assuming a way to infer the BH mass and spin independently of the primary Hawking emission, measuring the peak intensity would provide an indirect measure of $\ell $ in the context of the Kerr-black-bounce solutions and, in general, a measure of how much the BH solution differs from the Kerr one.", "One is most likely to observe the Hawking emission of photons rather than scalar particles but, since the definition of $f$ and $g$ for spin-1 bosons is given by Eq. (REF ) with the appropriate GBFs, one can expect similar behaviors.", "This work also suggests that tracking the time evolution of the spin
parameter could provide information on the spacetime structure.", "Such characteristics are certainly irrelevant for BHs of the size measured today but may become a powerful and handy tool in light of possible future primordial BHs detection.", "We leave GBFs calculation for spin 1/2, 1, and 2 fields and implementation of an accurate evaporation scenario for future studies." ], [ "Acknowledgements", "I would like to thank Violetta Sagun, Massimiliano Rinaldi, and João Rosa, for the fruitful discussions we had.", "This work was supported by national funds from FCT, I.P., within the projects UIDB/04564/2020, UIDP/04564/2020 and the FCT-CERN project CERN/FIS-PAR/0027/2021.", "M.C.", "is also supported by the FCT doctoral grant SFRH/BD/146700/2019." ] ]
2207.10467
[ [ "Whiteness-based parameter selection for Poisson data in variational\n image processing" ], [ "Abstract We propose a novel automatic parameter selection strategy for variational imaging problems under Poisson noise corruption.", "The selection of a suitable regularization parameter, whose value is crucial in order to achieve high quality reconstructions, is known to be a particularly hard task in low photon-count regimes.", "In this work, we extend the so-called residual whiteness principle originally designed for additive white noise to Poisson data.", "The proposed strategy relies on the study of the whiteness property of a standardized Poisson noise process.", "After deriving the theoretical properties that motivate our proposal, we solve the target minimization problem with a linearized version of the alternating direction method of multipliers, which is particularly suitable in presence of a general linear forward operator.", "Our strategy is extensively tested on image restoration and computed tomography reconstruction problems, and compared to the well-known discrepancy principle for Poisson noise proposed by Zanella at al.", "and with a nearly exact version of it previously proposed by the authors." ], [ "Introduction", "In many research areas related to imaging applications, such as astronomy, microscopy and computed tomography, the acquired images are formed by counting the number of photons irradiated by a source and hitting the image domain.", "The number of photons measured by the sensor can differ from the expected one due to fluctuations that are modelled by a Poisson noise [12].", "The general image formation (or degradation) model under Poisson noise corruption in vectorized form reads $\\mathbf {y} \\;\\,{=}\\,\\; \\mathbf {\\mathrm {poiss}}\\left(\\,\\overline{\\mathbf {\\lambda }}\\,\\right)\\,, \\quad \\;\\overline{\\mathbf {\\lambda }} = \\mathbf {g}\\left(\\mathbf {\\mathrm {H} }\\bar{\\mathbf {x}}\\right) + \\mathbf {b}\\,,$ where $\\,\\mathbf {y} \\in \\mathbb {N}^m$ , $\\overline{\\mathbf {x}} \\in {\\mathbb {R}}_+^n$ and $\\mathbf {b}\\in {\\mathbb {R}}_+^{m}$ - with $\\mathbb {N}$ and ${\\mathbb {R}}_+$ denoting the sets of natural numbers including zero and of non-negative real numbers, respectively - are vectorized forms of the observed degraded $m_1 \\times m_2$ image, the unknown uncorrupted $n_1 \\times n_2$ image and the so-called (usually known) background emission $m_1 \\times m_2$ image, respectively, with $m = m_1m_2$ , $n = n_1 n_2$ .", "Matrix $\\mathbf {\\mathrm {H}} \\in {\\mathbb {R}}^{m \\times n}$ contains the coefficients of a linear degradation operator, whereas the vectorial function $\\mathbf {g}:{\\mathbb {R}}^m\\rightarrow {\\mathbb {R}}^m$ is the identity function or a nonlinear function modelling the eventual presence of (deterministic) nonlinearities in the degradation process.", "$\\mathbf {\\mathrm {H}}$ and $\\mathbf {g}$ are determined by the specific application at hand and here are assumed to be known.", "In particular, for the applications of interest in this paper, the function $\\mathbf {g}$ can be restricted to the simplified form $\\,\\mathbf {g}(\\mathbf {h}) = \\left(g(h_1),g(h_2),\\ldots ,g(h_m)\\right)^T$ , with $g: {\\mathbb {R}}_+ \\rightarrow {\\mathbb {R}}_+$ .", "Finally, $\\mathbf {\\mathrm {poiss}}\\left(\\,\\overline{\\mathbf {\\lambda }}\\,\\right) := \\left(\\mathrm {poiss}\\left(\\overline{\\lambda }_1\\right),\\mathrm {poiss}\\left(\\overline{\\lambda }_2\\right),\\ldots ,\\mathrm 
{poiss}\\left(\\overline{\\lambda }_m\\right)\\right)$ , with $\\mathrm {poiss}(\\,\\overline{\\lambda _i}\\,)$ indicating the realization of a Poisson-distributed random variable with parameter (mean) $\\overline{\\lambda }_i$ , hence $\\overline{\\mathbf {\\lambda }} \\in {\\mathbb {R}}_+^m$ is the vectorized form of the $m_1 \\times m_2$ noise-free degraded image.", "The inverse problem of determining a good estimate $\\mathbf {x}^*$ of the uncorrupted image $\\overline{\\mathbf {x}}$ given the degraded observation $\\mathbf {y}$ is a hard task even when $\\mathbf {\\mathrm {H}}$ , $\\mathbf {g}$ and $\\mathbf {b}$ are known.", "In fact, such an inverse problem is typically ill-posed and, hence, some a priori information or belief on the target image $\\overline{\\mathbf {x}}$ must necessarily be encoded, e.g.", "in the form of regularization, in order to obtain an acceptable estimate $\\mathbf {x}^*$ .", "A popular and effective approach allowing to explicitly include regularization is the so-called variational approach, according to which an estimate $\\mathbf {x}^*$ of the original image $\\bar{\\mathbf {x}}$ is sought as the global minimizer of a given cost (or energy) function $\\mathcal {J}:{\\mathbb {R}}^n\\rightarrow {\\mathbb {R}}$ ; in formula $\\mathbf {x}^*(\\mu )\\in \\operatornamewithlimits{arg\\,min}_{\\mathbf {x}\\,\\in \\,{\\mathbb {R}}_+^n}\\lbrace \\, \\mathcal {J}(\\mathbf {x};\\mu )\\;{:=}\\; \\mathcal {R}(\\mathbf {x}) \\;{+}\\; \\mu \\, \\mathcal {F}(\\mathbf {\\lambda };\\mathbf {y})\\,\\rbrace \\,,\\quad \\mathbf {\\lambda } \\;{:=}\\,\\mathbf {g}(\\mathbf {\\mathrm {H}x})+\\mathbf {b}\\,,$ where $\\mathcal {R}$ and $\\mathcal {F}$ are referred to as the regularization term and the data fidelity term, respectively, and where the so-called regularization parameter $\\mu \\in {\\mathbb {R}}_{++}$ allows to balance the contribution of the two terms in the overall cost function.", "The data fidelity term $\\mathcal {F}$ measures the discrepancy between the noise-free degraded image $\\mathbf {\\lambda }$ and the noisy observation $\\mathbf {y}$ in a way that accounts for the noise statistics.", "In presence of Poisson noise, according to the Maximum Likelihood (ML) estimation approach, the fidelity term is typically set as the (generalized) Kullback-Leibler (KL) divergence between $\\mathbf {\\lambda }$ and $\\mathbf {y}$ (see, e.g., [3]), that is $\\mathcal {F}(\\mathbf {\\lambda };\\mathbf {y})\\;{=}\\; \\mathrm {KL}(\\mathbf {\\lambda };\\mathbf {y})\\;{:=}\\; \\sum _{i=1}^m \\, \\left(\\lambda _i-y_i \\ln \\lambda _i + y_i \\ln y_i - y_i\\right)\\,,$ where $\\,0 \\ln 0 = 0\\,$ is assumed.", "We indicate by $\\mathcal {R}$ -KL the class of variational models defined as in (REF ) with $\\mathcal {F}$ equal to the KL divergence term in (REF ).", "The regularization term $\\mathcal {R}$ in (REF ) encodes prior information or beliefs on the target uncorrupted image $\\overline{\\mathbf {x}}$ .", "One of the most popular and widely adopted regularizers in imaging is the Total Variation (TV) semi-norm [15], which reads $\\mathcal {R}(\\mathbf {x})\\;{=}\\;\\mathrm {TV}(\\mathbf {x}) \\;{:=}\\;\\sum _{i=1}^n \\Vert (\\mathbf {\\nabla x})_i\\Vert _2\\,,$ where $(\\mathbf {\\nabla x})_i \\in {\\mathbb {R}}^2$ denotes the discrete gradient of image $\\mathbf {x}$ at pixel location $i$ .", "The TV term is known to be particularly effective for the regularization of piece-wise constant images as it promotes sparsity of gradient magnitudes.", "We denote by TV-KL the 
$\\mathcal {R}$ -KL variational model with $\\mathcal {R}$ equal to the TV function in (REF ).", "The regularization parameter $\\mu $ in (REF ) is of crucial importance for getting high quality reconstructions $\\mathbf {x}^*(\\mu )$ .", "In fact, it is well established that, even upon the selection of suitable fidelity and regularization terms, an incorrect value of $\\mu $ can easily lead to meaningless reconstructions.", "For this reason, a lot of research has been devoted to the design of effective strategies for the $\\mu $ -selection task under Poisson noise corruption.", "In abstract form, such strategies can be formulated as follows: $\\text{Select}\\;\\;\\mu =\\mu ^*\\;\\;\\text{such that}\\;\\;\\mathcal {C}(\\mathbf {x}^*(\\mu ^*))\\;\\;\\text{is satisfied}\\,,$ where $\\mathbf {x}^*(\\mu ):{\\mathbb {R}}_{++}\\rightarrow {\\mathbb {R}}^{n}$ is the image reconstruction function introduced in (REF ) and where $\\mathcal {C}(\\cdot )$ is some selection criterion or principle.", "The selection principles designed so far to deal with Poisson noise have mostly been inspired by the very wide literature related to the parameter selection under additive white Gaussian noise corruption, and they can be thus divided into two main classes according to their original derivation set-up: principles derived from imposing the value of some $\\mu $ -dependent quantity; principles derived from optimizing some $\\mu $ -dependent quantity.", "As typical examples of first class strategies, we mention the discrepancy principles (DP) whose general form is given by $\\mathcal {C}(\\mathbf {x}^*(\\mu ^*)):\\;\\quad \\mathcal {D}(\\mu ;\\mathbf {y}) \\;{=}\\; \\Delta \\in {\\mathbb {R}}_{++}\\,,$ with $\\mathcal {D}(\\mu ;\\mathbf {y})=\\mathrm {KL}(\\mathbf {\\lambda }(\\mu );\\mathbf {y})$ , and where the above equality is referred to as discrepancy equation while $\\Delta $ is the so-called discrepancy value that changes when considering different DP instances.", "The Morozov discrepancy principle, which is widely adopted in presence of Gaussian noise, naturally induces a DP version according to which the scalar value $\\Delta $ is replaced by the expected value of the KL fidelity term regarded as a function of the $m$ -variate random vector $(Y_1,Y_2,\\ldots ,Y_m)$ , with vector $\\mathbf {\\lambda }$ fixed.", "Unfortunately, recovering an exact closed-form expression for the target expected value has been proven to be theoretically unfeasible, especially in low counting regimes, i.e.", "when the entries of $\\mathbf {\\lambda }$ are small [3].", "A popular alternative to the aforementioned exact but theoretical DP, which has been proposed in [19] for the image denoising problem and extended in [2] to the restoration task, is based on truncating the Taylor series expansion of the theoretical expected value.", "For this reason, in [3] this version of DP has been referred to as Approximate DP (ADP); in formula, it reads $\\mathcal {C}(\\mathbf {x}^*(\\mu ^*)):\\;\\quad \\mathcal {D}(\\mu ^*;\\mathbf {y})\\;{=}\\;\\frac{m}{2}\\,,\\qquad \\mathrm {(ADP)}$ with $m$ indicating the total number of pixels in the observed image $\\mathbf {y}$ .", "The ADP is particularly robust from the theoretical viewpoint as it has been proven that the associated discrepancy equation admits a unique solution.", "Nonetheless, it has also been observed that, as the number of photons hitting the image domain gets smaller, the approximation considered by ADP becomes particularly rough and the principle returns low-quality 
reconstruction - see, e.g., [3].", "In order to overcome the limitations presented by the ADP, in [3] the authors proposed the Nearly Exact DP (NEDP), which is a novel version of DP based on a more accurate approximation - both in low-counting and mid/high-counting regimes - of the expected value of the KL divergence term.", "In the NEDP, the discrepancy value $\\Delta $ is replaced by a weighted least-square fitting of Montecarlo realizations of the target expected value and it is regarded as a function $f$ of the regularization parameter $\\mu $ ; in formula: $\\mathcal {C}(\\mathbf {x}^*(\\mu ^*)):\\;\\quad \\mathcal {D}(\\mu ;\\mathbf {y})\\;{=}\\;f(\\mu )\\,.\\qquad \\mathrm {(NEDP)}$ Despite its very good experimental performances, the NEDP is characterized by theoretical limitations which are mostly related to the lack of guarantees on the uniqueness of the solution for the discrepancy equation; such limitations are also combined with the empirical evidence of multiple solutions in very extreme scenarios where the number of zero-pixels in the acquired data is particularly relevant.", "The general formulation of minimization-based principles is $\\mathcal {C}(\\mathbf {x}^*(\\mu ^*))\\;:\\;\\quad \\mu ^*\\in \\operatornamewithlimits{arg\\,min}_{\\mu \\in {\\mathbb {R}}_{++}} \\mathcal {V}(\\mu )\\,,\\quad \\mathcal {V}:{\\mathbb {R}}_{++}\\rightarrow {\\mathbb {R}}$ where $\\mathcal {V}$ represents some demerit function to be minimized for selecting $\\mu $ .", "For Poisson data, this class of strategies has not been explored as much as the former.", "A few decades ago, some attempts have been made in order to adapt the popular Generalized Cross Validation (GCV) approach [6] to non-Gaussian data [8], [18]; nonetheless, these strategies, which ultimately rely on a weighted approximation of the KL fidelity term and on a slight reformulation of the classical GCV score, have not been diffusively employed for imaging problems.", "Among the parameter selection strategies that have been developed in the context of additive white noise corruption, the class of minimization-based principles exploiting the noise whiteness property is one of the best performing [9], [1], [10], [13], [14], [11].", "More specifically, one selects $\\mu $ by minimizing the correlation between the residual image components, that is by guaranteeing that the residual image resembles as much as possible the underlying additive noise in terms of whiteness.", "The whiteness-based approaches have been proven to outperform the Morozov discrepancy principle in different imaging tasks, such as, e.g., denoising/restoration [10] and super-resolution [13], [14].", "Nonetheless, despite the encouraging results on Gaussian data, so far the whiteness principle has not been extended to Poisson noise corruption.", "In this work, we are going to address such extension." 
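For concreteness, a minimal Python sketch of how a discrepancy-type principle such as the ADP can be implemented is given below (this is an illustration, not the authors' code). It evaluates the generalized KL divergence defined above and selects $\mu$ by a log-scale bisection on the discrepancy equation, exploiting the monotonicity of the discrepancy in $\mu$; the routine `solve_model` returning $\mathbf{\lambda}(\mu)$ is a hypothetical user-supplied solver of the chosen $\mathcal{R}$-KL model.

```python
import numpy as np

def kl_divergence(lam, y):
    """Generalized Kullback-Leibler divergence KL(lambda; y) defined above,
    with the convention 0 log 0 = 0."""
    mask = y > 0
    return np.sum(lam - y) + np.sum(y[mask] * (np.log(y[mask]) - np.log(lam[mask])))

def select_mu_adp(y, solve_model, mu_lo=1e-4, mu_hi=1e4, tol=1e-3, max_iter=60):
    """Log-scale bisection on the ADP discrepancy equation KL(lambda(mu); y) = m/2.
    solve_model(mu) is a user-supplied routine returning lambda(mu) = g(H x*(mu)) + b
    for the chosen R-KL model (hypothetical interface)."""
    target = 0.5 * y.size
    mu = np.sqrt(mu_lo * mu_hi)
    for _ in range(max_iter):
        mu = np.sqrt(mu_lo * mu_hi)
        d = kl_divergence(solve_model(mu), y)
        if abs(d - target) <= tol * target:
            break
        if d > target:      # discrepancy still too large: increase the fidelity weight
            mu_lo = mu
        else:
            mu_hi = mu
    return mu
```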
], [ "Contribution", "The main contribution of this paper is to provide the first extension of the whiteness principle proposed for additive white noise to the case of Poisson noise.", "In particular, we will illustrate theoretically how our proposal simply relies on applying the standard whiteness principle to a suitably standardized version of the Poisson-corrupted observation and that it can be used to select the regularization parameter $\\mu $ in any variational model of the $\\mathcal {R}$ -KL class.", "In this work, we apply the proposed selection strategy to the very popular TV-KL model $\\mathbf {x}^*(\\mu )\\;{\\in }\\;\\operatornamewithlimits{arg\\,min}_{\\mathbf {x}\\,\\in \\,{\\mathbb {R}}_+^n}\\lbrace \\,\\mathrm {TV}(\\mathbf {x}) \\;{+}\\; \\mu \\, \\mathrm {KL}(\\mathbf {\\lambda };\\mathbf {y})\\,\\rbrace \\,,\\quad \\mathbf {\\lambda }=\\mathbf {g}(\\mathbf {\\mathrm {H}x})+\\mathbf {b}\\,,$ employed for the image restoration (IR) and X-rays CT image reconstruction (CTIR) tasks.", "In the two application scenarios, the matrix $\\mathbf {\\mathrm {H}} \\in {\\mathbb {R}}^{m \\times n}$ and the vectorial function $\\mathbf {g}(\\mathbf {h}) = \\left(g(h_1),\\ldots ,g(h_m)\\right)^T$ in (REF ) are specified as follows: $\\begin{array}{rll}\\mathrm {IR:}\\;&\\mathbf {\\mathrm {H}} \\in {\\mathbb {R}}^{n\\times n} \\;\\;\\, \\text{blurring matrix}, & g(h_i) = h_i,\\\\\\mathrm {CTIR:}\\;&\\mathbf {\\mathrm {H}} \\in {\\mathbb {R}}^{m\\times n} \\;\\: \\text{Radon matrix}, & g(h_i) = I_0\\,e^{-h_i}\\!,\\;\\: I_0\\in {\\mathbb {R}}_{++}\\,.\\end{array}$ From the numerical optimization viewpoint, in both scenarios the TV-KL model (REF ) will be solved by means of a two-blocks ADMM approach, which for the CTIR problem will be adopted in a semi-linearized version so as to significantly decrease the per-iteration computational cost.", "Experimental tests will show that in most cases the proposed selection principle outperforms the aforementioned ADP and NEDP and returns output images characterized by quality measures which are close to the ones achievable by manually tuning the regularization parameter $\\mu $ .", "The paper is organized as follows.", "In Section we set the notations and recall some preliminary results on white random processes.", "The proposed selection strategy is introduced in Section , while in Section we outline the numerical scheme employed for the solution of model (REF ).", "In Section we extensively test the newly introduced strategy on IR and CTIR problems.", "Finally, we draw some conclusions and provide an outlook for future research in Section ." 
], [ "Notations and Preliminaries", "In this paper, scalars, vectors and matrices are denoted, e.g., by $x$ , $\\mathbf {x} = \\left\\lbrace x_i\\right\\rbrace $ and $\\mathbf {\\mathrm {X}} = \\left\\lbrace x_{i,j}\\right\\rbrace $ , respectively, whereas scalar random variables and random matrices (also referred to as random fields) are indicated by $X$ and $\\mathbf {\\mathcal {X}} = \\left\\lbrace X_{i,j}\\right\\rbrace $ , respectively.", "We indicate by $\\mathrm {E}[X]$ , $\\mathrm {Var}[X]$ , $\\mathrm {Corr}[X,Y] = \\mathrm {E}[XY]$ and $\\mathrm {P}_X$ the expected value (or mean) of random variable $X$ , the variance of $X$ , the correlation between random variables $X$ and $Y$ and the probability mass function of (discrete) random variable $X$ , respectively.", "We denote by $\\mathbf {0}_d$ , $\\mathbf {I}_d$ and $\\iota _{\\mathrm {S}}$ the $d$ -dimensional null (column) vector, the identity matrix of order $d$ and the indicator function of set ${\\mathrm {S}}$ , respectively, with $\\iota _{\\mathrm {S}}(\\mathbf {x}) = 0$ for $\\mathbf {x} \\in {\\mathrm {S}}$ and $\\iota _{\\mathrm {S}}(\\mathbf {x}) = +\\infty $ for $\\mathbf {x} \\notin {\\mathrm {S}}$ .", "We indicate respectively by $(\\,\\cdot \\,,\\,\\cdot \\,)$ and $(\\,\\cdot \\,;\\,\\cdot \\,)$ the concatenation by rows and by columns of scalars, vectors and matrices.", "Finally, $\\Vert \\,\\cdot \\,\\Vert _2$ denotes the vector Euclidean norm or the matrix Frobenius norm, depending on the context In order to introduce the theory underlying our proposal, it is useful to rewrite the vectorized image formation model (REF ) in its equivalent matrix form.", "Denoting by $\\,\\mathbf {\\mathrm {Y}}, \\overline{\\mathbf {\\Lambda }} \\in {\\mathbb {R}}^{m_1 \\times m_2}$ and $\\mathbf {\\mathrm {B}},\\overline{\\mathrm {\\mathbf {X}}} \\in {\\mathbb {R}}^{n_1 \\times n_2}$ the matrix forms of vectors $\\,\\mathbf {y},\\overline{\\mathbf {\\lambda }}\\in {\\mathbb {R}}^m$ and $\\mathbf {b},\\overline{\\mathbf {x}}\\in {\\mathbb {R}}^n$ , respectively, it reads $\\mathbf {\\mathrm {Y}} \\;\\,{=}\\,\\; \\mathbf {\\mathrm {POISS}}\\left(\\overline{\\mathbf {\\Lambda }}\\right)\\,, \\quad \\;\\overline{\\mathbf {\\Lambda }} = \\mathbf {\\mathrm {G}}\\left(\\mathbf {\\mathrm {H}}\\left(\\overline{\\mathbf {X}}\\right)\\right) + \\mathbf {\\mathrm {B}}\\,,$ where, with a little abuse of notation, $\\mathbf {\\mathrm {H}}: {\\mathbb {R}}^{n_1 \\times n_1} \\rightarrow {\\mathbb {R}}^{m_1 \\times m_1}$ indicates here the linear operator encoded by matrix $\\mathbf {\\mathrm {H}} \\in {\\mathbb {R}}^{m \\times n}$ in the vectorized model (REF ), and where $\\mathbf {\\mathrm {POISS}}\\left(\\,\\overline{\\mathbf {\\mathrm {\\Lambda }}}\\,\\right) = \\left\\lbrace \\mathrm {poiss}\\left(\\,\\overline{\\lambda }_{i,j}\\right)\\right\\rbrace $ and $\\mathbf {\\mathrm {G}}\\left(\\mathbf {\\mathrm {H}}\\left(\\overline{\\mathbf {X}}\\right)\\right)= \\left\\lbrace g\\left(\\left(\\mathbf {\\mathrm {H}}\\left(\\overline{\\mathbf {X}}\\right)\\right)_{i,j}\\right)\\right\\rbrace $ , i.e.", "the matrix forms of vectors $\\mathbf {\\mathrm {poiss}}\\left(\\,\\overline{\\mathbf {\\lambda }}\\,\\right)$ and $\\mathbf {g}(\\mathbf {\\mathrm {H} }\\bar{\\mathbf {x}})$ in (REF ).", "We now recall the definitions of weak stationary random field, ensemble normalized auto-correlation, sample normalized auto-correlation and, of particular importance for our purposes, white random field.", "To shorten notations in the definitions, we 
preliminarily define the following two sets of integer index pairs $\\begin{array}{lcl}\\mathrm {I} & \\!\\!\\!{:=}\\!\\!\\!\\!", "& \\left\\lbrace (i,j)\\:\\, \\in \\mathbb {Z}^2\\!", ": \\;(i,j) \\:\\,\\in [1,m_1] \\times [1,m_2] \\,\\right\\rbrace ,\\\\\\mathrm {L} & \\!\\!\\!{:=}\\!\\!\\!\\!", "& \\left\\lbrace (l,m) \\in \\mathbb {Z}^2\\!", ": \\; (l,m) \\in \\left[-(m_1-1),(m_1-1)\\right] \\times \\left[-(m_2-1),(m_2-1)\\right] \\,\\right\\rbrace .\\end{array}$ Definition 1 (weak stationary random field) A $m_1 \\times m_2$ random field $\\mathcal {Z} = \\left\\lbrace Z_{i,j}\\right\\rbrace $ , $(i,j) \\in \\mathrm {I}$ , is said to be weak stationary if $\\begin{array}{l}\\bullet \\;\\:\\mathrm {E}\\left[Z_{i,j}\\right] \\;{=}\\; \\mu _{\\mathbf {\\mathcal {Z}}} \\in {\\mathbb {R}}\\,, \\;\\; \\mathrm {Var}\\left[Z_{i,j}\\right] \\;{=}\\; \\sigma ^2_{\\mathbf {\\mathcal {Z}}} \\in {\\mathbb {R}}_{++}, \\;\\; \\forall \\,(i,j) \\in \\mathrm {I}\\,;\\\\\\bullet \\;\\:\\mathrm {Corr}\\left[Z_{i_1,j_1},Z_{i_1+l,j_1+m}\\right] \\;{=}\\; \\mathrm {Corr}\\left[Z_{i_2,j_2},Z_{i_2+l,j_2+m}\\right] \\, ,\\\\\\;\\;\\;\\;\\forall \\, (i_1,j_1) \\in \\mathrm {I}\\,, \\; \\forall \\,(i_2,j_2) \\in \\mathrm {I}\\,,\\;\\forall \\, (l,m) \\in \\mathrm {L}: \\: (i_2+l,j_2+m) \\in \\mathrm {I} \\, .\\end{array}\\nonumber $ Definition 2 (ensemble normalized auto-correlation) The ensemble normalized auto-correlation of a $m_1 \\times m_2$ weak stationary random field $\\mathcal {Z} = \\left\\lbrace Z_{i,j}\\right\\rbrace $ , $(i,j) \\in \\mathrm {I}$ , is a $(2 m_1-1)\\times (2 m_2 -1)$ matrix $\\mathbf {\\mathrm {A}}\\left[\\mathbf {\\mathcal {Z}}\\right]\\;{=}\\;\\left\\lbrace a_{l,m}\\left[\\mathbf {\\mathcal {Z}}\\right]\\right\\rbrace $ , $(l,m) \\in \\mathrm {L}$ , defined by $a_{l,m}\\left[\\mathbf {\\mathcal {Z}}\\right]\\;{=}\\; \\frac{\\mathrm {Corr}\\left[Z_{i,j},Z_{i+l,j+m}\\right]}{\\sigma ^2_{\\mathbf {\\mathcal {Z}}}}\\,, \\quad (l,m) \\in \\mathrm {L}\\,, \\; (i,j) \\in \\mathrm {I}: \\: (i+l,j+m) \\in \\mathrm {I} \\, .$ Definition 3 (white random field) A $m_1 \\times m_2$ random field $\\mathcal {Z} = \\left\\lbrace Z_{i,j}\\right\\rbrace $ , $(i,j) \\in \\mathrm {I}$ , is said to be white if $\\begin{array}{l}\\bullet \\;\\:\\text{it is weak stationary with}\\;\\;\\mu _{\\mathbf {\\mathcal {Z}}}\\;{=}\\; 0 \\, ; \\\\\\bullet \\;\\:\\text{it is uncorrelated, that is its ensemble normalized autocorrelation}\\\\\\;\\;\\;\\:\\mathbf {\\mathrm {A}}\\left[\\mathbf {\\mathcal {Z}}\\right]\\;{=}\\;\\left\\lbrace a_{l,m}\\left[\\mathbf {\\mathcal {Z}}\\right]\\right\\rbrace , \\;(l,m) \\in \\mathrm {L}, \\;\\text{satisfies:}\\vspace{2.84544pt}\\\\\\qquad \\qquad \\qquad a_{l,m}\\left[\\mathbf {\\mathcal {Z}}\\right]\\;{=}\\;\\left\\lbrace \\begin{array}{ll}0 & \\forall \\, (l,m) \\in \\mathrm {L} \\setminus \\left\\lbrace (0,0)\\right\\rbrace \\, , \\\\1 & \\text{if}\\;\\;(l,m)\\;{=}\\;(0,0) \\, .\\end{array}\\right.\\end{array}\\nonumber $ Definition 4 (sample normalized auto-correlation) The sample normalized auto-correlation of a $m_1 \\times m_2$ non-zero matrix $\\mathbf {\\mathrm {Z}} = \\left\\lbrace z_{i,j}\\right\\rbrace $ , $(i,j) \\in \\mathrm {I}$ , is a $(2 m_1-1)\\times (2 m_2 -1)$ matrix $\\mathbf {\\mathrm {S}}\\left(\\mathbf {\\mathrm {Z}}\\right)\\;{=}\\;\\left\\lbrace s_{l,m}\\left(\\mathbf {\\mathrm {Z}}\\right)\\right\\rbrace $ , $(l,m) \\in \\mathrm {L}$ , defined by $s_{l,m}\\left(\\mathbf {\\mathrm {Z}}\\right)\\;{=}\\; \\frac{1}{\\left\\Vert \\mathbf {\\mathrm 
{Z}}\\right\\Vert _2^2} \\, \\sum _{\\;(i,j)\\in \\,\\mathrm {I}} \\!z_{i,j} \\, z_{i+l,j+m} \\, .$ It follows from Definition REF that, given a non-zero matrix $\\mathbf {\\mathrm {Z}} = \\left\\lbrace z_{i,j}\\right\\rbrace $ , $(i,j) \\in \\mathrm {I}$ , one can measure the global amount of normalized auto-correlation between the entries of $\\mathbf {\\mathrm {Z}}$ , that is how far is $\\mathbf {\\mathrm {Z}}$ from being the realization of a white random field, via the following scalar whiteness measure ([10], [1]): $\\mathcal {W}(\\mathbf {\\mathrm {Z}}) \\;{:=}\\; \\left\\Vert \\mathbf {\\mathrm {S}}\\left(\\mathbf {\\mathrm {Z}}\\right)\\right\\Vert _2^2\\;{=}\\;\\sum _{(l,m)\\in \\,\\mathrm {L}} \\!\\!\\left(s_{l,m}\\left(\\mathbf {\\mathrm {Z}}\\right)\\right)^2 \\, ,$ with scalars $s_{l,m}\\left(\\mathbf {\\mathrm {Z}}\\right)$ defined in (REF )." ], [ "The proposed whiteness principle for Poisson noise", "In this section, we show how the residual whiteness principle proposed for additive white noise can be quite easily extended to the case of Poisson noise based on suitable random variable standardizations.", "For this purpose, first in Definition REF we recall the formal definition of Poisson random variable and Poisson independent random field, then in Definition REF we introduce their standard(ized) versions, whose main properties are finally highlighted in Proposition REF .", "Definition 5 (Poisson random variable and independent random field) A discrete random variable $Y$ is said to be Poisson distributed with parameter $\\lambda \\:{\\in }\\: {\\mathbb {R}}_{++}$ , denoted by $Y \\:{\\sim }\\: \\mathcal {P}(\\lambda )$ , if its probability mass function reads $\\mathrm {P}_Y(y\\mid \\lambda ) \\,\\;{=}\\; \\frac{\\lambda ^y e^{-\\lambda }}{y\\,!}", "\\, , \\quad y \\in \\mathbb {N}\\,.$ The expected value and variance of random variable $Y$ are given by $\\mathrm {E}\\left[Y\\right] \\,\\;{=}\\;\\, \\mathrm {Var}\\left[Y\\right] \\,\\;{=}\\;\\,\\lambda \\, .$ A random field $\\mathcal {Y} = \\left\\lbrace Y_{i,j}\\right\\rbrace $ is said to be independent Poisson distributed with parameter $\\mathbf {\\Lambda } = \\left\\lbrace \\lambda _{i,j}\\right\\rbrace $ , denoted by $\\mathcal {Y} \\sim \\mathcal {P}(\\mathbf {\\Lambda })$ , if it satisfies: $Y_{i,j} \\;{\\sim }\\; \\mathcal {P}\\left(\\lambda _{i,j}\\right) \\;\\:\\forall \\, (i,j) \\in \\mathrm {I}\\,, \\quad \\mathrm {P}_{\\mathcal {Y}}(\\mathbf {\\mathrm {Y}}\\mid \\mathbf {\\Lambda }) \\;{=}\\; \\prod _{(i,j)\\in \\mathrm {I}} \\mathrm {P}_{Y_{i,j}}(y_{i,j}\\mid \\lambda _{i,j}) \\, .$ Definition 6 (standard Poisson random variable and independent random field) Let $Y \\sim \\mathcal {P}(\\lambda )$ .", "We call the discrete random variable $Z$ defined by $Z \\,\\;{=}\\;\\, S_{\\lambda }(Y) \\:\\;{:=}\\;\\: \\frac{Y -\\mathrm {E}\\left[Y\\right]}{\\sqrt{\\mathrm {Var}\\left[Y\\right]}} \\:\\;{=}\\;\\: \\frac{Y-\\lambda }{\\sqrt{\\lambda }} \\:\\;{=}\\;\\: \\frac{1}{\\sqrt{\\lambda }} \\, Y - \\sqrt{\\lambda } \\, ,$ as standard Poisson distributed with parameter $\\lambda $ , denoted by $\\,Z \\:{\\sim }\\: \\widetilde{\\mathcal {P}}(\\lambda )$ .", "Let $\\mathcal {Y} \\sim \\mathcal {P}(\\mathbf {\\Lambda })$ .", "We call the random field defined by $\\mathcal {Z} \\,\\;{=}\\; \\left\\lbrace Z_{i,j}\\right\\rbrace \\,\\;\\;\\;\\mathrm {with}\\;\\;\\;\\, Z_{i,j} \\,\\;{=}\\;\\, S_{\\lambda _{i,j}}(Y_{i,j}) \\:\\;\\; \\forall \\, (i,j) \\in \\mathrm {I} \\, ,$ as independent standard Poisson distributed with 
parameter $\\mathbf {\\Lambda }$ , denoted by $\\mathcal {Z} \\sim \\widetilde{\\mathcal {P}}(\\mathbf {\\Lambda })$ .", "Proposition 1 Let $Z \\sim \\widetilde{\\mathcal {P}}(\\lambda )$ and let $S_{\\lambda }$ be the standardization function defined in (REF ).", "Then, the probability mass function, expected value and variance of random variable $Z$ are given by: $&\\mathrm {P}_{Z}\\,(z|\\lambda ) \\;{=}\\; \\displaystyle {\\frac{\\lambda ^{S_{\\lambda }^{-1}(z)} \\, e^{-\\lambda }}{\\left(S_{\\lambda }^{-1}(z)\\right)\\,!", "}}, \\;\\; z \\in \\left\\lbrace S_{\\lambda }(0),S_{\\lambda }(1),\\ldots \\right\\rbrace , \\;\\;S_{\\lambda }^{-1}(z) \\,\\;{=}\\;\\, \\sqrt{\\lambda }\\,z+\\lambda ,&\\\\&\\mathrm {E}\\left[\\,Z\\,\\right] \\;{=}\\; 0\\,, \\quad \\mathrm {Var}\\left[\\,Z\\,\\right] \\;{=}\\; 1 \\, .&$ Hence, any independent standard Poisson random field $\\mathcal {Z} \\sim \\widetilde{\\mathcal {P}}(\\mathbf {\\Lambda })$ is white.", "The scalar affine standardization function $S_{\\lambda }: \\mathbb {N} \\rightarrow \\lbrace S_{\\lambda }(0),S_{\\lambda }(1),\\ldots \\rbrace $ in (REF ) is bijective (as $\\lambda \\in {\\mathbb {R}}_{++}$ ), hence it admits the inverse $S_{\\lambda }^{-1}$ defined in (REF ).", "The expression of $\\mathrm {P}_Z$ in (REF ) thus comes from specifying the general form of the probability mass function of a discrete random variable defined by a bijective function of another discrete random variable.", "The fact that $Z$ has zero-mean and unit-variance - as stated in () - comes immediately from the definition of $S_{\\lambda }$ in (REF ).", "It thus follows from the definition of a standard Poisson independent random field $\\mathcal {Z} \\,\\;{=}\\; \\left\\lbrace Z_{i,j}\\right\\rbrace $ given in (REF ) and from statement () that: $Z_{i,j} \\sim \\widetilde{\\mathcal {P}}\\left(\\lambda _{i,j}\\right) \\,\\;{\\Longrightarrow }\\;\\,\\left\\lbrace \\!\\!\\begin{array}{rcl}\\mathrm {E}\\left[Z_{i,j}\\right] &\\!\\!\\!\\!", "{=}\\!\\!\\!\\!& \\mu _{\\mathbf {\\mathcal {Z}}} = 0 \\\\\\mathrm {Var}\\left[Z_{i,j}\\right] &\\!\\!\\!\\!", "{=}\\!\\!\\!\\!& \\sigma ^2_{\\mathbf {\\mathcal {Z}}} = 1\\end{array}\\right.", ", \\;\\forall \\,(i,j) \\, .$ Moreover, it clearly comes from independence of a non-standard Poisson random field $\\mathbf {\\mathcal {Y}}$ - formalized in (REF ) - and from the entry-wise definition of random field standardization in (REF ) that independence also holds true for a standard Poisson random field $\\mathcal {Z}$ ; in formula: $\\mathrm {P}_{\\mathcal {Z}}(\\mathbf {\\mathrm {Z}}\\mid \\mathbf {\\Lambda }) \\;{=}\\; \\prod _{(i,j)} \\mathrm {P}_{Z_{i,j}}(z_{i,j}\\mid \\lambda _{i,j}) \\, .$ Since independence implies uncorrelation and based on (REF ), we have $\\mathrm {Corr}\\left[Z_{i_1,j_1},Z_{i_2,j_2}\\right] \\;{=}\\; \\left\\lbrace \\begin{array}{ll}0 & \\text{for}\\;\\: (i_1,j_1) \\ne (i_2,j_2)\\,,\\\\\\mathrm {Var}\\left[ Z_{i_1,j_1}\\right] \\;{=}\\; \\sigma ^2_{\\mathbf {\\mathcal {Z}}} \\;{=}\\; 1 & \\text{for}\\;\\: (i_1,j_1) = (i_2,j_2) \\,.\\end{array}\\right.$ It follows from (REF ), (REF ) and from Definition REF that $\\mathbf {\\mathcal {Z}}$ is a weak stationary random field.", "Then, it comes from (REF ) and from Definition REF that the ensemble normalized auto-correlation $\\mathbf {\\mathrm {A}}\\left[\\mathbf {\\mathcal {Z}}\\right]\\;{=}\\;\\left\\lbrace a_{l,m}\\left[\\mathbf {\\mathcal {Z}}\\right]\\right\\rbrace $ satisfies $a_{l,m}\\left[\\mathbf {\\mathcal {Z}}\\right]\\;{=}\\;\\left\\lbrace 
\\begin{array}{ll}0 & \\text{for}\\;\\:(l,m) \\ne (0,0) \\, , \\\\1 & \\text{for}\\;\\:(l,m) = (0,0) \\, .\\end{array}\\right.$ Hence, based on Definition REF , we can conclude that any standard Poisson independent random field $\\mathbf {\\mathcal {Z}}$ is white.", "In light of Definition REF , the image formation model (REF ) can be written in probabilistic terms as follows: $\\mathbf {\\mathrm {Y}}\\;\\;\\text{realization of}\\;\\;\\mathcal {Y} \\sim \\mathcal {P}\\left(\\,\\overline{\\Lambda }\\,\\right)\\,,$ with matrix $\\overline{\\mathbf {\\Lambda }}$ defined in (REF ).", "Then, based on Definition REF , after introducing the matrix $\\mathbf {\\mathrm {Z}} \\,\\;{=}\\;\\, \\left\\lbrace z_{i,j}\\right\\rbrace \\;\\;\\mathrm {with}\\;\\; z_{i,j}\\,\\;{=}\\;\\,S_{\\,\\overline{\\lambda }_{i,j}}\\!\\left(y_{i,j}\\right)\\,\\;{=}\\;\\,\\frac{y_{i,j}-\\overline{\\lambda }_{i,j}}{\\sqrt{\\,\\overline{\\lambda }_{i,j}}} \\, ,$ the probabilistic model (REF ) can be equivalently written in standardized form as $\\mathbf {\\mathrm {Z}}\\;\\;\\text{realization of}\\;\\;\\mathcal {Z} \\sim \\widetilde{\\mathcal {P}}\\left(\\,\\overline{\\Lambda }\\,\\right) \\,.$ That is, matrix $\\mathbf {\\mathrm {Z}}$ in (REF ) with $\\overline{\\mathbf {\\Lambda }}$ in (REF ) is the realization of an independent standard Poisson random field $\\mathcal {Z}$ which, according to Proposition REF , is white.", "We note that $\\mathbf {\\mathrm {Z}}$ can not be computed in practice as it depends on $\\overline{\\mathbf {\\Lambda }}$ which, in its turn, depends on the unknown uncorrupted image $\\overline{\\mathbf {\\mathrm {X}}}$ .", "However, the whiteness property of $\\mathbf {\\mathrm {Z}}$ can be exploited for stating a new principle for automatically selecting the value of the regularization parameter $\\mu $ in the class of $\\mathcal {R}$ -KL variational models.", "Denoting by $\\,\\mathbf {\\mathrm {X}}^*(\\mu ) = \\left\\lbrace x_{i,j}^*(\\mu )\\right\\rbrace \\,$ the matrix form of the solution of a $\\mathcal {R}$ -KL model - e.g., of the TV-KL model in (REF ) - we introduce the $\\mu $ -dependent matrices $\\mathbf {\\Lambda }^*(\\mu ), \\mathbf {\\mathrm {Z}}^*(\\mu ) \\in {\\mathbb {R}}^{m_1 \\times m_2}$ given by $\\mathbf {\\Lambda }^*(\\mu ) &\\!\\!\\!", "{=}\\!\\!\\!&\\left\\lbrace \\lambda _{i,j}^*(\\mu )\\right\\rbrace \\,\\;{=}\\;\\, \\mathbf {\\mathrm {G}}\\left(\\mathbf {\\mathrm {H}}\\left(\\mathbf {\\mathrm {X}}^*(\\mu )\\right)\\right) + \\mathbf {\\mathrm {B}},\\\\\\mathbf {\\mathrm {Z}}^*(\\mu ) &\\!\\!\\!", "{=}\\!\\!\\!& \\left\\lbrace z_{i,j}^*(\\mu )\\right\\rbrace \\,\\;\\;\\mathrm {with}\\;\\;\\, z_{i,j}^*(\\mu ) \\,\\;{=}\\;\\,S_{\\,\\lambda _{i,j}^*(\\mu )}\\!\\left(y_{i,j}\\right) \\,\\;{=}\\;\\, \\frac{y_{i,j}-\\lambda _{i,j}^*(\\mu )}{\\sqrt{\\lambda _{i,j}^*(\\mu )}},$ The ideal goal of any criterion for choosing $\\mu $ in the TV-KL model is to select the value $\\mu ^*$ yielding the closest solution image $\\mathbf {\\mathrm {X}}^*(\\mu ^*)$ to the target uncorrupted image $\\overline{\\mathbf {\\mathrm {X}}}$ , according to some distance metric.", "The conjecture behind our proposal is that the closer the solution $\\mathbf {\\mathrm {X}}^*(\\mu )$ is to the target $\\overline{\\mathbf {\\mathrm {X}}}$ , the closer the matrix $\\mathbf {\\mathrm {Z}}^*(\\mu )$ defined in (REF )-() will be to $\\mathbf {\\mathrm {Z}}$ in (REF ), so the more $\\mathbf {\\mathrm {Z}}^*(\\mu )$ will resemble the realization of a white random field.", "Hence, the proposed criterion, that 
we refer to as the Poisson Whiteness Principle (PWP), consists in choosing the value of $\\mu $ which leads to the least auto-correlated matrix $\\mathbf {\\mathrm {Z}}^*(\\mu )$ .", "Based on the scalar auto-correlation measure introduced in (REF ), the proposed PWP reads: $\\boxed{\\begin{array}{c}\\text{Select}\\;\\;\\mu \\;{=}\\;\\mu ^* \\:{\\in }\\;\\displaystyle {\\operatornamewithlimits{arg\\,min}_{\\mu \\in {\\mathbb {R}}_{++}}\\left\\lbrace \\, W(\\mu ) \\,\\;{:=}\\;\\, \\mathcal {W}\\left(\\mathbf {\\mathrm {Z}}^*(\\mu )\\right)\\,\\right\\rbrace } \\, ,\\vspace{4.26773pt}\\\\\\text{with matrix $\\mathbf {\\mathrm {Z}}^*(\\mu )$ defined in (\\ref {eq:Lstar})-(\\ref {eq:Zstar}) and function $\\mathcal {W}$ in (\\ref {eq:W}).}\\end{array}}\\qquad \\mathrm {(PWP)}$" ], [ "Numerical solution by ADMM", "In this section, we address the numerical solution of the TV-KL model (REF ) for the IR and CTIR imaging problems, that is for matrix $\\mathbf {\\mathrm {H}}$ and function $\\mathbf {g}$ defined as in (REF ).", "Recalling the definition of TV in (REF ) and introducing the discrete gradient matrix $\\mathbf {\\mathrm {D}} := (\\mathbf {\\mathrm {D}}_h;\\mathbf {\\mathrm {D}}_v) \\in {\\mathbb {R}}^{2n \\times n}$ , with $\\mathbf {\\mathrm {D}}_h,\\mathbf {\\mathrm {D}}_v \\in {\\mathbb {R}}^{n \\times n}$ two finite difference matrices discretizing the first-order partial derivatives of image $\\mathbf {x}$ in the horizontal and vertical direction, respectively, we write the TV-KL model (REF ) in the following form: $\\mathbf {x}^*\\;{\\in }\\;\\operatornamewithlimits{arg\\,min}_{\\mathbf {x} \\in {\\mathbb {R}}^n}\\left\\lbrace \\sum _{i=1}^n\\Vert (\\mathbf {\\mathrm {D}}\\mathbf {x})_i\\Vert _2 \\;{+}\\; \\mu \\,\\mathrm {KL}\\left(\\mathbf {g}(\\mathbf {\\mathrm {H}}\\mathbf {x})+\\mathbf {b};\\mathbf {y}\\right) \\;{+}\\; \\iota _{{\\mathbb {R}}_+^n}(\\mathbf {x})\\right\\rbrace \\, ,$ where, with a little abuse of notation, $(\\mathbf {\\mathrm {D}}\\mathbf {x})_i := \\left( \\left(\\mathbf {\\mathrm {D}}_h \\mathbf {x}\\right)_i \\, ; \\, \\left(\\mathbf {\\mathrm {D}}_v \\mathbf {x}\\right)_i \\right) \\in {\\mathbb {R}}^2$ denotes the discrete gradient of image $\\mathbf {x}$ at pixel $i$ .", "By introducing the auxiliary variables $\\mathbf {t}_1\\;{=}\\;\\mathbf {\\mathrm {D}}\\mathbf {x} \\in {\\mathbb {R}}^{2n}$ , $\\mathbf {t}_2\\;{=}\\;\\mathbf {\\mathrm {H} x} \\in {\\mathbb {R}}^m$ and $\\mathbf {t}_3\\;{=}\\;\\mathbf {x} \\in {\\mathbb {R}}^n$ , problem (REF ) can be equivalently rewritten in the following linearly constrained form: $\\begin{array}{lcl}\\left\\lbrace \\mathbf {x}^*\\!,\\mathbf {t}_1^*,\\mathbf {t}_2^*,\\mathbf {t}_3^*\\right\\rbrace & \\!\\!\\!\\! {\\in }\\!\\!\\!\\! & \\displaystyle {\\operatornamewithlimits{arg\\,min}_{\\mathbf {x},\\mathbf {t}_1,\\mathbf {t}_2,\\mathbf {t}_3}\\left\\lbrace \\sum _{i=1}^n\\Vert \\mathbf {t}_{1,i}\\Vert _2 + \\mu \\,\\mathrm {KL}\\left(\\mathbf {g}(\\mathbf {t}_2)+\\mathbf {b};\\mathbf {y}\\right)+\\iota _{{\\mathbb {R}}_+^n}(\\mathbf {t}_3)\\!\\right\\rbrace }\\\\& \\!\\!\\!\\!\\!\\!\\!\\! &\\text{subject to:}\\;\\;\\mathbf {t}_1\\;{=}\\;\\mathbf {\\mathrm {D}}\\mathbf {x},\\;\\; \\mathbf {t}_2\\;{=}\\;\\mathbf {\\mathrm {H} x},\\;\\;\\mathbf {t}_3\\;{=}\\;\\mathbf {x},\\end{array}$ where $\\mathbf {t}_{1,i} := (\\mathbf {\\mathrm {D}}\\mathbf {x})_i \\in {\\mathbb {R}}^2$ .", "It is easy to prove - see, e.g., [7] - that, after introducing the total auxiliary variable $\\,\\mathbf {t}:=(\\mathbf {t}_1;\\mathbf {t}_2;\\mathbf {t}_3) \\in {\\mathbb {R}}^{m+3n}$ , problem (REF ) takes the form: $\\begin{array}{lcl}\\left\\lbrace \\mathbf {x}^*,\\mathbf {t}^*\\right\\rbrace & \\!\\! {\\in }\\!\\! & 
\\displaystyle {\\operatornamewithlimits{arg\\,min}_{\\mathbf {x},\\mathbf {t}}\\left\\lbrace \\,C_1(\\mathbf {x}) + C_2(\\mathbf {t})\\,\\right\\rbrace }\\\\&\\!\\!\\!\\!& \\text{subject to:}\\;\\;\\mathbf {\\mathrm {M}}_1 \\mathbf {x}+\\mathbf {\\mathrm {M}}_2\\mathbf {t}=\\mathbf {0},\\end{array}$ where the two cost functions $C_1: {\\mathbb {R}}^n \\rightarrow {\\mathbb {R}}$ and $C_2: {\\mathbb {R}}^{m+3n} \\rightarrow {\\mathbb {R}}$ are defined by $C_1(\\mathbf {x})\\;{=}\\;0, \\quad C_2(\\mathbf {t})\\;{=}\\; \\sum _{i=1}^n\\Vert \\mathbf {t}_{1,i}\\Vert _2 + \\mu \\, \\mathrm {KL}\\left(\\mathbf {g}(\\mathbf {t}_2)+\\mathbf {b};\\mathbf {y}\\right)+\\iota _{{\\mathbb {R}}_+^n}(\\mathbf {t}_3)\\,,$ and the two matrices $\\mathbf {\\mathrm {M}}_1 \\in {\\mathbb {R}}^{(m+3n) \\times n}$ and $\\mathbf {\\mathrm {M}}_2 \\in {\\mathbb {R}}^{(m+3n) \\times (m+3n)}$ read $\\mathbf {\\mathrm {M}}_1=\\left( \\mathbf {\\mathrm {D}};\\mathbf {\\mathrm {H}};\\mathbf {I}_n\\right)\\,,\\quad \\mathbf {\\mathrm {M}}_2=-\\mathbf {I}_{m+3n}\\,.$ Functions $C_1$ and $C_2$ in (REF ) are both proper, lower semi-continuous and convex, hence problem (REF )-(REF ) is a standard two-blocks separable optimization problem which can be solved by ADMM [4].", "The augmented Lagrangian function associated to problem (REF ) reads $\\mathcal {L}(\\mathbf {x},\\mathbf {t},\\mathbf {\\rho };\\beta ) &\\!\\!\\!\\!", "{=}\\!\\!\\!\\!& C_1(\\mathbf {x})+C_2(\\mathbf {t})+\\langle \\mathbf {\\rho },\\mathbf {\\mathrm {M}}_1\\mathbf {x}+\\mathbf {\\mathrm {M}}_2\\mathbf {t}\\rangle +\\frac{\\beta }{2}\\Vert \\mathbf {\\mathrm {M}}_1\\mathbf {x}+\\mathbf {\\mathrm {M}}_2\\mathbf {t}\\Vert _2^2,$ where $\\mathbf {\\rho }\\in {\\mathbb {R}}^{m+3n}$ is the vector of Lagrange multipliers associated to the linear constraint in (REF ) and $\\beta \\in {\\mathbb {R}}_{++}$ is the ADMM penalty parameter.", "Solving problem (REF ) amounts to seek the saddle point(s) of the augmented Lagrangian function which, according to the standard two-blocks ADMM, can be computed as the limit point of the following iterative procedure: $\\mathbf {x}^{(k+1)}\\:\\;{\\in }\\;\\:&\\operatornamewithlimits{arg\\,min}_{\\mathbf {x}\\,\\in \\,{\\mathbb {R}}^{n}}\\mathcal {L}(\\mathbf {x},\\mathbf {t}^{(k)},\\mathbf {\\rho }^{(k)};\\beta )\\\\\\mathbf {t}^{(k+1)}\\:\\;{\\in }\\;\\:&\\operatornamewithlimits{arg\\,min}_{\\mathbf {t}\\,\\in \\,{\\mathbb {R}}^{m+3n}}\\mathcal {L}(\\mathbf {x}^{(k+1)},\\mathbf {t},\\mathbf {\\rho }^{(k)};\\beta )\\\\\\mathbf {\\rho }^{(k+1)}\\:\\;{=}\\;\\:&\\mathbf {\\rho }^{(k)}\\;{+}\\; \\beta \\left(\\mathbf {\\mathrm {M}}_1\\mathbf {x}^{(k+1)}+\\mathbf {\\mathrm {M}}_2\\mathbf {t}^{(k+1)}\\right)\\,.$ In what follows, we will detail how to solve (REF )-() when tackling the IR and CTIR imaging problems." 
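Before detailing the sub-problem solutions, we summarize the scheme with a minimal NumPy sketch for the IR case ($g(h_i)=h_i$), written for this presentation rather than taken from the authors' implementation: it assumes periodic boundary conditions, so that the blur and the finite-difference operators are diagonalized by the 2D FFT, and it uses the closed-form sub-problem solutions derived in the following subsections; the initialization, default parameters and variable names are our own choices.

```python
import numpy as np

def admm_tv_kl_ir(y, psf, b, mu, beta=1.0, n_iter=300):
    """Two-block ADMM sketch for the TV-KL restoration model (g(h) = h).
    y : observed counts; psf : blur kernel, same shape as y, centred at [0,0];
    b : scalar or array background; mu : regularization parameter."""
    m1, m2 = y.shape
    # Fourier symbols of the blur H and of the periodic finite differences D_h, D_v
    Hf = np.fft.fft2(psf)
    dh = np.zeros((m1, m2)); dh[0, 0] = -1.0; dh[0, 1] = 1.0
    dv = np.zeros((m1, m2)); dv[0, 0] = -1.0; dv[1, 0] = 1.0
    Dh, Dv = np.fft.fft2(dh), np.fft.fft2(dv)
    conv  = lambda F, u: np.real(np.fft.ifft2(F * np.fft.fft2(u)))           # K u
    convT = lambda F, u: np.real(np.fft.ifft2(np.conj(F) * np.fft.fft2(u)))  # K^T u
    denom = np.abs(Dh)**2 + np.abs(Dv)**2 + np.abs(Hf)**2 + 1.0  # eig(D^T D + H^T H + I)

    x = np.maximum(y - b, 0.0)                       # crude initialization
    t1h, t1v, t2, t3 = conv(Dh, x), conv(Dv, x), conv(Hf, x), x.copy()
    r1h = np.zeros_like(x); r1v = np.zeros_like(x)
    r2 = np.zeros_like(x); r3 = np.zeros_like(x)
    tau = mu / beta
    for _ in range(n_iter):
        # x-update: solve (D^T D + H^T H + I) x = M1^T v in the Fourier domain
        rhs = (convT(Dh, t1h - r1h / beta) + convT(Dv, t1v - r1v / beta)
               + convT(Hf, t2 - r2 / beta) + (t3 - r3 / beta))
        x = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        gh, gv, Hx = conv(Dh, x), conv(Dv, x), conv(Hf, x)
        # t1-update: pixel-wise 2D shrinkage (proximal map of the Euclidean norm)
        q1h, q1v = gh + r1h / beta, gv + r1v / beta
        nq = np.sqrt(q1h**2 + q1v**2)
        shrink = np.maximum(nq - 1.0 / beta, 0.0) / np.maximum(nq, 1e-12)  # 0*0/0 := 0
        t1h, t1v = shrink * q1h, shrink * q1v
        # t2-update: closed-form root of the scalar KL proximal problems (IR case)
        q2 = Hx + r2 / beta
        t2 = 0.5 * (-(tau + b - q2)
                    + np.sqrt((tau + b - q2)**2 + 4.0 * (q2 * b + tau * (y - b))))
        # t3-update: projection onto the non-negative orthant
        t3 = np.maximum(x + r3 / beta, 0.0)
        # multiplier updates: rho <- rho + beta * (M1 x - t)
        r1h += beta * (gh - t1h); r1v += beta * (gv - t1v)
        r2  += beta * (Hx - t2);  r3  += beta * (x - t3)
    return x
```

For the CTIR problem, the FFT-based x-update above would be replaced by the linearized (gradient-type) step and the t2-update by the Lambert-W formula discussed in the following subsections.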
], [ "The ", "Recalling the definition of the augmented Lagrangian function $\\mathcal {L}$ in (REF ), with functions $C_1,C_2$ in (REF ) and matrices $\\mathbf {\\mathrm {M}}_1, \\mathbf {\\mathrm {M}}_2$ in (REF ), after dropping the constant terms the $\\mathbf {x}$ -update problem in (REF ) reads $\\!\\!\\!\\!\\!\\!\\mathbf {x}^{(k+1)}&\\!\\!\\!\\!", "{\\in }\\!\\!\\!\\!&\\operatornamewithlimits{arg\\,min}_{\\mathbf {x}\\,\\in \\,{\\mathbb {R}}^{n}}\\left\\lbrace \\langle \\mathbf {\\rho }^{(k)},\\mathbf {\\mathrm {M}}_1\\mathbf {x}-\\mathbf {t}^{(k)}\\rangle +\\frac{\\beta }{2}\\Vert \\mathbf {\\mathrm {M}}_1\\mathbf {x}-\\mathbf {t}^{(k)}\\Vert _2^2\\right\\rbrace \\nonumber \\\\&\\!\\!\\!\\!", "{=}\\!\\!\\!\\!&\\operatornamewithlimits{arg\\,min}_{\\mathbf {x}\\,\\in \\,{\\mathbb {R}}^{n}}\\left\\lbrace Q^{(k)}(\\mathbf {x}) := \\frac{1}{2}\\Vert \\mathbf {\\mathrm {M}}_1\\mathbf {x}\\;{-}\\;\\mathbf {v}^{(k)}\\Vert _2^2\\right\\rbrace ,\\;\\: \\mathbf {v}^{(k)}\\;{=}\\;\\mathbf {t}^{(k)}-\\frac{1}{\\beta }\\,\\mathbf {\\rho }^{(k)}.$ Since the cost function $Q^{(k)}$ in (REF ) is quadratic and convex, it admits global minimizers which are the solutions of the linear system of normal equations: $\\mathbf {\\mathrm {M}}_1^{\\mathrm {T}} \\mathbf {\\mathrm {M}}_1 \\, \\mathbf {x}^{(k+1)} \\;{=}\\; \\mathbf {\\mathrm {M}}_1^{\\mathrm {T}} \\mathbf {v}^{(k)}\\;\\;{\\Longleftrightarrow }\\;\\; \\left(\\mathbf {\\mathrm {D}}^{\\mathrm {T}}\\mathbf {\\mathrm {D}} + \\mathbf {\\mathrm {H}}^{\\mathrm {T}}\\mathbf {\\mathrm {H}} + \\mathbf {\\mathrm {I}}_n \\right) \\mathbf {x}^{(k+1)} \\;{=}\\; \\mathbf {\\mathrm {M}}_1^{\\mathrm {T}} \\mathbf {v}^{(k)}.$ The coefficient matrix in (REF ) has full rank independently of matrices $\\mathbf {}$ and $\\mathbf {\\mathrm {H}}$ - i.e., of the finite difference discretization used for the gradient and of the imaging application considered - hence the solution of (REF ) is unique and reads $\\mathbf {x}^{(k+1)} = \\left(\\mathbf {\\mathrm {M}}_1^{\\mathrm {T}} \\mathbf {\\mathrm {M}}_1\\right)^{-1} \\mathbf {\\mathrm {M}}_1^{\\mathrm {T}} \\mathbf {v}^{(k)} \\, .$ For the IR inverse problem, upon the assumption of space-invariant blur and periodic boundary conditions, the coefficient matrix in (REF ) is block-circulant with circulant blocks.", "Hence, the above linear system can be solved very efficiently by one application of the 2D Fast Fourier Transform (FFT) and one application of the inverse 2D FFT.", "When addressing the CTIR problem, the structure of matrix $\\mathbf {\\mathrm {H}}$ - which, we recall, in this case is a Radon matrix - does not allow for a Fourier diagonalization of matrix $\\mathbf {\\mathrm {M}}_1^{\\mathrm {T}} \\mathbf {\\mathrm {M}}_1$ , thus yielding a significative computational burden related to the solution of linear system (REF ).", "A popular strategy for avoiding such difficulty is the linearized ADMM.", "It relies on computing $\\mathbf {x}^{(k+1)}$ as the global minimizer of a surrogate function $\\widehat{Q}^{(k)}$ of $Q^{(k)}$ in (REF ), namely $\\mathbf {x}^{(k+1)} \\;{=}\\; \\operatornamewithlimits{arg\\,min}_{\\mathbf {x}\\,\\in \\,{\\mathbb {R}}^{n}} \\,\\widehat{Q}^{(k)}(\\mathbf {x}) \\, ,$ where $\\widehat{Q}^{(k)}$ is a quadratic function of the following form $\\widehat{Q}^{(k)}(\\mathbf {x})&{=}&Q^{(k)}(\\mathbf {x}^{(k)}) + \\langle \\nabla Q^{(k)}(\\mathbf {x}^{(k)}),\\mathbf {x}-\\mathbf {x}^{(k)}\\rangle \\nonumber \\\\&& + \\frac{\\eta }{2} \\Vert \\mathbf {x}-\\mathbf {x}^{(k)} \\Vert _2^2, \\quad 
\\eta \\;{\\ge }\\; \\Vert \\mathbf {\\mathrm {M}}_1 \\Vert _2^2 \\, .$ It can be easily proved that any function $\\widehat{Q}^{(k)}$ in (REF ) is a quadratic tangent majorant of the original function $Q^{(k)}$ in (REF ) at point $\\mathbf {x}^{(k)}$ , that is it satisfies $\\begin{array}{c}\\widehat{Q}^{(k)}(\\mathbf {x}^{(k)}) \\;{=}\\; Q^{(k)}(\\mathbf {x}^{(k)}), \\quad \\nabla \\widehat{Q}^{(k)}(\\mathbf {x}^{(k)}) \\;{=}\\; \\nabla Q^{(k)}(\\mathbf {x}^{(k)}),\\\\\\widehat{Q}^{(k)}(\\mathbf {x}) \\;{\\ge }\\; Q^{(k)}(\\mathbf {x}) \\;\\forall \\,\\mathbf {x} \\in {\\mathbb {R}}^n \\, .\\end{array}$ It follows from (REF )-(REF ) that the new iterate $\\mathbf {x}^{(k+1)}$ computed by the linearized ADMM is given by $\\mathbf {x}^{(k+1)} &\\!\\!\\!\\!", "{=}\\!\\!\\!\\!& \\operatornamewithlimits{arg\\,min}_{\\mathbf {x}\\,\\in \\,{\\mathbb {R}}^{n}} \\left\\lbrace \\langle \\nabla Q^{(k)}(\\mathbf {x}^{(k)}),\\mathbf {x}\\rangle + \\frac{\\eta }{2} \\Vert \\mathbf {x}-\\mathbf {x}^{(k)} \\Vert _2^2\\right\\rbrace \\\\&\\!\\!\\!\\!", "{=}\\!\\!\\!\\!&\\mathbf {x}^{(k)} - \\frac{1}{\\eta } \\nabla Q^{(k)}(\\mathbf {x}^{(k)})\\\\&\\!\\!\\!\\!", "{=}\\!\\!\\!\\!&\\mathbf {x}^{(k)} - \\frac{1}{\\eta } \\mathbf {\\mathrm {M}}_1^{\\mathrm {T}} \\left( \\mathbf {\\mathrm {M}}_1 \\mathbf {x}^{(k)}-\\mathbf {v}^{(k)}\\right), \\quad \\eta \\;{\\ge }\\; \\Vert \\mathbf {\\mathrm {M}}_1 \\Vert _2^2 \\, ,$ where in (REF ) we dropped the constant terms, in () we set $\\mathbf {x}^{(k+1)}$ equal to the unique stationary point of the strongly convex cost function in (REF ) and, finally, in () we substituted the explicit expression of the gradient of the original cost function $Q^{(k)}$ defined in (REF )." ], [ "The ", "Recalling definitions (REF )-(REF ), the $\\mathbf {t}$ -subproblem in () reads $\\mathbf {t}^{(k+1)}&\\!\\!\\!\\!", "{\\in }\\!\\!\\!\\!&\\operatornamewithlimits{arg\\,min}_{\\mathbf {t}\\,\\in \\,{\\mathbb {R}}^{m+3n}}\\left\\lbrace C_2(\\mathbf {t})+\\langle \\mathbf {\\rho }^{(k)},\\mathbf {\\mathrm {M}}_1\\mathbf {x}^{(k+1)}-\\mathbf {t}\\rangle +\\frac{\\beta }{2}\\Vert \\mathbf {\\mathrm {M}}_1\\mathbf {x}^{(k+1)}-\\mathbf {t}\\Vert _2^2\\right\\rbrace \\nonumber \\\\&\\!\\!\\!\\!", "{=}\\!\\!\\!\\!&\\operatornamewithlimits{arg\\,min}_{\\mathbf {t}\\,\\in \\,{\\mathbb {R}}^{m+3n}}\\left\\lbrace C_2(\\mathbf {t})+ \\frac{\\beta }{2}\\Vert \\mathbf {t}-\\mathbf {q}^{(k)}\\Vert _2^2\\right\\rbrace ,\\;\\; \\mathbf {q}^{(k)}\\;{=}\\;\\mathbf {\\mathrm {M}}_1\\mathbf {x}^{(k+1)}+\\frac{1}{\\beta }\\,\\mathbf {\\rho }^{(k)}\\!.$ Then, by recalling the definition of function $C_2$ in (REF ) and introducing the vectors $\\mathbf {\\rho }_1^{(k)} \\in {\\mathbb {R}}^{2n}$ , $\\mathbf {\\rho }_2^{(k)} \\in {\\mathbb {R}}^m$ and $\\mathbf {\\rho }_3^{(k)} \\in {\\mathbb {R}}^n$ such that $\\mathbf {\\rho }^{(k)} = \\big (\\mathbf {\\rho }_1^{(k)};\\mathbf {\\rho }_2^{(k)};\\mathbf {\\rho }_3^{(k)}\\big )$ and the vectors $\\begin{array}{c}\\displaystyle {\\mathbf {q}_1^{(k)} = \\mathbf {\\mathrm {D}}\\mathbf {x}^{(k+1)}+\\frac{1}{\\beta }\\mathbf {\\rho }_1^{(k)} \\in {\\mathbb {R}}^{2n},\\quad \\;\\mathbf {q}_2^{(k)} = \\mathbf {\\mathrm {H}}\\mathbf {x}^{(k+1)}+\\frac{1}{\\beta }\\mathbf {\\rho }_2^{(k)} \\in {\\mathbb {R}}^m,}\\vspace{4.26773pt}\\\\\\displaystyle {\\mathbf {q}_3^{(k)} = \\mathbf {x}^{(k+1)}+\\frac{1}{\\beta }\\mathbf {\\rho }_3^{(k)} \\in {\\mathbb {R}}^n,}\\end{array}$ such that $\\mathbf {q}^{(k)} = \\big (\\mathbf {q}_1^{(k)};\\mathbf {q}_2^{(k)};\\mathbf 
{q}_3^{(k)}\\big )$ , problem (REF ) can be equivalently written as $\\mathbf {t}^{(k+1)} \\;{\\in }\\;\\operatornamewithlimits{arg\\,min}_{\\mathbf {t}\\,\\in \\,{\\mathbb {R}}^{m+3n}} \\left\\lbrace \\,T_1\\left(\\mathbf {t}_1\\right) \\;{+}\\; T_2\\left(\\mathbf {t}_2\\right) \\;{+}\\; T_3\\left(\\mathbf {t}_3\\right) \\,\\right\\rbrace \\, , \\;\\;\\text{with:}$ $\\begin{array}{rcrl}\\displaystyle {T_1\\left(\\mathbf {t}_1\\right)} &\\!\\!", "{=}\\!\\!\\!& \\displaystyle {\\sum _{i=1}^n\\Vert \\mathbf {t}_{1,i}\\Vert _2} &\\!\\!\\!", "{+}\\;\\: \\displaystyle {\\frac{\\beta }{2}\\,\\Vert \\mathbf {t}_1-\\mathbf {q}_1^{(k)}\\Vert _2^2\\,,}\\vspace{0.0pt}\\\\\\displaystyle {T_2\\left(\\mathbf {t}_2\\right)} &\\!\\!", "{=}\\!\\!\\!& \\displaystyle {\\mu \\, \\mathrm {KL}\\left(\\mathbf {g}(\\mathbf {t}_2)+\\mathbf {b};\\mathbf {y}\\right)}&\\!\\!\\!", "{+}\\;\\: \\displaystyle {\\frac{\\beta }{2}\\,\\Vert \\mathbf {t}_2-\\mathbf {q}_2^{(k)}\\Vert _2^2 \\, ,}\\vspace{4.26773pt}\\\\\\displaystyle {T_3\\left(\\mathbf {t}_3\\right)} &\\!\\!", "{=}\\!\\!\\!& \\displaystyle {\\iota _{{\\mathbb {R}}_+^n}(\\mathbf {t}_3)}&\\!\\!\\!", "{+}\\;\\: \\displaystyle {\\frac{\\beta }{2}\\,\\Vert \\mathbf {t}_3-\\mathbf {q}_3^{(k)}\\Vert _2^2 \\, .", "}\\vspace{5.69046pt}\\end{array}$ Therefore, the updates of variables $\\mathbf {t}_1$ , $\\mathbf {t}_2$ and $\\mathbf {t}_3$ can be addressed separately.", "Update of $\\mathbf {t}_1$ .", "It comes from (REF ) that the update of $\\mathbf {t}_1$ reads $\\mathbf {t}_1^{(k+1)} \\;{=}\\; \\operatornamewithlimits{arg\\,min}_{\\mathbf {t}_1 \\in {\\mathbb {R}}^{2n}}\\left\\lbrace \\sum _{i=1}^n\\left[ \\left\\Vert \\mathbf {t}_{1,i}\\right\\Vert _2 +\\frac{\\beta }{2}\\left(\\mathbf {t}_{1,i}-\\mathbf {q}_{1,i}^{(k)}\\right)^2\\right]\\right\\rbrace \\,.$ Hence, problem (REF ) is separable into $n$ independent 2-dimensional problems $\\mathbf {t}_{1,i}^{(k+1)}\\,\\;{=}\\;\\,\\operatornamewithlimits{arg\\,min}_{\\mathbf {t}_{1,i} \\,\\in \\, {\\mathbb {R}}^{2n}}\\left\\lbrace \\left\\Vert \\mathbf {t}_{1,i}\\right\\Vert _2 +\\frac{\\beta }{2}\\left(\\mathbf {t}_{1,i}-\\mathbf {q}_{1,i}^{(k)}\\right)^2\\right\\rbrace , \\quad i = 1,\\ldots ,n \\, ,$ which represent the proximal map of the Euclidean norm function $\\Vert \\,\\cdot \\,\\Vert _2$ in ${\\mathbb {R}}^2$ calculated at points $\\mathbf {q}_{1,i}^{(k)}$ , $i = 1,\\ldots ,n$ .", "Such a proximal map admits a well-known explicit expression which leads to the following closed-form solution of problem (REF ): $\\mathbf {t}_{1,i}^{(k+1)}\\,\\;{=}\\;\\,\\max \\left\\lbrace \\,\\left\\Vert \\mathbf {q}_{1,i}^{(k)} \\right\\Vert _2 - \\frac{1}{\\beta } \\,\\, , \\, 0 \\, \\right\\rbrace \\, \\frac{\\mathbf {q}_{1,i}^{(k)}}{\\left\\Vert \\mathbf {q}_{1,i}^{(k)}\\right\\Vert _2} , \\quad i = 1,\\ldots ,n \\, .$ where $\\, 0 \\, \\cdot \\, \\mathbf {0} \\,/\\,0 \\;{=}\\; \\mathbf {0}\\,$ is assumed.", "Update of $\\mathbf {t}_2$ .", "It follows from (REF ) that, after introducing the scalar $\\tau = \\mu /\\beta $ , the updated vector $\\mathbf {t}_2^{(k+1)}$ is given by $\\mathbf {t}_2^{(k+1)}&\\!\\!\\!\\!", "{\\in }\\!\\!\\!\\!& \\operatornamewithlimits{arg\\,min}_{\\mathbf {t}_2\\,\\in \\,{\\mathbb {R}}^m}\\left\\lbrace \\tau \\,\\mathrm {KL}(\\mathbf {g}(\\mathbf {t}_2)+\\mathbf {b};\\mathbf {y})\\;{+}\\;\\frac{1}{2}\\Vert \\mathbf {t}_2-\\mathbf {q}_2^{(k)}\\Vert _2^2\\right\\rbrace \\nonumber \\\\&\\!\\!\\!\\!", "{=}\\!\\!\\!\\!& \\operatornamewithlimits{arg\\,min}_{\\mathbf {t}_2\\,\\in \\,{\\mathbb 
{R}}^m}\\left\\lbrace \\sum _{i=1}^m \\left[ \\tau \\, g(t_i) - \\tau \\, y_i \\ln \\left(g(t_i)+b_i\\right)+\\frac{1}{2}\\left(t_i-q_i\\right)^2\\right]\\right\\rbrace ,$ where in (REF ) we substituted the explicit expression of the KL divergence term reported in (REF ), we dropped the constants and, for simplicity of notation, we set $t_i:=t_{2,i} \\in {\\mathbb {R}}$ and $q_i = q_{2,i}^{(k)} \\in {\\mathbb {R}}$ .", "Hence, similarly to the $\\mathbf {t}_1$ update problem in (REF ), the $m$ -dimensional minimization problem (REF ) is equivalent to the $m$ following 1-dimensional problems $t_i^{(k+1)}\\;{=}\\;\\operatornamewithlimits{arg\\,min}_{t_i\\,\\in \\,{\\mathbb {R}}}\\left\\lbrace \\tau \\,g(t_i) - \\tau \\, y_i \\ln \\left(g(t_i)+b_i\\right)+\\frac{1}{2}\\left(t_i-q_i\\right)^2\\right\\rbrace ,$ $i=1,\\ldots ,m$ .", "In the IR scenario, i.e.", "when $g(t_i)=t_i$ , the cost function in (REF ) is infinitely many times differentiable, strictly convex and coercive in its domain $t_i \\in (-b_i,+\\infty )$ .", "Hence, the solution $t_i^{(k+1)}$ of (REF ) exists, is unique and coincides with the unique stationary point of the cost function, given by $t_i^{(k+1)} = \\frac{1}{2}\\left[-(\\tau +b_i-q_i)+\\sqrt{(\\tau +b_i-q_i)^2+4\\left(q_i\\,b_i+\\tau (y_i-b_i)\\right)}\\,\\right]\\,.$ For the CTIR problem, i.e.", "when $g(t_i) = I_0 e^{-t_i}$ , problem (REF ) reads $t_i^{(k+1)}\\;{=}\\;\\operatornamewithlimits{arg\\,min}_{t_i\\,\\in \\,{\\mathbb {R}}}\\left\\lbrace \\tau \\,I_0\\, e^{-t_i}-\\tau \\, y_i \\ln \\left(I_0\\, e^{-t_i}\\!+b_i\\right) +\\frac{1}{2}(t_i-q_i)^2\\right\\rbrace .$ The cost function in (REF ) is infinitely many times differentiable and coercive in its domain $t_i \\in {\\mathbb {R}}$ , hence it admits global minimizers.", "However, in the general case of a nonzero background, i.e.", "when $b_i \\in {\\mathbb {R}}_{++}$ , problem (REF ) does not admit a closed-form solution and can only be addressed by employing iterative solvers.", "On the other hand, when $b_i=0$ the cost function is also strictly convex, hence $t_i^{(k+1)}$ in (REF ) is given by the unique solution of the first-order optimatily condition $-\\tau \\,I_0\\, e^{-t_i} + \\tau \\, y_i + t_i-q_i = 0\\,.$ The above nonlinear equation can be manipulated so as to give $w_i\\,e^{w_i} = \\tau \\,I_0\\,e^{\\tau \\,y_i-q_i}\\,,\\quad \\text{with}\\;\\; w_i = t_i+\\tau \\,y_i-q_i\\,.$ Equations of the form in (REF ) admit solutions that can be expressed in closed-form in terms of the so-called Lambert $W$ function [5].", "In particular, when the right-hand side is non-negative - which is our case as $\\tau \\, I_0 e^{\\tau \\,y_i-q_i} \\in {\\mathbb {R}}_{++}$ - then the equation admits a unique solution given by $w_i = W\\left(\\tau \\,I_0\\,e^{\\tau \\,y_i-q_i}\\right) \\, .$ It follows that problem (REF ) admits the unique solution $t_i^{(k+1)} = -(\\tau \\,y_i-q_i) + W\\left(\\tau \\,I_0\\,e^{\\tau \\,y_i-q_i}\\right) \\, .$ Update of $\\mathbf {t}_3$ .", "It comes from (REF ) that the $\\mathbf {t}_3$ -update problem reads $\\mathbf {t_3}^{(k+1)}\\;{\\in }\\;\\operatornamewithlimits{arg\\,min}_{\\mathbf {t}_3\\,\\in \\,{\\mathbb {R}}_+^n}\\,\\Vert \\mathbf {t}_3-\\mathbf {q}_3^{(k)}\\Vert _2^2 \\,,$ that is $\\mathbf {t_3}^{(k+1)}$ is given by the unique Euclidean projection of vector $\\mathbf {q}_3^{(k)}$ onto the non-negative orthant ${\\mathbb {R}}_+^n$ , which admits the following component-wise closed-form expression: $t_{3,i}^{(k+1)}\\;{=}\\;\\max \\left\\lbrace q_{3,i}^{(k)},0\\right\\rbrace 
\\,,\\quad i=1,\\ldots ,n\\,.$" ], [ "Computed examples", "In this section, we evaluate the performance of the proposed Poisson Whiteness Principle (REF ) for the automatic selection of the regularization parameter $\\mu $ in the TV-KL model in (REF ) employed for the image restoration and CT image reconstruction tasks.", "The proposed strategy is compared with the REF and the REF .", "The considered parameter selection rules are applied a posteriori.", "In other words, the TV-KL model is solved on a grid of different $\\mu $ -values; then, for each output image, we compute the discrepancy function, involved in the ADP and NEDP, and the whiteness measure, which is used for the PWP.", "The $\\mu $ -values selected by the ADP, NEDP and the PWP will be denoted by $\\mu ^{(A)}$ , $\\mu ^{(NE)}$ and $\\mu ^{(W)}$ , respectively.", "The quality of the output image $\\hat{\\mathbf {x}}$ with respect to the original image $\\bar{\\mathbf {x}}$ is measured by means of two scalar measures, namely the Structural Similarity Index (SSIM) [17] and the Signal-to-Noise-Ratio (SNR) defined by $\\text{SNR}(\\hat{\\mathbf {x}},\\bar{\\mathbf {x}}) = 10\\log _{10} \\frac{||\\bar{\\mathbf {x}}- \\mathrm {E}[\\bar{\\mathbf {x}}]||_{2}^{2}}{||\\bar{\\mathbf {x}}-\\hat{\\mathbf {x}}||_{2}^{2}}.$ In the performed tests, the ADMM iterations are stopped as soon as $\\delta _{\\mathbf {x}}^{(k)} = \\frac{\\Vert \\mathbf {x}^{(k)}-\\mathbf {x}^{(k-1)}\\Vert _2}{\\Vert \\mathbf {x}^{(k-1)}\\Vert _2}<10^{-6}\\,,\\qquad k\\in \\mathbb {N}\\setminus \\lbrace 0\\rbrace \\,,$ while the ADMM penalty parameter $\\beta $ is set manually so as to fasten the convergence of the alternating scheme." ], [ "Image restoration", "We start testing our proposal on the image restoration task, and consider two test images, namely satellite ($256\\times 256$ ) and cells ($236\\times 236$ ), with pixel values between 0 and 1, shown in Figures REF a, REF b.", "Figure: From left to right: original satellite (256×256256\\times 256), cells (236×236236\\times 236), shepp logan (500×500500\\times 500) and brain (238×253238\\times 253) test images considered for the numerical experiments.We simulate the acquisition process by multiplying the original images by a factor $\\kappa \\in \\mathbb {R}_{++}$ representing the maximum number of photons hitting the image domain, in expectation.", "Clearly, the lower the value of $\\kappa $ the noisier the data, yielding a more difficult image restoration problem.", "Then, the resulting images are been corrupted by space-invariant Gaussian blur, with blur kernel generated by the Matlab routine fspecial, which is characterized by two parameters: the band parameter, representing the side length (in pixels) of the square support of the kernel, and sigma, that is the standard deviation (in pixels) of the isotropic bivariate Gaussian distribution defining the kernel in the continuous setting.", "In our tests, we set band=5, sigma=1.", "Then, we add a constant emission background $\\mathbf {b}$ equal to $2\\times 10^{-3}$ , obtaining what we define as $\\mathbf {\\bar{\\lambda }} = \\mathbf {\\mathrm {H}}\\mathbf {\\bar{x}} + \\mathbf {b}$ .", "Finally, the observed image $\\mathbf {y} = \\mathbf {\\mathrm {poiss}}(\\mathbf {\\bar{\\lambda }})$ is pseudo-randomly generated by a m-variate independent Poisson realization with mean vector $\\mathbf {\\bar{\\lambda }}$ .", "In Figure REF c,d we show the Whiteness function $W(\\mu )$ , as defined in REF , for the first image satellite and $\\kappa =5$ (left) and 
$\\kappa = 10$ (right).", "The vertical dashed red lines correspond to the minimum of the function $W(\\mu )$ , i.e.", "to the values of $\\mu $ chosen according to the Poisson Whiteness Principle, namely $\\mu ^{(W)}$ .", "The black curves in Figure REF a,b represent the discrepancy function $\\mathcal {D}(\\mu ,\\mathbf {y})$ as defined in (REF ), while the green and magenta dashed lines represent the discrepancy values $\\Delta ^{(A)}$ and $\\Delta ^{(NE)}(\\mu )$ as defined in REF and REF , respectively.", "In Figure REF e,f, we show the SNR (in blue) and SSIM (in orange) values achieved for different $\\mu $ values with $\\kappa =5,10$ .", "The red, green and magenta vertical lines correspond to the $\\mu $ values chosen with the newly proposed method and with the two considered versions of the DP.", "We remark that the $\\mu $ values selected by the discrepancy principles correspond to the intersection of $\\mathcal {D}(\\mu ,\\mathbf {y})$ with $\\Delta ^{(A)}$ and $\\Delta ^{(NE)}(\\mu )$ , respectively.", "Note that, in the low-count regime, the PWP achieves higher values of SNR and SSIM compared to the ADP and NEDP.", "Furthermore, at the bottom of Figure REF , we report, for different counting regimes $\\kappa $ , the selected $\\mu $ values and the SNR and SSIM values for the three considered strategies.", "For each $\\kappa $ , the highest values of SNR and SSIM are reported in bold.", "As already observed in Figure REF e,f, the PWP outperforms the ADP and NEDP in terms of SNR and SSIM for the low- and middle-count acquisitions (up to $\\kappa =50$ ).", "For the higher counts, the NEDP and PWP achieve similar quality measures, with the NEDP being slightly better.", "For a visual comparison, in Figure REF , we show the observed images and the output restorations obtained by employing the ADP, NEDP and PWP for $\\kappa =5$ (top row) and $\\kappa =10$ (bottom row).", "In both cases, the NEDP and the PWP return similar results, with the latter being more capable of preserving the original contrast of the image.", "On the other hand, the output images obtained by selecting $\\mu $ according to the ADP are strongly over-regularized.", "Figure: Test image satellite.", "From top to bottom: discrepancy curves, whiteness curves and achieved SNR/SSIM for $\\kappa =5$ (left) and $\\kappa =10$ (right).", "Output $\\mu $ - and SNR/SSIM values obtained by the ADP, the NEDP and the PWP for different $\\kappa $ .", "Figure: Test image satellite.", "From left to right: observed image $\\mathbf {y}$ , reconstruction using the ADP, NEDP and PWP for $\\kappa =5$ (top row) and $\\kappa =10$ (bottom row).", "For the second test image, cells, we report in Figure REF the behavior of the discrepancy function $\\mathcal {D}(\\mu ,\\mathbf {y})$ , of the whiteness function $W(\\mu )$ and of the SNR/SSIM curves obtained by applying the NEDP, the ADP and the PWP, for $\\kappa =5$ (left) and $\\kappa =10$ (right).", "The PWP returns larger quality measures, as its selected $\\mu $ is the closest to the maximum achievable SNR/SSIM.", "From the table reported at the bottom of Figure REF , we note that the proposed $\\mu $ -selection criterion returns restored images outperforming the ones obtained via the NEDP and ADP both in terms of SNR and SSIM, for every $\\kappa \\ge 5$ .", "For $\\kappa =1.5$ , the SNR and SSIM values of the PWP restoration are slightly lower than, but very similar to, those obtained with the NEDP, while in all the other cases the difference between the PWP and the NEDP is more marked.", "The restored images in Figure REF reflect the 
values recorded in the tables: the output of the PWP preserves more details and the original contrast compared to the NEDP, while the ADP restoration seems to be less subject to over-regularization than in the results obtained on the test image satellite.", "This can be ascribed to the number of zero pixels in the image, which is significantly smaller in cells.", "Figure: Test image cells.", "From top to bottom: discrepancy curves, whiteness curves and achieved SNR/SSIM for $\\kappa =5$ (left) and $\\kappa =10$ (right).", "Output $\\mu $ - and SNR/SSIM values obtained by the ADP, the NEDP and the PWP for different $\\kappa $ .", "Figure: Test image cells.", "From left to right: observed image $\\mathbf {y}$ , reconstruction using the ADP, NEDP and PWP for $\\kappa =5$ (top row) and $\\kappa =10$ (bottom row)." ], [ "CT image reconstruction", "For the CT reconstruction problem, we consider the test images shepp logan ($500\\times 500$ , pixel size = $0.2$ mm) and brain ($238\\times 253$ , pixel size = $0.4$ mm), with pixel values between 0 and 1, shown in Figures REF c, REF d, respectively.", "The acquisition process of the fan beam CT setup, i.e.", "the projection operator $\\mathbf {\\mathrm {H}}$ , is built using the ASTRA Toolbox [16] with the following parameters: 180 equally spaced projection angles (from 0 to $2\\pi $ ), a detector with 500 pixels (detector pixel size = $1/3$ mm), distance between the source and the center of rotation = 300mm, distance between the center of rotation and the detector array = 200mm.", "Then, according to (REF ), we take the exponential of $-\\mathbf {\\mathrm {H}\\bar{x}}$ and multiply it by a factor $I_{0}\\in \\mathbb {N}\\setminus \\lbrace 0\\rbrace $ that plays the role of $\\kappa $ in the restoration scenario and represents the maximum emitted photon count, i.e., the maximum number of photons that can reach each detector pixel if the X-rays are not attenuated.", "In the CT tests, we consider the background emission $\\mathbf {b}= \\mathbf {0}$ so that the solution of (REF ) can be expressed in closed form in terms of the Lambert $W$ function.", "We thus compute the noise-free data $\\mathbf {\\bar{\\lambda }} = I_{0}e^{-\\mathbf {\\mathrm {H}}\\mathbf {\\bar{x}}}$ , while the acquisition $\\mathbf {y} = \\mathbf {\\mathrm {poiss}}(\\mathbf {\\bar{\\lambda }})$ is obtained by generating an $m$ -variate independent Poisson realization with mean vector $\\mathbf {\\bar{\\lambda }}$ .", "In analogy to the restoration case, in Figure REF , we report for the test image shepp logan the curve of the discrepancy function $\\mathcal {D}(\\mu ,\\mathbf {y})$ , as well as the whiteness curve $W(\\mu )$ and the curves of the SNR and SSIM for the limiting values of $I_0$ , i.e.", "$I_{0}=1.5$ (left) and $I_{0}=1000$ (right).", "In the case of $I_{0}=1.5$ , the SNR/SSIM values achieved by the ADP and NEDP are significantly far from the optimal ones.", "On the other hand, the PWP value is very close to the maximum of both the SNR and the SSIM.", "For $I_{0}=1000$ , the NEDP and the ADP select the same $\\mu $ , which achieves a larger SSIM than the one obtained by the PWP, while our method still outperforms the others in terms of SNR.", "From the table at the bottom of Figure REF , we observe that the PWP outperforms the ADP and the NEDP in terms of SNR for each $I_{0}$ value, while the NEDP returns slightly better results in terms of SSIM for high-count acquisitions.", "The reconstruction results shown in Figure REF reflect the behavior of the plots.", "More specifically, for 
$I_{0} = 1.5$ the ADP reconstruction appears to be over-regularized; the NEDP reconstructs only the central ellipses, which appear to be merged; finally, in the PWP reconstruction the two ellipses are more visible and the white edge of the phantom is sharper.", "In the case of $I_{0} = 1000$ , the three reconstructions are similar, with the PWP being more capable of separating the three fine details highlighted in the super-imposed close-up.", "Figure: Test image shepp logan.", "From top to bottom: discrepancy curves, whiteness curves and achieved SNR/SSIM for $I_{0}=1.5$ (left) and $I_{0}=1000$ (right).", "Output $\\mu $ - and SNR/SSIM values obtained by the ADP, the NEDP and the PWP for different $I_0$ .", "Figure: Test image shepp logan.", "From left to right: observed data $\\mathbf {y}$ , reconstruction using the ADP, NEDP and PWP for $I_{0}=1.5$ (top row) and $I_{0}=1000$ (bottom row).", "For the last test image, brain, we show in Figure REF the behavior of the discrepancy function $\\mathcal {D}(\\mu ,\\mathbf {y})$ , of the whiteness function $W(\\mu )$ , as well as of the SNR and SSIM values for $I_{0} = 1.5$ and $I_{0}=1000$ .", "Note that the PWP achieves higher SNR and SSIM values compared to the ADP and NEDP for lower values of $I_{0}$ .", "However, we observe that, when considering higher values of $I_{0}$ , the ADP reconstruction can outperform the PWP for some of the considered doses.", "The reconstructions computed by the ADP, NEDP and PWP are shown in Figure REF : we can see a higher level of detail in the PWP reconstruction, both in the low-dose and in the high-dose case.", "For $I_{0}=1.5$ , only the PWP is able to recover the upper part of the skull bone, while for $I_{0}=1000$ the difference mainly concerns the level of detail present in the reconstruction, as shown in the close-ups.", "Figure: Test image brain.", "From top to bottom: discrepancy curves, whiteness curves and achieved SNR/SSIM for $I_{0}=1.5$ (left) and $I_{0}=1000$ (right).", "Output $\\mu $ - and SNR/SSIM values obtained by the ADP, the NEDP and the PWP for different $I_0$ .", "Figure: Test image brain.", "From left to right: observed data $\\mathbf {y}$ , reconstruction using the ADP, NEDP and PWP for $I_{0}=1.5$ (top row) and $I_{0}=1000$ (bottom row)." ], [ "Conclusion", "In this work, we have introduced a novel parameter selection strategy for variational models under Poisson data corruption.", "Our proposal relies on the extension of the whiteness principle to a suitably standardized version of the Poisson-corrupted observations.", "The derived Poisson Whiteness Principle has been tested on image restoration and CT reconstruction problems.", "In the latter case, we employed a linearized version of the ADMM, which speeds up the computations when the forward model operator does not have an advantageous structure.", "The Poisson Whiteness Principle has been compared with the popular ADP and with the NEDP, recently proposed by the same authors; the newly introduced approach has been shown to outperform the competitors, especially in the low-count regimes.", "Acknowledgements.", "All the authors are members of the “National Group for Scientific Computation (GNCS-INDAM)”.", "The research of FB, AL, FS has been funded by the ex60 project “Funds for selected research topics”, while MP acknowledges the contribution of “Young researchers funding” awarded by GNCS-INDAM." ] ]
2207.10481
[ [ "Metropolis Monte Carlo sampling: convergence, localization transition\n and optimality" ], [ "Abstract Among random sampling methods, Markov Chain Monte Carlo algorithms are foremost.", "Using a combination of analytical and numerical approaches, we study their convergence properties towards the steady state, within a random walk Metropolis scheme.", "We show that the deviations from the target steady-state distribution feature a localization transition as a function of the characteristic length of the attempted jumps defining the random walk.", "This transition changes drastically the error which is introduced by incomplete convergence, and discriminates two regimes where the relaxation mechanism is limited respectively by diffusion and by rejection." ], [ "Introduction", "Although Buffon's needle problem [1] may be considered as the earliest documented use of Monte Carlo sampling (18th century), the method was developed at the end of the second world war and dates from the early days of computer use [2], [3].", "With the increase in computational power, it has become a pervasive and versatile technique in basic sciences and engineering.", "It uses random sampling for solving both deterministic and stochastic problems, as found in physics, biology, chemistry, or artificial intelligence [4], [5], [6], [7], [8], [9], [10].", "Monte Carlo techniques also allow to assess risk in quantitative analysis and decision making [11], [12], and their methodological developments provide tools for economy, epidemiology or archaeology [13].", "It is then crucial to understand the type of errors which can be introduced as a consequence of the incomplete convergence of such algorithms.", "Our interest goes to Markov Chain Monte Carlo techniques [14], [15], that create correlated random samples from a target distribution; a special emphasis goes to the relaxation rate of these methods.", "From the target probability distribution, a sequence of samples is obtained by a random walk, with appropriate transition probabilities.", "The walker's density evolves at long times towards the target distribution, and quantities of interest follow from the law of large numbers and other methods of statistical inference [11], [12], [13], [14], [16], [17], [15], [18], [19].", "The Monte-Carlo method and its modern developments [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30] have now been adopted in many topics in and outside physics.", "A key issue deals with the speed of convergence of the algorithm: the larger the convergence time, the larger the error bars for the computed quantities.", "For most practical applications, if the amplitude $a$ of the random jumps is small, phase space is not sufficiently explored, even though most attempted jumps are accepted.", "Conversely, large jumps will lead to a large rejection probability, and to an equally ineffective method, see Appendix .", "In between, one expects an optimal jump size $a_{\\text{opt}}$ at which the convergence rate is maximal.", "There have been theoretical attempts in deriving $a_{\\text{opt}}$ for specific models [31].", "In practice, without a precise knowledge of $a_{\\text{opt}}$ , a widely accepted rule of thumb is to choose $a$ such that the acceptance probability is close to 50% for the attempted moves [17], [32], [16].", "In this paper, we show the existence of a new critical value $a^*$ , where an unexpected localization transition occurs such that the relaxation mechanism is drastically different for $a<a^*$ and $a>a^*$ .", "This 
deeply modifies the nature and amplitude of the error.", "The existence of the critical value $a^*$ is our main finding, and the novelty of this work.", "Besides, although there is no reason to expect any relation between $a_{\\text{opt}}$ and $a^*$ , we report, rather interestingly, a number of examples, where they coincide precisely.", "We emphasize that while a number of results have been proven for relaxation rates [33], [34], [35], we are interested in this work also in relaxation eigenmodes, which in turn open the way to accurate approximations for relaxation rates." ], [ "Master equation for Metropolis Monte-Carlo sampling and relaxation to equilibrium", "We start with by reminding general results on the master equation describing relaxation of the Metropolis Monte-Carlo algorithm.", "The spectral properties of Markov-Chains have been extensively studied in the Mathematical literature [36], [37].", "Here we instead focus on the eigenvectors (or their absence in the case of a localization transition) which have received much less attention.", "This introduction will allow us to introduce notations and to contrast the relaxation of a discrete Markov-chain with the relaxation properties that we find for the continuous case.", "The Markov Chain Monte Carlo method amounts to considering a random walker with position $x$ (here on the line), in the presence of an external confining potential $U(x)$ .", "We adopt the framework of the Metropolis algorithm [38], [39], [16], [15], [17].", "The position of the particle evolves in discrete time steps $n$ following the rule $x_n= {\\left\\lbrace \\begin{array}{ll}x_{n-1} +\\eta _n & {\\rm with}\\,\\, {\\rm prob.", "}\\,\\,\\, p= {\\rm min}\\left(1, e^{-\\beta \\, \\Delta U}\\right)\\quad \\\\\\\\x_{n-1} & {\\rm with}\\,\\, {\\rm prob.", "}\\,\\,\\, 1-p\\, ,\\end{array}\\right.", "}$ where $\\Delta U=U(x_{n-1} +\\eta _n)-U(x_{n-1})$ and $\\beta =1/(k_BT)$ denotes inverse temperature.", "The random jumps $\\eta _n$ at different times are independent, drawn from a continuous and symmetric probability distribution $w(\\eta )$ .", "In other words, the particle attempts at time $n$ a displacement $\\eta _n$ from its current location $x_{n-1}$ , which is definitely accepted (with probability 1) if it leads to an energy decrease, but is accepted with a lesser probability $e^{-\\beta \\Delta U}$ if the move leads to an energy increase $\\Delta U>0$ .", "A key quantity in what follows is the amplitude $a$ of the attempted jumps, that we introduce as the characteristic length associated with $w(\\eta )$ , taken to obey the scaling form $w(\\eta ) \\,=\\, \\frac{1}{a} f\\left(\\eta / a\\right) .$ Normalization demands that $\\int f=\\int w =1$ .", "The dynamics encoded in Eq.", "(REF ) can be written in terms of a Master equation for $P_n(x)$ , the probability density of the walker at time $n$ , $P_n(x)= \\int _{-\\infty }^{\\infty } F_\\beta (x,y)\\, P_{n-1}(y)\\, dy .$ The explicit form of the temperature-dependent kernel $F_\\beta $ is given below in Eq.", "(REF ).", "Generically, $P_n(x)$ converges towards the target distribution [40], [41], given by the (equilibrium) Gibbs-Boltzmann expression $P_{\\infty }(x) \\propto \\exp (-\\beta \\, U(x))$ , see Appendix .", "We assume that $U$ is confining enough so that $ \\exp (-\\beta \\, U(x))$ is integrable, and for simplicity that $U(x)=U(-x)$ .", "Our main interest is to find how quickly the dynamics converges towards the target density, and with which error $\\delta P_n(x) = P_n(x)-P_\\infty (x)$ .", "The 
convergence rate can be defined from the large time limit of the deviation from equilibrium of some observable ${\\cal O}(x)$ : $\\log \\Lambda \\,=\\, \\sup _{\\lbrace {\\cal O}(x),P_0(x)\\rbrace } \\lim _{n \\rightarrow \\infty } \\frac{1}{n} \\log \\left| \\int {\\cal O}(x) \\delta P_n(x) dx \\right|$ where the maximum is taken over all possible functions ${\\cal O}(x)$ and initial distributions $P_0(x)$ .", "If $\\Lambda < 1$ (given that $\\Lambda \\le 1$ ) the probability distribution $P_n(x)$ converges exponentially fast to the equilibrium distribution for large $n$ , i.e., $|P_n(x)- P_\\infty (x)| \\propto \\Lambda ^n \\propto e^{-n/\\tau }$ where $\\tau = - (\\log \\Lambda )^{-1} $ denotes the convergence time (in number of Monte-Carlo algorithm steps unit).", "The convergence rate $-\\log (\\Lambda ) >0$ is the figure of merit of the algorithm; the smaller the $\\Lambda $ , the larger the rate, the smaller the convergence time and the more efficient the sampling is.", "A quantity of central importance in the approach is the rejection probability $R(x)$ (see Appendix ), or more precisely the fraction of rejected moves per attempted jump: $R(x) = \\int _{-\\infty }^{\\infty } dy\\, w(y-x) \\left(1 - e^{-\\beta \\, \\left(U(y)- U(x)\\right)} \\right) \\, \\theta \\left(U(y)-U(x)\\right) , $ where $\\theta (z)$ is the Heaviside function: $\\theta (z)=1$ for $z> 0$ and $\\theta (z)=0$ for $z<0$ .", "Thus, the rejection probability $R(x)$ from the current location $x$ is zero if the new position $y$ occurs downhill.", "In the opposite case of an uphill move, it is $1- \\langle e^{-\\beta \\Delta U}\\rangle $ where the average is over all attempted jump lengths.", "The integral kernel of the Master equation is then given by: $F_\\beta (x,y) &= \\delta (x-y) R(x) + w(x-y) \\bigl [ \\theta \\left(U(y)-U(x)\\right) +e^{-\\beta \\,\\left(U(x)- U(y)\\right)}\\, \\theta \\left(U(x)-U(y)\\right) \\bigr ] .$ Averaging over the position of the particle yields the mean rejection probability $R_n= \\int R(x)\\, P_n(x)\\, dx$ which is monitored by default in all rejection-based algorithms.", "This is the quantity that the practitioner aims at keeping close to 50%, following a time honored rule of thumb stating that this provides efficient sampling [17], [32].", "In the limit $n\\rightarrow \\infty $ , this mean rejection probability approaches the stationary value $R_\\infty $ .", "Rigorous studies, in a one-dimensional harmonically confined setting with a Gaussian jump distribution, have found that the optimal acceptance probability $1-R_\\infty $ is close to 44%, while this quantity may decay when increasing space dimension [31].", "On intuitive grounds, one may expect a relation between $R(x)$ and the convergence rate of the algorithm.", "Indeed, starting from an arbitrary point $x_0$ at time $n=0$ , the density at time $n$ , given $x_0$ , can be written $P_n(x|x_0) = R(x_0)^n \\delta (x-x_0) + {p}_n(x|x_0)$ where ${p}_n(x|x_0)$ is a smooth function.", "Thus, an observable $\\cal O$ that would only measure the walker's presence in the immediate vicinity of $x_0$ , for instance ${\\cal O}_n(x_0) = {\\rm lim}_{\\epsilon \\rightarrow 0} \\int _{x_0 - \\epsilon }^{x_0 + \\epsilon } P_n(x) dx$ , would decay as $R(x_0)^n$ .", "The system as a whole cannot relax faster, and we obtain from Eq.", "(REF ) a lower bound for the convergence rate, corresponding to $\\Lambda >R(x_0)$ , which holds for all choices of $x_0$ : $\\Lambda \\ge \\max _{x_0} R(x_0) .$ Our objective is to study $\\Lambda $ as a 
function of $a$ , for a fixed choice of $U(x)$ and $f(z)$ .", "We expect $\\Lambda $ to be minimum at a well defined value $a=a_{\\text{opt}}$ .", "More precisely, for a given confining potential $U(x)$ and type of jumps $f(z)$ , the convergence rate and the resulting error are encoded in the spectral properties of the kernel $F_\\beta (x,y)$ in Eq.", "(REF ).", "We have attacked this question by four complementary techniques: the derivation of exact results, numerical diagonalization, numerical iteration of the Master equation, and direct Monte Carlo simulation of the random walk dynamics, with proper averaging over multiple realizations to gather statistics, see Appendix .", "We begin with a discretized approximation to the Master equation (REF ), for which Perron-Frobenius theorem shows that the equilibrium state, reached at large $n$ (formally $n\\rightarrow \\infty $ ), is unique [41]: it is given by $P_{\\infty }(x)$ .", "At any time, the probability density can furthermore be decomposed as $P_n(x) \\, =\\, \\sum _\\lambda {\\cal A}_\\lambda \\,{\\cal P}_\\lambda (x) \\, \\lambda ^n$ where the eigenvectors of $F_\\beta $ are denoted by ${\\cal P}_\\lambda (x)$ , and the eigenvalues $\\lambda $ can be proven to be real [36], see also Appendix .", "Indeed, detailed balance [16], [17], [15] allows to transform the Master equation into a self-adjoint problem, similarly to the mapping between the Fokker-Planck and Schrödinger equations [42].", "The precise form of the projection coefficients ${\\cal A}_\\lambda $ is not essential.", "Ordering eigenvalues in decreasing order ($\\lambda _0 > \\lambda _1 \\ge \\lambda _2\\ldots $ ), the eigenvalue $\\lambda _0=1$ is associated with equilibrium, with eigenvector $P_\\infty (x)$ .", "We also see that the asymptotic error $\\delta P_n$ behaves like ${\\cal P}_{\\lambda _1}(x)$ , and decays to 0 like $\\lambda _1^n$ (also meaning that $\\Lambda =\\lambda _1$ ).", "Finding the optimal $a$ is a minmax problem, where one should minimize $\\Lambda =\\lambda _1$ , i.e.", "the maximum eigenvalue, leaving aside the top (equilibrium) eigenvalue $\\lambda _0=1$ .", "Figure: The top panel shows the spectrum of F β F_\\beta for harmonic confinement U(x)=x 2 /2U(x) = x^2/2 as a function of jump amplitude aa,for a uniform jump distribution of range (-a,a)(-a,a).", "The color code, provided on the right-hand-side, is for theInverse Participation Ratio of the eigenvector associated to the eigenvalue displayed (see Appendix ).The upper envelope of the relaxation spectrum defines Λ\\Lambda , see Eq.", "(); shown by the red line, it reaches its minimum for a=a opt ≃3.33a=a_{\\text{opt}} \\simeq 3.33.This value coincides with the threshold a * a^* for localization.Here, 𝒩\\cal N, denoting the number of discrete relaxation modes(excluding the stationary state), is 0 for large jumps (a>a * a>a^*),while 𝒩{\\cal N} quickly grows as aa diminishes.Dashed lines show the bounds for the singular continuum,that appears in dark blue, see Eqs.", "() and ().The bottom panel is for the IPR associated to Λ\\Lambda , as a function of aa(same abscissa as the upper panel).The localization transition is signaled by the sharp jump at a=a * a=a^*.", "This threshold does not depend on the number of sites N d N_d, as long as N d N_d is large enough.", "Here N d =1000N_d=1000.The length unit is the thermal length, meaning the standard deviationof P ∞ (x)P_\\infty (x)." 
], [ "Relaxation to equilibrium and localization", "While the above results hold for the discretized version of Eq.", "(REF ), explicit analytical calculations of the spectrum for a number of potentials $U(x)$ reveal that the eigenvector decomposition (REF ) fails in the continuum limit.", "In addition to the discrete spectrum with well defined eigenfunctions, a continuum of eigenvalues appears, with singular localized eigenfunctions which in the continuum limit collapse to a point $x_0$ where they take a finite value.", "The corresponding eigenvalue is $R(x_0)$ .", "We call this continuum of singular eigenvalues the singular spectrum, which is therefore bounded from below and above by $\\min _{x} R(x)$ and $\\max _{x} R(x)$ .", "Equation (REF ) now takes the form $P_n(x) \\, =\\, \\sum _{\\lambda \\in \\lbrace \\lambda _{0} \\ldots \\lambda _{{\\cal N}} \\rbrace } {\\cal A}_\\lambda \\,{\\cal P}_\\lambda (x) \\, \\lambda ^n \\,+\\, {\\cal L}_n(x) ,$ where ${\\cal L}_n(x)$ stems from the singular continuum.", "Here, the discrete summation runs over a finite (and possibly small) number of $1+\\cal N$ terms: ${\\cal N}\\ge 0$ since the term $\\lambda _0=1$ is necessarily present in the expansion, to ensure the proper steady state.", "The remaining term ${\\cal L}_n(x)$ localizes at large times $n\\rightarrow \\infty $ around a finite number of points $x_l$ where the rejection rate $R(x)$ in (REF ) is maximal: $\\lim _{n \\rightarrow \\infty } {\\cal L}_n(x)/{\\cal L}_n(x_l) = 0$ for any $x \\ne x_l$ .", "This property of the localizing term ${\\cal L}_n$ is valid only for the non-discretized master equation and is thus most directly established by analytical means.", "From our analytical computations, two possible scenarios emerge: (i) ${\\cal N} > 0$ for all $a$ and (ii) ${\\cal N} = 0$ for $a > a^*$ where $a^*$ gives the position for the localization transition; $a^*$ marks the transition from a diffusion governed evolution to a phase where relaxation is limited by rejected moves.", "In case (i), the eigenvalue $\\lambda _1$ lies above the singular continuum and $\\Lambda =\\lambda _1$ .", "The error is ruled by a “regular” eigenmode akin to what would be found in the discretized approximation.", "In case (ii) on the contrary, $\\lambda _1$ merges with the singular continuum at $a=a^*$ and the error is dominated by the localizing term ${\\cal L}_n(x)$ .", "Numerical simulations suggest that this localized scenario (ii) is the generic case, see also Appendix .", "In Fig.", "REF , we illustrate the merging between regular and singular spectrum for the harmonic potential with a flat jump distribution.", "To distinguish numerically the regular spectrum as in Eq.", "(REF ) from the singular one, we have discretized $F_\\beta (x,x^{\\prime })$ into a matrix of size $N_d\\times N_d$ , and computed the spectrum.", "Two methods have then been employed, both relying on a large $N_d$ analysis.", "For the regular part, the spacing between successive eigenvalues stay non-zero as $N_d\\rightarrow \\infty $ while they do vanish in the singular part.", "Another signature can be found with the eigenvectors by computing the inverse participation ratio (IPR) (see Appendix for the definition), usually used to quantify localization of quantum states [43].", "For a regular eigenvalue with a well defined continuum eigenvector, the IPR$\\,\\rightarrow 0$ as $1/N_d$ for large $N_d$ , while the IPR is much larger within the singular continuum as evidenced by the color code in Fig.", "REF .", "At $a=a^*$ , this 
singular part crosses the regular $\\lambda _1$ branch, leading to a gap closure.", "For $a>a^*$ , the singular continuum is dominant and governs relaxation.", "In Fig.", "REF , $a^*$ is shown by an arrow.", "Furthermore here, the structure of the spectrum ensures that $a^*=a_{\\text{opt}}$ , see Fig.", "REF ; at this point, $\\Lambda (a)$ features a cusp.", "Quite remarkably, the acceptance probability $1-R_n$ at $a=a^*=a_{\\text{opt}}$ tends at long times towards $0.455$ , close to the 50% rule of thumb alluded to above.", "Figure: Scaled evolution of the error δP n (x)=P n (x)-P ∞ (x)\\delta P_n(x) = P_n(x) - P_\\infty (x) vs xx for different times nn(indicated by the color code on the right),for the same system as in Fig.", ", with the same choice of length unit.", "Initial P 0 (x)=(2π) -1/2 exp(-(x-1) 2 /2)P_0(x) = (2 \\pi )^{-1/2} \\exp (-(x-1)^2/2).Comparison between a=3.6>a * a = 3.6>a^* (main graph) and a=3<a * a = 3<a^* (inset).Although the values of aa and the convergence rates are similar in the two graphs, the asymptotic errors are significantly different.The critical nature of the parameter $a=a^*$ can be appreciated by the behavior of the IPR of the slowest decay mode, as displayed in Fig.", "REF -bottom.", "The large value of the IPR for $a>a^*$ indicates that $\\delta P_n$ ceases to be spread over the whole system, but rather gets more and more “pinned” onto a discrete set of points; in the present case, this set reduces to a single point, $x_l=0$ .", "This results in the central dip in the error $\\delta P_n(x)$ observed in the main graph in Fig.", "REF , that becomes more narrow as time $n$ increases (see below).", "Fig.", "REF also reveals that a complete change of symmetry goes with the crossing of $a^*$ .", "For $a<a^*$ , the longest lived perturbation in the system is antisymmetric, see the inset of Fig.", "REF : given the symmetry of the confining potentials considered ($U(x)=U(-x)$ ), such a mode takes indeed longer to relax than symmetric ones.", "This can be understood from the mapping of our problem to a Schrödinger equation, for small $a$ , see Appendix REF : the first excited state, meaning the $\\lambda _1$ branch, has only one zero and is anti-symmetric.", "On the other hand, for $a>a^*$ , $\\delta P_n$ becomes symmetric after a transient (see the evolution from an early asymmetric situation towards symmetry in Fig.", "REF ).", "Figure: Comparison between the exact calculation presented in Appendix and the numerical data for the scaling behavior of localization.", "Box potential confinement with w(η)=3(1+a -2 η 2 )θ(a-|η|)/(8a)w(\\eta ) = 3(1+a^{-2}\\eta ^2)\\theta (a-|\\eta |)/(8 a).Lengths are expressed in unit of the box size LL, convergence to the scaling function is shown for a=2.1a = 2.1 and P 0 (x)=2θ(1/2-|x|)P_0(x) = 2 \\theta (1/2-|x|).", "Our analytical expression for the scaling function ϕ(z)\\varphi (z) and the proof that 𝒩=0{\\cal N} = 0, for this choice of w(η)w(\\eta ), are obtained for a>2>a * ≃1.79a > 2 > a^* \\simeq 1.79.To gain more insight into the localization phenomenon and its dynamics, we studied analytically the Master equation for confinement in a box, i.e.", "when $U(x)=0$ for $|x|<L$ and $U(x)=\\infty $ for $|x|>L$ .", "Such a case is rich enough to display the generic phenomenology of localization, while remaining sufficiently simple to allow for the derivation of exact results for several jump distributions $w(\\eta )$ , see Appendix .", "For cases where $w(\\eta )$ is minimum at $\\eta =0$ , we proved that ${\\cal N} = 0$ for 
sufficiently large $a$ as in the case of harmonic confinement.", "As in Fig.", "REF , the localization transition then manifests as a progressive collapse of the error $\\delta P_n = P_n(x) - P_\\infty (x)$ onto the point where rejection probability is maximal ($x_l=0$ ), with a spread which decays as $1/\\sqrt{n}$ .", "More precisely, in the vicinity of this point, we obtained the asymptotic form $\\delta P_n(x) \\,=\\, \\Lambda ^n n^{-\\gamma } \\, \\varphi (x\\sqrt{n})$ where $\\varphi (z)$ is a regular scaling function, and the exponent $\\gamma $ depends on $w(\\eta )$ and $U(x)$ .", "We found that a scaling function ansatz with $\\Lambda = 1$ also describes the relaxation of a zero temperature Metropolis Monte-Carlo algorithms towards a minimum [44].", "In the zero temperature limit the steady state is a Dirac delta function localized at the minimum of the potential and such a scaling form evolving into a Dirac delta function can be expected.", "At finite temperature however, the steady state has a finite width which is given by the thermal length, and thus the scaling-function ansatz does not directly follow from the ground state.", "The difference between the two cases can also be seen from the vanishing integral $\\int \\varphi (x) dx = 0$ in (REF ) while this integral is normalized to unity at zero temperature.", "Figure REF shows that such a form is well obeyed in the simulations, and that $\\gamma =1/2$ for the case displayed, in full agreement with our exact treatment that also explicitly provides $\\varphi (z)$ in Appendix (Eq.", "(REF )), shown by the continuous line.", "Numerical evidence shows that for the harmonic potential, $\\gamma =0$ .", "Analytical studies of the box potential where $w(\\eta )$ is maximum at $\\eta =0$ , provide examples where we can prove that ${\\cal N} = 1$ , in the large $a$ limit.", "The localisation transition in the error can consequently not be seen for a generic observable, but special choices of the observable or initial conditions allow to reveal a hidden localization transition, even in this case." 
], [ "Analytical approximation to relaxation rates from Fokker-Planck eigenvectors", "We already mentioned that the critical amplitude $a^*$ separates two regimes, a regime $a < a^*$ where the dynamics is governed by the relaxation of diffusion eigenmodes and a regime $a > a^*$ where the relaxation is governed by the highest rejection probability.", "Surprisingly this knowledge provides a very precise approximation scheme to find quantitatively the full dependence of the relaxation rate $\\Lambda (a)$ on the jump length $a$ .", "In the limit of small $a$ , the master equation reduces to a Fokker-Planck equation, and it is possible to use the lowest eigenmodes of this Fokker-Planck equation to project the full master equation on a small finite dimensional-basis; the details of this procedure are described in Appendix and illustrated in Fig REF .", "We find that for $a < a^*$ , a very small number of diffusion eigenmodes provide a very accurate estimation of $\\Lambda (a)$ or good analytical approximations when the diagonalization of the reduced matrix is possible.", "On the contrary, for $a > a^*$ , the convergence of this procedure is very slow, and $\\Lambda (a)$ coincides with the maximum rejection probability.", "For a flat jump distribution $w(\\eta )$ in a harmonic potential, for which $a_{opt} = a^*$ , this procedure also provides an analytical estimate of the optimal mean acceptance probability, $1-R_\\infty \\simeq 0.455$ , close to values obtained by numerical diagonalization.", "A similar computation can be done for Gaussian jumps (see Appendix ), for which we get analytically $1-R_\\infty \\simeq 0.467$ , which improves the previously reported numerical estimate of $0.44$ [31], alluded to above.", "Figure: Harmonic confinement with flat jump distribution.", "Plot of the largest non-stationary eigenvalue as function of jump amplitude aa, obtained by 1) exact numerical diagonalization (dots) and 2) analytical approximation in the truncated Schrödinger equation basis, with increasing N s N_s, the number of anti-symmetric diffusion modes retained (see Appendix ).", "With N s =2N_s=2, the analytical predictions are already very accurate for a<a * a < a^*, convergence is very slow on the other side of the localization transition a>a * a > a^*An interesting issue is to assess how robust is the localization transition found: does it survive in higher dimensions or in the presence of interactions between particles?", "To investigate these, we have studied a) a non interacting model in dimensions 2 and 3, and b), an interacting system in dimension 1 and c) the situation where the confining potential features multiple local minima, see Appendix .", "In all cases, we found a localization transition, demonstrating its wider applicability.", "Analyzing the fate of the present localization transition for more complex potential landscapes, as found in disordered systems, is an interesting open problem." 
], [ "Conclusions", "To summarize, we have uncovered that a localization transition does generically take place in Monte Carlo sampling, for a critical value $a^*$ of the amplitude of the jump distribution.", "A central result of this paper is to show that at $a=a^*$ , a singular continuum takes over the regular spectrum as the leading relaxation mode.", "We found that below $a < a^*$ , the relaxation rate can be determined very accurately by the projection of the full master equation on the leading relaxation modes of the Fokker-Planck dynamics.", "This opens the way to analytic calculation for the relaxation rate.", "For $a > a^*$ , the convergence of this expansion becomes much slower and the relaxation rate is instead given by maximal rejection probability.", "This results in a dynamical collapse, evidenced by a sharp increase of the Inverse Participation Ratio at $a=a^*$ , reminiscent of Anderson localization [45].", "However the underlying physical pictures differ.", "In the Anderson scenario, the localization length is given by the mean free path of a disordered potential.", "Here, the error progressively shrinks to a point when increasing time $n$ , without any corresponding limiting eigenvector $\\psi (x)$ with non zero norm $\\int |\\psi (x)|^2 dx > 0$ .", "Thus our study shows an example of a well known Markov process whose relaxation is not determined by the contribution of discrete eigenmodes, but by a progressive localization (collapse) on discrete points.", "Furthermore, we found that $a^*$ , when it exists, coincides with the optimal jump length $a_{opt}$ , although we are not able to prove it.", "We may surmise that the localization phenomenon has been overlooked so far for the reason that the upper part of the spectrum, $\\Lambda (a)$ , which rules relaxation, is continuous for all $a$ including the transition point $a^*$ ; it is the derivative $d\\Lambda /da$ that is discontinuous at $a^*$ .", "Yet, the error incurred, due to unavoidable lack of convergence at finite time, does change nature when crossing $a^*$ : its symmetry, amplitude, and scaling are deeply affected.", "The understanding of the localization transition in Monte Carlo relaxation modes may help to avoid excess events on the localization sites in the applications of Monte Carlo random walks." 
], [ "Why an optimal jump amplitude?", "The relaxation time of the Metropolis algorithm, for a given functional form $w$ of the jumps (see Eq.", "(REF ) below), depends on the jump amplitude $a$ .", "On general grounds, this time should exhibit a non-monotonous behavior with a well-defined minimum at some specific amplitude $a_{\\text{opt}}$ (corresponding to a minimum convergence time $\\tau $ (minimum $\\Lambda $ , i.e.", "a maximum rate $-\\ln \\Lambda = 1/\\tau $ ).", "This is the so-called Goldilock’s principle [46].", "The rationale behind this expectation goes as follows: In the diffusive limit where $a$ is small, though most of the jumps are accepted, the particle moves over a limited region of space which results in a long time for exploring the full available space.", "Hence, we expect $\\tau $ to diverge, i.e.", "$\\Lambda \\rightarrow 1$ .", "We can be more specific, assuming a confinement potential of the form $U(x)=|x|^\\alpha $ with $\\alpha >0$ .", "At equilibrium, the walker's density $P$ will be concentrated within the thermal length $\\ell \\propto \\beta ^{-1/\\alpha }$ around the origin, and equilibrium will be reached after a characteristic time $\\tau $ such that $D\\tau = \\ell ^2$ , where $D$ is the diffusion coefficient.", "For our discrete time dynamics, we have $D\\propto a^2$ , so that we expect here $\\tau \\propto \\beta ^{-2/\\alpha } a^{-2}$ , meaning $\\Lambda -1 \\propto a^2$ In the opposite long jump limit with large $a$ , most of the moves are rejected and the particle hardly moves.", "As long as $w(0)$ is non-vanishing, increasing the jump amplitude $a$ simply reduces the displacement probability by a factor $1/a$ , while leading to the same sampling of phase space on the scale of the confinement length $\\ell \\ll a$ .", "Hence we expect the system to relax very slowly, i.e., the relaxation time $\\tau $ to diverge as $\\tau \\propto a$ , so that $\\Lambda -1 \\propto 1/a$ .", "This scaling law can only be altered for $w(0)=0$ .", "We thus expect an optimal finite jump amplitude $a=a_{\\text{opt}}$ , for a given functional form $w$ , where $\\Lambda (a)$ is minimal and hence the convergence is the fastest." 
], [ "The formalism", "From the dynamics defined in the main text, we can write the Master equation obeyed by the walker's density as $P_n(x) = \\int _{-\\infty }^{\\infty } dx^{\\prime }\\, P_{n-1}(x^{\\prime })\\, w(x-x^{\\prime })\\, {\\rm min}\\left(1, e^{-\\beta \\, \\left(U(x)-U(x^{\\prime })\\right)}\\right) +\\left[1- \\int _{-\\infty }^{\\infty } dy\\, w(y-x)\\, {\\rm min}\\left(1, e^{-\\beta \\, \\left(U(y)-U(x)\\right)}\\right)\\right]\\, P_{n-1}(x)\\, ,$ where $w(\\eta )$ is the jump distribution and $U(x)$ the confining potential.", "At a given time step $n$ , the first term describes the probability flux to $x$ from all other positions $x^{\\prime }$ .", "The second term is for the probability that all attempted moves made by the particle at $x$ (to another arbitrary position $y$ ) are rejected.", "It proves convenient to replace the `min' function above by the identity ${\\rm min}\\left(1, e^{-\\beta \\, \\left(U(x)-U(x^{\\prime })\\right)}\\right)=\\theta \\left(U(x^{\\prime })-U(x)\\right) + e^{-\\beta \\, \\left(U(x)-U(x^{\\prime })\\right)}\\, \\theta \\left(U(x)-U(x^{\\prime })\\right)\\,$ where $\\theta (z)$ is the Heaviside theta function.", "The Master equation (REF ) can then be written as $P_n(x)= \\int _{-\\infty }^{\\infty } F_\\beta (x,x^{\\prime })\\, P_{n-1}(x^{\\prime })\\, dx^{\\prime }$ where the temperature dependent kernel is given by $F_\\beta (x,x^{\\prime }) &= & w(x-x^{\\prime })\\, \\left[\\theta \\left(U(x^{\\prime })-U(x)\\right)+ e^{-\\beta \\,\\left(U(x)- U(x^{\\prime })\\right)}\\, \\theta \\left(U(x)-U(x^{\\prime })\\right)\\right] \\nonumber \\\\& & + \\delta (x-x^{\\prime })\\,\\underbrace{\\left[ 1- \\int _{-\\infty }^{\\infty } dy\\, w(y-x^{\\prime })\\, \\left[ \\theta \\left(U(x^{\\prime })-U(y)\\right)+ e^{-\\beta \\,\\left(U(y)- U(x^{\\prime })\\right)}\\, \\theta \\left(U(y)-U(x^{\\prime })\\right)\\right]\\right]}_{R(x^{\\prime })} \\, .$ The kernel $F_\\beta (x,x^{\\prime })$ can be interpreted as the probability of a jump from $x^{\\prime }$ to $x$ at inverse temperature $\\beta $ .", "The term in square brackets on the second line of Eq.", "(REF ) is the rejection probability, that can be recast in $R(x^{\\prime }) = \\int _{-\\infty }^{\\infty } dy\\, w(y-x^{\\prime }) \\left(1 - e^{-\\beta \\, \\left(U(y)- U(x^{\\prime })\\right)} \\right) \\, \\theta \\left(U(y)-U(x^{\\prime })\\right) .$ Written as such, it directly expresses the fact that among all attempted moves from $x^{\\prime }$ to $y$ , only a fraction $1-e^{-\\beta \\, \\left(U(y)- U(x^{\\prime })\\right)}$ of those leading to an energy increase ($U(y)>U(x)$ ), is effectively rejected.", "All others attempts are accepted and thus do not contribute to $R(x^{\\prime })$ .", "A first check for the validity of the Master equation is that it should conserve the total probability $\\int _{-\\infty }^{\\infty } P_n(x)\\, dx=1$ .", "From (REF ), this means that kernel $F_\\beta (x,x^{\\prime })$ must satisfy the condition $\\int _{-\\infty }^{\\infty } F_\\beta (x,x^{\\prime })\\, dx=1 \\quad {\\rm for} \\,\\, {\\rm all}\\,\\, x^{\\prime } \\, .$ Indeed, substituting $F_\\beta (x,x^{\\prime })$ from (REF ) into the integral (REF ), it is easy to check that it satisfies the probability conservation for all $x^{\\prime }$ .", "Next, we verify explicitly that the Master equation (REF ), with $F_\\beta (x,x^{\\prime })$ given in (REF ), admits, as $n\\rightarrow \\infty $ , a stationary solution that is of the Gibbs-Boltzmann equilibrium form $P_\\infty (x) \\,=\\, \\frac{1}{Z} 
e^{-\\beta U(x)},$ where the partition function $Z$ is a normalization constant.", "Assuming a stationary solution exists as $n\\rightarrow \\infty $ in (REF ), it must satisfy the integral equation $P_{\\infty }(x)= \\int _{-\\infty }^{\\infty } F_\\beta (x,x^{\\prime })\\, P_{\\infty }(x^{\\prime })\\, dx^{\\prime } \\, .$ To verify this equality, we substitute $ P_{\\infty }(x^{\\prime })= (1/Z)\\, e^{-\\beta \\, U(x^{\\prime })}$ on the right hand side (rhs) of (REF ) and use the explicit form of $F_\\beta (x,x^{\\prime })$ from (REF ).", "By writing down each term on the rhs explicitly, it is straightforward to check that indeed for arbitrary symmetric jump distributions such that $w(x-x^{\\prime })=w(x^{\\prime }-x)$ , the rhs gives (after a few cancellations) $(1/Z) e^{-\\beta \\, U(x)}$ for arbitrary confining potential $U(x)$ .", "This is of course expected since the Metropolis rule indeed does satisfy detailed balance with respect to the Gibbs-Boltzmann stationary state." ], [ "Transformation to a self-adjoint problem", "Solving the Master equation (REF ) analytically for arbitrary potential is out of reach.", "A first difficulty one encounters is that the kernel $F_\\beta (x,x^{\\prime })$ in (REF ) is non-symmetric under the exchange of $x$ and $x^{\\prime }$ : the integral operator $F_\\beta (x,x^{\\prime })$ is not self-adjoint.", "This problem can be circumvented by applying the following `symmetrizing' trick [42].", "Let us first define a new quantity $Q_n(x)$ related simply to $P_n(x)$ via the relation $P_n(x)= e^{-\\beta \\, U(x)/2}\\, Q_n(x) \\, .$ Substituting this relation in (REF ), we see that $Q_n(x)$ satisfies the following integral equation $Q_n(x)= {\\widehat{K}}_\\beta Q_{n-1}(x) = \\int _{-\\infty }^{\\infty } K_\\beta (x,x^{\\prime })\\, Q_{n-1}(x^{\\prime })\\, dx^{\\prime }$ where the action of the integral operator ${\\widehat{K}}_\\beta $ is described by its kernel $K_\\beta (x,x^{\\prime })$ : $K_\\beta (x,x^{\\prime }) = w(x - x^{\\prime }) e^{-\\beta | U(x) - U(x^{\\prime }) |/2} + \\delta (x - x^{\\prime }) R(x)$ and the rejection probability $R(x)$ is defined in Eq.", "(REF ).", "Thus, for symmetric jump distribution $w(x-x^{\\prime })=w(x^{\\prime }-x)$ , $K_\\beta (x,x^{\\prime })$ is symmetric and we can consider $\\widehat{K}_\\beta $ as a real self-adjoint integral operator (operating on the real line) whose matrix element $\\langle y|\\widehat{K}_\\beta |y^{\\prime }\\rangle = K_\\beta (y,y^{\\prime })$ is given by Eq.", "(REF ).", "Besides, Eq.", "(REF ) admits a stationary solution $Q_{\\infty }(x)= \\frac{1}{Z}\\, e^{-\\beta \\, U(x)/2} .$ The solution of the integral equation (REF ) can be written as a linear combination of the eigenmodes of the operator $\\widehat{K}_\\beta $ , i.e., $Q_n(x)= \\sum _{\\lambda } {\\cal A}_\\lambda \\, \\psi _{\\lambda }(x)\\, \\lambda ^n$ where $\\psi _{\\lambda }(x)$ satisfies the eigenvalue equation $\\int _{-\\infty }^{\\infty } K_\\beta (x,x^{\\prime })\\, \\psi _{\\lambda }(x^{\\prime })\\, dx^{\\prime }= \\lambda \\, \\psi _{\\lambda }(x)\\,$ and the ${\\cal A}_\\lambda $ 's are arbitrary at this point.", "Consequently, from Eq.", "(REF ), $P_n(x) \\,=\\, \\sum _\\lambda {\\cal A}_\\lambda \\, \\psi _\\lambda (x) \\,e^{-\\beta U(x)/2} \\lambda ^n \\,=\\, \\sum _\\lambda {\\cal A}_\\lambda \\, {\\cal P}_\\lambda (x) \\lambda ^n \\quad \\hbox{with} \\quad {\\cal P}_\\lambda (x) \\,=\\, \\psi _\\lambda (x) \\,e^{-\\beta U(x)/2} ,$ as written in Eq.", "(9) in the main text.", "Since the operator 
$\\widehat{K}_\\beta $ is real self-adjoint, both its eigenvalues and eigenvectors are real valued [36].", "This property extends to the operator defined from $F_\\beta (x,x^{\\prime })$ , since $e^{-\\beta U(x)/2} \\,F_\\beta (x,x^{\\prime }) \\,=\\, e^{-\\beta U(x^{\\prime })/2} \\,K_\\beta (x,x^{\\prime }).$ Having a real spectrum is a non-trivial property, as the eigenvalues of Frobenius-Perron type of operators to which the original integral equation Eq (REF ) belongs are in general complex numbers inside the unit circle $|\\lambda |<1$ .", "The detailed balance rules which are used to derive the Metropolis algorithm actually constrain the eigenvalue of the associated integral equation to be real (at non zero temperatures) [36].", "The eigenvalue $\\lambda _0=1$ corresponds to the steady state solution $Q_{\\infty }(x)$ in (REF ); all other eigenvalues are real and strictly below 1.", "We have labeled the spectrum so that $1>\\lambda _1 \\ge \\lambda _2\\ldots $ .", "A particular interest goes into the eigenvalue $\\lambda _1$ that is closest to 1 from below, since it rules the long time dynamics." ], [ "The diffusive limit: Schrödinger reformulation and symmetry", "The distribution $w$ of attempted jumps is taken of the form (REF ) with $a$ representing a characteristic length.", "The limit of small $a$ is informative: the original Master equation reduces to a diffusive-like Fokker-Planck equation [42].", "In line with our preceding treatment, it is more convenient to work with the self-adjoint dynamics, which is described by an equivalent Schrödinger equation, as we proceed to show.", "For $a\\rightarrow 0$ , it is possible to Taylor-expand the eigenvalue equation (REF ).", "Introducing the second moment of the jump distribution $\\sigma ^2 &= \\int dy \\; y^2 w(y) ,$ and making use of the identity $\\int dy \\; w(x - y) (x - y) = 0$ together with the symmetry relations, we get $\\int dy \\; w(x - y) \\frac{(x-y)^2}{2} {\\rm sign}( U(x) - U(y) ) = \\int dy \\; w(x - y) \\frac{(x-y)^2}{2} {\\rm sign}( U^{\\prime }(x) (x -y) ) = 0$ valid when $U^{\\prime }(x) \\ne 0$ .", "These cancellations stem from the symmetry $w(x)=w(-x)$ .", "We thereby get: $(1 - \\lambda ) \\psi _\\lambda (x) &= \\sigma ^2 \\left( -\\frac{1}{2} \\psi _\\lambda ^{\\prime \\prime }(x) + \\frac{\\beta ^2}{8} U^{\\prime }(x)^2 \\psi _\\lambda (x) - \\frac{\\beta }{4} U^{\\prime \\prime }(x) \\psi _\\lambda (x) \\right)$ The relaxation rates of this equation can thus be determined from the eigenvalues $\\epsilon _n$ and eigenvectors of the effective Schrödinger equation ${\\widehat{H}} \\psi = -\\frac{1}{2} \\psi ^{\\prime \\prime }(x) + \\frac{\\beta ^2}{8} U^{\\prime }(x)^2 \\psi (x) - \\frac{\\beta }{4} U^{\\prime \\prime }(x) \\psi (x) = \\epsilon _n \\psi (x) .$ The connection reads $\\lambda _n \\, =\\, 1 - \\sigma ^2 \\epsilon _n,$ providing an explicit expression for the spectrum $\\lbrace \\lambda _n\\rbrace $ .", "We stress that a truncation of the Taylor expansion behind the derivation of the Schrödinger equation is justified if the length-scale on which the wavefunctions vary is large compared to $a$ , the typical amplitude of the jumps generated by $w(\\eta )$ .", "Thus Eq.", "(REF ) is not valid in the limit of the high energy modes $\\epsilon _n$ of the Schrödinger equation.", "This limitation of the Schrödinger picture can be anticipated from the fact that the eigenvalues of the original Master equation are in the interval $\\lambda \\in [-1,1]$ while the eigenvalues predicted by Eq.", "(REF ) 
extend to all the range $(-\\infty , 1]$ .", "The ground state of the Hamiltonian Eq.", "(REF ) has a vanishing ground state eigenvalue $\\epsilon _0 = 0$ with an eigenvector given by $\\psi _0 = e^{-\\beta U(x)/2}$ .", "This eigenvector describes the equilibrium probability distribution and is identical to the ground state of the original Eqs.", "(REF ,REF ), without the assumption of a small jump length.", "Note that since the original confining potential is symmetric in $x$ (even), so is the Schrödinger effective potential in Eq.", "(REF ), $ \\beta ^2 U^{\\prime }(x)^2 /8 - \\beta U^{\\prime \\prime }(x)/4$ .", "The Schrödinger reformulation then allows to understand why the longest lived eigenmode, for small $a$ , is antisymmetric: it corresponds to the first excited state, with an eigenfunction featuring a unique zero." ], [ "Analytical solutions in the truncated Schrödinger eigenbasis", "Since Eq.", "(REF ) is a Schrödinger equation, its (normalized) excited state eigenvectors $\\psi _n(x) \\;(n \\ge 1)$ are all orthogonal to $\\psi _0(x)$ and provide a natural basis for a variational estimation of the relaxation rate.", "Indeed, the definition of $\\Lambda $ in the main text as the leading relaxation mode (upper value of the relaxation spectrum, leaving aside the top eigenvalue $\\lambda =1$ corresponding to the equilibrium state) can be recast as $\\Lambda \\, =\\, \\max _{\\Phi \\perp \\psi _0}\\frac{\\int dy\\, \\int dy^{\\prime }\\, \\Phi (y) K_\\beta (y,y^{\\prime }) \\Phi (y^{\\prime })}{\\int \\Phi ^2(y)\\, dy} .$ As a consequence, by restricting to the first $N$ excited states (which are perpendicular to the ground state $\\psi _0(x)$ ), we get a lower bound in the form $\\Lambda &\\ge \\max _{c_1,...c_{N_s}} \\frac{\\int dy\\, \\int dy^{\\prime }\\, \\Phi (y) K_\\beta (y,y^{\\prime }) \\Phi (y^{\\prime })}{\\int \\Phi ^2(y)\\, dy} \\\\&\\Phi = c_1 \\psi _1 + ... 
+ c_{N_s} \\psi _{N_s}$ When the Schrödinger equation limit is valid, Eqs.", "(REF )-(REF ) allow to approximate the relaxation rates of the Metropolis algorithm from the eigenvalues of the Schrödinger equation: $\\lambda _n = 1 - \\sigma ^2 \\epsilon _n .$ Upon increasing of the typical size of jump length, the operator $\\widehat{K}_\\beta $ will mix different Schrödinger eigenmodes and this estimate will no longer be valid.", "Solving the present optimization problem is equivalent to finding the largest eigenvalue of the reduced $N_s\\times N_s$ matrices $K^{(N_s)}$ with matrix elements $K_{nm} = \\int dy\\, \\int dy^{\\prime }\\, \\psi _n(y) K_\\beta (y,y^{\\prime }) \\psi _m(y^{\\prime }),$ with the truncation $1 \\le n, m \\le N_s$ where the positive integer $N_s$ gives the number of retained eigenfunctions.", "We will show in section that with a few modes only, very good quantitative estimates for $\\Lambda $ can be obtained by this approach, even where Eq.", "(REF ) is no longer valid, far from the small jump amplitude limit.", "In cases where the potential $U(x)$ is even (as assumed here), the eigenbasis $\\psi _n(x)$ will split into symmetric and anti-symmetric eigenfunctions.", "The master equation kernel $K_\\beta $ inherits the symmetry properties of the potential $U(x)$ and the matrix elements Eq.", "(REF ) will be non-zero only for wavefunctions from the same parity.", "The truncated matrix will thus split into a direct sum of even-even and odd-odd matrices.", "The mapping to the Schrödinger equation ensures that at least in the small jump limit, $\\Lambda $ will be in the odd sector, but we will show in section an example where this is not necessarily true for large $a$ ." ], [ "Potentials, sampling choice, and observables", "The claims put forward in the main text rely on the study of a number of confining potentials of the form $U(x) \\propto |x|^\\alpha $ , with $\\alpha >0$ .", "Some emphasis has also been put in the study of confinement by hard walls, the box potential, where $U(x)=0$ for $x\\in [-L,L]$ and $U(x)=\\infty $ for $|x|>L$ .", "In these potential landscapes, we have changed the sampling method, varying the distribution $f(\\eta )$ of attempted jumps.", "Scaling out the jump's typical length $a$ , we obtain the dimensionless distribution $f(z)$ : $w(\\eta ) \\,=\\, \\frac{1}{a} f\\left(\\frac{\\eta }{a}\\right) .$ Different choices were made, symmetric for simplicity ($f(z)=f(-z)$ ): Gaussian distribution of jumps $f(z) \\, = \\, \\frac{1}{\\sqrt{2\\pi }} \\,e^{-z^2/2}$ Exponential distribution $f(z) \\, = \\, \\frac{1}{2} e^{-|z|}$ Flat distribution $f(z) \\, = \\, \\theta \\left(\\frac{1}{2}-|z|\\right)$ Other more specific choices, as introduced to analyze the box confinement, see section .", "In order to study convergence to equilibrium, it is important to pay attention to the symmetry of the observables used, for it affects relaxation rates.", "This can be understood from the Schrödinger reformulation, where excited states of increasing order are alternatively even and odd in $x$ , while their energy is directly related to the relaxation rate, see Eq.", "(REF ).", "Therefore, we can use even observables (with even initial conditions) to suppress a slower relaxation rate corresponding to an odd mode, allowing to estimate $\\lambda _1$ and $\\lambda _2$ from sampling.", "In particular, we measured $\\mathcal {O}_1(x) = (x - 0.5)^2, \\quad \\hbox{ and } \\quad \\mathcal {O}_2(x) = |x| .$" ], [ "Probing localization with the Inverse Participation Ratio", 
"Since the transition we identify amounts to a localization of the convergence error onto well defined positions, it is essential to discriminate delocalized states, from localized ones.", "To this end, we discretize the integral in the Master equation Eq.", "(REF ) into a sum of $N_d$ terms, with a running position index to denote lattice sites $1 \\le i \\le N_d$ .", "We then introduce the inverse participation ratio for an eigenvector $\\Psi _\\lambda (x)$ as ${\\rm IPR}(\\lambda ) = \\left.", "\\sum _{i=1}^{N_d} |\\Psi _\\lambda (i)|^4 / \\left( \\sum _i |\\Psi _\\lambda (i)|^2 \\right)^{2}\\right.", ".$ This quantity can vary between two extremes.", "If the eigenfunction is completely delocalized over the whole system, so that $\\Psi _\\lambda (i)$ is a constant (normalization is irrelevant here), then $\\text{IPR}(\\lambda )=1/N_d$ , with $N_d \\gg 1$ .", "If on the other hand, $\\Psi _\\lambda (i)$ vanishes on all sites but one, then $\\text{IPR}(\\lambda )=1$ , irrespective of $N_d$ .", "If $\\Psi _\\lambda (i)$ , the discretization of an eigenvector $\\psi _\\lambda (x)$ , is well defined in the continuum limit, then $\\text{IPR} \\rightarrow 0$ as $N_d^{-1}$ .", "On the other hand, if part of the eigenfunction localizes, a slower decay as a function of $N_d$ will be observed and the discrete eigenfunction $\\Psi _\\lambda (i)$ will not converge to a well defined continuum limit." ], [ "Numerical diagonalization", "Numerical diagonalization of the discretized form of the Master equation Eq.", "(REF ) allows to find the spectrum of eigenvalues.", "The master equation was discretized by a uniform mesh with $N_d$ sites.", "The integration was replaced by a sum over accessible neighbors, ensuring probability conservation.", "For particles in a box $x \\in [-L,L]$ , the first and last points of the mesh were set to $-L$ and $L$ respectively.", "For the harmonic potential, the first and last points were set to $\\pm X_{\\text{max}}$ where $X_{\\text{max}}$ is the largest $|x|$ allowed by the mesh.", "The results in Fig.", "1 from the main text were obtained for $X_{\\text{max}} = 10$ (in units of thermal length in the harmonic potential).", "We increased $X_{\\text{max}}$ up to 30 to check that the results were independent on this choice of $X_{\\text{max}}$ .", "To obtain eigenvalues and eigenvectors, we used the diagonalization routines from the eigen++ library.", "To avoid the appearance of spurious complex eigenvalues due to rounding errors in diagonalization algorithms, we took advantage of detailed balance to transform the kernel of the integral Master equation into its symmetric form Eq.", "(REF ).", "Considering the fast increase of the numerical time required for full diagonalization with matrix size, we used this approach for $N_d \\le 10^4$ ." ], [ "Numerical iteration of the Master equation", "To study the relaxation of the error $\\delta P_n(x)$ , it is also possible to follow the evolution of a fixed initial state by successive iterations of the Master equation.", "This approach is computationally less demanding than full diagonalization.", "With this method, we ran simulations up to $N_d= 2\\times 10^5$ ." 
], [ "Monte Carlo simulations", "To put our analytical calculations to the test and assess the accuracy of the predicted bounds, we have directly simulated the dynamics defined by the Master equation.", "The Metropolis rule, spelled out in the main text, defines a Markov chain which can be readily simulated by means of classical Monte Carlo.", "For large enough time $n$ , equilibrium will be reached and the walker's position will sample the Gibbs-Boltzmann distribution (REF ).", "The sampling scheme obeys detailed balance [16], which guarantees the existence of a steady state, that is furthermore unique for an ergodic irreducible chain [18].", "We are interested in the long-time approach towards the equilibrium distribution.", "To gather statistics, we perform the simulation until $n=30$ typically, and repeat this for $m=10^{10}$ or $10^{11}$ independent samples.", "At every time step $n$ , we compute a number of observables, see section REF .", "An observable $\\mathcal {O}$ is then averaged over all $m$ samples at fixed time $n$ , leading to $\\overline{\\mathcal {O}}$ .", "$\\overline{\\mathcal {O}}(n) \\,=\\, \\frac{1}{m} \\sum _{i=1}^m {\\cal O}^{(i)}(n)$ where the observable measured at time $n$ in the $i$ th sample is ${\\cal O}^{(i)}(n).$ Our Monte Carlo estimates of the largest eigenvalue are obtained by fits to the deviation from the equilibrium value at time $n$ of the form $|\\overline{\\mathcal {O}}(n) - \\left< \\mathcal {O} \\right>_\\text{eq}| = c_1 \\lambda _1^n + c_2 \\lambda _2^n$ , where $\\left< \\mathcal {O} \\right>_\\text{eq}$ is the equilibrium value, reached after long times.", "We exclude the first values (typically $n<5$ ) to minimize influence of transient behavior and $c_1$ and $c_2$ are free constants.", "Technically, we use an analytical value for $\\left< \\mathcal {O} \\right>_\\text{eq}$ , if known, or Monte Carlo results at $n=200$ , where the statistical error dominates over the systematic deviation.", "Performing this procedure for multiple jump distributions parametrized by $a$ allows us to gather measurements of relaxation rates, which we can compare to our analytical results and the other computational approaches.", "For a reliable fit, it is necessary to have good estimates of the standard errors of the measured mean values; Welford's algorithm has been used [47].", "The acceptance probability is computed during an independent simulation of a single particle over $1.1\\cdot 10^6$ Metropolis steps, where the first $10^5$ steps are ignored for the average.", "We verify the quality of the three numerical approaches by comparing them to each other, and to the analytical results for the case of the box potential.", "Figure REF shows the estimates of $\\Lambda $ obtained from the Monte Carlo simulations and $\\lambda _1$ obtained from the diagonalization (the upper envelope of the spectrum).", "Figure: Convergence rate Λ\\Lambda vs. 
jump amplitude aa for a harmonic confinement (U(x)=x 2 /2U(x) = x^2/2)and a flat w(η)w(\\eta ) distribution as in Eq.", "() (cf.", "Fig.", "1 of the main text).Comparison between the direct Monte Carlo simulations measure (MC) and the numerical diagonalization technique.", "Both methods agree very well.", "The Monte Carlo approach needs to look at symmetric and asymmetric observables to measure the different branches of the spectrum.The observables used are provided in Eq.", "().For a<a * ≃3.33a<a^*\\simeq 3.33, using the asymmetric observable𝒪 1 {\\cal O}_1 provides a very good estimate of the largest eigenvalue λ 1 =Λ\\lambda _1=\\Lambda .", "For a<a * a<a^*, using the even observable 𝒪 2 {\\cal O}_2 yields the second largest eigenvalueλ 2 \\lambda _2.", "For a>a * a>a^*, the largest of both(λ\\lambda from 𝒪 2 ){\\cal O}_2) gives a very good estimationof Λ\\Lambda .", "The singular continuum is shown by the grey region.As in the main text, aa is given in units of the thermal length at equilibrium.Figure REF shows the shape of the deviation $\\delta P_n(x)$ from the equilibrium distribution at a finite time, for uniform jumps in the harmonic potential.", "The direct Monte Carlo results and the results from the iteration of the Master equation are compatible within statistical fluctuations.", "This lends a high confidence in the results of the iteration for longer times, which are shown in the main manuscript.", "Figure: Shape of the rescaled deviation δP n (x)\\delta P_n(x) vs xx from the equilibrium distributionat finite times (here n=15n=15).", "The symbols show results of direct Monte Carloand the lines show the results of the iteration of the Master equation; both methodsare in good agreement.", "Same confinement and jump distribution as in Fig.", "." 
], [ "The box potential", "Obtaining exact analytical results in the general case of a potential $U(x)$ in $|x|^\\alpha $ seems out of reach.", "Yet, the box potential, where the random walker moves freely between hard walls at $\\pm L$ , is a useful model system that presents the whole range of phenomena observed generically.", "A key aspect lies in the choice of the jump distribution scaling function $f(z)$ , that can lead to any of the two scenarios mentioned in the main text: a a gapped spectrum for which the discrete branch $\\lambda _1$ is above the singular continuum, for all jump amplitudes $a$ (regular case, where the singular continuum, although present, does not play a role in the long time error, and there is no localization); b a gapless spectrum where the singular continuum becomes the dominant relaxation mode for $a>a^*$ .", "This situation b where localization appears is the generic case.", "This is why we focused on case b in the main text.", "It is useful here to introduce the late-time rejection probabilities $R_a(0)$ at $x=0$ , and $R_a(\\text{edge})$ at the system's edge, meaning $x=\\pm L$ in the box case.", "Both depend on $a$ .", "For $a\\rightarrow 0$ , we have $R_a(0)<R_a(\\text{edge})$ : all moves from $x=0$ are accepted (vanishing rejection probability), while only half of them are, starting from the edge (both in the box case, and when exponent $\\alpha >1$ , leading to a convex-up confining potential).", "A careful inspection of all the numerical data we gathered shows that case a corresponds to $R_a(0)<R_a(\\text{edge})$ for all $a$ ; b is for the situation where $R_a(0)$ and $R_a(\\text{edge})$ do cross for $a=a^*$ , so that $R_a(0)> R_a(\\text{edge})$ for $a>a^*$ .", "It is then straightforward to realize that the behavior of $w(\\eta )$ at small $\\eta $ discriminates the two regimes: if $w(\\eta )$ decreases when increasing $|\\eta |$ , we have case a; if $w(\\eta )$ increases when increasing $|\\eta |$ , we have case b.", "We considered the family of polynomial $w$ -functions, for instance piecewise linear or quadratic such as $w^{(1)}(\\eta ) &= \\frac{1}{a (2 b + c)} \\left(b + c \\frac{a-|\\eta |}{a} \\right) \\theta (a-|\\eta |)\\\\w^{(2)}(\\eta ) &= \\frac{1}{a(2 b + 4 c/3)}\\left(b + c \\frac{a^2-\\eta ^2}{a^2} \\right) \\theta (a-|\\eta |)$ parameterized by the constants $b$ and $c$ , in addition to the jump size $a$ : positive values of $c$ define convex-down functions, pictorially written $w^\\cap (\\eta )$ and associated to case a; $c<0$ defines convex-up functions, denoted $w^\\cup (\\eta )$ , associated to case b.", "We take hereafter $L=1$ , without loss of generality." 
], [ "Numerical results", "We show in Fig.", "REF the spectrum of $F_\\beta (x,x^{\\prime })$ obtained by numerical diagonalization with the parabolic jump distribution $w^{(2)}(\\eta )$ , either of the type $w^\\cap $ or $w^\\cup $ .", "The distinction between the gapped (a, with $w^\\cap $ ) and gapless (b, with $w^\\cup $ ) cases appears.", "At variance with case a, b shows a regime for $a>a^*\\simeq 1.79$ where the singular continuum defines the dominant relaxation mode, so that localization ensues.", "The crossing of the curves $R_a(0)$ and $R_a(\\text{edge})$ for $a$ slightly below $a^*$ is also visible.", "For the present box potential, $U(x)$ either vanishes inside the box, or diverges outside.", "Hence, the value of inverse temperature is irrelevant.", "We have checked that the qualitative results remain unchanged for all monotonous (for $\\eta > 0$ ) jump distributions $w(\\eta )$ even in $\\eta $ , in particular using the piecewise linear distribution $w^{(1)}(\\eta )$ .", "Thus for the box potential $U(x) = 0$ ($x \\in [-1,1]$ ) the presence or absence of localization is determined by whether $w(\\eta )$ is either minimum or maximum at $\\eta =0$ ." ], [ "Exact results on eigenvalues of the Monte-Carlo Master equation for a box potential", "We have seen above that the box potential subsumes the gapless/gapped spectra dichotomy, corresponding to the a absence/b presence of localization.", "Besides, the shape of the gapless spectrum shown in Fig.", "REF is closely reminiscent of its counterpart presented in the main text.", "We thus take advantage of the fact that exact results can be obtained with the box confinement, to shed new light on the localization phenomenon and its scaling properties.", "For a box potential $U(x) = 0$ ($x \\in [-1,1]$ ) the eigenvalue problem of the Monte-Carlo Master equation simplifies into: $\\lambda \\Psi _\\lambda (x)= \\int _{-1}^1 \\Psi _\\lambda (y) \\,w(x-y)\\, dy + \\left[1- \\int _{-1}^1 w(y-x)\\,dy\\right]\\, \\Psi _\\lambda (x)$ where $\\lambda $ is the eigenvalue and $\\Psi _\\lambda (x)$ is the eigenvector.", "For $a > 2$ and the two choices $w^{(p)}(\\eta )$ ($p=1,2$ ) from Eq.", "(REF ), this equation reduces to a second order differential equation which can be solved to yield a single eigenvalue $\\lambda < 1$ .", "This eigenvalue can be written in the form: $\\lambda ^{(p)}_1 = R^{(p)}(1) + (k^{(p)}-1)[R^{(p)}(1) - R^{(p)}(0)].$ Here, the rejection probability $R^{(p)}(x)$ , with index $p = 1$ and $p = 2$ is given by $R^{(p)}(x) = 1 - \\int _{-1}^{1} w^{(p)}(x-y) dy .$ The constants $k^{(p)}$ read $k^{(1)} \\simeq 1.439$ and $k^{(2)} \\simeq 1.356$ ; they are the solutions of $\\frac{1}{\\sqrt{k^{(1)}}} \\, {\\rm arccoth} \\sqrt{k^{(1)}} = 1 \\;,\\; \\sqrt{k^{(2)}}\\, {\\rm arccoth} \\sqrt{k^{(2)}} = \\frac{3}{2} .$ In both cases, $k^{(p)} > 1$ , which implies that if $R^{(p)}(1) > R^{(p)}(0)$ , the eigenvalue $\\lambda _1^{(p)} > R^{(p)}(1) = \\max _x R^{(p)}(x)$ (case a).", "On the contrary, if $R^{(p)}(1) < R^{(p)}(0)$ , $\\lambda _1^{(p)} < R^{(p)}(1) = \\min _x R^{(p)}(x)$ .", "Thus, it is indeed the comparison between $R^{(p)}(1)$ and $R^{(p)}(0)$ which determines if $\\lambda _1^{(p)}$ is above or below the singular continuum, thereby discriminating between a and b.", "Evaluating the integrals in Eq.", "(REF ) we find the explicit expressions (we remind that they are valid for $a>2$ ): $\\lambda _{1}^{(1)} &= \\frac{-2 a b + 2 a^2 b + c - 2 a c + a^2 c + k^{(1)} c}{a^2 (2 b + c)} \\\\\\lambda _{1}^{(2)} &= \\frac{-3 a^2 b + 3 
a^3 b + c - 3 a^2 c + 2 a^3 c + 3 k^{(2)} c}{a^3 (3 b + 2 c)} .$ To summarize at this point, for both parametrizations of the jump distribution function and for $a > 2$ , there is a single eigenvalue $\\lambda < 1$ (besides the singular continuum).", "This eigenvalue lies above $\\max _x R(x)$ or below $\\min _x R(x)$ depending on whether $w(\\eta )$ is a maximum or b minimum at $\\eta = 0$ .", "Considering that the interval $(\\min _x R(x), \\max _x R(x))$ is actually filled with singular eigenvalues $\\lambda = R(x)$ , we describe this situation as $\\lambda _1$ lying above or below the singular continuum." ], [ "Exact results on the localization of the error $\\delta P_n(x)$", "We wish to describe analytically the relaxation of the error $\\delta P_n(x)$ when $\\lambda _1$ lies below $\\max _x R(x)$ .", "We remind that for the two parametrizations of $w(\\eta )$ from the previous section, a stronger result holds and that in this case, $\\lambda _1 < \\min _x R(x)$ .", "The Master equation for the error $\\delta P_n = P_n - P_\\infty $ reads: $\\delta P_{n+1}(x) = \\int _{-1}^{1} \\delta P_n(y) w(x - y) dy + R(x) \\delta P_n(x)\\qquad \\hbox{with}\\qquad R(x) = 1 - \\int _{-1}^{1} w(x-y) dy .$ Normalization implies $\\int \\delta P_{0}(x) dx = \\int \\delta P_{n}(x) dx = 0$ .", "For $a > 2$ and focusing on the case of a parabolic jump distribution in Eq.", "(REF ), it is possible to simplify notations: $w^{(2)}(\\eta ) = w_0 + w_2 \\,\\eta ^2 , \\qquad R(x) = r_0 - r_2 \\, x^2 , \\qquad r_0 = 1 - 2 w_0 - \\frac{2 w_2}{3} \\, , \\qquad r_2 = 2 \\, w_2 , \\quad \\hbox{with } w_2>0 .$ We then look at symmetric initial conditions $\\delta P_0(x) = \\delta P_0(-x)$ : $\\delta P_{n+1}(x) \\, =\\, w_2 \\int _{-1}^{1} \\delta P_n(y) y^2 dy \\, +\\, R(x)\\, \\delta P_n(x) .", "$ We introduce the generating function: $G(x, z) \\, =\\, \\sum _{n=0}^{\\infty } \\delta P_n(x) z^n\\,= \\, \\delta P_0(x) + w_2 z \\int _{-1}^{1} G(y, z) y^2 dy + z R(x) G(x, z)$ from which we get $G(x, z) &= \\frac{1}{1 - z R(x)} \\left( \\delta P_0(x) + w_2 z \\int _{-1}^{1} G(y, z) y^2 dy \\right) .$ We then solve for $\\displaystyle G_2(z) = \\int _{-1}^{1} G(x, z) x^2 dx .$ Integrating the Master equation we find: $G_2(z) &= \\frac{\\sqrt{r_2 z }}{\\sqrt{1 - r_0 z} \\arctan \\frac{ \\sqrt{r_2 z }}{\\sqrt{1 - r_0 z}} } \\int _{-1}^{1} \\frac{\\delta P_0(x) x^2}{1 - z R(x)} dx .$ Figure: Comparison of the numerical results for S n =∫ -1 1 δP n (x)x 2 dxS_n = \\int _{-1}^{1} \\delta P_n(x) x^2 dx and Q n (x)=R(x) -n δP n (x)Q_n(x) = R(x)^{-n} \\delta P_n(x) with the asymptotic estimates in Eqs. 
(,).", "We chose w (2) (η)=3(1+η 2 /a 2 )/(8a)w^{(2)}(\\eta ) = 3(1 + \\eta ^2/a^2)/(8 a) and a=2.1a = 2.1; the number of sites for the discretization of the Master equation wasN d =2×10 5 N_d = 2\\times 10^5.", "The initial conditions were δP 0 (x)=2θ(1/2-|x|)-1\\delta P_0(x) = 2 \\theta (1/2 - |x|) - 1 (r=1/2r = 1/2).", "We note that even if simulations with a finite N d N_d cannot reproduce the asymptotic power in the n→∞n \\rightarrow \\infty behavior (because of the discrete spectrum), the agreement at finite but large nn is nevertheless very good.To make further progress, we choose as an initial condition $\\delta P_0(x) = r^{-1} \\theta (r - |x|) - 1 ,$ which allows to compute the integral in Eq.", "(REF ) explicitly: $G_2(z) = \\int _{-1}^{1} G(x, z) x^2 dx \\,= \\, \\frac{2}{r_2 z}\\left(1 - \\frac{1}{r} \\frac{\\arctan r Z}{\\arctan Z} \\right) \\qquad \\hbox{with} \\qquad Z \\,= \\, \\sqrt{ \\frac{r_2 z}{1 - r_0 z} } .$ We note that $G_2(z)$ is the generating function for the series $S_n = \\int _{-1}^{1} \\delta P_n(x) x^2 dx$ which can be viewed as the error in the variance $x^2$ at step $n$ .", "The general method of singularity analysis [48] allows to find the asymptotic behavior of a series from the analysis of the singularities of its generating function in the complex plane which are nearest to the origin $z=0$ .", "For $G_2(z)$ the singularity closest to the origin is $z = r_0^{-1}$ .", "The asymptotic expansion of the generating function near this singularity allows us to find: $S_n &\\simeq -\\frac{2(1-r)}{r^2} \\frac{r_0^{3/2}}{(\\pi r_2)^{3/2}} \\frac{r_0^n}{n^{3/2}} - \\frac{(1 - r) r_0^{3/2} [ -48 r^2 r_0 + 4 \\pi ^2 (1 + r + r^2) r_0 - 15 \\pi ^2 r^2 r_2 ]}{4 \\pi ^{7/2} r^4 r_2^{5/2}} \\frac{r_0^n}{n^{5/2}} .$ We then introduce the functions $Q_n(x)$ as $\\delta P_n(x) &= R(x)^n Q_n(x) ,$ and the recurrence equation Eq.", "(REF ) becomes: $Q_{n+1}(x) &= \\frac{w_2}{R(x)} \\int _{-1}^{1} \\delta P_n(y) R(x)^{-n} y^2 dy + Q_n(x) \\\\&=\\frac{w_2}{R(x)} \\sum _{m = 0}^n \\int _{-1}^{1} P_m(y) R(x)^{-m} y^2 dy + Q_0(x) .$ Taking the limit $n \\rightarrow \\infty $ and under the proviso that the series converges, we find $Q_{\\infty }(x) &= \\frac{w_2}{R(x)} G_2( R(x)^{-1} ) + \\delta P_0(x)$ where $G_2(z)$ is defined in Eq.", "(REF ).", "The problem with this expression is that $R(x)^{-1} \\ge r_0^{-1}$ lies outside the radius of convergence $|z| \\le r_0$ of $G_2(z)$ , so this formula is valid only at $x = 0$ when $R(0) = r_0$ .", "Using the obtained value of $G_2(r_0^{-1})$ , we find $Q_{\\infty }(0) = 0 .$ We can then obtain an asymptotic estimate for $Q_n(0)$ : $Q_{n}(0) &= Q_{\\infty }(0) - \\frac{r_2}{2 r_0} \\sum _{m = n}^\\infty \\int _{-1}^{1} \\delta P_m(y) r_0^{-m} y^2 dy \\\\&\\simeq \\frac{1-r}{r^2 } \\frac{r_0^{1/2}}{\\pi ^{3/2} r_2^{1/2}} \\frac{4 n + 1}{2 n^{3/2}} + \\frac{(1 - r) r_0^{1/2} [ -48 r^2 r_0 + 4 \\pi ^2 (1 + r + r^2) r_0 - 15 \\pi ^2 r^2 r_2 ]}{12 \\pi ^{7/2} r^4 r_2^{3/2}} \\frac{r_0^n}{n^{3/2}} .$ From Eq.", "(REF ), it follows that $\\delta P_n(0) = r_0^n Q_n(0)$ where we used $R(0) = r_0$ .", "Equation (REF ) indicates that $Q_n(0)$ decays as a power law $n^{-1/2}$ for large $n$ : $\\delta P_n(0) \\simeq \\frac{2(1 - r)}{\\pi ^{3/2} r^2} \\sqrt{ \\frac{r_0}{r_2} } \\frac{r_0^{n}}{n^{1/2}} .$ Comparison of Eqs.", "(REF ,REF ) with numerical simulations of discretized approximation of the Master equation are shown in Fig.", "REF .", "For $x \\ne 0$ the series becomes diverging and $Q_\\infty (x)$ does not exist.", "The leading 
asymptotic behavior can be extracted from the singular behavior of $Q(x, z)$ near $z = r_0^{-1}$ : $\\delta P_n(x) \\simeq -\\frac{1-r}{r^2 x^2} \\left( \\frac{r_0}{\\pi r_2} \\right)^{3/2} \\frac{r_0^n}{n^{3/2}} .$ Interestingly, we find that the ratio $\\delta P_n(x)/\\delta P_n(0)$ (for $x \\ne 0$ ) does not decay exponentially but as a power law $n^{-1}$ .", "Comparing Eqs.", "(REF ) and (REF ), we thus proved the main property of the localizing contribution to the error $\\delta P_n(x)$ : $\\lim _{n \\rightarrow \\infty } \\delta P_n(x) / \\delta P_n(0) = 0 \\quad {\\rm whenever} \\quad x \\ne 0 .$ Figure: Relaxation of δP 0 (x)=2θ(1/2-|x|)-1\\delta P_0(x) = 2 \\theta (1/2 - |x|) - 1 (i.e.", "r=1/2r = 1/2) for w (2) (η)=3(1+η 2 /a 2 )/(8a)w^{(2)}(\\eta ) = 3(1 + \\eta ^2/a^2)/(8 a) and a=2.1a=2.1; same parameters as Fig. .", "This figure compares the rescaled δP n (x)\\delta P_n(x) as obtained by direct iteration of the Master equation at step n=700n = 700, with the scaling prediction of Eqs.", "() and ().", "The right panel shows convergence for n≤100n \\le 100; the moving discontinuity is a trace of the initial distribution P 0 (x)P_0(x), that is discontinuous at x=±1/2x=\\pm 1/2.Hence, as time proceeds, the central peak extends further, and ultimately reaches the scaling form shown on the left plot.", "We also illustrated the convergence to the scaling function Eq.", "() on Fig.", "3 from the main text using the same dataset.To find a uniform approximation to $\\delta P_n(x)$ , we assume the following scaling form $\\delta P_n(x) \\, = \\, \\frac{R(0)^{n}}{\\sqrt{n}} \\, \\varphi (x \\sqrt{n}) .$ Using Eq.", "(REF ), we can approximate $r_0^{-n} \\left( \\delta P_{n+1}(x) - R(x) \\delta P_n(x) \\right) &= \\frac{r_2 r_0^{-n} }{2} \\int _{-1}^{1} \\delta P_n(y) y^2 dy \\simeq -\\frac{1-r}{r^2} \\sqrt{ \\frac{r_0^{3}}{\\pi ^{3} r_2 n^3}} .", "$ On the other hand using Eq.", "(REF ), we find: $r_0^{-n} \\left( \\delta P_{n+1}(x) - R(x) \\delta P_n(x) \\right) &= \\frac{r_0}{\\sqrt{n+\\epsilon }} \\varphi (x \\sqrt{n+\\epsilon }) - (r_0 - r_2 x^2) \\varphi (x \\sqrt{n}) \\\\&\\simeq \\frac{r_2 x^2}{n^{1/2}} \\varphi (x \\sqrt{n}) - \\frac{\\epsilon r_0 }{2 n^{3/2}} \\varphi (x \\sqrt{n}) + \\frac{\\epsilon r_0 x}{2 n} \\varphi ^{\\prime }(x \\sqrt{n}) ,$ where we introduced a formal small expansion parameter $\\epsilon = 1$ and expanded to first order in $\\epsilon $ .", "Introducing ${\\widetilde{x}} = x \\sqrt{n}$ and combining Eqs.", "(REF ,REF ), we find a first order differential equation on the scaling function $\\varphi ({\\widetilde{x}})$ : $\\frac{r_0 {\\widetilde{x}}}{2} \\varphi ^{\\prime }({\\widetilde{x}}) + r_2 {\\widetilde{x}}^2 \\varphi ({\\widetilde{x}}) - \\frac{r_0 }{2} \\varphi ({\\widetilde{x}}) = -\\frac{1-r}{r^2} \\sqrt{ \\frac{r_0^{3}}{\\pi ^{3} r_2}} .$ Equation (REF ) admits a single symmetric solution which can be expressed in a compact form introducing the Dawson function: $D_+(x) = e^{-x^2} \\int _0^x e^{y^2} dy .$ We get $\\varphi ({\\widetilde{x}}) = \\frac{2(1 - r)}{\\pi ^{3/2} r^2} \\sqrt{ \\frac{r_0}{r_2} } \\left[ 1 - 2 {\\widetilde{x}} \\sqrt{\\frac{r_2}{r_0}} D_+\\left( {\\widetilde{x}} \\sqrt{\\frac{r_2}{r_0}} \\right) \\right] .$ From the results $D_+(0) =0 \\qquad \\hbox{and} \\qquad D_+(x)\\sim \\frac{1}{2 x} + \\frac{1}{4 x^3}\\quad \\hbox{for} \\quad x\\rightarrow \\infty ,$ we recover Eqs.", "(REF ) and (REF ).", "Hence, the scaling assumption (REF ) appears fully consistent.", "The comparison between $\\delta P_n(x)$ obtained by iteration of 
the Master equation and the prediction of the scaling form is shown in Fig. REF ." ], [ "Analytical calculation of the MC relaxation rate for a harmonic potential", "In this section, we show two examples of analytic calculations in the truncated Schrödinger eigenbasis, as introduced in sub-sections REF, REF .", "For a harmonic potential $U(x) = x^2/2$ , the (dimensionless) Schrödinger equation reduces to the celebrated eigenvalue equation of a quantum harmonic oscillator: $\epsilon _n \psi _n(x) = -\psi _n^{\prime \prime }(x) + \frac{x^2}{4} \psi _n(x) .$ The corresponding eigenfunctions can be expressed through Hermite polynomials $H_n$ : $\psi _n(x) &= \frac{1}{N_n} e^{-x^2/4} H_n( 2^{-1/2} x ) \\N_n &= \sqrt{ \int H_{n}(2^{-1/2} x)^2 e^{-x^2/2} \, dx }$ where $N_n$ is the normalization.", "To obtain an approximation (and lower bound) for $\Lambda $ , we calculate the matrix elements $K_{nm} = \int dy \; \int dy^{\prime } \psi _n(y) K_{\beta = 1}(y, y^{\prime }) \psi _m(y^{\prime })$ where the integral kernel is given by Eq.", "(REF ).", "Here, as in the main text, we have expressed positions in units of the thermal length, which amounts to setting $\beta =1$ .", "This gives the following expression for $K_{nm}$ : $K_{nm} &= \int _{-\infty }^{\infty } dy \; w(y) \int _{-\infty }^{\infty } dx \; \psi _m(x - y/2) \; \psi _n(x + y/2) e^{-|x y|/2} + \int _{-\infty }^{\infty } dx \; \psi _n(x) \psi _m(x) R(x)$ where $R(x)$ is the rejection probability.", "For sufficiently simple expressions of $w(\eta )$ and low values of the indices $n$ and $m$ , the integrals can be evaluated analytically.", "For a symmetric potential $U(x)$ , the truncated matrix splits into a direct sum of odd-even subspaces, as discussed in REF .", "The sequence of $N_s\times N_s$ truncated matrices built from odd eigenfunctions will be denoted $K_o^{(N_s)}$ .", "For example, the matrix $K_o^{(1)}$ reduces to a single scalar $K_{11}$ , while $K_o^{(2)}$ is the $2 \times 2$ symmetric matrix with matrix elements corresponding to $K_{nm}$ for $n,m \in \lbrace 1,3\rbrace $ ; higher order approximations are obtained similarly.", "Likewise for the even sector: the sequence of $N_s\times N_s$ matrices $K_e^{(N_s)}$ is constructed from even wavefunctions.", "Since $\psi _0(x)$ is an exact eigenvector for any value of the jump amplitude $a$ , the lowest order $K_e^{(1)}$ is given by the scalar $K_{22}$ ; the next order $K_e^{(2)}$ is given by $K_{nm}$ for $n,m \in \lbrace 2,4\rbrace $ , and so forth with increasing order $N_s$ .", "The relaxation rate is then approximated by $\Lambda ^{(N_s)}_o = \max {\rm eigenvalues}(K_o^{(N_s)}) \;\;&,\;\;\Lambda ^{(N_s)}_e = \max {\rm eigenvalues}(K_e^{(N_s)}) \\\Lambda ^{(N_s)} &= \max \left\lbrace \Lambda ^{(N_s)}_o , \Lambda ^{(N_s)}_e\right\rbrace .$ Below, we consider the case of a harmonic potential $U(x)$ for several possible shapes of $w(\eta )$ .", "In all cases, we found that the following approximation is very accurate: $\Lambda \simeq \max \left\lbrace \Lambda ^{(N_s)}, \max _x R(x) \right\rbrace $ where $R(x)$ is the rejection probability.", "This expression is operational even for values as small as $N_s = 2$ , and indistinguishable from numerical diagonalization at $N_s = 6$ ."
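To make the construction above concrete, the short sketch below (our own illustration, not the authors' code) evaluates the matrix elements $K_{nm}$ by brute-force quadrature for the harmonic potential at $\beta=1$ and a flat jump density $w(\eta)=\theta(a-|\eta|)/(2a)$. The rejection probability is assumed to follow the Metropolis rule, $R(x)=1-\int w(\eta)\,\min(1,e^{-[U(x+\eta)-U(x)]})\,d\eta$, which reproduces the value of $R(0)$ quoted in the next subsection; grid sizes and integration ranges are arbitrary choices.

```python
import numpy as np
from scipy.special import eval_hermite

XGRID = np.linspace(-12.0, 12.0, 1201)   # assumed integration grid, wide enough at beta = 1

def psi(n, x):
    """Harmonic-oscillator eigenfunction psi_n(x) = H_n(x/sqrt(2)) e^{-x^2/4} / N_n."""
    norm = np.sqrt(np.trapz(eval_hermite(n, XGRID / np.sqrt(2.0))**2
                            * np.exp(-XGRID**2 / 2.0), XGRID))
    return eval_hermite(n, x / np.sqrt(2.0)) * np.exp(-x**2 / 4.0) / norm

def rejection(x, a):
    """Assumed Metropolis rejection R(x) for moves x -> x + eta, flat w(eta), U(x) = x^2/2."""
    eta = np.linspace(-a, a, 2001)
    acc = np.minimum(1.0, np.exp(-(np.outer(x, eta) + eta**2 / 2.0)))
    return 1.0 - np.trapz(acc / (2.0 * a), eta, axis=1)

def K_elem(n, m, a):
    """K_nm = jump term with the symmetrised kernel w(y) e^{-|xy|/2} + diagonal rejection term."""
    y = np.linspace(-a, a, 801)
    X, Y = np.meshgrid(XGRID, y, indexing="ij")
    jump = np.trapz(np.trapz(psi(m, X - Y / 2) * psi(n, X + Y / 2)
                             * np.exp(-np.abs(X * Y) / 2.0) / (2.0 * a), y, axis=1), XGRID)
    rej = np.trapz(psi(n, XGRID) * psi(m, XGRID) * rejection(XGRID, a), XGRID)
    return jump + rej

def Lambda(Ns, a, parity="odd"):
    """Largest eigenvalue of the truncated odd or even matrix K_o^(Ns) / K_e^(Ns)."""
    idx = range(1, 2 * Ns + 1, 2) if parity == "odd" else range(2, 2 * Ns + 2, 2)
    K = np.array([[K_elem(n, m, a) for m in idx] for n in idx])
    return np.linalg.eigvalsh(K).max()

a = 2.5
print("K_00 (psi_0 is exact, so this should be ~1):", K_elem(0, 0, a))
print("Lambda^(2) estimate:", max(Lambda(2, a, "odd"), Lambda(2, a, "even")))
```

The quadrature is deliberately naive (each matrix element rebuilds a full 2D grid), so this is only meant to cross-check the analytic expressions reported below, not to be efficient.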
], [ "Harmonic potential with a flat jump distribution $w(\\eta )$", "We report explicit results for the lowest order terms for $w(\\eta ) = \\theta (a - |\\eta |)/(2 a)$ , indicating that the scaling function $f$ reads $f(z) = \\theta (1 - |z|)/2$ .", "We find: $K_{11}(a) &= 1-\\frac{a^3 \\text{erfc}\\left(\\frac{a}{2 \\sqrt{2}}\\right)-2 \\sqrt{\\frac{2}{\\pi }}\\left(a^2+8\\right) e^{-\\frac{a^2}{8}}+16\\sqrt{\\frac{2}{\\pi }}}{6 a} \\\\K_{13}(a) &= \\frac{-\\sqrt{2 \\pi } a^5\\text{erfc}\\left(\\frac{a}{2\\sqrt{2}}\\right)+4 \\left(a^4+a^2+8\\right)e^{-\\frac{a^2}{8}}-32}{20 \\sqrt{3 \\pi } a} \\\\K_{33}(a) &= 1-\\frac{1}{420} \\left(5 a^4+63 a^2+210\\right)a^2 \\text{erfc}\\left(\\frac{a}{2\\sqrt{2}}\\right)+\\frac{\\left(20 a^6+207a^4+372 a^2+2976\\right)e^{-\\frac{a^2}{8}}}{420 \\sqrt{2 \\pi }a}-\\frac{124 \\sqrt{\\frac{2}{\\pi }}}{35a}$ The steady state rejection probability $R_\\infty $ is given by: $R_\\infty = \\frac{2}{a} \\sqrt{\\frac{2}{\\pi }}\\left(e^{-\\frac{a^2}{8}}-1\\right)+\\text{erf}\\left(\\frac{a}{2 \\sqrt{2}}\\right) ,$ and the maximum rejection probability reads: $R(0) = \\max _x R(x) = 1 - \\frac{\\sqrt{\\pi /2}}{a} \\text{erf}\\left(\\frac{a}{\\sqrt{2}}\\right) .$ This gives explicit expressions for the first two orders: $\\Lambda ^{(1)} &= \\Lambda _o^{(1)} = K_{11}(a) \\\\\\Lambda ^{(2)} &= \\Lambda _o^{(2)} = \\frac{K_{11}(a)+K_{33}(a)}{2} + \\sqrt{ K_{13}(a)^2 + \\left( \\frac{K_{11}(a)-K_{33}(a)}{2} \\right)^2} .$ We do not report explicit expressions for higher $K_{nm}$ matrix elements, as expressions become more cumbersome.", "From Fig.", "4 (in the main text), we see that this approximation quickly converges for $a < a^*$ and that $\\Lambda ^{(2)}$ is already very close to the value of $\\Lambda $ obtained by numerical diagonalization.", "For $a > a^*$ , the convergence of this expansion is much slower and $\\Lambda $ is instead given by the maximum rejection probability, as explained in the main text.", "The rapid convergence of the approximation Eq.", "(REF ) with increasing $N_s$ was already illustrated on Fig.", "4 from the main text for the slowest relaxation mode $\\Lambda $ .", "Here we show that this approximation allows also to obtain accurate expressions for other sub-leading relaxation modes (see Fig.", "REF )." 
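As a quick numerical check, the closed-form expressions above can be evaluated directly; the sketch below simply transcribes $K_{11}$, $K_{13}$, $K_{33}$, $R(0)$ and the two lowest-order approximations $\Lambda^{(1)}$, $\Lambda^{(2)}$ (the sample values of $a$ are arbitrary).

```python
import numpy as np
from scipy.special import erf, erfc

def K11(a):
    return 1.0 - (a**3 * erfc(a / (2 * np.sqrt(2)))
                  - 2 * np.sqrt(2 / np.pi) * (a**2 + 8) * np.exp(-a**2 / 8)
                  + 16 * np.sqrt(2 / np.pi)) / (6 * a)

def K13(a):
    return (-np.sqrt(2 * np.pi) * a**5 * erfc(a / (2 * np.sqrt(2)))
            + 4 * (a**4 + a**2 + 8) * np.exp(-a**2 / 8) - 32) / (20 * np.sqrt(3 * np.pi) * a)

def K33(a):
    return (1.0 - (5 * a**4 + 63 * a**2 + 210) * a**2 * erfc(a / (2 * np.sqrt(2))) / 420
            + (20 * a**6 + 207 * a**4 + 372 * a**2 + 2976) * np.exp(-a**2 / 8)
              / (420 * np.sqrt(2 * np.pi) * a)
            - 124 * np.sqrt(2 / np.pi) / (35 * a))

def R0(a):   # maximum rejection probability, attained at x = 0
    return 1.0 - np.sqrt(np.pi / 2) / a * erf(a / np.sqrt(2))

for a in (1.5, 2.5, 4.0):
    l1 = K11(a)
    l2 = (K11(a) + K33(a)) / 2 + np.sqrt(K13(a)**2 + ((K11(a) - K33(a)) / 2)**2)
    print(f"a={a}: Lambda^(1)={l1:.4f}  Lambda^(2)={l2:.4f}  max_x R(x)={R0(a):.4f}")
```

For small $a$ the truncated-basis values dominate, whereas for large $a$ the maximum rejection probability takes over, in line with the discussion above.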
], [ "Harmonic potential with a Gaussian jump distribution $w(\\eta )$", "For a Gaussian jump distribution $w(\\eta ) = \\frac{1}{a \\sqrt{2 \\pi }} \\exp \\left( -\\frac{\\eta ^2}{2 a^2} \\right) \\quad \\Leftrightarrow \\quad f(z) = \\frac{1}{\\sqrt{2 \\pi }} \\exp \\left( -\\frac{z^2}{2} \\right) ,$ we find the matrix elements for the odd subspace of the Schrödinger eigenbasis: $K_{11}(a) &= 1-\\frac{a^2}{2}+\\frac{a^2 }{\\pi }\\arctan \\left(\\frac{a}{2}\\right)+\\frac{2 a^3}{\\pi \\left(a^2+4\\right)} \\\\K_{13}(a) &= \\sqrt{\\frac{3}{2}}\\frac{ a^4}{\\pi } \\arctan \\left(\\frac{a}{2}\\right)+\\frac{\\left(12 a^4+80 a^2-3 \\pi \\left(a^2+4\\right)^2 a+96\\right) a^3}{2\\sqrt{6} \\pi \\left(a^2+4\\right)^2} \\\\K_{33}(a) &= 1 -\\frac{\\left(5 a^4+9 a^2+6\\right) a^2 }{2 \\pi } \\arctan \\left(\\frac{2}{a}\\right)+\\frac{\\left(15 a^8+187 a^6+834 a^4+1560a^2+1152\\right) a^3}{3 \\pi \\left(a^2+4\\right)^3}$ For the Gaussian jumps, $\\Lambda $ also depends on the matrix elements in the even subspace: $K_{22}(a) &= \\frac{2 \\left(3 a^3+\\pi \\right)-a^2 \\left(3a^2+4\\right) \\arctan \\left(\\frac{2}{a}\\right)}{2 \\pi } \\\\K_{24}(a) &= \\frac{a^3}{4\\sqrt{3} \\pi \\left(a^2+4\\right)} \\left[30 a^4+128 a^2-3\\left(a^2+4\\right) \\left(5 a^2+8\\right) a\\arctan \\left(\\frac{2}{a}\\right)+64\\right] \\\\K_{44}(a) &= 1-\\frac{\\left(35 a^6+80 a^4+72 a^2+32\\right)a^2 \\arctan \\left(\\frac{2}{a}\\right)}{8\\pi }+\\frac{\\left(105 a^8+940 a^6+2712a^4+3072 a^2+1920\\right) a^3}{12 \\pi \\left(a^2+4\\right)^2}$ We also get: $R(0) &= \\max _x R(x) = 1 - \\frac{1}{\\sqrt{1- a^2}}\\\\R_\\infty &= 1 - \\frac{2}{\\pi } \\arctan \\frac{2}{a}$ Figure REF compares the result of the Schrödinger eigenbasis approximation for a Gaussian $w(\\eta )$ to numerical eigenvalues for the discretized master equation.", "We do not find evidence of a localization transition for $\\Lambda $ , but instead a change of parity at $a = a_{opt} \\simeq 2.21845$ .", "As for the case of a flat jump distribution shown on Fig.", "REF , the Schrödinger eigenbasis approximation works very accurately for all the slowest relaxation modes until they cross the singular continuum.", "It seems that even if ${\\cal N} = 1$ for this case, the maximum rejection probability $\\max _x R(x)$ is still a very good approximation for $\\Lambda $ at large $a$ ($a \\ge 4$ ).", "Figure: Results for U(x)=x 2 /2U(x) = x^2/2 with a jump distribution w(η)=|η|θ(a-|η|)/a 2 w(\\eta ) = |\\eta | \\theta (a- |\\eta |)/a^2.", "The continuous curves show the results of the analytical calculation K o (6) K_o^{(6)} and K e (6) K_e^{(6)}; they provide a very good approximation for the leading eigenvalues up to the crossing with the maximum of the rejection probability." ], [ "Harmonic potential with jump distribution $w(\\eta ) = a^{-2} |\\eta | \\theta (a-|\\eta |)$", "Again for a harmonic potential, analytical results for this shape of $w(\\eta )$ can be obtained in the same way as above.", "We do not report them here, and only provide a comparison between numerical and analytical calculations on Fig.", "REF ." 
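The "numerical diagonalization of the discretized master equation" referred to throughout this section can be reproduced with a short script; the sketch below (our own illustration, with arbitrary grid parameters) assumes standard Metropolis dynamics at $\beta=1$ in a box $(-L,L)$, builds the discretized kernel for a Gaussian jump density of width $a$, and returns the leading eigenvalues together with the inverse participation ratio (IPR) of the corresponding eigenvectors. Detailed balance guarantees a real spectrum, so the small imaginary parts produced by the non-symmetric numerical matrix are discarded.

```python
import numpy as np

def mc_spectrum(U, a, L=5.0, Nd=400, beta=1.0):
    """Spectrum and eigenvector IPR of the discretized Metropolis kernel with Gaussian jumps."""
    x = np.linspace(-L, L, Nd)
    dx = x[1] - x[0]
    w = np.exp(-(x[:, None] - x[None, :])**2 / (2 * a**2)) / (a * np.sqrt(2 * np.pi))
    acc = np.minimum(1.0, np.exp(-beta * (U(x)[:, None] - U(x)[None, :])))   # accept x_j -> x_i
    K = w * acc * dx                              # off-diagonal transition probabilities
    np.fill_diagonal(K, 0.0)
    K += np.diag(1.0 - K.sum(axis=0))             # rejected moves (incl. jumps out of the box)
    lam, vec = np.linalg.eig(K)
    order = np.argsort(-np.abs(lam))
    lam, vec = lam[order].real, vec[:, order].real
    ipr = (vec**4).sum(axis=0) / (vec**2).sum(axis=0)**2
    return lam, ipr

lam, ipr = mc_spectrum(lambda x: 0.5 * x**2, a=2.2)
print("lambda_0 ~ 1 (equilibrium):", lam[0])
print("Lambda = lambda_1:", lam[1], "  IPR of that mode:", ipr[1])
```

Scanning $a$ with this helper gives the qualitative picture discussed above: extended slow modes at small $a$ and increasingly localized modes once the leading eigenvalue approaches $\max_x R(x)$.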
], [ "Comparing the different jump distributions", "Among the three jump distributions worked out above, the last one provides the value $\Lambda (a_{\text{opt}}) = 0.61723$ , which is the lowest among the studied examples.", "In this respect, this jump distribution, at the optimal jump amplitude $a_{\text{opt}}$ , yields the fastest method for sampling the equilibrium distribution.", "Results are summarized in Table REF .", "Figure: Comparing different jump distributions.", "Plots of the leading relaxation eigenvalue $\Lambda $ as a function of the acceptance probability, for the three cases summarized in Table REF , corresponding to Figures REF , REF , REF .", "Figure REF compares the spectral results for the three jump distributions.", "We note that they correspond to a $w(\eta )$ that is either increasing, flat, or decreasing with $|\eta |$ .", "In spite of these differences, the leading relaxation eigenvalue $\Lambda $ displays the same behaviour as a function of the acceptance probability $1-R_\infty $ .", "In particular, the three cases feature optimality (smallest $\Lambda $ , fastest convergence) for an acceptance probability close to 50%." ], [ "Generalization: beyond one dimension and inclusion of interactions", "While the results presented so far focused on one-dimensional dynamics, we here put the generality of the localization transition to the test by considering more generic models, beyond 1D or with interacting degrees of freedom.", "The analysis here is mostly numerical." ], [ "Beyond 1D", "Simulations in higher dimensions rapidly become demanding in terms of numerical resources.", "In two dimensions, it is still possible to use direct diagonalization to obtain the full eigenspectrum of the master equation and the IPR of the eigenvectors.", "An example of such a simulation is shown in Fig.", "REF : the results are very similar to the one-dimensional simulation in Fig.", "1 (main text), except that $a^* \simeq 2.6$ instead of $a^* \simeq 3.3$ due to the two-dimensional nature of the attempted jumps.", "Figure: Spectrum of the Monte-Carlo master equation kernel for a two-dimensional particle confined to the cell $(-5,5)\times (-5,5)$ (discretized to $100^2$ boxes) and in the potential $U(x,y) = (x^2+y^2)/2$ ($\beta = 1$ ).", "The attempted jumps are two dimensional, changing both $x$ and $y$ in an interval $(-a,a)$ centered around their initial values.", "Color shows IPR$^{1/2}$ , where the square root is used to enhance contrast (the lower contrast in IPR values is related to the high symmetry of the potential $U(x,y)$ ; see for example the higher contrast in Fig.", "REF where all symmetries are broken).", "Simulations in 3D are numerically more accessible if jumps are attempted in only one of the directions $x,y,z$ at a time.", "This makes the matrix representing the Master equation kernel sparse, allowing one to find the time evolution of the error distribution $\delta P_n = P_n - P_\infty $ .", "We show in Fig.", "REF the evolution of the IPR of $\delta P_n$ with the number of algorithm steps (time).", "A sharp transition from decreasing to increasing IPR as a function of time is seen around $a = 3.3$ .", "Since the attempted jumps are 1D, the localization transition takes place at the same value as for the 1D harmonic potential.", "Figure: A 3D example with the potential $U(x,y,z) = (x^2+y^2+z^2)/2$ and a confinement volume $(-5,5)^3$ discretized in $100^3$ boxes.", "The initial distribution $P_0(x,y,z)$ is an
off-centred Gaussian." ], [ "Interactions", "We provide a numerical example illustrating the localization transition in the Monte Carlo relaxation of interacting particles.", "We consider a case which is numerically tractable by full diagonalization, in analogy with Fig.", "1 from the main text and with Fig.", "REF .", "We consider two particles at positions $x_1$ and $x_2$ in a one-dimensional box, with $x_1, x_2 \in [-5,5]$ .", "The energy of a configuration $(x_1,x_2)$ is given by the potential: $U_\pm (x_1, x_2) = \frac{x_1^2 + x_2^2}{2} \pm \frac{2}{0.1 + |x_1 - x_2|} + x_1 - x_2$ where, depending on the plus or minus sign, the interaction between $x_1$ and $x_2$ is repulsive ($U_+$ ) or attractive ($U_-$ ).", "We simulate the steady state of this system using a Monte-Carlo algorithm, with jumps that attempt to simultaneously change $x_1$ and $x_2$ in an interval $(-a,a)$ around their initial positions.", "The spectrum of the corresponding master equation is shown in Fig.", "REF , indicating that a localization transition occurs in this case even when interactions are present.", "Switching from a repulsive to an attractive interaction changes the value of the optimal jump length $a^*$ and the spread of the eigenspectrum.", "In both cases, however, the IPR drastically increases for $a > a^*$ , indicating a localization transition.", "Figure: Spectrum of the Monte-Carlo master equation kernel for two interacting particles, with the interaction potential given by Eq. ().", "Two situations were investigated, with a repulsive (left panel) or an attractive potential (right panel).", "The configuration space, restricted to the interval $(-5,5)$ , was discretized in $100 \times 100$ cells." ], [ "Relaxation in the presence of multiple local minima", "Finally, we illustrate numerically the relaxation spectrum for a Monte Carlo simulation in a 1D potential with many local minima.", "We take the potential: $U(x) \, = \, x^2/2 + 3 \, \sin 9 x$ inside a box $x \in (-5,5)$ .", "This potential has many local minima, as illustrated in the left panel of Fig.", "REF .", "The eigenspectrum (see Fig.", "REF , right panel) features a localization transition at $a^* \simeq 2.1$ , as in the prototype cases with only a single minimum.", "At variance with the spectrum for $U(x) = x^2/2$ (see Fig.", "1 from the main text), many quasi-degenerate eigenvalues are present near $\lambda =1$ for low values of the jump amplitude $a$ .", "Indeed, in this regime hopping over the barriers is thermally activated and the minima become metastable.", "Figure: (left panel) Example of a potential with many local minima, given by Eq. ().", "(right panel) Monte Carlo relaxation eigenspectrum for this potential with a flat jump distribution, as in Fig.", "1 from the main text.", "The box $(-5,5)$ was discretized in $10^3$ sites." ] ]
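For reference, the potentials used in this subsection and the previous one can be explored with the same kind of discretized-kernel diagonalization sketched earlier (the `mc_spectrum` helper is our own illustrative code, not the authors'); a minimal sketch:

```python
import numpy as np

# 1D potential with many local minima, Eq. above.
U_rough = lambda x: 0.5 * x**2 + 3.0 * np.sin(9.0 * x)

# Two interacting particles, Eq. above (plus sign: repulsive, minus sign: attractive);
# the 2D analogue of the kernel would be built on a (x1, x2) grid in the same way.
def U_pm(x1, x2, sign=+1):
    return 0.5 * (x1**2 + x2**2) + sign * 2.0 / (0.1 + np.abs(x1 - x2)) + x1 - x2

for a in (0.5, 2.1, 4.0):
    lam, ipr = mc_spectrum(U_rough, a, L=5.0, Nd=1000)
    # small a: quasi-degenerate eigenvalues near 1 (activated hopping between minima);
    # a > a* ~ 2.1: the IPR of the slowest mode grows, signalling localization.
    print(a, lam[1:4], ipr[1])
```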
2207.10488
[ [ "PA-PUF: A Novel Priority Arbiter PUF" ], [ "Abstract This paper proposes a 3-input arbiter-based novel physically unclonable function (PUF) design.", "Firstly, a 3-input priority arbiter is designed using a simple arbiter, two multiplexers (2:1), and an XOR logic gate.", "The priority arbiter has an equal probability of 0's and 1's at the output, which results in excellent uniformity (49.45%) while retrieving the PUF response.", "Secondly, a new PUF design based on priority arbiter PUF (PA-PUF) is presented.", "The PA-PUF design is evaluated for uniqueness, non-linearity, and uniformity against the standard tests.", "The proposed PA-PUF design is configurable in challenge-response pairs through an arbitrary number of feed-forward priority arbiters introduced to the design.", "We demonstrate, through extensive experiments, reliability of 100% after performing the error correction techniques and uniqueness of 49.63%.", "Finally, the design is compared with the literature to evaluate its implementation efficiency, where it is clearly found to be superior compared to the state-of-the-art." ], [ "Introduction", "Physical unclonable functions (PUFs) have great prominence in today's secure device authentication and secure communication [1].", "A PUF takes advantage of randomness due to manufacturing process variation in integrated circuits and generates a unique response for every device by tapping into the sources of entropy such as variations in path delay or timing [1].", "An integrated circuit within the PUF maps the input challenge with a unique response, thus creating a challenge-response pair (CRP).", "These CRPs are utilized to design various security protocols, ranging from device attestation to data encryption.", "The concept of a PUF was first reported in [2].", "Based on the parameters of how the response is derived, various kinds of PUFs have been proposed in the literature.", "These parameters can be delay, memory, or the difference between the current in the rails.", "In the literature, PUF designs are categorized as delay-based and memory-based PUF - of which the prominent examples are Arbiter PUF [3], ring oscillator PUF [4], SRAM PUF [5], Butterfly PUF [6], Glitch PUF [7] and MEmory Cell-based Chip Authentication (MECCA) PUF [8].", "These advanced PUFs are being threatened with various attacks [9], [10].", "Therefore, we need to keep studying new constructions of PUFs that potentially thwart such attacks with minimalistic overhead.", "Arbiter PUF (APUF) is one of the delay-based PUF designs, where an arbiter decides the response of a PUF between the two data paths comprised of cross-coupled multiplexers [11].", "For this purpose, the arbiter PUF uses the analog timing difference between the two data paths and decides the output based on the timing difference between the two lines.", "Several modifications to the classical arbiter PUF are presented in the literature.", "In arbiter PUF, it is possible to predict the relation between the challenge and response through software modeling and programs.", "To prevent the modeling-based attacks, various modifications were proposed, such as multi-arbiter PUF [12], [13] and double arbiter PUF [14].", "The multi-arbiter PUF design consists of arbiters in each multiplexer stage and a multiplexer to choose the arbiter response.", "An alternative is to take the PUF response as XOR of the arbiter outputs, which improves the uniqueness, reliability, and robustness of the PUF.", "The arbiter is an electronic circuit that can identify a 
signal's first occurrence.", "A simple D flip-flop can be used as an arbiter where one signal is connected as the clock and the other as the data signal.", "Priority arbiter is typically used in the Network-On-Chip (NoC) [15], in order to determine the priority of the data request among the several requests [15].", "Different kinds of arbiters are proposed in literature [16], such as daisy chain arbiter, round-robin arbiter, and dynamic arbiter.", "Based on the application of the NoC, the arbiters are designed.", "The concept of priority arbiter from the NoC communication can be adopted into the PUF to improve the design efficiency.", "The advantage of having a priority arbiter is that the design has more non-linearity compared to the simple arbiter PUF.", "In this paper, a new priority arbiter is proposed, which demonstrates good uniformity.", "Based on that, a novel PA-PUF is designed.", "The major contributions of this work are reported as follows: A new PUF using the priority arbiter called PA-PUF is designed.", "The PA-PUF offers a uniqueness of 49.63 $\\%$ and uniformity of 49.45$\\%$ at the output.", "The non-linearity in the output of the PUF is increased with the use of a priority arbiter.", "We demonstrate the configurability of PA-PUF by varying the number of CRPs.", "The number of CRPs can be increased by increasing the length of the data path by introducing more feed-forward arbiters.", "The performance of the proposed priority arbiter PUF is studied as a function of the number of feed-forward arbiters in the data path and the length of the data path.", "For example, the uniqueness of the PUF can be increased by increasing the length of the data path.", "It offers a reliability of 94.5$\\%$ for a 128-bit response, which can be increased to 100$\\%$ by implementing Bose-Chaudhuri-Hocquenghem (BCH) error-correcting codes [17].", "Figure: Classical arbiter PUF design, red lines indicate the data path of a given challenge.The rest of the paper is structured as follows.", "Section  provides the design insights of the newly proposed PUF.", "The experimental results are given in section .", "Section  compares the proposed design with state-of-the-art designs from literature.", "Finally, Section  provides conclusions and outlook of this work." 
], [ "Proposed Priority Arbiter PUF", "A classical arbiter PUF with multiplexers and an arbiter is depicted in Fig.", "REF .", "The arbiter PUF is a delay-based design, and the delay difference is taken from the cross-coupled multiplexers as shown in the figure.", "The challenge of the arbiter PUF is given as the select signals of the multiplexers in the data path; a low-to-high transition applied at the input of the first multiplexer propagates to the arbiter along the path selected by the challenge.", "The arbiter is implemented using a D flip-flop and is presented with its possible operations in Fig.", "REF , with the two signals denoted Top ($T$ ) and Center ($C$ ).", "When the signal $C$ arrives first, the output of the arbiter is `0'; otherwise, the output is `1'.", "This is illustrated in Fig.", "REF (a) $\&$ (b).", "Fig.", "REF presents the symmetric paths of the circuit through the multiplexers.", "The multiplexers' select signals (challenge) generate different delay paths, resulting in a unique response for every combination.", "The major disadvantage of the arbiter PUF is that its uniqueness is low.", "Several modifications of the classical arbiter PUF add a feed-forward path [18] to the design to introduce non-linearity into the results.", "In the arbiter PUF, it is possible to predict the relation between the challenge and the response through software modeling.", "To prevent such modeling-based prediction, various modifications have been proposed in the past.", "Alternatives to the arbiter PUF include the multi-arbiter PUF [12], [13] and the double arbiter PUF [14].", "All these methods are proposed to improve the uniqueness, reliability, and robustness of the PUF output.", "Since the degree of non-linearity of a single arbiter is low, multi-arbiter designs can be used to increase the non-linearity of the design.", "Next, we propose a priority arbiter-based PUF design.", "Figure: (a) Feed-forward arbiter for the proposed PA-PUF; T, C and B are the top, center and bottom data lines; $F0$ , $F1$ and $F2$ are the three outputs from the feed-forward arbiter.", "(b) Three-input priority arbiter; T, C and B are the top, center and bottom data lines and R is the response bit.", "A new three-input priority arbiter is proposed to decide the response as per the priority of the inputs' arrival times.", "The arbiter used in the classical arbiter PUF is a two-input circuit.", "Here, an extra input is added to increase the non-linearity in the output.", "The proposed three-input priority arbiter is shown in Fig.", "REF ; it is designed with three D flip-flops, two 2-to-1 multiplexers, and an XOR logic gate.", "The three inputs, T, C, and B (top, center, and bottom), are given to the D flip-flops.", "Next, the select signal and the data line of each multiplexer are taken from the outputs of the D flip-flops.", "Finally, the outputs of the multiplexers are applied to the inputs of the XOR gate to generate a response bit.", "Figure: Possible output (R) combinations of the priority arbiter, with T, C and B as the top, center and bottom data lines.", "The operation of the proposed priority arbiter is shown in Fig.", "REF , where the output is decided based on the priority of the input arrival times.", "Some of the possible conditions are considered in the plot.", "Consider a case where the arrival order of these signals is T, C, B (first case in Fig.", "REF ).", "Since T comes first in time, the outputs of the bottom and center D flip-flops (DFF B and DFF
C) are `0' and `0', while the top D flip-flop (DFF T) output is `1'.", "After multiplexer output, the input of the XOR gate will be `1' and `0'.", "This results in the output of the arbiter being `1'.", "The Fig.", "REF demonstrates the possible conditions $\\lbrace T,B,C\\rbrace ,\\lbrace T,C,B\\rbrace ,\\lbrace B,T,C\\rbrace ,\\lbrace B,C,T\\rbrace ,\\lbrace C,B,T\\rbrace $ and $\\lbrace C,T,B\\rbrace $ out of these six conditions, the output is `0' in three cases and `1' in three cases.", "This further results in achieving good uniformity in the output of the priority arbiter.", "The same circuit with different connections of the inputs will lead to non-uniform outputs (with non-equal probability of 0's and 1's).", "Figure: Schematic of proposed PA-PUF which includes three parallel multiplexer lines (T, C, and B) and a priority arbiter at the end to generate the response bit.The PA-PUF is presented in Fig.", "REF , where the modifications to the classical arbiter PUF have been done by adding a third data path in addition to the two data paths.", "Fig.", "REF shows the working of the PA-PUF for a given challenge (marked in red).", "This shows that just by increasing the hardware by one-third, the non-linearity in the output can be increased to an extreme.", "In the case of arbiter PUF, when the output is `1', then it can be understood that signal $T$ arrives before signal $C$ .", "While in the case of priority arbiter PUF, as explained in Fig.", "REF , the output is `1', in three cases.", "Hence, it is difficult to find which signal comes first.", "Moreover, the design is extended by introducing the feed-forward paths to increase the robustness in the design.", "The priority arbiter PUF with the feed-forward path is shown in Fig.", "REF , while the feed-forward arbiter for the priority arbiter PUF is shown in Fig.", "REF .", "Figure: Proposed PA-PUF with feed-forward arbiter." 
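The exact flip-flop/multiplexer wiring is only given in the figure, which is not reproduced here, so the sketch below models the priority arbiter behaviourally: the mapping from the arrival order of T, C and B to the response bit is an assumed truth table that (i) matches the one documented case above (order T, C, B gives R = 1) and (ii) respects the stated property that three of the six orderings give '1' and the other three give '0'. It is an illustration, not the actual netlist.

```python
import random

# ASSUMED balanced truth table (cyclic orderings of T -> C -> B give '1').
ASSUMED_TRUTH_TABLE = {
    ("T", "C", "B"): 1, ("C", "B", "T"): 1, ("B", "T", "C"): 1,
    ("T", "B", "C"): 0, ("C", "T", "B"): 0, ("B", "C", "T"): 0,
}

def priority_arbiter(t_arrival, c_arrival, b_arrival):
    """Response bit R as a function of the arrival times of the T, C and B rising edges."""
    order = tuple(name for name, _ in sorted(
        [("T", t_arrival), ("C", c_arrival), ("B", b_arrival)], key=lambda p: p[1]))
    return ASSUMED_TRUTH_TABLE[order]

# With random, process-variation-like arrival times every ordering is equally likely,
# so the response is '1' roughly half of the time (good uniformity).
ones = sum(priority_arbiter(random.random(), random.random(), random.random())
           for _ in range(100_000))
print(f"fraction of 1s: {ones / 100_000:.3f}")
```

Note that, unlike the two-input arbiter, knowing R under such a balanced mapping does not reveal which single signal arrived first, which is the non-linearity argument made above.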
], [ "Experimental results", "The proposed PA-PUF is designed using Verilog and implemented on a Nexys Video Artix-7 FPGA board.", "The responses of the PUF are collected using the universal asynchronous receiver/transmitter (UART) protocol.", "Next, we discuss the proposed PA-PUF (with a feed-forward arbiter) and the implications of deriving a security key.", "A good PUF design should satisfy several important criteria, which are evaluated through the intra-chip and inter-chip Hamming distances.", "Based on the inter-chip and intra-chip Hamming distances, we can analyse the other important parameters, such as uniformity, robustness, uniqueness, bit-aliasing, and reliability.", "The intra Hamming distance (HD) is the Hamming distance between the responses of the PUF design within a chip.", "Ideally, a single-bit change in the challenge should result in a 50% Hamming distance between the response bits.", "The intra HD can be calculated by the formula given in equation REF .", "$Intra\ HD = \sum _{i=1}^{k-1}\frac{HD(R_i,R_{i+1})}{n} \times 100$ Here, `k' is the total number of challenges given to the PUF, and $R_i$ and $R_{i+1}$ are the responses to the challenges $C_i$ and $C_{i+1}$ , respectively.", "Consecutive challenges differ by a single bit, and Fig.", "REF shows the Hamming distance of the responses of the 128-bit PA-PUF.", "Fig.", "REF also shows that the HD distribution peaks at half of the response length and is approximately Gaussian.", "Figure: Intra-chip Hamming distance plot of 128-bit PA-PUF.", "The inter HD is the Hamming distance between the responses of two chips of the same family.", "We have used three FPGAs of the same family/configuration to produce the results on the inter HD.", "Fig.", "REF shows the inter HD between the responses of two FPGA boards, which is calculated by the formula given in equation REF .", "$Inter\ HD = \frac{2}{k(k-1)}\sum _{i=1}^{k-1}\sum _{j=i+1}^k\frac{HD(R_i,R_j)}{n}$ where $i$ and $j$ denote two different FPGAs, $R_i$ and $R_j$ are the responses from $FPGA_i$ and $FPGA_j$ to the challenge C, respectively, and $k$ is the number of PUF designs (3 in our case).", "Figure: Inter-chip Hamming distance plot of 128-bit PA-PUF.", "Figure: Uniformity of 128-bit PA-PUF along with the 50% probability line." ], [ "Robustness", "No response bit of the PUF should be stuck at logic `0' or `1'.", "The stable `0', stable `1' and unstable bits are calculated for various sizes of the PA-PUF design.", "The results, given in Table REF , reveal that the design has good robustness.", "Table: Robustness results of stable `0', `1' and unstable bits."
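The two Hamming-distance metrics defined by the equations in this section can be computed directly from the collected response bit-strings; the sketch below is our own illustration (array shapes and the per-pair averaging of the intra-HD sum are assumptions).

```python
import numpy as np

def intra_hd_percent(responses):
    """Intra-chip HD: `responses` is a (k, n) 0/1 array where row i is the n-bit response to
    challenge C_i, and consecutive challenges differ by one bit.  Average over the k-1 pairs, in %."""
    R = np.asarray(responses)
    k, n = R.shape
    return 100.0 * np.mean([(R[i] != R[i + 1]).sum() / n for i in range(k - 1)])

def inter_hd(responses_per_chip):
    """Inter-chip HD: `responses_per_chip` is a (k, n) 0/1 array with one row per FPGA,
    all answering the same challenge C."""
    R = np.asarray(responses_per_chip)
    k, n = R.shape
    total = sum((R[i] != R[j]).sum() / n for i in range(k - 1) for j in range(i + 1, k))
    return 2.0 / (k * (k - 1)) * total

# Example with random (ideal) responses: intra HD -> ~50 %, inter HD -> ~0.5.
rng = np.random.default_rng(0)
print(intra_hd_percent(rng.integers(0, 2, size=(1000, 128))))
print(inter_hd(rng.integers(0, 2, size=(3, 128))))
```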
], [ "Uniformity and Bit-aliasing", "The response should have an equal number of 0's and 1's.", "This can be quantified using the `uniformity' of the PUF.", "The ideal value of uniformity is 50$\%$ , which means the PUF response is uniformly distributed.", "Fig.", "REF shows the distribution of 0's and 1's in the response bits.", "Table REF gives the values for various sizes of the PUF response.", "The other parameter of interest is the `bit-aliasing', which is calculated over different chips/devices.", "The experiment is conducted on three Nexys Video Artix-7 FPGA boards of the same family.", "The PUF design is placed at the same physical locations across the different boards to find whether any particular bit position is permanently stuck at logic `0' or `1', irrespective of the challenge or the board.", "The ideal value of bit-aliasing is 50$\%$ , which is close to the values recorded in Table REF .", "Table: Uniformity and bit-aliasing for the proposed PUF." ], [ "Reliability and Uniqueness", "Reliability and uniqueness are the most important metrics in evaluating the design of the PUF.", "Uniqueness quantifies how well the design distinguishes one chip from another of the same family.", "The ideal value of uniqueness is 50$\%$ ; hence, the response of the PUF from one chip to another should differ in half of its bits.", "Uniqueness is calculated using the inter Hamming distance of the responses, and the calculated values are tabulated in Table REF for various sizes of the proposed PUF.", "Further, the reliability is also given in Table REF ; it is calculated using the intra-Hamming distance of the response by applying the same challenge over a million times, and Fig.", "REF depicts the Hamming distance plot used to calculate the reliability.", "The errors that occur in the PUF response over time can be corrected using error correction mechanisms [20], [21].", "It has been shown in Fig.", "REF that, after applying BCH error correction codes, the reliability increases from 94.5% to 100% for the PA-PUF.", "Table: Uniqueness and reliability for various sizes of the proposed PA-PUF.", "Figure: Hamming distance of the proposed PA-PUF with and without feed-forward arbiters and the classical arbiter PUF."
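For completeness, the remaining quality metrics used in this section follow the usual PUF definitions; a small sketch (our own, with assumed array shapes) complements the Hamming-distance helpers above, while uniqueness is simply the inter-chip HD expressed as a percentage.

```python
import numpy as np

def uniformity_percent(response):
    """Percentage of 1's in a single n-bit response (ideal: 50%)."""
    return 100.0 * np.asarray(response).mean()

def bit_aliasing_percent(responses_per_chip):
    """Per-bit percentage of 1's across chips for the same challenge (ideal: 50% per bit)."""
    return 100.0 * np.asarray(responses_per_chip).mean(axis=0)    # shape (n,)

def reliability_percent(repeated_responses):
    """100% minus the average intra-chip HD over repeated evaluations of the same challenge."""
    R = np.asarray(repeated_responses)                            # shape (num_evaluations, n)
    ref = R[0]
    return 100.0 * (1.0 - np.mean([(r != ref).mean() for r in R[1:]]))
```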
], [ "Machine Learning-based Modelling Attacks", "The key purpose of PUFs can be defeated by modeling the PUF structure and by being able to predict its output.", "In a series of works [22], [9], [10], it has been shown that nearly all variants of delay-based PUFs can be efficiently modeled using machine learning.", "To further boost security, XOR-based arbiter PUFs are introduced.", "It was also shown in a recent study [23] that learning XOR-based arbiter PUFs is possible up to a limit on the number of parallel arbiter chains.", "Complementing this research direction, alternative PUF design frameworks [24] or restricted visibility of the PUF outputs [25] are proposed.", "Since our proposed design applies to the general studies on arbiter-based PUFs, it will come under the same purview of attacks and resilience presented earlier.", "Hence, we focus on the lightweight PUF design itself and reserve the study of detailed analysis of modeling attacks for the future.", "In that context, it will be interesting to juxtapose the priority arbiter design against the XOR-based arbiter chain merger.", "Based on the performance metrics calculation, the comparisons of the proposed PUF with the existing designs is summarized in Table REF .", "It is evident from the results that the proposed priority arbiter PUF has good results compared to the existing designs.", "Figure: Comparison of uniqueness and reliability with respect to data path's length (16-bit challenge + feed-forward arbiters).Table: Comparison of the performance metrics of the ring oscillator PUF; Ideal value of reliability is 100%\\% while the remaining parameters have an ideal value of 50%\\%.The length of the data path plays a prominent role in deriving the response.", "In addition to the length of the multiplexer-data path, the number of feed-forward arbiters also plays a key role.", "Fig.", "REF shows the comparison of the classical arbiter PUF, proposed priority arbiter PUF, and the priority arbiter PUF with the feed-forward arbiter.", "The plot reveals that the classical arbiter PUF is not able to produce a various number of responses while the proposed designs have more number of CRPs.", "Since the intra-Hamming distance plot has a maximum at half of its response length only for the priority arbiter with the feed-forward arbiter.", "The number of CRPs in a PA-PUF can be increased by using the feed-forward arbiters.", "Further, the number of feed-forward arbiters used as a select signal to the multiplexer in the data path also influences the response.", "The plot shown in Fig.", "REF reveals that as the number of feed-forward arbiters increases in the design, the reliability of the PUF reduces and the uniqueness of the PUF improves.", "Since the number of CRPs can be increased by increasing the data path length and by introducing more feed-forward arbiters, the PUF is able to generate more variations in the output.", "Hence, the reliability is going to reduce.", "So, one can decide the number of feed-forward arbiters as per their requirements of uniqueness and reliability.", "Table: Comparisons of different PUF designs; `U' is the uniqueness and `R' is the reliability." 
], [ "Conclusions", "This paper proposed a new priority arbiter with a uniform output, i.e., an equal probability of 0's and 1's.", "Based on it, we also proposed a new priority arbiter-based PUF (PA-PUF) with three data-path lines and a priority arbiter.", "Further, feed-forward arbiters are introduced to provide additional select signals to the multiplexers in the data path and thereby increase the number of challenge-response pairs.", "Finally, we demonstrated the results of the proposed PA-PUF on a Nexys Video Artix-7 FPGA board.", "The experimental results show that the proposed PA-PUF has a reliability of 94.5$\%$ , which can be improved to 100% by implementing error correction techniques.", "Moreover, the PA-PUF achieves a uniformity of 49.45% and a uniqueness of 49.63$\%$ .", "We plan to study attacks on the PA-PUF in more detail, both practically and theoretically, in the future." ] ]
2207.10526
[ [ "Differentiable Integrated Motion Prediction and Planning with Learnable\n Cost Function for Autonomous Driving" ], [ "Abstract Predicting the future states of surrounding traffic participants and planning a safe, smooth, and socially compliant trajectory accordingly is crucial for autonomous vehicles.", "There are two major issues with the current autonomous driving system: the prediction module is often decoupled from the planning module and the cost function for planning is hard to specify and tune.", "To tackle these issues, we propose an end-to-end differentiable framework that integrates prediction and planning modules and is able to learn the cost function from data.", "Specifically, we employ a differentiable nonlinear optimizer as the motion planner, which takes the predicted trajectories of surrounding agents given by the neural network as input and optimizes the trajectory for the autonomous vehicle, thus enabling all operations in the framework to be differentiable including the cost function weights.", "The proposed framework is trained on a large-scale real-world driving dataset to imitate human driving trajectories in the entire driving scene and validated in both open-loop and closed-loop manners.", "The open-loop testing results reveal that the proposed method outperforms the baseline methods across a variety of metrics and delivers planning-centric prediction results, allowing the planning module to output close-to-human trajectories.", "In closed-loop testing, the proposed method shows the ability to handle complex urban driving scenarios and robustness against the distributional shift that imitation learning methods suffer from.", "Importantly, we find that joint training of planning and prediction modules achieves better performance than planning with a separate trained prediction module in both open-loop and closed-loop tests.", "Moreover, the ablation study indicates that the learnable components in the framework are essential to ensure planning stability and performance." 
], [ "Introduction", "Making safe, socially-compatible, and human-like decisions is a fundamental capability of autonomous vehicles (AVs).", "The key to realizing this ability is to accurately forecast the future trajectories of surrounding traffic participants and plan a safe and smooth trajectory according to the result.", "The current autonomous driving stack usually treats prediction and planning as separate and sequential parts (see Fig.", "REF (a)), which means the prediction module is independent of the planning module, ignoring the impact of the AV's behaviors on other agents.", "However, this setting is flawed because prediction and planning tasks are highly interrelated and interdependent, especially in urban driving scenarios where complex interactions between the AV and other traffic participants are common.", "Moreover, another challenge with the traditional method is how to specify the cost function that can properly evaluate the future motion plans and achieve a delicate trade-off between different requirements, e.g., collision avoidance, travel efficiency, and ride comfort.", "Manually tuning the parameters of the cost function is laborious and time-consuming, and might only be applicable in certain scenarios.", "Although some methods have been proposed to learn the cost function from driving data, such as continuous inverse optimal control (CIOC) [1] and sampling-based maximum-entropy inverse reinforcement learning (IRL) [2], as well as max-margin method [3], they are not coupled with the prediction module and based on the impractical assumption that the prediction results are perfect.", "Some other methods have turned to a pure data-driven setting [4], which utilizes a holistic model to directly output planned AV trajectory while predicting other agents' future trajectories (see Fig.", "REF (b)), implicitly handling the interactions between agents.", "However, such methods fall short of safety guarantee, robustness, and reliability.", "In this paper, we propose a fully differentiable prediction and planning framework (see Fig.", "REF (c)) that integrates the two modules with a differentiable optimizer to explicitly consider the interactions, such that the prediction and cost function can be adjusted to produce better and human-like trajectories.", "Specifically, we construct a Transformer-based neural network to predict the future trajectories of surrounding agents simultaneously.", "Then, the prediction results are channeled into a differentiable nonlinear optimizer, which explicitly plans a trajectory for the AV to follow.", "When training the framework, the planned trajectory is compared against the human driving one and the loss can be back-propagated to the prediction module and cost function weights.", "The whole framework is trained to match the human driving trajectories (interactions among humans) in the entire driving scene, not only the AV but also all surrounding agents.", "Our framework is able to deliver more planning-centric prediction results, enabling the system better adapt to the scenarios in which they are deployed.", "In summary, the main contributions of this paper are listed as follows: We propose a fully differentiable structured learning framework that integrates prediction and planning modules for autonomous driving, enabling the prediction results better fit to the downstream planning task and the cost function learnable from real-world driving data.", "We train the proposed framework with a large-scale real-world driving dataset that covers a 
wide variety of complex urban driving scenarios and validate it in both open-loop and closed-loop manners.", "We demonstrate that the proposed framework outperforms the baseline methods (end-to-end and traditional pipeline) in both open-loop and closed-loop tests and carry out an ablation study to investigate the importance of each component.", "There has been a large body of learning-based approaches for trajectory prediction to excellent effect because deep neural networks can handle complex environments with multiple interacting agents and various road structures.", "Some recent works utilizing vector representation of scene context and Transformer-based networks have advanced the forecasting accuracy even further [5], [6], [7], [8].", "However, most of the existing models are formulated to predict each agent’s future trajectory independently, which is computationally inefficient and might produce inconsistent results.", "Thus, some recent approaches focus on multi-agent joint prediction to generate future trajectories for all agents of interest in a consistent manner [9], [10], [11], [12].", "Importantly, using the multi-agent prediction method can allow the model to capture the interactions between agents and facilitate the planning task.", "Therefore, in our proposed framework, we employ the multi-agent prediction setting and provide each agent a local vectorized map that shows the possible paths and nearby crosswalks.", "We utilize Transformer modules to model the interactions between agents and their relations to different elements on the local map." ], [ "Motion planning", "Motion planning is a long-researched area and there are a variety of approaches such as trajectory optimization, graph search, random sampling, and more recently learning-based methods.", "Learning-based methods exploit neural networks as driving policies that generate actions or trajectories directly from sensor inputs or perception results, such as deep imitation learning [13], [14] and reinforcement learning [15], [16], [17].", "Though simple and efficient, the neural network-based motion planner suffers from poor interpretability and generalization capability and also lacks stability and safety guarantees.", "Therefore, we turn to classic motion planning methods [18] and a popular choice is the optimization-based method as it is flexible and achieves maximal control of the trajectory.", "In particular, we employ a differentiable least-squares nonlinear optimizer as the motion planner, so that the optimizer can be integrated into the neural framework and its parameters (e.g., cost function weights) can be learned at the same time.", "In contrast to other planning methods that often work in static environments or assume a perfect prediction of surrounding obstacles, our proposed approach integrates the prediction of the surrounding agents and jointly trains the predictor and planner to achieve better performance of robustness." 
], [ "Integrated prediction and planning", "To incorporate the AV's effects on other agents in the planning process, one attempt is to generate a set of candidate trajectories for the AV, and the prediction of other agents is conditioned on the AV's trajectories.", "The best planning trajectory could be selected after evaluating these candidate plans and corresponding prediction results [19], [20], [21], [22].", "However, the drawback is that generating candidate trajectories to cover possible actions and ensure optimality is computationally intensive and also constraints the flexibility of the trajectory.", "An alternative method is to directly predict the cost map or occupancy map of the environment across the short future [23], [24], [25], but the high-dimensional grid is not necessary for road-like structured environments and would significantly slow down the inference and planning speed.", "Another promising approach is the end-to-end holistic neural network model that implicitly captures the prediction-planning interactions, such as SceneTransformer [4], which jointly outputs a motion plan for the AV and predicted trajectories for other agents.", "Although end-to-end models enjoy enhanced accuracy and simple inference, they cannot explicitly compute the feedback loop between planning and prediction and thus cannot guarantee safety.", "Therefore, some other methods have tried to couple the prediction model with classic optimization methods because of their interpretability and reliability.", "For example, MATS predicts the parameters of mixtures of affine time-varying systems [26], which are utilized by a model predictive controller (MPC) for motion planning.", "Though efficient due to a small number of parameters, MATS does not incorporate the scene context in forecasting, and its performance is compromised.", "In this paper, we propose to output the trajectories of surrounding agents using a neural network, and combine it with differentiable optimization steps that explicitly consider ride comfort, safety, and traffic rules, making the feedback between planning and prediction differentiable and enhancing both safety and human driving similarity." 
], [ "Problem formulation", "We formulate the driving scene with the AV and a varying number of diverse interacting traffic participants as a continuous-space discrete-time system, where the AV is denoted as $A_0$ and other agents are denoted as $A_1, \\dots , A_{N}$ .", "Each agent including the AV has a semantic class (i.e., vehicle, bicycle, or pedestrian), and its state at time $t$ is denoted as $\\mathbf {s}_t^i$ , where $i$ is the agent index.", "We also introduce the scene context, such as a vectorized high-definition map and traffic light signals, to the system and denote it as $\\mathbf {M}_t$ .", "At time $t$ , given the historical states of all agents for the previous $H$ timesteps $\\mathbf {X}_t = \\mathbf {s}^{0:N}_{t-H:t}$ and the scene context $\\mathbf {M}_t$ , the predictor needs to predict $K$ possible joint future trajectories of all agents for the next $T$ timesteps $\\lbrace \\mathbf {Y}_k|k=1,\\cdots ,K\\rbrace $ , $\\mathbf {Y}_k = \\mathbf {\\hat{s}}^{0:N}_{t+1:T}$ and the probability of each future $\\lbrace p_k|k=1,\\cdots ,K\\rbrace $ .", "The planner needs to optimize the AV's future trajectory $\\mathbf {\\hat{s}}^{0}_{t+1:T}$ according to the prediction results and cost function.", "Accordingly, the proposed framework consists of two parts, as shown in Fig.", "REF .", "First, we build up a holistic neural network to embed the history states of agents and scene context into high-dimensional spaces, encode the interactions between agents and the scene context using Transformer modules, and finally decode different future trajectories and their probabilities.", "Second, we employ a differentiable optimizer as a motion planner to explicitly plan a future trajectory for the AV according to the most-likely prediction result and initial motion plan.", "Since the motion planner is differentiable, the gradient from the planner can be back-propagated to the prediction module and cost function, making the whole framework end-to-end learnable.", "Figure: The proposed integrated prediction and planning framework.", "The neural network predictor is utilized to predict the future states of surrounding agents and initial guess for the motion planner; and the differentiable motion planner with learnable cost function is employed to explicitly plan an AV trajectory.", "All the components are connected and end-to-end differentiable." 
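For illustration only (names and dimensions are our assumptions, not from the paper's code), the hand-off between the two modules can be pictured as selecting the highest-probability joint future and splitting it into the AV's initial plan and the neighbours' predicted trajectories, which are then passed to the differentiable optimizer:

```python
import numpy as np

N, T, K = 10, 50, 3                            # assumed: neighbours, horizon steps, futures
futures = np.random.randn(K, N + 1, T, 3)      # K joint futures for AV (index 0) + N agents: x, y, heading
probs = np.array([0.2, 0.5, 0.3])              # predicted probability of each joint future

def select_future(futures, probs):
    """Return the AV's initial plan and the neighbours' predictions from the most likely future."""
    best = int(np.argmax(probs))
    initial_plan = futures[best, 0]            # (T, 3) initial guess handed to the optimizer
    neighbour_pred = futures[best, 1:]         # (N, T, 3) fed into the planner's safety cost
    return initial_plan, neighbour_pred

initial_plan, neighbour_pred = select_future(futures, probs)
```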
], [ "Multi-agent prediction and initial plan", "Input representation.", "There are two types of input representations to the network: historical agent states and scene context.", "For each agent, its historical state is a sequence of dynamic features for the past and current timesteps, including its 2D position, heading angle, velocity, and bounding box size.", "We consider the nearest ten agents around the AV, whose observation data is stored in a fixed-size tensor with the missing agents padded as zeros.", "For the scene context, we consider two types of map elements, which are lanes and crosswalks.", "Each map element is presented by a vector, which consists of a sequence of waypoints with different associated features.", "For each agent, we build a local scene context, which encompasses the possible lanes it might take and surrounding crosswalks.", "The features of the waypoint in the drivable lane include the positions and headings of the lane center and lane boundaries, as well as the speed limit and traffic signals (e.g., traffic light state and stop sign), and the features of the waypoint in the crosswalk are the positions of points enclosing the crosswalk area.", "All the agents' and map elements' positional attributes are translated to the AV's local coordinate system.", "Encoding agent history and scene context.", "To encode an agent’s observed historical states, we fed the states into long short-term memory (LSTM) networks.", "Each type of agent shares an LSTM encoder, and all agents including the AV are stacked to form a tensor of agents' encoded historical states.", "The local scene context encoder consists of a lane encoder for processing the lanes and a crosswalk encoder for processing the crosswalk vectors.", "The lane encoder uses a multi-layer perceptron (MLP) to encode numeric features (e.g., positions, directions, and speed limits) and embedding layers to encode discrete features (e.g., traffic light state, lane type, and stop sign), while the crosswalk encoder is another MLP to encode the numeric features.", "For each agent, we find its nearby six lanes and four crosswalks as map vectors, encode them separately, and concatenate the encoded map vectors to form a tensor of the agent's local scene context.", "Modeling agent-agent and agent-scene interactions.", "To model the interactions between the agents, we first abstract the relations of agents as a fully connected graph (i.e., all the nodes in the graph are connected with each other), where nodes represent agents and edges represent their relations.", "We use a two-layer self-attention Transformer encoder [27] as the agent-agent interaction encoder to process the graph, where the query, key, and value are the encoded agents' historical trajectory features.", "With each agent's encoded local scene context in hand, we employ two cross-attention Transformer encoders as the agent-map interaction encoder: one is for modeling the agent's attention on each lane vector (focusing on waypoints in the vector), and the other is for agent's attention on each crosswalk vector.", "We utilize an agent's interaction feature as query, and a single map vector (a sequence of encoded waypoints) as key and value.", "The operation is called multiple times to process all the map vectors from an agent, resulting in a sequence of agent-map vectors attention features.", "Then we introduce a multi-modal attention Transformer encoder [7], which is essentially an ensemble of cross-attention modules, to output different relations between the 
agent and map vectors.", "We still use an agent's interaction feature as query, and agent-map vectors attention as key and value, yielding different encodings of the agent's relationship with the local scene context.", "Likewise, we apply the multi-modal encoder to all agents to extract their possible interactions with the scene context and stack the results along the future axis.", "Decoding multi-modal future.", "For each agent, its historical state and agent-agent interaction encodings are repeated and concatenated with the multi-modal agent-map interaction encoding to form a final feature vector.", "For the surrounding agents, we decode their possible future trajectories through a shared MLP from the final feature vector.", "For the AV, we decode its future control actions (i.e., acceleration and steering angle) from the feature vector through an MLP, which are translated to trajectory by a kinematic model.", "We also output multiple possible trajectories for the AV to better model the interaction between agents.", "To predict the probability of each future (joint trajectories for all agents), we use max-pooling to aggregate the information from all agents and map vectors and an MLP to decode the probability.", "The motion planner will take as input the future (i.e., initial plan and other agent's prediction) with the highest probability." ], [ "Problem statement", "The control variables for the AV are $\\mathbf {u}_t = \\lbrace a_t, \\delta _t \\rbrace $ , where $a_t$ is acceleration and $\\delta _t$ is steering angle.", "The state variable for an agent is $\\mathbf {s}_t = \\lbrace x_t, y_t, \\theta _t, v_t\\rbrace $ , where $(x_t, y_t)$ is 2D position, $\\theta _t$ is heading angle, and $v_t$ is velocity.", "For the AV, its state is updated according to the kinematic bicycle model [28] in Eq.", "REF given the vehicle control input $\\mathbf {u}_t$ .", "$\\begin{aligned}v_{t+1} &= v_t + a_t \\Delta t, \\\\x_{t+1} &= x_t + v_t \\cos (\\theta _t) \\Delta t, \\\\y_{t+1} &= y_t + v_t \\sin (\\theta _t) \\Delta t, \\\\\\theta _{t+1} &= \\theta _t + \\frac{v_t}{L} \\tan (\\delta _t) \\Delta t,\\end{aligned}$ where $L$ is the wheelbase of the vehicle and $\\Delta t$ is the time interval.", "The kinematic model is differentiable, thus permitting optimization for both the network and nonlinear optimizer.", "Formally, we consider an open-loop finite-horizon optimal control problem with time horizon $T$ in Eq.", "REF .", "Given an initial state of the system $\\mathbf {x}_{init}$ , we need to optimize a sequence of control inputs $\\mathbf {u}_{1:T} = \\lbrace \\mathbf {u}_1, \\mathbf {u}_{2}, \\cdots , \\mathbf {u}_{T} \\rbrace $ according to the cost function $\\mathbf {c}(\\mathbf {x}_{1:T}, \\mathbf {u}_{1:T}; \\mathbf {w})$ based on the sequence of future states $\\mathbf {x}_{1:T} = \\lbrace \\mathbf {x}_1, \\mathbf {x}_{2}, \\cdots , \\mathbf {x}_{T} \\rbrace $ .", "The state variable $\\mathbf {x}_t$ consists of states of the AV and other surrounding agents.", "$\\begin{split}& \\min _{\\mathbf {x}_{1:T}, \\mathbf {u}_{1:T}} \\sum _{t=1}^T \\mathbf {c}_t(\\mathbf {x}_t, \\mathbf {u}_t; \\mathbf {w}_t), \\\\s.t.", "\\ & \\mathbf {x}_{t+1} = f(\\mathbf {x}_t, \\mathbf {u}_t), \\ \\mathbf {x}_1 = \\mathbf {x}_{init},\\end{split}$ where $\\mathbf {w}_t$ is the cost function parameter (potentially time-varying), $f$ is a predictive model of the system, in which the states of the AV are updated according to the control actions and kinematic model, while other agents' future state sequences 
are computed by the prediction neural network.", "The predictive model and cost function can be learned by differentiating through the optimization problem [29], [30] with the aim of imitating human driving trajectories.", "Specifically, we formulate the cost function as the sum of squared objectives and convert the problem to non-linear least-squares optimization, which can be solved using iterative gradient-based methods." ], [ "Cost function", "The cost function is a linear combination of various carefully crafted terms that encode different aspects of driving including travel efficiency, ride comfort, traffic rules, and most importantly safety.", "The details of the different cost terms are given below.", "Travel efficiency.", "We encourage the AV to reach the destination as fast as possible but not exceed the speed limit.", "Therefore, the cost of travel efficiency is defined as the difference between the AV's speed $v_t$ and the speed limit: $\mathbf {c}_t^{speed} = v_t - v_{limit},$ where $v_{limit}$ is the speed limit of the lane.", "Ride comfort.", "Human drivers prefer comfortable maneuvers, and we introduce four sub-costs to represent the ride comfort factors the AV needs to optimize.", "They are longitudinal acceleration $a_t$ and longitudinal jerk $\dot{a}_t$ , as well as steering angle $\delta _t$ and steering change rate $\dot{\delta }_t$ for lateral stability and comfort.", "$\begin{aligned}\mathbf {c}_t^{acc} = a_t, \\\mathbf {c}_t^{jerk} = \dot{a}_t, \\\mathbf {c}_t^{steer} = \delta _t, \\\mathbf {c}_t^{rate} = \dot{\delta }_t,\end{aligned}$ where $\dot{a}_t = \frac{\Delta a_t}{\Delta t}$ and $\dot{\delta }_t = \frac{\Delta \delta _t}{\Delta t}$ are the discrete forms of longitudinal jerk and steering rate, respectively.", "Traffic rules.", "The AV is expected to adhere to the traffic rules and structure of the road.", "Thus, to promote staying close to the lane (route) centerline and following the lane direction, two sub-costs are designed.", "$\begin{aligned}\mathbf {c}_t^{pos} = p_t - p_{l, \perp }, \\\mathbf {c}_t^{head} = \theta _t - \theta _{l, \perp },\end{aligned}$ where $p_t$ and $p_{l, \perp }$ are the positions of the AV and of the closest point on the lane's centerline to the AV, respectively, and $\theta _t$ and $\theta _{l, \perp }$ are the heading angles of the AV and of its closest point on the lane, respectively.", "In addition, obeying traffic lights should be treated as a hard constraint for the AV.", "Here, we replace the hard constraint with a soft penalty term, which can be assigned a large cost weight.", "We assume the AV runs along a predefined route and its running distance is $s_t$ , which is derived from $s_t=\sum _{t^{\prime }=1}^t v_{t^{\prime }} \Delta t$ , and the stop line position (if encountering a red light) on the route is $s_{stop}$ .", "We can formulate the cost of violating traffic lights using the hinge loss: $\mathbf {c}_t^{traffic} = {\left\lbrace \begin{array}{ll} s_t - s_{stop}, & s_t \ge s_{stop} \\ 0, & \text{otherwise} \end{array}\right. }.$ It means that if the AV goes past the stop line at a red light, a large penalty will be incurred, thus forcing the AV to stop near the stop line.", "Safety.", "Keeping a safe distance from other traffic participants on the road to avoid collision is a fundamental requirement for AVs.", "However, optimizing the safe distances to all other agents in the scenario is unnecessary and time-consuming.", "Therefore, we take advantage of the
Frenet frame [31], [32], which decouples the vehicle trajectory into the longitudinal direction along a predefined driving route and the lateral direction perpendicular to the route, to define the interactive agents.", "As illustrated in Fig.", "REF , all other agents' positions are projected to the Frenet frame with the AV's route as the reference path.", "At each future timestep, those agents whose predicted positions are within the route's conflict area are defined as the interactive agents.", "The calculation of safe distance at a specific timestep is given as: $d_{safe} = \min _{i} \parallel p_t - p_{t}^i \parallel _2,$ where $p_{t}^i$ is the predicted position of the interactive agent $i$ at future timestep $t$ .", "We also employ the hinge loss to formulate the safety cost to ensure the safe distance is large enough.", "$\mathbf {c}_t^{safety} = {\left\lbrace \begin{array}{ll} \epsilon - d_{safe}, & d_{safe} \le \epsilon \\ 0, & \text{otherwise} \end{array}\right. },$ where $\epsilon $ is the minimum safety distance requirement, which is defined as the sum of the lengths of the two agents and a safety gap.", "Figure: Illustration of the calculation of safe distance.", "Other agents are first projected to the Frenet frame to find the interacting agents, and the calculation of distances is in the Cartesian frame.", "Overall.", "The overall cost function can be formulated as the sum of squares of all the cost terms weighted by their corresponding weights across the time horizon.", "Therefore, the objective function for the motion planner is: $\min \ \frac{1}{2} \sum _{t} \sum _{i} \parallel w_{t}^{i} \mathbf {c}_t^i \parallel ^2,$ where $w_{t}^{i}$ is the cost weight for feature $i$ at time $t$ .", "Note that we can fix the weights for feature $i$ at every timestep to make the cost function time-invariant, which is the case in this paper.", "Because the computation for the safety cost is expensive, we only consider the trajectory waypoints at $[0.1, 0.3, 0.6, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]$ seconds, which could significantly speed up the optimization without compromising safety."
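To make the preceding model and cost definitions concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of one kinematic bicycle step from the Problem statement and of the two hinge-type penalties above; the wheelbase, time interval, and all function names are placeholders.

```python
import torch

def bicycle_step(state, control, L=3.0, dt=0.1):
    # state = (x, y, theta, v), control = (a, delta); L and dt are placeholder values
    x, y, theta, v = state.unbind(-1)
    a, delta = control.unbind(-1)
    x_new = x + v * torch.cos(theta) * dt
    y_new = y + v * torch.sin(theta) * dt
    theta_new = theta + v / L * torch.tan(delta) * dt
    v_new = v + a * dt
    return torch.stack([x_new, y_new, theta_new, v_new], dim=-1)

def traffic_light_cost(s, s_stop):
    # hinge penalty: nonzero only when the AV passes the stop line at a red light
    return torch.clamp(s - s_stop, min=0.0)

def safety_cost(p_av, p_agents, eps):
    # hinge penalty on the minimum distance to the interactive agents at one timestep
    d_safe = torch.linalg.norm(p_av - p_agents, dim=-1).min()
    return torch.clamp(eps - d_safe, min=0.0)
```

All three functions are differentiable, which is what allows them to be embedded in the end-to-end training described in the following sections.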
], [ "Differentiable optimization", "The Gauss-Newton algorithm is utilized to solve the nonlinear optimization problem [30].", "It is an iterative least-squares optimization approach, where at each iteration step $i$ , the cost function is first linearized at the current estimation of the control variables using first-order approximation: $\\mathbf {c}(\\mathbf {u}) \\approx \\mathbf {c}(\\mathbf {u}^{(i)}) + \\mathbf {J}_{\\mathbf {c}}(\\mathbf {u}^{(i)}) \\Delta \\mathbf {u},$ where $\\mathbf {J}_{\\mathbf {c}}(\\mathbf {u}^{(i)})$ is the Jacobian matrix.", "The problem now reduces to a minimization of the squared sums of the right-hand side of Eq.", "REF , which is a linear least-squares problem.", "We can explicitly solve the linear system using Cholesky decomposition of the normal equations to obtain the update step $\\Delta \\mathbf {u}$ , and eventually update the control variables: $\\mathbf {u}^{(i+1)} = \\mathbf {u}^{(i)} + \\alpha \\Delta \\mathbf {u}.$ where $0 < \\alpha \\le 1$ is the step size to decrease the update and thus mitigate divergence.", "Since the above procedure is fully differentiable, we can set up other differentiable components (e.g., prediction and cost function weights) and integrate them into the planner, enabling the whole architecture differentiable.", "Moreover, to start the optimization, one has to provide an initial guess for the control variables, which is crucial to the convergence to the global optimum, and we use the control actions given by the prediction network as the initial guess, which is also differentiable." ], [ "Learning process", "At a high level, we regard the motion planner as the primary component of our framework.", "During the forward pass, the motion planner takes as input the prediction results of other agents and the initial plan (with the highest probability), as well as the cost function, to start the trajectory optimization and iteratively updates the planned trajectory until passing a convergence check or reaching the maximum number of iterations.", "At the end of the optimization, the motion planner will output a trajectory for the AV to follow.", "In the backward pass, we evaluate the loss function on the planned trajectory, compute gradients with respect to the initial plan, cost function weights, as well as the predicted trajectories of other agents, and back-propagate the loss through the neural components and update the parameters, in order to generate better quality trajectories.", "Since all operations in the framework are differentiable, we can train the entire framework in an end-to-end fashion from real-world driving data, and we implement the framework using PyTorch and Theseus [33].", "The loss function for training the framework encompasses four terms: imitation loss and planning cost for the AV, prediction loss for the other agents, as well as the score loss to predict the probabilities of different futures.", "The overall loss is defined as: $\\mathcal {L} = \\lambda _1 \\mathcal {L}_{imitation} + \\lambda _2 \\mathcal {L}_{cost} + \\lambda _3 \\mathcal {L}_{prediction} + \\lambda _4 \\mathcal {L}_{score},$ where $\\lambda _1$ , $\\lambda _2$ , $\\lambda _3$ , and $\\lambda _4$ are the weights to scale the different loss terms.", "For imitation and prediction, we only back-propagate the smooth L1 losses on the trajectories through the individual future that most closely matches the ground truth: $\\begin{aligned}\\mathcal {L}_{imitation} & = \\sum _{k=1}^K \\mathbb {1}(k=\\hat{k}) \\ \\text{smooth}{L_1} (\\xi 
^k_{AV} - \\xi _{AV}^{gt}),\\\\\\mathcal {L}_{prediction} &= \\sum _{k=1}^K \\mathbb {1}(k=\\hat{k}) \\sum _{i=1}^N \\text{smooth}{L_1} (\\xi ^k_i - \\xi _i^{gt}),\\end{aligned}$ where $\\mathbb {1}$ is the indicator function, $\\hat{k}$ is the index of the joint future of multiple agents with the smallest L2 distance to the ground truth, and $\\xi _i^{gt}$ is the individual ground-truth trajectory.", "The score loss is defined as: $\\mathcal {L}_{score} = \\sum _{k=1}^K \\mathbb {1}(k=\\hat{k}) \\log p_k,$ where $p_k$ is the probability of future $k$ ." ], [ "Dataset", "We train and validate our framework on a large-scale real-world driving dataset, Waymo open motion dataset [34], focusing on urban driving scenarios with complex and diverse road structures and traffic interactions.", "The dataset contains 104,000 unique driving scenes (each is 20 seconds in duration) with detailed map data and agent tracks.", "We choose 10,156 various driving scenes from the dataset, and 80% of them are used for training and 20% for validation and testing.", "We segment the 20-second long driving scene into several frames, and each frame contains 7-second object tracks (2s of the past and 5s of the future) at 10Hz and map data for the area.", "The planning horizon is 5 seconds, which means the framework is tasked to predict the surrounding agents’ future trajectories and plan the AV's trajectory 5 seconds into the future.", "The total number of frames for training is 113,622 and 28,396 for validation and open-loop testing." ], [ "Evaluation", "We evaluate the performance of the framework in both open-loop and closed-loop manners.", "In the open-loop testing, the motion planner plans a trajectory for the AV based on the current states and prediction results at each frame, and we compare the planned trajectory against the AV's ground-truth trajectory.", "In the closed-loop testing, we construct a log-replay simulator, where at each step, the AV will take the first action from its planned trajectory and update its state, while the neighboring agents will follow their record trajectories in the dataset.", "We set up the following metrics to evaluate the performance of the framework.", "Safety: the collision rate is employed to measure the safety performance of the system.", "For open-loop testing, collision is calculated based on the AV's planned trajectory and the ground-truth future trajectories of other agents.", "If a collision is detected at any step in the plan, then the frame is deemed as a collision.", "For closed-loop testing, we check if the AV collides with other agents at every time step during the simulation.", "Traffic rule violation: we consider deviating from the route and passing over the stop line at a right light as traffic rule violations, and we calculate the total number of frames where the AV violates the traffic rules.", "Human driving similarity: we use the position errors between the planned trajectory in open-loop testing or rollout trajectory in closed-loop testing and the ground truth to quantify the human likeness.", "We calculate the position errors at different timesteps (i.e., 1s, 3s, 5s in open-loop testing and 3s, 5s, 10s in closed-loop testing).", "Vehicle dynamics: three metrics are introduced to quantify rider comfort and feasibility of the planned trajectory, which are longitudinal acceleration and jerk, as well as lateral acceleration.", "They are absolute values averaged over time in a scene and will be compared against human driving metrics.", "Prediction: we 
calculate the average displacement error (ADE) and final displacement error (FDE) of the multi-agent joint trajectories from the most-likely future to reflect the performance of the prediction module." ], [ "Comparison baselines", "We compare against the following baseline or ablated methods to reveal the effects and advantages of the proposed framework.", "Vanilla imitation learning: based on the same scene context, we use only imitation learning to train the network to directly generate the trajectory for the AV.", "Imitation learning with prediction sub-task: in addition to the imitation learning-based planner, we train the network with the sub-task of predicting other agents' future trajectories.", "This baseline corresponds to the holistic model method.", "Planning with separate prediction: we separately train a prediction module without the motion planner and use a planner with the trained prediction model and a pre-specified cost function in testing.", "This baseline corresponds to the sequential prediction and planning method." ], [ "Network", "The network outputs three possible futures (joint trajectories of all agents) and their probabilities.", "For other agents, we predict the displacements relative to their current locations instead of their coordinates, which shows significant improvement in prediction accuracy.", "The learning of the cost function is also incorporated into the network and implemented by a small MLP, which takes a fixed dummy input and outputs the weights of the cost function.", "We fix the cost weights for traffic light violations and collisions at large values (larger by more than two orders of magnitude) because they are hard constraints." ], [ "Training", "We use a batch size of 32 and an Adam optimizer with a learning rate that starts from 2e-4 and decays by a factor of 0.5 every 4 epochs.", "The total number of training epochs is 20 and we pre-train the framework for 5 epochs without the planner to get a good initial prediction result and control actions.", "For speed considerations, we set the max number of iterations for the motion planner to 2, which will also encourage the network to produce high-quality initial plans.", "The step size for the Gauss-Newton update is $0.4$ , and the weights for the loss function are set to $\lambda _1 = 1$ , $\lambda _2 = 0.001$ , $\lambda _3 = 0.5$ , and $\lambda _4 = 1$ .", "We also clip the gradient norm of the network parameters to a maximum norm of 5." ], [ "Testing", "In the testing process, the max number of iterations for the motion planner is set to 50, the step size is set to $0.2$ , and the absolute error tolerance is set to 1e-2, in order to plan a high-quality trajectory for the AV.", "Acceleration and jerk in the longitudinal and lateral directions are calculated based on the positions and headings on the trajectory.", "To check if a collision happens, we use a list of circles to approximate an object at each timestep.", "If the distance between the centers of any two circles belonging to the two objects is lower than a threshold, the objects are considered to have collided."
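For concreteness, the Gauss–Newton update described in the Differentiable optimization section, combined with the testing-time settings above (step size 0.2, at most 50 iterations, tolerance 1e-2), could be sketched as follows. This is an illustrative outline, not the authors' implementation; `residual_fn` is a placeholder for the stacked weighted cost residuals, and the small diagonal damping is added only for numerical safety.

```python
import torch

def gauss_newton(u0, residual_fn, max_iters=50, alpha=0.2, tol=1e-2):
    # Minimal sketch of min_u 0.5 * ||c(u)||^2 via Gauss-Newton (illustrative only)
    u = u0.clone()
    for _ in range(max_iters):
        c = residual_fn(u)                                        # stacked cost residuals c(u)
        J = torch.autograd.functional.jacobian(residual_fn, u)    # Jacobian J_c(u)
        A = J.T @ J + 1e-9 * torch.eye(u.numel())                 # normal equations (lightly damped)
        b = -J.T @ c
        du = torch.cholesky_solve(b.unsqueeze(-1), torch.linalg.cholesky(A)).squeeze(-1)
        u = u + alpha * du                                        # damped update, 0 < alpha <= 1
        if du.norm() < tol:                                       # convergence check
            break
    return u
```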
], [ "Open-loop test", "Fig.", "REF shows two representative scenarios where the neural network predictor makes multi-future predictions, as well as the probabilities of the predicted futures.", "In Scenario 1, the AV consistently chooses to stop behind the leading vehicle in all futures, while different interacting behaviors among the two agents in the intersection area are predicted.", "The future (joint trajectories) closest to ground truth is assigned with a higher probability.", "In Scenario 2, the predictor delivers different future predictions for both the AV and the other interacting agent at an unsignalized intersection.", "For example, the AV would choose to go if the other agent is predicted to turn left and stop if the other agent's trajectory is predicted to conflict with the AV's route.", "The close-to-ground truth futures are also assigned with higher probabilities.", "These results indicate that the neural network predictor is able to provide accurate multi-modal joint predictions for all agents in the scene and a good initial guess for the motion planner.", "Figure: The multi-modal predictions given by the neural network predictor.", "The colored solid lines are predicted trajectories and black dotted lines are ground-truth trajectories.", "The red box stands for the AV, the colored boxes represent the ten nearest agents to the AV, and other agents are shown in black boxes.Table: Open-loop evaluation of the proposed method against baseline methodsFig.", "REF displays some typical interactive scenarios, demonstrating the proposed framework's ability to plan a safe, traffic rule-compliant trajectory for the AV, according to the prediction results of other surrounding agents' future states (including vehicles, cyclists, and pedestrians).", "The results reveal that the motion planner is capable of handling a variety of urban driving scenarios with a mixture of interacting traffic participants, including making a smooth right turn, yielding to pedestrians on the crosswalk, stopping at the red light, yielding to another vehicle at an unprotected left turn, etc.", "In Fig.", "REF , we compare the proposed method with other baseline methods in two interactive scenarios.", "We can see that without explicit constraints, the neural network-planned trajectory gradually deviates from the lane centerline, while the motion planner's output trajectory adheres to the road structure.", "Our proposed method can generate planning-aware or planning-centric prediction results, which can help the planner deliver more close-to-human trajectories compared to using separate prediction results.", "For example, in the merging scenario, our method predicts that the interacting vehicle should yield to the AV at a stop sign, delivering more accurate prediction and planning results.", "In the interaction with a cyclist, our method predicts that the cyclist may go in front of the AV (however not really happen) and thus plans a slow-down trajectory, which is in accordance with the human-like one.", "It indicates that although the prediction result is not accurate, the planning module would benefit from such planning-centric results and produce human-like plans.", "The prediction of other agents in our framework is analogous to the belief of the driver's mind, rather than the reflection of exact world dynamics.", "Figure: Qualitative results of the proposed framework in open-loop testing.", "The colored solid lines are the planned or predicted trajectories for AV or surrounding agents, and black dotted 
lines are the ground truth trajectories.", "Figure: Qualitative comparison between the proposed method and baseline methods in open-loop testing.", "Top: interacting with a vehicle; bottom: interacting with a cyclist.", "Following the previously defined evaluation metrics, we list the quantitative results of our method and other baseline methods in Table REF .", "The results of open-loop testing indicate that the proposed method significantly outperforms the imitation learning-based methods in terms of collision and traffic rule violation rates.", "The imitation learning-based methods have a much higher off-route rate due to the lack of explicit constraints on following the route, and without considering the ride comfort, they yield an unacceptable lateral acceleration, which is significantly worse than the planning-based methods and human driving.", "On the other hand, the planning-based methods deliver much lower acceleration and jerk, which ensures smoothness and ride comfort.", "Because the neural network is only trained to predict the trajectories by direct regression, the imitation learning method with the multi-agent prediction sub-task achieves the smallest planning error (compared with the ground-truth trajectories).", "However, since imitation learning methods ignore other important factors, they perform very poorly in the closed-loop test and barely finish a task (see the next subsection).", "One major conclusion from the results is that our proposed method, which jointly trains the predictor and planner, outperforms the planner with a separately trained prediction module, in terms of both planning and prediction errors.", "It reinforces our claim that the proposed method can produce planning-centric prediction results, which improve planning performance and also prediction accuracy.", "Another interesting finding is that though the vanilla imitation learning method performs the worst, adding the prediction sub-task can significantly reduce the collision rate and planning error, which shows the importance of multi-agent joint prediction.", "Table: Closed-loop evaluation of the proposed method against baseline methods" ], [ "Closed-loop test", "To fully reveal the planning performance of our proposed method, we conduct a closed-loop test with 100 replayed scenes from the testing set.", "Specifically, we build a log-replay simulator to roll out the AV's planned trajectory, where other agents follow their original tracks in the dataset and only the AV's state gets updated.", "At each timestep, the AV plans a trajectory according to the prediction result and takes the first actions from its planned trajectory, and it replans at the next time step.", "The simulation horizon is 15 seconds and the interval is 0.1 seconds.", "To evaluate the closed-loop testing performance, we add a progress metric, which measures the running distance of the AV until it reaches the end of the scene, collides with other agents, or goes off route.", "We measure the position error between the AV's rollout trajectory and its ground-truth one at several time steps to reflect the similarity to the human driver.", "The results of the closed-loop test are given in Table REF and we provide the supplementary videos (https://mczhi.github.io/DIPP/) of our method navigating in a variety of urban scenarios for better visualization and evaluation of our method.", "Note that the collision rate is just a lower bound of safety because other agents do not react to the AV (e.g., if the ego vehicle drives slower than the human in the
data).", "The results in Table REF indicate that the imitation learning-based methods cannot finish a single scene without causing a collision or going off route.", "Although the planning error of the IL+prediction method is the smallest in open-loop testing, the position errors of IL-based methods are significantly worse.", "The ride comfort is also compromised as IL-based methods deliver much higher jerk and acceleration.", "This is because the IL-based methods suffer from distributional shifts, which means the compounding errors lead the AV to a situation that deviates from the training distribution and degrade its performance.", "On the other hand, the planning-based methods leveraging structural information and domain knowledge perform significantly better despite learning purely from an offline dataset.", "The motion planner with a separate trained prediction module shows similar yet worse performance compared to our proposed method, which highlights the advantages of integrated training of planning and prediction modules.", "The supplementary videos encompass various driving scenes where the proposed framework can handle different tasks such as cruising, obeying traffic lights, making smooth turns, and more importantly interacting with other road users.", "We also add some scenarios where other vehicles would take adversarial actions toward the AV (not intentionally as the AV has deviated from its original track), but our planning framework can handle such emergencies and avoid collisions." ], [ "Ablation study", "To unveil the importance and function of each key component in our proposed framework, an ablation study is conducted.", "We set up three baselines in which the learnable components (i.e., initial plan, cost function, and prediction) are dropped out one at a time.", "For the non-learnable cost function baseline, the cost function used by the motion planner is manually tuned to balance the different cost terms.", "For the non-learnable initialization baseline, the initial guess to the motion planner is fixed as $(a_t=0, \\delta _t=0)$ at all timesteps.", "For the non-learnable prediction baseline, the motion prediction module is a constant turn rate and velocity model.", "The results of the ablation study are summarized in Table REF and Table REF .", "In open-loop testing, we select the planning and prediction errors as the main metrics, and the planning error is averaged over three timesteps (1s, 3s, and 5s) and ADE is used as the prediction error.", "In closed-loop testing, we employ the failure rate (sum of the collision and off-route rates), progress, and average position error (3s, 5s, and 10s) as the main metrics.", "Table: Ablation study on the importance of each component in open-loop testingTable: Ablation study on the importance of each component in closed-loop testingThe open-loop testing results in Table REF indicate that learnable initialization is the most important part of the framework to ensure the planned trajectory close to the human driving one.", "This suggests that a good initial plan for the planner is critical to solving the optimization and learnable initialization could substantially improve the planning performance.", "Another important factor for planning is the cost function, and we demonstrate that learning the cost function from data can render the planned trajectories closer to humans.", "Surprisingly, using only a physics-based non-learnable prediction model would not significantly deteriorate the planning performance in an open-loop setting 
though the prediction error is notably higher.", "The closed-loop testing results in Table REF reveal that the prediction module plays a more important role in ensuring safety and human likeness.", "The gap between open-loop and closed-loop performance arises because the closed-loop testing set contains many interactive scenarios, where accurately predicting other agents' future states is required.", "In addition, learnable initialization and cost function are necessary to stabilize the planning process (solving the optimization problem) and obtain better planning performance; without them, the optimizer can struggle to find viable solutions, with consequently worse safety and reliability.", "In summary, all three learnable components in our framework are integral to maintaining the final planning performance." ], [ "Discussions", "The direct benefit of our method is to integrate structural information and domain knowledge into machine learning models, or from a different perspective, to apply machine learning models to some hard-to-solve points in traditional optimization problems.", "We apply our method to the challenging autonomous driving task and focus on solving real-world problems (e.g., how to specify the cost function and how other agents' actions would affect the ego) by learning from real-world data.", "The integration of prediction, cost function, and planning modules in our framework leads to end-to-end, decision-focused learning that trains the whole pipeline to directly optimize the final planning performance.", "Experiments demonstrate that the proposed method has excellent open-loop and closed-loop performance, as well as robustness against distributional shifts and adversarial agents, in spite of only learning from offline data.", "In addition, the proposed method outperforms the separated planning and prediction method in both planning and prediction performance, which underscores the advantage of integrating the prediction and planning modules in an end-to-end differentiable way.", "Nonetheless, some limitations of this work should be acknowledged.", "First is the computation time of the framework.", "The average runtime of the prediction+planning step is 0.11s per sample at training (with NVIDIA RTX 3080 GPU, 2 iterations) and 0.98s at testing (with AMD 3900X CPU, max 50 iterations).", "The bottleneck in the framework is the computation of the Jacobian in solving the least-squares optimization.", "However, we need to note that this work is just a prototype of the proposed framework and the running efficiency has not been optimized.", "We plan to further improve the efficiency of the computation pipeline to reduce the runtime in solving the optimization, potentially enabling the framework to run in near real time.", "Another limitation is that we do not thoroughly consider the uncertainty or multi-modality of prediction, only taking the most probable one as the planner input.", "In future work, the planner can take all modalities and uncertainties along the timesteps into account to make more informed decisions or perform contingency planning."
], [ "Conclusions", "We propose a differentiable integrated multi-agent interactive prediction and motion planning framework.", "A Transformer-based predictor is established to predict joint future trajectories of surrounding agents and provide an initial guess for the planner.", "The predicted trajectories, initial plan, and learnable cost function are channeled to an optimization-based differentiable motion planner, allowing every component in the framework to be differentiable and trained end-to-end from real-world driving data.", "The framework is validated with a large-scale urban driving dataset in both open-loop and closed-loop manners.", "The open-loop testing results reveal that our proposed method shows better planning and prediction performances compared to the traditional pipeline method and better ride comfort and safety metrics than IL-based methods.", "The closed-loop testing results indicate that planning-based methods significantly outperform IL-based methods (though have better similarity to human trajectories in open-loop testing) in terms of all metrics, which suggests that our method overcomes the distributional shift problem that is common in offline learning.", "Moreover, the proposed method outperforms the pipeline method in closed-loop testing, which emphasizes the benefit of joint training of prediction and planning modules.", "In the ablation study, we find that all learnable components in the framework are integral to maintaining the stability of the optimizer and desired planning performance.", "The proposed framework could have a broad impact on the deployment of autonomous driving decision-making systems by enabling all components learnable from real-world data and thus accelerating the development process." ] ]
2207.10422
[ [ "Active platform stabilisation with a 6D seismometer" ], [ "Abstract We demonstrate the control scheme of an active platform with a six degree of freedom (6D) seismometer.", "The inertial sensor simultaneously measures translational and tilt degrees of freedom of the platform and does not require any additional sensors for the stabilisation.", "We show that a feedforward cancellation scheme can efficiently decouple tilt-to-horizontal coupling of the seismometer in the digital control scheme.", "We stabilise the platform in the frequency band from 250 mHz up to 10 Hz in the horizontal degrees of freedom and achieve a suppression factor of 100 around 1 Hz.", "Further suppression of ground vibrations was limited by the non-linear response of the piezo actuators of the platform and by its limited range (5 {\\mu}m).", "In this paper we discuss the 6D seismometer, its control scheme, and the limitations of the test bed." ], [ "Introduction", "The LIGO [1] and Virgo [2] detectors have made a number of gravitational wave detections from massive compact objects [3], [4], [5], [6].", "Sources of these waves range from two recent neutron star black hole systems [7], and binary black holes [8], [9], [10], with one detection of an intermediate mass black hole of mass $\\sim 150\\,\\rm M_{\\odot }$  [10].", "A multimessenger event was also observed from a binary neutron star merger which verified localisation and decreased the false alarm rate of the detection [11].", "Low frequency sensitivity of the detectors determine the likelihood of observing more massive systems such as intermediate mass black hole binaries between $100-1000M_\\odot $ aswell as providing early warning signals.", "The merger time of binary systems scale with frequency as $f^{-8/3}$ , enabling opportunities for multimessenger detections.", "For the LIGO detectors, these signals are cloaked by the non-stationary control noise of the isolation scheme of the core optics [12], [13], [14].", "The LIGO isolation scheme consists of a four stage pendulum suspended from state of the art two stage twelve axis platforms for the detectors' core optics [15], [16], [17].", "Despite the orders of magnitude suppression achieved, the angular controls for the core optics limit the detectors' sensitivity below 30 Hz [18], [19].", "Improved sensing of the isolated platforms would reduce the input motion to the suspension chain, reducing the injection of noise from the local damping on the optics.", "Suppression of platform tilt is limited by the lack of absolute rotation sensors on the platforms.", "The platform tilt also plagues the translational readout with an unfavourable coupling of $g/\\omega ^2$  [20], [21],where $g$ is the local gravitational acceleration and $\\omega $ the angular frequency.", "Investigations into improved sensing of the platforms are being explored by a number of groups who develop novel inertial sensors.", "Krishna Venkateswara at the University of Washington has employed the out of vacuum beam rotation sensor (BRS) [22] at LIGO for feedforward correction of translational sensors.", "The University of Washington is also developing an in vacuum cylindrical rotation sensor (CRS).", "The University of Western Australia have developed the ALFRA rotational accelerometer which has the advantage of multi-orientation such that it can also be mounted vertically [23].", "Optical gyroscopes have also been investigated at Caltech and MIT which make use of the Sagnac effect to measure absolute rotation [24], [25].", "Further 
improvements to low noise translational inertial sensing have been demonstrated by the Nikhef and VU groups in Amsterdam [26], and the Belgium China collaboration [27], [28] with custom interferometric inertial sensors.", "In this paper we present an initial version of the 6D seismometer detailed in [29].", "The basis behind the design is a softly suspended extended reference mass which is read out in six degrees of freedom (6D).", "Unlike the inertial sensors discussed above, the approach utilises a simple mechanical design which allows cross couplings.", "Complexity is moved to the signal processing where the degrees of freedom must be untangled.", "We demonstrate the viability of the device for use in feedback by stabilising a rigid isolated platform in six degrees of freedom.", "First we discuss the experimental design, and then move through the control scheme, indicating the performance achieved and the shortcomings of the test bed used." ], [ "Optomechanical design", "The seismometer consists of a single extended reference mass suspended from a fused silica fibre [30], [31].", "Optical shadow sensors known as Birmingham Optical Sensors and Electromagnetic Motors (BOSEMs) [32] were employed for the readout scheme, which measured the relative displacement between the proof mass and the platform.", "The test bed was a rigid stabilisation platform which was actuated using six piezo legs in a hexapod style formation.", "The experimental set up is shown in Fig.", "REF and experimental parameters, highlighting the resonant frequencies of the proof mass, are summarised in Table REF .", "Ideally the eigenmodes of the mass should be as low as possible to enable inertial sensing to lower frequencies.", "The stiffest degree of freedom in our setup is the vertical one and the corresponding eigenfrequency of its bounce mode is 10 Hz.", "The other two translational degrees of freedom were softer with eigenmodes of 0.62 Hz.", "The eigenfrequencies were determined by the fibre length which was constrained by the height of the vacuum chamber.", "Resonant frequencies for the tilt modes (RX, RY) were tuned to 100 mHz and 90 mHz by compensating the elastic restoring coefficient of the fibre with the gravitational anti-spring.", "The distance between the effective pivot point of the wire and the centre of mass, $d$ , enabled tuning of the effective restoring torque as indicated in Eq.", "(REF ) [33], $\centering \omega _{\rm X}^2 \approx \frac{g}{L}, \qquad \omega _{\rm RY}^2 \approx \frac{m g d + k_{el}}{I_y},$ where $m$ is the mass, $k_{el}$ is the elastic restoring coefficient, and $I_y$ is the moment of inertia about the y-axis.", "Table: A list of parameters and nominal values", "The soft angular modes of the system result in large oscillations which ring down over extended periods of time.", "In particular, the ring down time of the torsion mode (RZ) is several months.", "In order to maintain the BOSEM sensors within their linear regime, we implemented damping loops on the seismometer's resonant modes using coil-magnet pairs.", "The damping loops actuated directly on the mass in narrow frequency bands around its resonances and reduce the mass motion down to the $\sim \mu $ m level.", "Fig.", "REF shows the damped signals using the BOSEM actuation with no control of the platform.", "Large translational motion in X leaks into the other degrees of freedom, which can be seen from the presence of the microseism and resonant peak at 0.62 Hz.", "Reduction of the X (Y) platform motion diminishes this
effect as the platform tracks the motion of the proof mass.", "Experimental investigations into the BOSEM sensing and actuation noise found that the stiffest mode (Z) was limited by sensor noise below 10 Hz, and that the digital-to-analog converter noise from our control system dominates the RZ motion below 10 mHz.", "The following sections discuss the control strategy used to stabilise the actuated platform and the issues faced when controlling a multi degree of freedom system.", "We then look at the achieved performance and suggest improvements for the system." ], [ "Control strategy", "In this section, we discuss the stabilisation technique of the actuated platform relative to the 6D seismometer.", "First, we present our solution to the control problem.", "We found that the key element for the successful stabilisation is the feedforward subtraction of the measured longitudinal signals (X and Y) from the tilt signals (RY and RX).", "Second, we discuss the control problem that is relevant to the class of actuated platforms with cross couplings between different degrees of freedom on the level of $\approx 1$ %." ], [ "Diagonalisation of the tilt modes", "In the case of a symmetric fibre neck and mass, the circular cross section results in an infinite number of principal axes, resulting in no preferential axes around which the tilt motion occurs.", "This was initially assumed and an arbitrary direction for the X and Y axes was chosen.", "We discovered a discrepancy between the tilt resonances such that $f_{\rm RX} \ne f_{\rm RY}$ .", "Investigations determined that asymmetry in the fibre neck, where bending occurs, gave rise to two perpendicular principal axes around which tilting occurred.", "The asymmetry resulted in non-identical elastic restoring constants, $k_{el}$ , for RX and RY, where the frequency splitting of the modes was further exacerbated by the tunable gravitational restoring torque, $mgd$ .", "The degrees of freedom were determined using a sensing matrix, $\mathbf {S}$ , which converted the six BOSEM signals, $\vec{B}$ , into the six degrees of freedom, $\vec{X}$ , such that $\vec{X} = \mathbf {S}\vec{B}$ .", "The preferential axes for tilt caused coupling of the RX eigenmode into the sensed RY motion (and RY to RX).", "Analysis of the individual BOSEM signals allowed us to determine the angular misalignment of our original axes compared to the principal axes due to the fibre asymmetry.", "A rotation matrix, $\mathbf {R}$ , was implemented to align the sensing with the eigenmodes of the principal axes, $\vec{X}_{eig}$ , such that $\centering \vec{X}_{eig} = \mathbf {R}\vec{X} = \mathbf {R}\mathbf {S}\vec{B}.$ Similar to the sensing matrix, the platform actuation was set to align its principal rotation axis with the 6D seismometer."
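A minimal numerical illustration of this sensing diagonalisation (not the measured calibration; the sensing matrix, DOF ordering, and misalignment angle below are placeholders) might look like:

```python
import numpy as np

# Illustrative sketch only: rotate the sensed tilt DOFs onto the principal axes.
phi = np.deg2rad(12.0)                      # hypothetical misalignment of the chosen X/Y axes
iRX, iRY = 3, 4                             # assumed positions of RX and RY in the DOF vector
R = np.eye(6)
R[np.ix_([iRX, iRY], [iRX, iRY])] = [[np.cos(phi), -np.sin(phi)],
                                     [np.sin(phi),  np.cos(phi)]]

S = np.eye(6)                               # placeholder sensing matrix (BOSEM signals -> DOFs)
B = np.random.randn(6)                      # six BOSEM readouts
X_eig = R @ S @ B                           # DOFs aligned with the principal tilt axes
```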
], [ "Horizontal-to-tilt decoupling", "The platform causes movement of the suspension frame and the test mass which is shown in Fig.", "REF .", "However, the test mass is considered to be inertial above the pendulum resonant frequencies.", "The coupling of platform motion, $X_{\\rm P}$ and $RY_{\\rm P}$ , to the sensor outputs, $X$ and $RY$ , can be written as $\\begin{split}& X = T(f) X_{\\rm P} + L \\times RY_{\\rm P} \\hspace{28.45274pt}, \\\\& RY = K(f) X_{\\rm P} + RY_{\\rm P},\\end{split}$ where $L$ is the fibre length.", "X and RY as well as Y and RX are intrinsically coupled by the pendulum.", "Transfer functions $T(f)$ and $K(f)$ are determined by the pendulum and pitch resonances and are discussed in details in [33].", "According to Eq.", "(REF ), the coupling of X and RY and, similarly, Y and RX degrees of freedom is frequency dependent.", "Therefore, we implement a filter to diagonalise the degrees of freedom as shown in Fig.", "REF .", "We found that the control system requires the subtraction of X (Y) from RY (RX), hence a 2x2 diagonalisation is necessary for stability.", "We determined the feedforward filter by solving Eq.", "(REF ) relative to $X_{\\rm P}$ and $RY_{\\rm P}$ .", "Since the solutions are given by the equations $\\begin{split}& X_{\\rm P} = \\frac{1}{T-L K}\\left(X - L \\times RY \\right), \\\\& RY_{\\rm P} = \\frac{T}{T-L K}\\left(RY - \\frac{K}{T}X \\right) ,\\end{split}$ the feedforward filter should be given by the equation $\\frac{K}{T} = \\frac{\\omega _{\\rm RY}^2}{-\\omega ^2 + \\frac{i \\omega \\omega _{\\rm RY}}{Q_{\\rm RY}} + \\omega _{\\rm RY}^2}\\frac{1}{L} \\approx -\\frac{\\omega _{\\rm RY}^2}{\\omega ^2 L}$ at $\\omega \\gg \\omega _{\\rm RY}$ .", "However, during our experimental studies we found that $\\sim \\omega ^{-2}$ dependence is only valid up to $\\omega \\approx 10 \\omega _{\\rm RY}$ .", "At higher frequencies, the transfer function flattens due to the direct coupling of horizontal motion to our vertical sensors dedicated for RX and RY.", "Therefore, we fitted the feedforward filter to the transfer function $K/T + \\alpha $ , where $\\alpha $ is a small number on the order of $10^{-2}$ .", "The result of the feedforward cancellation is shown in Fig.", "REF ." 
], [ "Platform stabilisation", "Application of the feedforward scheme discussed above enabled successful stabilisation of the platform with 6 single-input-single-output loops.", "The upper unity gain frequency was constrained to 10 Hz due to the forest of mechanical resonances of the vacuum chamber and its supporting structure above 14 Hz.", "The resonances modify the actuation path of the feedback control scheme, and due to the large number of modes, it was implausible to digitally remove the resonances from all degrees of freedom.", "The bandwidths achieved for the angular modes were 70 mHz-10 Hz for the tilt modes (RX, RY), and 10 mHz-10 Hz for RZ.", "For the longitudinal degrees of freedom (X, Y) the bandwidth attained was 250 mHz-10 Hz, where the lower unity gain frequency was limited by the cross-couplings of the platform actuation between the X and Y degrees of freedom.", "Above 1 Hz, the cross-coupling between X and Y degrees of freedom is caused by the imperfect actuation diagonalisation matrix and is on the order of 1%.", "However, the coupling grows significantly towards lower frequencies making the response in X and Y to the excitation in X equal at 40 mHz.", "The large cross-coupling is caused by the tilt-to-horizontal coupling and imperfections of the actuation system: excitation in X also drives RX, resulting in the unpleasant $g/\\omega ^2$ tilt coupling into the Y degree of freedom.", "As a consequence, the open loop transfer function of the X degree of freedom is altered when control of Y is simultaneously enganged according to the equation $H_{\\rm mod} = H + \\frac{\\beta _x \\beta _y G^2}{1-H}.$ Here, $H = H_x = H_y$ is the open loop transfer function when stabilisation of only one degree of freedom (X or Y) is active, $G$ is the servo gain as shown in Fig.", "REF .", "The additional factor is proportional to the cross-coupling of the X degree of freedom to Y, $\\beta _x$ , and to the similar coefficient from Y to X, $\\beta _y$ .", "The additional factor increases the magnitude of the open loop transfer function and makes the closed loop behaviour unstable if the lower unity gain frequency of the feedback loop is below 90 mHz for $|\\beta _y| = |\\beta _x| = 10^{-2}$ .", "We could reduce the actuation imperfections $\\beta _x$ and $\\beta _y$ down to 0.3% by gain matching the piezo actuators.", "However, the hysteresis of the actuators causes time-dependent changes to the gains of the piezos depending on the control system.", "Since the actuation system is non-linear, we can not reduce the cross-coupling coefficients $\\beta _x$ and $\\beta _y$ to the levels below 1% consistently.", "As a result, we have reduced the control bandwidth in the X and Y degrees of freedom to avoid the instabilities caused by the actuation cross-couplings.", "However, we expect that the problem of non-linear cross-coupling between X and Y degrees of freedom is not present in the suspended active platforms utilised in LIGO [15]." 
], [ "Vibration isolation", "Non-linearity of the actuation path reduced the desired bandwidth of the feedback control system.", "However, high gain stabilisation of all six degrees of freedom was achieved once correct implementation of the feedforward scheme between X, RY and Y, RX was performed.", "For the 5 softer degrees of freedom this resulted in two orders of magnitude suppression around 1 Hz as shown in Fig.", "REF .", "Vertical suppression was limited due to the stiff resonant frequency, reducing the bandwidth over which stabilisation occurred.", "Below 1 Hz the actuation in Z was negligible due to non-inertial sensing which would result in sensor noise injection.", "Reduction of the resonant frequency can be achieved by suspending the system from a soft blade spring to reduce the bounce mode, or by increasing the tension on the fibre.", "The Glasgow group are currently developing higher stress fibres for use in third generation detectors [34].", "The majority of the sensed low frequency motion came from the translational modes, X and Y, and were dominated by the microseismic motion between $0.2\\,\\rm Hz$ and the 0.62 Hz resonant peaks.", "The large motion leaked into the other degrees of freedom and can be seen by the red reference traces (no stabilisation) in Fig.", "REF due to the imperfections of the sensing scheme.", "Implementation of the feedforward scheme described in Sec.", "REF suppressed the coupling into the tilt modes by an order of magnitude in the frequency band from 0.1 to 1 Hz (Fig.", "REF ).", "The error signals in Fig.", "REF show the achievable isolation for the current system, however, this is limited by the BOSEM readout noise highlighted by the magenta traces.", "Further broadband suppression down to the readout noise level was constrained by the limited bandwidth for all degrees of freedom.", "Improved readout noise would enable the isolation to be solely limited by the bandwidth and the issues discussed in Sec.", "REF .", "We have demonstrated the viability of stabilising a six axis platform using a 6D seismometer.", "The system was operated in high gain with a maximised bandwidth, providing simultaneous control of all six degrees of freedom.", "We were able to achieve isolation of more than an order of magnitude at 1 Hz for 5 of 6 degrees of freedom.", "We found the two key principles of the successful control strategy: sensing diagonalisation of the tilt modes and decoupling of the horizontal-to-tilt motion.", "The control techniques are a necessity to diagonalise the degrees of freedom involved in feedback control and to make the overall control system stable.", "The system can be further improved in three directions.", "First, the sensing noise of optical shadow sensors can be improved by two orders of magnitude using interferometric inertial sensors [35], [36], [37].", "Interferometric sensing has been employed to the system and is currently being optimised to reduce the readout noise.", "Second, the system is susceptible to drift motion for the angular degrees of freedom due to thermal gradients, stress relaxations in the fibre and in the metal proof mass.", "We have acquired a fused silica proof mass (discussed in [33]) which has the potential to reduce the drift motion of the suspended mass due to its low thermal expansion coefficient and lack of plastic deformations.", "Themal shielding is also being installed to further isolate the proof mass.", "Finally, the actuation of the platform can be improved by suspending it and using coil-magnets 
actuators similar to the LIGO platforms [15].", "As an intermediate step we may introduce Viton sheets to provide passive isolation and damp the high frequency resonant modes of our chamber.", "This will allow the upper unity gain frequency of the control loops to be increased, improving the achievable isolation." ], [ "Acknowledgements", "We thank Rich Mittleman for his valuable internal review and also members of the LIGO SWG groups for useful discussions.", "The authors acknowledge the support of the Institute for Gravitational Wave Astronomy at the University of Birmingham, STFC 2018 Equipment Call ST/S002154/1, STFC 'Astrophysics at the University of Birmingham' grant ST/S000305/1, STFC QTFP \"Quantum-enhanced interferometry for new physics\" grant ST/T006609/1.", "A.S.U is supported by STFC studentships 2117289 and 2116965.", "A.M contributed to the design of the coil magnet actuation scheme for damping of the test mass.", "A.M, J.V.D, and C.M.L are funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No.", "865816)." ] ]
2207.10417
[ [ "Long-duration Gamma-ray Burst and Associated Kilonova Emission from\n Fast-spinning Black Hole--Neutron Star Mergers" ], [ "Abstract Here we collect three unique bursts, GRBs\\,060614, 211211A and 211227A, all characterized by a long-duration main emission (ME) phase and a rebrightening extended emission (EE) phase, to study their observed properties and the potential origin as neutron star-black hole (NSBH) mergers.", "NS-first-born (BH-first-born) NSBH mergers tend to contain fast-spinning (non-spinning) BHs that more easily (hardly) allow tidal disruption to happen with (without) forming electromagnetic signals.", "We find that NS-first-born NSBH mergers can well interpret the origins of these three GRBs, supported by that: (1) Their X-ray MEs and EEs show unambiguous fall-back accretion signatures, decreasing as $\\propto{t}^{-5/3}$, which might account for their long duration.", "The EEs can result from the fall-back accretion of $r$-process heating materials, predicted to occur after NSBH mergers.", "(2) The beaming-corrected local event rate density for this type of merger-origin long-duration GRBs is $\\mathcal{R}_0\\sim2.4^{+2.3}_{-1.3}\\,{\\rm{Gpc}}^{-3}\\,{\\rm{yr}}^{-1}$, consistent with that of NS-first-born NSBH mergers.", "(3) Our detailed analysis on the EE, afterglow and kilonova of the recently high-impact event GRB\\,211211A reveals it could be a merger between a $\\sim1.23^{+0.06}_{-0.07}\\,M_\\odot$ NS and a $\\sim8.21^{+0.77}_{-0.75}\\,M_\\odot$ BH with an aligned-spin of $\\chi_{\\rm{BH}}\\sim0.62^{+0.06}_{-0.07}$, supporting an NS-first-born NSBH formation channel.", "Long-duration burst with rebrightening fall-back accretion signature after ME, and bright kilonova might be commonly observed features for on-axis NSBHs.", "We estimate the multimessenger detection rate between gravitational waves, GRBs and kilonovae from NSBH mergers in O4 (O5) is $\\sim0.1\\,{\\rm{yr}}^{-1}$ ($\\sim1\\,{\\rm{yr}}^{-1}$)." 
], [ "Introduction", "In observations, it is usually adopted that a critical duration of $T_{90}\\sim 2\\,{\\rm {s}}$ separates gamma-ray bursts (GRBs) into long- and short-duration populations , .", "Long-duration GRBs (lGRBs) have been identified to be originated from massive collapsar by their association with broad-line Type Ic supernovae , and their exclusive hosts in star-forming galaxies , .", "It has long been suspected that neutron star mergers, including binary neutron star (BNS) and neutron star–black hole (NSBH) mergers, are potential origins of short-duration GRBs , , , .", "Due to natal kicks impacted to the binaries at birth and long inspiral delays before mergers, NS mergers are believed to occur in low-density environments with significant offsets away from the centers of their host galaxies , supported by observations , , .", "NS mergers can release an amount of neutron-rich matter , , that allows elements heavier than iron to be synthesized via the rapid neutron-capture process (r-process).", "It was predicted that the radioactive decay of these r-process nuclei would power an ultraviolet-optical–infrared thermal transient named “kilonova” , .", "The smoking-gun evidence for the BNS merger origin of sGRB and kilonova was the multimessenger observations of the first BNS merger gravitational-wave (GW) source GW170817 detected by the LIGO/Virgo Collaboration and subsequent associated electromagnetic (EM) signals, including an sGRB GRB 170817A triggered by the Fermi Gamma-ray Burst Monitor , , , , a broadband jet afterglow from radio to X-ray with an off-axis viewing angle , , , , , and a fast-evolving kilonova transient , , , , , , , , .", "With the confirmation of the origin of sGRB and kilonova from the BNS merger population, one may especially expect to further establish the connection between NSBH mergers and their associated EM counterparts.", "However, although two high-confidence NSBHs (i.e., GW200105 and GW200115) and a few marginal NSBH GW candidates were detected during the third observing run of LVC , , , EM counterparts by the follow-up observations of these GWs were missing , , , , , , except an amphibious association between a subthreshold GRB GBM-190816 and a subthreshold NSBH event , .", "One plausible explanation for the lack of detection of an EM counterpart is that present EM searches were too shallow to achieve distance and volumetric coverage for the probability maps of LVC events , , .", "Furthermore, detailed studies on these NSBH candidates , , , , , revealed that they were more likely to be plunging events and could hardly produce any bright EM signals owing to near-zero spins of the primary BHs, since NSBH mergers tend to make tidal disruptions and drive bright EM counterparts if the primary BHs have high aligned-spins , , , , .", "Due to the lack of smoking-gun evidence, it is unclear whether NSBH mergers can contribute to the sGRB population .", "On the one hand, the majority of NSBH binaries are believed to originate from the classic isolated binary evolution scenario (involving a common-envelope) , , , .", "In this scenario, the primary BHs are usually born first and have negligible spins consistent with the properties of LVC NSBH candidates , .", "Conversely, if the NSs are born first, the progenitors of the BHs would be tidally spun up efficiently by the NSs in close binaries (orbital periods $\\lesssim 2\\,{\\rm d}$ ) and finally form fast-spinning BHs .", "A fractional of these NS-first-born NSBH systems formed in close binaries can merge 
within Hubble time.", "Therefore, compared with BH-first-born NSBH mergers, NS-first-born NSBH mergers can more easily undergo tidal disruption and drive bright GRB emissions.", "Because NS-first-born NSBH mergers may only account for $\lesssim 20\%$ of NSBH populations , , , GRB populations contributed by NSBH mergers should be limited.", "On the other hand, most disrupted NSBH mergers can eject much more material and lead to more powerful fall-back accretion than BNS mergers , .", "Furthermore, $r$ -process heating might affect the fall-back accretion of marginally bound matter .", "A late-time fall-back accretion of these materials may happen tens of seconds after the merger if the remnant BH has a mass of $\gtrsim 6-8\,M_\odot $ .", "Because the remnant BHs of most NSBH mergers have masses in this range, it was suggested that an extended emission (EE) caused by the fall-back accretion of $r$ -process heating materials can be an important signal to distinguish NSBH GRBs from BNS GRBs.", "Thus, it is plausible that the energy budgets, durations, and other observed properties of NSBH GRBs could differ from those of BNS mergers.", "Very recently, the observations of an lGRB (i.e., GRB 211211A) associated with a kilonova emission at a redshift $z = 0.0763$ (luminosity distance $D_{\rm L} \approx 350\,{\rm Mpc}$ ) were reported by a few groups , , , , , , .", "The burst was characterized by a spiky main emission (ME) phase with a duration of $\sim 13\,{\rm {s}}$ , an EE phase lasting $\sim 55\,{\rm {s}}$ , and a temporal lull between these two phases.", "Since the observational properties of its associated kilonova emission were similar to those of AT2017gfo, it was suggested that the burst could happen in another spatially nearby galaxy at a higher redshift.", "The near infrared emission following GRB 211211A could also be thermal emission from dust, heated by UV radiation produced by the interaction between the jet plasma and the circumstellar medium, rather than a kilonova emission.", "With the observations , , indicating an origin of a compact binary coalescence, it was a challenge to interpret the intrinsically long duration of the burst.", "It was proposed that a merger of a near-equal-mass NS–white dwarf binary can well explain the ME of GRB 211211A, since the accretion of some high-angular-momentum white dwarf debris onto the remnant NS can prolong the burst duration.", "It was also suggested that a strong magnetic flux may surround the central engine of GRB 211211A, which results in a long-lasting accretion process due to the magnetic barrier effect , .", "Besides GRB 211211A, two other redshift-known ($z$ -known) lGRBs, i.e., GRB 060614 and GRB 211227A, were proposed to derive from compact binary coalescences.", "GRB 060614 , , was found to be associated with a kilonova candidate , while GRB 211227A showed a large physical offset from the host center and lacked a supernova signature that should have been observed at the location of the burst .", "In this Letter, we study the properties of these three merger-origin lGRBs, especially GRB 211211A, and show that a single explosive population via the NS-first-born NSBH merger can account for their origin.", "Here, the cosmological parameters are taken as $H_0 = 67.4\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$ , $\Omega _{\rm m} = 0.315$ , $\Omega _\Lambda = 0.685$ ." ] ]
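As a small numerical cross-check of the numbers quoted above (illustrative only, not part of the authors' analysis): with the stated flat Lambda-CDM parameters, the luminosity distance at z = 0.0763 comes out close to the roughly 350 Mpc value quoted for GRB 211211A.

```python
import numpy as np

H0, Om, OL = 67.4, 0.315, 0.685          # cosmology quoted at the end of the Introduction
c = 299792.458                           # speed of light [km/s]
z = 0.0763                               # redshift of GRB 211211A

zz = np.linspace(0.0, z, 10001)
Ez = np.sqrt(Om * (1.0 + zz)**3 + OL)    # dimensionless Hubble rate for flat Lambda-CDM
Dc = (c / H0) * np.trapz(1.0 / Ez, zz)   # comoving distance [Mpc]
Dl = (1.0 + z) * Dc                      # luminosity distance [Mpc]
print(round(Dl))                         # ~3.6e2 Mpc, consistent with the ~350 Mpc quoted above
```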
2207.10470
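The introduction above quotes $z = 0.0763$ as corresponding to a luminosity distance $D_{\\rm L} \\approx 350\\,{\\rm Mpc}$ under the stated flat $\\Lambda$CDM parameters ($H_0 = 67.4\\,{\\rm km\\,s^{-1}\\,Mpc^{-1}}$, $\\Omega_{\\rm m} = 0.315$). As a quick cross-check, the minimal sketch below reproduces that conversion; it assumes the standard astropy cosmology interface and is only an illustrative aside, not part of the paper's analysis.

```python
# Minimal sketch: redshift -> luminosity distance for GRB 211211A under the
# flat Lambda-CDM parameters quoted in the text (H0 = 67.4 km/s/Mpc, Om = 0.315).
# Assumes astropy is available; nothing here comes from the paper beyond z, H0, Om.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.4, Om0=0.315)   # Omega_Lambda = 1 - Om0 = 0.685
z = 0.0763
print(cosmo.luminosity_distance(z))          # ~350-360 Mpc, consistent with the quoted ~350 Mpc
```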
[ [ "Frozen spin ratio and the detection of Hund correlations" ], [ "Abstract We propose a way to distinguish Hund systems from conventional Mott ones by identifying key signatures (presumably detectable in experiments) of Hund physics at both one- and two-particle levels.", "The defining feature is the {\\it sign} of the derivative of the correlation strength with respect to electron filling.", "The underlying physics behind this proposal is that the response (either increase or decrease) of the correlation strength with respect to the occupation change is closely related to Hund fluctuations: ferromagnetic charge fluctuations between the lowest-energy atomic multiplets and excited high-spin ones in a neighboring charge subspace.", "It is the predominance of these fluctuations that promotes Hund metallicity.", "We provide analysis using multiorbital Hubbard models and further corroborate our argument by taking La$_2$CuO$_4$ and Fe-pnictides as a representative material example, respectively, of Mott and Hund systems." ], [ "$Z$ and {{formula:5bde8860-f009-40e5-b545-c8bfe7c9de77}} for a degenerate-two-orbital model on a Bethe lattice", "Figure REF displays additional data for $Z$ vs. $p$ and $R_\\mathrm {s}^{-1}$ vs. $p$ at three different values of $U$ .", "One can notice from Fig.", "REF that the value of $J/U$ above which $\\partial Z/ \\partial p <0$ and $\\partial R_\\mathrm {s}^{-1}/ \\partial p <0$ increases as $U$ is increased; the Mott behavior becomes predominant even up to a fairly large value of $J/U$ .", "Thus, the effect of $J$ gets weakened as $U$ is increased.", "A rationale for this phenomenon can be drawn from the generic form of Kondo couplings obtained via the Schrieffer-Wolff transformation of multiorbital impurity models, which read $\\mathcal {J}_{i}/V^2 = \\mathcal {J}^{J=0}_i/V^2 + \\mathcal {O}({J}/{U^2}) + \\mathcal {O}({J^2}/{U^3}) + \\cdots $ [Eqs.", "(REF )–()].", "Here, $\\mathcal {J}^{J=0}_i$ refers to the Kondo couplings when $J/U=0$ .", "Thus, at a regime of $U \\gg J$ , $\\mathcal {J}_i \\simeq \\mathcal {J}^{J=0}_i$ by which the effect of Hund coupling $J$ on $\\mathcal {J}_i$ s becomes largely suppressed.", "Figure: ZZ and R s -1 R_\\mathrm {s}^{-1} as a function of pp for a degenerate-two-orbital model on a Bethe lattice at three different values of UU (U=2U=2, 3, and 4).", "n 0 =3n_0=3 for all the cases.", "We used a higher simulation temperature for the calculation of R s -1 R_\\mathrm {s}^{-1} (T=0.02T=0.02) than for ZZ (T=0.005T=0.005) so that 〈S z (τ)S z (0)〉| τ=1/(2T) ≫0 \\langle S_z(\\tau ) S_z(0) \\rangle \\vert _{\\tau =1/(2T)} \\gg 0." 
], [ "$Z$ and {{formula:44963121-96c4-4eb8-827b-9d917e022f5e}} for a degenerate-two-orbital model on a square lattice", "In the main text, we mainly focus on an infinite-dimensional Bethe lattice with semielliptical density of states (DOS) in order to focus on generic features rather than material specific ones.", "In realistic lattices such as a square lattice, a van Hove singularity (vHS) can exist near the Fermi level.", "This singularity features a divegence in the DOS [Fig.", "REF (d)], largely affecting the strength of electron correlations by effectively suppressing low-energy hopping processes; see, e.g., Refs [6], [32], [29], [38] for related discussions.", "Thus, one may ask whether this vHS have any effects on the signs of $\\partial Z/ \\partial p$ and $\\partial R_\\mathrm {s}^{-1}/ \\partial p$ .", "Figure REF presents $Z$ and $R_\\mathrm {s}^{-1}$ as a function of $p$ for a square lattice with nearest-neighbor (NN) and next-nearest-neighbor (NNN) hopping amplitudes, $t=0.25$ and $t^{\\prime }=0.08$ , respectively.", "Namely, the kinetic part of our Hamiltonian for the square lattice reads $H_\\mathrm {K} = -t\\sum _{\\langle ij \\rangle ,\\sigma } d^{\\dagger }_{i\\sigma }d_{j\\sigma } - t^{\\prime }\\sum _{\\langle \\langle ij \\rangle \\rangle ,\\sigma } d^{\\dagger }_{i\\sigma }d_{j\\sigma }$ , where $\\langle ij \\rangle $ and $\\langle \\langle ij \\rangle \\rangle $ denotes, respectively, the NN and NNN sites.", "Here, $D=1$ and $U=2.5$ .", "The results of Bethe lattice are also plotted for comparison.", "While the correlation strength itself is enhanced in the square lattice than in the Bethe lattice due to the presence of the vHS near the Fermi level, qualitatively the similar behavior is observed for both $Z$ and $R_\\mathrm {s}^{-1}$ : $\\partial Z/ \\partial p$ and $\\partial R_\\mathrm {s}^{-1}/ \\partial p$ change their signs from plus to minus by $J/U$ or by $p$ .", "Figure: The density of states, ZZ vs. pp, and R s -1 R_\\mathrm {s}^{-1} vs. pp for (a–c) an infinite-dimensional Bethe lattice and (d–f) a two-dimensional square lattice with NN and NNN hopping amplitudes, t=0.25t=0.25 and t ' =0.08t^{\\prime }=0.08, respectively, for degenerate two orbitals.", "D=1D=1, U=2.5U=2.5, and n 0 =3n_0=3 for both lattices.", "The vertical dashed lines in (a) and (d) denote the Fermi level for p=0p=0.", "We used a higher simulation temperature for the calculations of R s -1 R_\\mathrm {s}^{-1} (T=0.02T=0.02) than for ZZ (T=0.005T=0.005) so that 〈S z (τ)S z (0)〉| τ=1/(2T) ≫0 \\langle S_z(\\tau ) S_z(0) \\rangle \\vert _{\\tau =1/(2T)} \\gg 0." 
], [ "$R_\\mathrm {s}^{-1}$ for a degenerate-three-orbital model on a Bethe lattice", "A degenerate-three-orbital model has been served as a prototypical system for Hund metal physics [12], [8].", "Here, unlike the cases of degenerate-two-orbital models, $J$ lifts degeneracy of the ground state atomic multiplets even when $p=0$ , whereby $J$ strongly enhances the correlation strength by forming a large composite spin moment [10], [12], [8].", "Indeed, as can be clearly seen from Fig.", "REF , the correlation strength (as measured by $R_\\mathrm {s}^{-1}$ ) is largely enhanced at $p=0$ by turning on $J$ .", "At any rate, $\\partial R_\\mathrm {s}^{-1}/\\partial p$ changes its sign by $J$ and by $p$ as is discussed for two-orbital models; see Fig.", "REF .", "Note here that very low-$T$ calculations are required to reach the coherence temperature below which the long-lived quasiparticles are formed in three-orbital models [8], [47], which is computationally demanding for our DMFT calculations adoping a continuous-time quantum Monte Carlo algorithm.", "Hence, $Z$ may not be a good measure of the correlation strength even for the lowest temperature practically accessible within our computation scheme.", "In this respect, we present only $R_\\mathrm {s}^{-1}$ in Fig.", "REF ." ], [ "Kondo couplings from the Schrieffer-Wolff transformation", "Here, we derive Kondo couplings by applying the canonical Schrieffer-Wolff (SW) transformation to a relevant impurity model.", "We follow the strategy depicted in Refs.", "[14], [20], [22].", "Note first that the original lattice model is mapped onto the Anderson-type impurity model in DMFT.", "Let us thus write down the impurity Hamiltonian with $\\mathrm {SU}(2)_\\mathrm {spin} \\otimes \\mathrm {SU}(M)_\\mathrm {orbital}$ symmetry: $H_\\mathrm {imp} = H_\\mathrm {loc} + H_\\mathrm {bath} + H_\\mathrm {hyb},$ where $H_\\mathrm {loc} &= U\\sum _{\\eta }{n_{\\eta \\uparrow } n_{\\eta \\downarrow }}+ \\sum _{\\eta < \\eta ^{\\prime },\\sigma \\sigma ^{\\prime }}(U^{\\prime }-J\\delta _{\\sigma \\sigma ^{\\prime }}){ n_{ \\eta \\sigma } n_{ \\eta ^{\\prime } \\sigma ^{\\prime }}} + J\\sum _{\\eta \\ne \\eta ^{\\prime }}d^{\\dagger }_{\\eta \\uparrow } d^{\\dagger }_{\\eta ^{\\prime } \\downarrow } d_{\\eta \\downarrow } d_{\\eta ^{\\prime } \\uparrow } +\\sum _{\\eta ,\\sigma }(\\epsilon _\\eta - \\mu )n_{\\eta \\sigma }, \\\\H_\\mathrm {bath} &= \\sum _{\\mathbf {k}\\eta \\sigma }\\epsilon _{\\mathbf {k}\\eta } \\psi ^{\\dagger }_{\\mathbf {k}\\eta \\sigma }\\psi _{\\mathbf {k}\\eta \\sigma }, \\\\H_\\mathrm {hyb} &= \\sum _{\\eta \\sigma }V \\psi ^{\\dagger }_{\\eta \\sigma }d_{\\eta \\sigma } + \\mathrm {h.c.},$ Here, $\\psi ^{\\dagger }$ ($\\psi $ ) is the creation (destruction) operator for bath states.", "$U^{\\prime }=U-J$ .", "We will later get back to the form of Eq.", "(1) in the main text for Eq.", "(REF ).", "Our goal is, by integrating out valence fluctuations, to construct an effective Kondo model for low-energy physics: $H_\\mathrm {eff} = H_\\mathrm {int} + H_\\mathrm {bath},$ where $H_\\mathrm {int}$ is given by the SW transformation: $\\begin{split}H_\\mathrm {int} = &-P^{n_0} H_\\mathrm {hyb} \\Bigg \\lbrace \\sum _{i}\\frac{P^{n_0+1}_i}{E^{n_0+1}_i} + \\sum _{i}\\frac{P^{n_0-1}_i}{ E^{n_0-1}_i} \\Bigg \\rbrace H_\\mathrm {hyb} P^{n_0} \\\\= &-\\sum _{i,\\lbrace \\eta \\rbrace , \\lbrace \\sigma \\rbrace } \\frac{V^2}{E^{n_0+1}_i}\\psi ^{\\dagger }_{\\eta _1\\sigma _1} \\psi _{\\eta _2\\sigma _2} P^{n_0} d_{\\eta _3\\sigma _3} P^{n_0+1}_i 
d^{\\dagger }_{\\eta _4\\sigma _4} P^{n_0} \\delta _{\\eta _1 \\eta _3} \\delta _{\\eta _2 \\eta _4} \\delta _{\\sigma _1 \\sigma _3} \\delta _{\\sigma _2 \\sigma _4} \\\\& - \\sum _{i,\\lbrace \\eta \\rbrace , \\lbrace \\sigma \\rbrace } \\frac{V^2}{E^{n_0-1}_i}\\psi _{\\eta _1\\sigma _1} \\psi ^{\\dagger }_{\\eta _2\\sigma _2} P^{n_0} d^{\\dagger }_{\\eta _3\\sigma _3} P^{n_0-1}_i d_{\\eta _4\\sigma _4} P^{n_0} \\delta _{\\eta _1 \\eta _3} \\delta _{\\eta _2 \\eta _4} \\delta _{\\sigma _1 \\sigma _3} \\delta _{\\sigma _2 \\sigma _4},\\end{split}$ with $P^{n_0}$ being the projector to the atomic ground state multiplet (with eigenvalue $\\varepsilon ^{n_0}$ ) in charge $N=n_0$ subspace of Eq.", "(REF ).", "$P^{n_0\\pm 1}_i$ project onto the atomic multiplets of Eq.", "(REF ) having eigenvalues of $\\varepsilon ^{n_0\\pm 1}_i$ in charge $n_0\\pm 1$ subspaces.", "The subscript $i$ is the index for labeling different multiplets.", "$E^{n_0\\pm 1}_i \\equiv \\varepsilon ^{n_0\\pm 1}_i - \\varepsilon ^{n_0} $ refers to the charge excitation energy.", "Now, we use the following relation holding for $\\mathrm {SU}(M)$ symmetry: $\\delta _{il}\\delta _{kj} = \\frac{1}{M} \\delta _{ij}\\delta _{kl} + \\frac{1}{2}\\sum _{\\alpha }\\tau ^\\alpha _{ij}\\tau ^\\alpha _{kl},$ where $\\tau $ is the generator of the $\\mathrm {SU}(M)$ symmetric group, namely, $\\tau ^\\alpha $ corresponds to Pauli matrices ($\\sigma ^\\alpha $ ) for $M=2$ , and to Gell-Mann matrices for $M=3$ .", "We hereafter use Einstein summation convention for simplicity.", "Inserting Eq.", "(REF ) into Eq.", "(REF ) for both spin and orbital leads to the following form of $H_\\mathrm {int}$ : $H_\\mathrm {int} = \\mathcal {J}_\\mathrm {p} \\psi ^{\\dagger }_{\\eta \\sigma }\\psi _{\\eta \\sigma } + \\mathcal {J}_s S^\\alpha \\psi ^{\\dagger }_{\\eta \\sigma } \\Big ( \\frac{\\sigma ^\\alpha _{\\sigma \\sigma ^{\\prime }}}{2} \\Big ) \\psi _{\\eta \\sigma ^{\\prime }} + \\mathcal {J}_\\mathrm {o} T^\\alpha \\psi ^{\\dagger }_{\\eta \\sigma } \\Big ( \\frac{\\tau ^\\alpha _{\\eta \\eta ^{\\prime }}}{2} \\Big ) \\psi _{\\eta ^{\\prime }\\sigma } + \\mathcal {J}_\\mathrm {so} S^\\alpha T^\\beta \\psi ^{\\dagger }_{\\eta \\sigma } \\Big ( \\frac{ \\sigma ^\\alpha _{\\sigma \\sigma ^{\\prime }}}{2} \\frac{\\tau ^\\beta _{\\eta \\eta ^{\\prime }}}{2} \\Big ) \\psi _{\\eta ^{\\prime }\\sigma ^{\\prime }} $ $S^\\alpha = d^{\\dagger }_{\\eta \\sigma } (\\sigma ^\\alpha _{\\sigma \\sigma ^{\\prime }}/2) d^{\\dagger }_{\\eta \\sigma ^{\\prime }}$ and $T^\\beta = d^{\\dagger }_{\\eta \\sigma } (\\tau ^\\beta _{\\eta \\eta ^{\\prime }}/2) d^{\\dagger }_{\\eta ^{\\prime } \\sigma }$ are the spin and orbital operators for impurity degrees of freedom.", "$\\mathcal {J}_\\mathrm {p}$ , $\\mathcal {J}_\\mathrm {s}$ , $\\mathcal {J}_\\mathrm {o}$ , and $\\mathcal {J}_\\mathrm {so}$ are Kondo couplings for potential scattering, spin, orbital, and spin-orbit terms, which are given by: $\\mathcal {J}_\\mathrm {p} \\langle \\phi _0 | I_S \\otimes I_T | \\phi _0 \\rangle &= -\\frac{V^2}{2M}\\Bigg \\lbrace \\frac{\\langle \\phi _0 | d_{\\eta \\sigma } P^{n_0+1}_i d^{\\dagger }_{\\eta \\sigma } | \\phi _0 \\rangle }{E^{n_0+1}_i} - \\frac{\\langle \\phi _0 | d^{\\dagger }_{\\eta \\sigma } P^{n_0-1}_i d_{\\eta \\sigma } | \\phi _0 \\rangle }{E^{n_0-1}_i} \\Bigg \\rbrace , \\\\\\mathcal {J}_\\mathrm {s} \\langle \\phi _0 | S^{\\alpha } \\otimes I_T | \\phi _0 \\rangle &= -\\frac{V^2}{M} \\sigma ^{\\alpha }_{\\sigma \\sigma ^{\\prime }} \\Bigg \\lbrace 
\\frac{\\langle \\phi _0 | d_{\\eta \\sigma ^{\\prime }} P^{n_0+1}_i d^{\\dagger }_{\\eta \\sigma } | \\phi _0 \\rangle }{E^{n_0+1}_i} - \\frac{\\langle \\phi _0 | d^{\\dagger }_{\\eta \\sigma } P^{n_0-1}_i d_{\\eta \\sigma ^{\\prime }} | \\phi _0 \\rangle }{E^{n_0-1}_i} \\Bigg \\rbrace , \\\\\\mathcal {J}_\\mathrm {o} \\langle \\phi _0 | I_S \\otimes T^\\beta | \\phi _0 \\rangle &= -\\frac{V^2}{2} \\tau ^{\\beta }_{\\eta \\eta ^{\\prime }} \\Bigg \\lbrace \\frac{\\langle \\phi _0 | d_{\\eta ^{\\prime } \\sigma } P^{n_0+1}_i d^{\\dagger }_{\\eta \\sigma } | \\phi _0 \\rangle }{E^{n_0+1}_i} - \\frac{\\langle \\phi _0 | d^{\\dagger }_{\\eta \\sigma } P^{n_0-1}_i d_{\\eta ^{\\prime } \\sigma } | \\phi _0 \\rangle }{E^{n_0-1}_i} \\Bigg \\rbrace , \\\\\\mathcal {J}_\\mathrm {so} \\langle \\phi _0 | S^{\\alpha } \\otimes T^\\beta | \\phi _0 \\rangle &= -V^2 \\sigma ^{\\alpha }_{\\sigma \\sigma ^{\\prime }} \\tau ^{\\beta }_{\\eta \\eta ^{\\prime }} \\Bigg \\lbrace \\frac{\\langle \\phi _0 | d_{\\eta ^{\\prime } \\sigma ^{\\prime }} P^{n_0+1}_i d^{\\dagger }_{\\eta \\sigma } | \\phi _0 \\rangle }{E^{n_0+1}_i} - \\frac{\\langle \\phi _0 | d^{\\dagger }_{\\eta \\sigma } P^{n_0-1}_i d_{\\eta ^{\\prime } \\sigma ^{\\prime }} | \\phi _0 \\rangle }{E^{n_0-1}_i} \\Bigg \\rbrace .", "$ Here, $| \\phi _0 \\rangle $ denotes the atomic ground state multiplet in the charge $n$ subspace.", "Since the first term in Eq.", "(REF ) is irrelevant for dynamics of local moments [20], [22], we discard $\\mathcal {J}_\\mathrm {p}$ from our discussion.", "Table: Eigenstates and eigenvalues of Eq.", "() with ϵ 1 =ϵ 2 =0\\epsilon _1=\\epsilon _2=0 obeying SU (2) spin ⊗ SU (2) orbital \\mathrm {SU(2)}_\\mathrm {spin} \\otimes \\mathrm {SU(2)}_\\mathrm {orbital} symmetry.", "The first entry in a ket of an eigenstate is a state of orbital-1 and the second is of orbital-2.While the discussion below is valid for any $\\mathrm {SU}(2) \\otimes \\mathrm {SU}(M)$ models, let us focus on the case of two orbitals ($M=2$ ) with $n_0=3$ .", "For generic $\\mathrm {SU}(2) \\otimes \\mathrm {SU}(M)$ cases, refer to Ref. 
[20].", "Eigenstates and eigenvalues of Eq.", "(REF ) are listed in Table REF .", "We have a freedom to choose $| \\phi _0 \\rangle $ , $\\alpha $ , and $\\beta $ to evaluate Eqs.", "(REF )–().", "Hence, for convenience, $| \\phi _0 \\rangle = |\\uparrow ,\\uparrow \\downarrow \\rangle $ and $\\alpha =\\beta =3$ .", "Kondo couplings are now given by: $\\mathcal {J}_\\mathrm {s} &= \\frac{V^2}{2} \\Bigg ( \\frac{2}{E^{n_0+1}_{16}} - \\frac{2}{E^{n_0-1}_{6}} + \\frac{1}{E^{n_0-1}_{7}} + \\frac{1}{E^{n_0-1}_{10}} + \\frac{2}{E^{n_0-1}_{11}} \\Bigg ) = \\frac{V^2}{2} \\Bigg ( \\frac{2}{ E_0^{|4,0\\rangle }} -\\frac{1}{ E_0^{|2,1\\rangle }} + \\frac{3}{ E_0^{|2,0\\rangle }} \\Bigg ), \\\\\\mathcal {J}_\\mathrm {o} &= \\frac{V^2}{2} \\Bigg ( \\frac{2}{E^{n_0+1}_{16}} + \\frac{2}{E^{n_0-1}_{6}} + \\frac{1}{E^{n_0-1}_{7}} + \\frac{1}{E^{n_0-1}_{10}} - \\frac{2}{E^{n_0-1}_{11}} \\Bigg ) = \\frac{V^2}{2} \\Bigg ( \\frac{2}{ E_0^{|4,0\\rangle }} +\\frac{3}{ E_0^{|2,1\\rangle }} - \\frac{1}{ E_0^{|2,0\\rangle }} \\Bigg ), \\\\\\mathcal {J}_\\mathrm {so} &= 2V^2 \\Bigg ( \\frac{2}{E^{n_0+1}_{16}} + \\frac{2}{E^{n_0-1}_{6}} - \\frac{1}{E^{n_0-1}_{7}} - \\frac{1}{E^{n_0-1}_{10}} + \\frac{2}{E^{n_0-1}_{11}} \\Bigg ) = 2V^2 \\Bigg ( \\frac{2}{ E_0^{|4,0\\rangle }} +\\frac{1}{ E_0^{|2,1\\rangle }} + \\frac{1}{ E_0^{|2,0\\rangle }} \\Bigg ), $ where the subscript $i$ of $E^{n_0\\pm 1}_i$ ($\\equiv \\varepsilon ^{n_0\\pm 1}_i - \\varepsilon ^{n_0}_{13}$ ) refers to the index of the eigenstate in Table REF .", "In the second equalities of the above equations, we re-write terms using $E_k^{|N,S\\rangle }$ which denotes the excitation energy from the ground state $|3,1/2\\rangle $ to the excited atomic multiplet $|N,S \\rangle $ .", "The subscript $k$ ($k \\in \\lbrace 0,1,...\\rbrace $ ) refers to the $k$ -th lowest eigenvalue in the corresponding $|N,S\\rangle $ subspace.", "We henceforth restrict ourselves to a region where $E_k^{|N,S\\rangle }>0$ and $p>0$ .", "Table: Eigenstates and eigenvalues of Eq.", "(1) with U ' =U-2JU^{\\prime }=U-2J and ϵ 1 =ϵ 2 =0\\epsilon _1=\\epsilon _2=0.", "The first entry in a ket of an eigenstate is a state of orbital-1 and the second is of orbital-2.We now consider responses of these couplings due to changes in filling.", "To mimic the effect of a small increase in $p$ , let us consider a situation where $\\mu $ is slightly decreased by $d\\mu $ ($>0$ ), i.e., $\\mu \\rightarrow \\mu - d\\mu $ .", "The concomitant changes in $\\mathcal {J}_i$ s are given by: $-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {s}}{\\partial \\mu } \\Big ) d\\mu &= \\frac{V^2}{2} \\Bigg \\lbrace -\\frac{2}{ \\big (E^{|4,0\\rangle }_0\\big )^2 } -\\frac{1}{ \\big (E^{|2,1\\rangle }_0\\big )^2 } + \\frac{3}{ \\big (E^{|2,0\\rangle }_0\\big )^2 } \\Bigg \\rbrace d\\mu \\quad \\mathrm {for}~\\mathrm {spin}, \\\\-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {o}}{\\partial \\mu } \\Big ) d\\mu &= \\frac{V^2}{2} \\Bigg \\lbrace -\\frac{2}{ \\big (E^{|4,0\\rangle }_0\\big )^2 } +\\frac{3}{ \\big (E^{|2,1\\rangle }_0\\big )^2 } - \\frac{1}{ \\big (E^{|2,0\\rangle }_0\\big )^2 } \\Bigg \\rbrace d\\mu \\quad \\mathrm {for}~\\mathrm {orbital}, \\\\-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {so}}{\\partial \\mu } \\Big ) d\\mu &= 2V^2 \\Bigg \\lbrace -\\frac{2}{ \\big (E^{|4,0\\rangle }_0\\big )^2 } +\\frac{1}{ \\big (E^{|2,1\\rangle }_0\\big )^2 } + \\frac{1}{ \\big (E^{|2,0\\rangle }_0\\big )^2 } \\Bigg \\rbrace d\\mu \\quad \\mathrm {for}~\\mathrm {spin{\\text{-}}orbital}.", "$ When $J=0$ , 
$E_0^{|2,1\\rangle }$ is equal to $E_0^{|2,0\\rangle }$ .", "Thus, Eqs.", "(REF )–() are all positive implying that all the Kondo coupling constants evolve in a way to weaken the correlation strength.", "When $J>0$ and $p>0$ , on the other hand, $E_0^{|2,1\\rangle }$ is smaller than the other $E_i^{|N,S\\rangle }$ s. In this case, Eqs.", "(REF )–() are controlled mainly by the terms related to $E_0^{|2,1\\rangle }$ .", "Thus, we arrive at the following relations for $J>0$ : $-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {s}}{\\partial \\mu } \\Big ) d\\mu &\\approx -\\frac{1}{2}\\frac{V^2}{\\big (E^{|2,1\\rangle }_0\\big )^2 } d\\mu \\quad \\mathrm {for}~\\mathrm {spin}, \\\\-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {o}}{\\partial \\mu } \\Big ) d\\mu &\\approx + \\frac{3}{2}\\frac{V^2}{ \\big (E^{|2,1\\rangle }_0\\big )^2 } d\\mu \\quad \\mathrm {for}~\\mathrm {orbital}, \\\\-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {so}}{\\partial \\mu } \\Big ) d\\mu &\\approx + 2\\frac{V^2}{ \\big (E^{|2,1\\rangle }_0\\big )^2 } d\\mu \\quad \\mathrm {for}~\\mathrm {spin{\\text{-}}orbital}.$ The above relations indicate that only $\\mathcal {J}_\\mathrm {s}$ decreases due to Hund fluctuations as $p$ is increased.", "On the contrary, $\\mathcal {J}_\\mathrm {o}$ increases with $p$ favoring the screening of orbital degrees of freedom, consistent with the enhanced spin-orbital separation by $p$ for a finite $J$ in three-orbital models [22], [23].", "Table: Eigenstates and eigenvalues of Eq.", "(1) with U ' =U-2JU^{\\prime }=U-2J, ϵ 1 =Δ/2\\epsilon _1=\\Delta /2, and ϵ 2 =-Δ/2\\epsilon _2=-\\Delta /2.", "a≡-Δa\\equiv -\\Delta and b≡J 2 +Δ 2 b\\equiv \\sqrt{J^2+\\Delta ^2}.", "The first entry in a ket of an eigenstate is the state of orbital-1 and the second is of orbital-2.Having evidenced that the sign of $-(\\partial \\mathcal {J}_\\mathrm {s} / \\partial \\mu ) d\\mu $ is influenced by $J$ , we now consider Eq.", "(1) in the main text with $U^{\\prime }=U-2J$ for Eq.", "(REF ).", "In this case, only $\\mathrm {SU(2)}$ symmetry of spin is retained.", "Using eigenstates and eigenvalues listed in Table REF and Table REF , we get the following relations for $\\mathcal {J}_\\mathrm {s}$ : $\\mathcal {J}_\\mathrm {s} &=\\frac{V^2}{2} \\Bigg ( \\frac{2}{ E_0^{|4,0\\rangle }} -\\frac{1}{ E_0^{|2,1\\rangle }} + \\frac{2}{ E_0^{|2,0\\rangle }} + \\frac{1}{ E_1^{|2,0\\rangle }} \\Bigg ) \\;\\; \\mathrm {for}~\\Delta = 0, \\\\\\mathcal {J}_\\mathrm {s} &=\\frac{V^2}{2} \\Bigg ( \\frac{2}{ E_0^{|4,0\\rangle }} -\\frac{1}{ E_0^{|2,1\\rangle }} + \\frac{2J^2}{ (a+b)^2+J^2} \\frac{1}{ E_0^{|2,0\\rangle }} + \\frac{1}{ E_1^{|2,0\\rangle } } + \\frac{2(a+b)^2}{ (a+b)^2+J^2} \\frac{1}{ E_2^{|2,0\\rangle }} \\Bigg ) \\;\\; \\mathrm {for}~\\Delta > 0, $ where $a\\equiv -\\Delta $ and $b\\equiv \\sqrt{J^2+\\Delta ^2}$ .", "Applying the same procedure used for getting Eq.", "(REF ) for $J>0$ results in Eq.", "(4) and Eq.", "(5) in the main text: $-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {s}}{\\partial \\mu } \\Big ) d\\mu &\\approx -\\frac{1}{2}\\frac{V^2}{\\big (E^{|2,1\\rangle }_0\\big )^2 } d\\mu \\;\\; \\mathrm {for}~\\Delta = 0, \\\\-\\Big ( \\frac{\\partial \\mathcal {J}_\\mathrm {s}}{\\partial \\mu } \\Big ) d\\mu &\\approx -\\frac{1}{2} \\frac{V^2}{ \\big (E_0^{|2,1\\rangle } \\big )^2} d\\mu + \\frac{1}{\\big \\lbrace \\sqrt{1+(\\Delta /J)^2}-\\Delta /J \\big \\rbrace ^2+1} \\frac{V^2}{ \\big (E_0^{|2,0\\rangle } \\big )^2} d\\mu \\;\\; \\mathrm {for}~\\Delta > 0.", "$ For the discussion of 
the above two formulae, see the main text." ], [ "$\\mathcal {J}_\\mathrm {s}$ , {{formula:ed18a35c-22b3-46f2-988c-c3a2620bc748}} , and {{formula:b257dd75-d34f-4fd1-a905-e5db6b87ea50}} as a function of {{formula:43159181-8f46-4976-b5a4-fdbe914ab85b}} for two-orbital models on a Bethe lattice", "Figure REF displays $Z_1$ ($Z$ of the frontier orbital), $R_\\mathrm {s}^{-1}$ , and $\\mathcal {J}_\\mathrm {s}$ as a function of $p$ for two-orbital models on a Bethe lattice with $\\Delta \\ge 0$ .", "Although $\\mathcal {J}_\\mathrm {s}$ is a bare Kondo coupling which will be scaled via renormalization group flow, the behavior of $\\mathcal {J}_\\mathrm {s}$ as a function of $p$ is qualitatively consistent with that of $Z_1$ and $R_\\mathrm {s}^{-1}$ .", "Refer to the main text for the related discussion.", "Figure: 𝒥 s \\mathcal {J}_\\mathrm {s} (multiplied by D/V 2 D/V^2), Z 1 Z_1, and R s -1 R_\\mathrm {s}^{-1} as a function of pp for generic two-orbital models on a Bethe lattice at three different values of Δ\\Delta : Δ=0\\Delta =0 (first row), 0.5 (second row), and 0.8 (third row).", "U=2.5U=2.5 and n 0 =3n_0=3 for all the cases.", "We used a higher simulation temperature for the calculation of R s -1 R_\\mathrm {s}^{-1} (T=0.02T=0.02) than for Z 1 Z_1 (T=0.005T=0.005) so that 〈S z (τ)S z (0)〉| τ=1/(2T) ≫0 \\langle S_z(\\tau ) S_z(0) \\rangle \\vert _{\\tau =1/(2T)} \\gg 0.", "To evaluate 𝒥 s \\mathcal {J}_\\mathrm {s} [Eq.", "() and Eq.", "()], values of E k |N,S〉 E_k^{|N,S\\rangle } determined from the DMFT at T=0.005T=0.005 are used." ] ]
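The derivation above reduces the Hund-versus-Mott question to the sign of $-\\partial \\mathcal {J}_\\mathrm {s}/\\partial \\mu$, Eq. (REF): $\\frac{V^2}{2}\\big\\lbrace -2/(E^{|4,0\\rangle }_0)^2 - 1/(E^{|2,1\\rangle }_0)^2 + 3/(E^{|2,0\\rangle }_0)^2 \\big\\rbrace$. A minimal sketch of that sign check follows; the excitation energies fed in are illustrative placeholders (they are not the values of Table REF), chosen only to reproduce the two regimes discussed in the text.

```python
# Minimal sketch: sign of the filling-derivative of the spin Kondo coupling,
# Eq. (REF):  -dJ_s/dmu  ~  (V^2/2) * ( -2/E40^2 - 1/E21^2 + 3/E20^2 ),
# with E_{N,S} the charge-excitation energies defined in the text.
# The numerical energies below are illustrative placeholders, NOT Table REF data;
# only the qualitative trend (sign flip once the high-spin |2,1> multiplet
# becomes the lowest excitation) follows the text.
def dJs_dmu(E40, E21, E20, V=1.0):
    return 0.5 * V**2 * (-2.0 / E40**2 - 1.0 / E21**2 + 3.0 / E20**2)

# J = 0: the |2,1> and |2,0> excitation energies coincide
print(dJs_dmu(E40=3.0, E21=1.0, E20=1.0))   # > 0: J_s grows with p, correlations weaken (Mott-like)

# J > 0, p > 0: |2,1> is pushed far below the other multiplets
print(dJs_dmu(E40=3.0, E21=0.3, E20=1.5))   # < 0: J_s shrinks with p, Hund-like behavior
```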
2207.10421
[ [ "Emergent space-time meets emergent quantum phenomena: observing quantum\n phase transitions in a moving frame" ], [ "Abstract In material science, it was established that as the number of particles in a material gets more and more, especially in the thermodynamic limit, various macroscopic quantum phenomena such as superconductivity, superfluidity, quantum magnetism, Fractional quantum Hall effects and various quantum or topological phase transitions (QPT) emerge in such non-relativistic quantum many-body systems.", "This is the essence of P. W. Anderson's great insight `` More is different ''.", "However, there is still a fundamental component missing in this general picture: How the `` More is different '' becomes different in different inertial frames?", "Here we address this outstanding problem.", "We propose there is an emergent space-time corresponding to any emergent quantum phenomenon, especially near a QPT.", "We demonstrate our claim by studying one of the simplest QPTs:Superfluid (SF)-Mott transitions of interacting bosons in a square lattice observed in a frame moving with a constant velocity $ \\vec{v} $ relative to the underlaying lattice.", "We first construct two effective actions, then perform microscopic calculations on a lattice.", "These new effects are contrasted to the Doppler shifts in a relativistic quantum field theory, Unruh effects in an accelerating observer and possible emergent curved space-time from the Sachdev-Ye-Kitaev model.", "As a byproduct, we comment on the emergent space-time in the fractional Quantum Hall systems and the associated chiral edge states.", "We also stress that despite they share many similar properties, the emergent particles transform very differently than their corresponding elementary particles." 
], [ " Introduction", "Poincare and Einstein's special theory of relativity tells us that there is no preferred inertial frame [1].", "The Hamiltonian or Lagrangian take identical form and owns exactly the same set of symmetries in all inertial frames.", "The laws in two inertial frames are just related by the Lorentz transformation which lead to interesting phenomena such as a moving stick becomes shorter, a running clock gets shorter and relativistic Doppler effect, etc.", "However, the story may change in materials or atomic molecular and Optical (AMO) systems which explicitly break the Lorentz invariance.", "There is indeed a preferred inertial frame where the substrate holding the materials or AMO system such as a lattice is static.", "In this static frame, as advocated by P.W.", "Anderson \" More is different \"[2], one can observe various emergent quantum many body phenomena such as various symmetry breaking phases, topological phases and Quantum or topological phase transitions (QPTs) between them.", "The physical laws in two inertial frames are still related by the Lorentz transformation (LT) which reduces to Galileo transformation (GT) at a low velocity.", "So the Hamiltonian or Lagrangian may take the largest symmetries in the static frame, but take reduced symmetries in the moving frame.", "How these emergent phenomena change when they are observed in the moving frame remains an outstanding problem.", "In this work, we will show that the GT between two inertial frames leads to much richer and more dramatic phenomena than those in relativistic systems.", "Our work may extend emergent quantum many body phenomena to also emergent space-time from a QPT ( Fig.REF ), therefore considerably enrich P.W.", "Anderson's original great insight \" More is different \" [2].", "Quantum phase transitions (QPT) is one of the most fantastic phenomena in Nature [3], [4], [5].", "For example, Superfluid to Mott transitions [6], [7], [8], [9], [10], Anti-ferromagnetic state to Valence bond solid transition [11], [12], magnetic states to quantum spin liquid transition [13], [14], [15], [16], [17], [18], [19], the magnetic transitions in itinerant systems [20] or quantum Hall (QH) to QH or insulator transitions [21], [22], [23], [24], [25], [26], topological phase transitions of non-interacting fermions or interacting systems [27], [28], [29], [30] are among the most popular QPTs.", "In materials, one usually manipulates or controls a system by applying a magnetic field, electric field, change doping, making a twist, adding a strain or pressure, to tune the system to go through various QPTs.", "Ultra-cold atoms loaded on optical lattices can provide unprecedented experimental systems for the quantum simulations and manipulations of some quantum phase and phase transitions.", "For example, the Superfluid to Mott (SF-Mott) transitions have been successfully realized by loading ultracold atoms in various optical lattice [7], [8], [31].", "However, due to the charge neutrality, it is difficult to tune the cold atom systems by applying a magnetic field or electric field [31].", "Here, we study the same QPTs observed in a moving inertial system and show that it leads to new quantum phase through novel quantum phase transition.", "Most importantly, new space-time structure emerge from the QPT.", "It also demonstrates that doing measurements in a moving frame becomes an effective way not only measure various intrinsic and characteristic properties of the materials, but also to realize various quantum and 
topological phases and to tune phase transitions, and most importantly to probe the emergent space-time near a QPT.", "In this work, we focus on quantum phase transitions with symmetry breaking; topological phase transitions will be discussed in a separate publication [32].", "Figure: P. W. Anderson's \" More is different \" on emergent quantum phenomena needs to be augmented by the corresponding Galileo transformation (GT), denoted by the question mark, on the emergent space-time to be presented in this work. QFT: quantum field theory, QMS: quantum many-body system.", "QPT: quantum phase transition. See also Fig.. One typical QMS is the SF-Mott QPT in a periodic lattice potential, which is the main focus of the work.", "Another is an interacting electron system in an external magnetic field, which may show FQH states with topological order and associated edge states.", "It will be briefly studied in appendices F-H. We take the boson-Hubbard model of interacting bosons at integer fillings in a square lattice as the simplest example to show a QPT in the lab frame ( Fig.REF a): $H_{BH} = -t \\sum _{ \\langle ij \\rangle } ( b^{\\dagger }_{i} b_{j} + h.c. ) -\\mu \\sum _{i} n_{i} + \\frac{U}{2} \\sum _{i} n_{i} ( n_{i} -1 )$ which displays a SF-Mott transition around $ t/U \\sim 1 $ .", "Obviously, the very existence of the lattice breaks the Galileo and Lorentz invariance.", "It is also responsible for the existence of the Mott insulating state and the SF-Mott transition.", "This is in sharp contrast to Helium 4 [38], [40], to be studied in Sec.IX, or quantum Hall systems [21], [22], [23], [26], to be briefly touched on in appendix F-H, which respect the Galileo invariance.", "Its phase diagram [6] is shown in Fig.REF .", "Here we first focus on the 2d superfluid (SF) to Mott transitions [6], [9], [10], [7] in Fig.REF with the dynamic exponent $ z=1 $ , then study the one with $ z=2 $ in Fig.REF in Sec.X.", "The effective action consistent with all the symmetries is: $ \\mathcal {S}_L=\\int d\\tau d^2x \\, [ |\\partial _\\tau \\psi |^2+v_x^2|\\partial _x\\psi |^2+v_y^2|\\partial _y\\psi |^2 + r |\\psi |^2+ u |\\psi |^4+ \\cdots ] $ where $ \\psi $ is the complex order parameter ( see appendix E ) and the effective mass $ r $ tunes the SF-Mott transition with $ z=1 $ .", "$ r > 0, \\langle \\psi \\rangle =0 $ is in the Mott state which respects the $ U(1) $ symmetry; $ r < 0, \\langle \\psi \\rangle \\ne 0 $ is in the SF state which breaks the $ U(1) $ symmetry.", "It is related to the microscopic parameters in Eq.REF by $ r \\sim (t/U)_c - t/U $ at a fixed chemical potential $ \\mu =U/2 $ ( see also Fig.REF ).", "After scaling away $ v_x, v_y $ , it has an emergent Lorentz invariance [34] whose characteristic velocity is the intrinsic velocity $ v_x, v_y $ instead of the speed of light $ c_l $ of relativistic quantum field theory [33].", "The Lorentz transformation reduces to the Galileo transformation when $ v/c_l \\ll 1 $ [35].", "It has an exact time-reversal symmetry $ T $ : $ \\psi (\\vec{x},t) \\rightarrow \\psi (\\vec{x},-t) $ and $ i \\rightarrow -i $ .", "It also has a particle-hole (PH) symmetry ( called charge conjugation ( C ) symmetry in relativistic quantum field theory, a notation we adopt in the following ) under $ \\psi (\\vec{x},t) \\rightarrow \\psi ^{*}(\\vec{x},t) $ , which dictates that the particle spectrum is related to that of a hole by $ \\omega _{+} ( \\vec{k} )= \\omega _{-} ( \\vec{k} ) $ , and also a parity $ P $ which dictates $ \\omega _{\\pm } ( \\vec{k} )= \\omega _{\\pm } ( -\\vec{k} ) $ .", "The lattice breaks the Galileo invariance and is static in the lab frame.", "Then one tries
to detect these SF-Mott transitions in a frame which is moving with the velocity $ \\vec{c}=c \\hat{y} $ along the $ y- $ axis with respect to the lab frame.", "This moving frame could be a fast-moving train or a spacecraft/satellite in space.", "To address what an observer in the moving frame will see, one just performs a Galileo transformation $ y^{\\prime }= y + ct, t^{\\prime }=t $ to the moving frame, where primed quantities refer to the moving frame and unprimed ones to the lab frame.", "In real time, it implies $ \\partial _y \\rightarrow \\partial ^{\\prime }_y,\\partial _t \\rightarrow \\partial ^{\\prime }_t + c \\partial ^{\\prime }_y $ .", "In imaginary time $ \\tau = it $ , it implies $ \\partial _y \\rightarrow \\partial ^{\\prime }_y,\\partial _\\tau \\rightarrow \\partial ^{\\prime }_\\tau -i c \\partial ^{\\prime }_y $ .", "Because the effective action in the lab frame has an emergent \"Lorentz\" invariance [34] instead of a Galileo invariance, the Galileo transformation may lead to some dramatic effects.", "Boosting the action Eq. with $ \\partial _y \\rightarrow \\partial ^{\\prime }_y,\\partial _\\tau \\rightarrow \\partial ^{\\prime }_\\tau -i c \\partial ^{\\prime }_y $ and $ \\psi ( \\vec{x}, t) \\rightarrow \\psi ( \\vec{x}^{\\prime } - \\vec{c} t^{\\prime }, t^{\\prime })= \\psi ^{\\prime }( \\vec{x}^{\\prime }, t^{\\prime }) $ , using the invariant space-time measure $ \\int d\\tau d^2x = \\int d\\tau ^{\\prime } d^2 x^{\\prime } $ and the functional measure $ \\int D\\psi D\\psi ^{*} = \\int D\\psi ^{\\prime } D\\psi ^{\\prime *} $ , leads to the following effective action in the moving frame ( for notational simplicity, we drop the $ \\prime $ ): $ \\mathcal {S}_M =\\int d\\tau d^2x \\, [(\\partial _\\tau \\psi ^{*}-ic\\partial _y\\psi ^{*}) (\\partial _\\tau \\psi -ic\\partial _y\\psi ) +v_x^2|\\partial _x\\psi |^2 +v_y^2|\\partial _y\\psi |^2 + r |\\psi |^2+ u |\\psi |^4+ \\cdots ] $ where space and time are related by the intrinsic velocities $ v_x, v_y $ instead of the speed of light $ c_l $ as in relativistic QFT.", "One needs to stress [36], [37] that $ (\\partial _\\tau \\psi ^*-ic\\partial _y\\psi ^*)(\\partial _\\tau \\psi -ic\\partial _y\\psi ) \\ne |(\\partial _\\tau \\psi -ic\\partial _y\\psi )|^2= (\\partial _\\tau \\psi ^*+ic\\partial _y\\psi ^*)(\\partial _\\tau \\psi - ic\\partial _y\\psi ) $ .", "See the appendix E for the derivation from the lattice model Eq.REF .", "Because the particle number remains conserved in the moving frame, it still keeps the exact $ U(1) $ symmetry and the $ C $ symmetry.", "It breaks the emergent Lorentz invariance, the $ T $ and the $ P $ , but keeps the combination of the latter two, which can be called $ PT $ .", "The $ C $ dictates that the particle spectrum is related to that of a hole by $ \\omega _{+} ( \\vec{k} )= \\omega _{-} ( -\\vec{k} ) $ .", "It is this C which plays crucial roles in all the calculations, especially the boson-vortex duality transformation and the RG analysis along the $ z=1 $ line in Fig.REF b.", "So it still keeps the CPT symmetry.", "Indeed, the action Eq. has the largest symmetries in the static frame $ c=0 $ , but takes reduced symmetries in a moving frame $ c \\ne 0 $ .", "So the static frame is indeed the preferred frame.", "In this work, we show that the quantum phases and phase transitions observed in the moving frame display quite different phenomena from those observed in the lab frame.", "Figure: The SF-Mott transition with the dynamic exponent $z=1$ or $z=2$ in Fig. is realized in the lab frame, but observed in a frame moving with a velocity $c$ with respect to the lab frame. The optical lattice is static in the lab frame.", "The phase diagrams for $z=1$ and $z=2$
in the moving frame is presented in Fig.", "and respectively.", "The question marks in the moving frame will be shown to stand for the Doppler shiftsin the excitation spectrum, new quantum phases and novel QPTs.Exchanging the role of the lab and moving frame does not change the results, because both are related by Galileo transformation anyway.In a practical experimental scattering detection shown in Fig., it is more convenient to set the emitter and the receiver static in the lab frame and set the sample moving with a constant velocity.", "This set-up may also be used to probe the new space-time structure emerging from the z=1 z=1 or z=2 z=2 QPT ( Fig.", ").", "(c) Driving a superflow in a superfluid will also be presented in Sec.", "IX-A, and -B.", "(d) A moving impurity in a superfluid is a different class of problem, will only be commented in Sec.", "IX-C.All these different, but related problems (a)-(d) will be investigated in a unified picture.Before starting to analyze in detail Fig.REF , it is important to distinguish the 4 different cases.", "The cases (1-3) have been discussed in some previous literatures.", "The case (4) is the main goal to be achieved in this manuscript.", "(1) Controlling the moving object in Fig.REF d: An impurity moving in a superfluid.", "It was discussed in [38], [40] and more recently in [41].", "If an object moves in a superfluid at $ T=0 $ with a velocity below the critical velocity $ v < v^{O}_c $ , there is no viscosity.", "However, when $ v > v^{O}_c $ , a viscosity arises due to the emission of elementary excitations such as vortex rings [39], [42].", "We will comment on this class in Sec.IX-C. (2) Controlling the superfluid in Fig.REF c: The SF is flowing with a finite velocity $ v $ .", "It was discussed in [38], [40] and more recently in [43].", "As shown in [43], the flow of a SF with $ v > v^{SF}_c $ may not destroy SF, but the order parameter develops small additional components at the critical momentum, therefore reduce the superfluid density.", "However, when increasing $ v $ further, the fate of SF is still not known yet.", "As a by-product, we will revisit this class from our effective action approach in either phase or dual density representation in Sec.IX-A and IX-B.", "We will also analyze its difference than class-1 above and class-4 below which is the main event of this paper.", "This story was also reviewed in Appendix E by microscopic calculations such as Bogliubov method and Galileo transformation.", "(3) Controlling the supercurrent in a superconductor: There are similar phenomena in fermionic systems such as a gapped BCS superconductor: when the super-current is below a critical value $ j< j_{c1}\\sim \\Delta /k_F $ where $ \\Delta $ is the gap and $ k_F $ is the Fermi momentum, there is no resistance, but when $ j > j_{c1} $ , there is a first-order transition to a normal state [44].", "More recent studies [45] show that there could be a narrow window of gapless superconductors when $ j_{c1} < j < j_{c2} \\sim e/2 j_{c1} $ before it finally turns to normal at $ j >j_{c2} $ .", "For gapless $ p- $ wave such as He3-A phase [46] or $ d-$ wave superconductors such as high temperature cuprate superconductors, it was believed any supercurrent drives it to a gapless superconductor, so $ j_{c1}=0 $ [47], but there is still not a complete theory yet.", "This part is on fermions, so will be discussed in a separate work on a moving topological fermionic superfluid [32].", "(4) Moving the whole sample in Fig.REF a which is the main event of 
this paper.", "The optical lattice is at rest in the lab frame, the ultra-cold atoms are loaded on top of it.", "but is observed in a moving frame ( Fig.REF b,REF , REF and REF ).", "Or equivalently, the whole sample of the lattice, the cold atoms and the trap is set to move along a track Fig.REF .", "So it is different than both case (1) and (2) as analyzed in detail in Sec.IX-C.", "The status of the Mott and the SF in the moving frame need to be determined by the effective action Eq.. We are particularly interested in how the Mott and SF near the QPT response to the boost, especially when it is beyond a critical velocity.", "We develop both microscopic calculations and the symmetry based effective actions to achieve such a goal which is summarized in Fig.REF , REF and Fig.REF a,b.", "We first take the effective action approach where we take the boost velocity $ c $ as an independent parameter.", "For $ z=1 $ case, when $ c $ is below a critical velocity, the Mott state and the SF state remain, but their excitation spectrum suffer a Doppler shift.", "However, in the SF side, when $ c$ increases above the critical velocity, the SF becomes a boosted SF (BSF) carrying a finite momentum which spontaneously breaks the $ C $ symmetry ( also the $ CPT $ symmetry ), the transition from the SF to BSF has an exotic, but exact dynamic exponent $ (z_x=3/2,3) $ .", "There are both Doppler shifted Goldstone and Higgs mode in the SF phase, but only Doppler shifted Goldstone mode inside the BSF phase, but no Higgs mode which disappears due to the spontaneously broken $ C $ symmetry.", "In the Mott side, when $ c$ increases above a larger critical velocity than the one inside the SF, the Mott phase turns into the BSF phase with the dynamic exponent $ z=2 $ , also a Type-II dangerously irrelevant operator (DIO) which leads to a Doppler shift term in both the Mott and the BSF near the $ z=2 $ transition line.", "We also evaluate the conserved Noether currents of the higher-order derivative non-relativistic QFT describing the three phases.", "The BSF phase carries the conserved Noether currents due to the spontaneously broken $ C $ symmetry, but the Mott and SF phase do not.", "So the currents can be used as an order parameter characterizing the spontaneously breaking $ C $ symmetry and also to distinguish BSF from the SF.", "By performing a field theory renormalization group (RG) developed for non-relativistic quantum field theory up to two loops, we show that the universality class from the Mott to the SF remains the same as the 3D XY class with $ z=1 $ .", "It is the charge conjugation ( C ) symmetry which dictates the result holds exactly to all loops Charge-vortex duality transformation in the moving frame is also performed to study these new quantum phase transitions in the moving frame.", "Finite temperature properties of quantum or classical ( $ \\hbar \\rightarrow 0 $ limit ) scaling functions of various physical quantities, especially their dependencies on the velocity $ c $ are explored.", "As a byproduct, by a quantum-classical correspondence, we show that the SF to the BSF transition with $ z=(3/2,3) $ may also describe the dynamic transition between the classical sound waves in a medium.", "For $ z=2 $ case in the lab frame, it still has the exact $ P $ and $ T $ symmetry, but the $ C $ symmetry was explicitly broken.", "It also has an emergent Galileo invariance.", "In the moving frame, the Mott to the SF with $ z=2 $ turns into the Mott to the BSF transition, still with $ z=2 $ , but a 
shifted boundary favoring the BSF phase.", "Then we investigate how to put the GT in a lattice and derive its specific form of the boost in the tight-binding limit on a lattice.", "By performing both non-perturbative and perturbative microscopic calculations on the boost term, we derive the effective actions with $ z= 1 $ and $ z=2 $ under the boost and also establish the connections between the phenomenological parameters in the effective actions and the microscopic parameters in the boson Hubbard model.", "By combining the results achieved from the effective actions and the microscopic calculations in a lattice which are two complementary approaches, we map out qualitatively the global phase diagram of the Boson Hubbard model in the moving frame.", "We show that the quantum phases and QPTs depend sensitively on the inertial frames.", "Counter-intuitively, a Mott insulating phase near the SF-Mott transition may become a BSF phase, but not the other way around.", "For $ z=1 $ , we conclude the BSF is not reachable in the moving frame.", "For $ z=2 $ , we find a type-II dangerously irrelevant term which breaks explicitly the emergent Galileo invariance.", "It is irrelevant near the $ z=2 $ QCP, but leads to the Doppler shift term inside the BSF phase.", "We also study the effects of several other leading irrelevant operators which breaks the emergent Galileo invariance.", "Then we apply our formalism to study the effects of directly boosting the SF which leads to new classes of QPTs.", "If the instability is driven by SF Goldstone mode near $ k=0 $ , then there is a SF to BSF transition with the dynamic exponent $ (z_x=3/2,z_y=3) $ subject to a logarithmic correction.", "If the instability is driven by a roton mode near $ k=k_0 $ , then there is a SF to a stripe supersolid (SSS) transition with the dynamic exponent $ z=1 $ which is in the same universality class of the boosted Mott-SF transition in the $ z= 1 $ case, then a SSS to stripe solid QPT with $ z=2 $ .", "We show that despite applying a pressure only leads to a direct first QPT from SF to a solid, applying a boost leads to two QPTs from SF-SSS-stripe solid, so becomes an effective way to generate a supersolid which is still an elusive novel state of matter.", "The latter case may apply to Helium 4 where the rotons in the SF phase plays an important role.", "We comment on the class-1 problem listed above: a moving impurity, a moving optical lattice and a moving straight wall and treat all the three classes of problem in a unified theoretical framework.", "Finally, by analyzing the connections between first quantization in the many-body wavefunctions and effective actions in the second quantization on and off a lattice, also at different hierarchy energy levels, we demonstrate that new space-time structure could emerge from a quantum or topological phase transition.", "Contrasts to the Doppler shifts in a relativistic quantum field theory, Unruh effects in an accelerating observer and non-relativistic $ AdS_{d+1}/CFT_d $ at $ d=1,2 $ are made.", "Our systematic and complete methods can be extended to study all the quantum or topological phase transitions, even some dynamic classical transitions in any dimension.", "By analyzing the applicabilities and limitations of various established detection methods in a moving frame, we suggest that doing various scattering measurements in a moving sample may become an effective way to tune various quantum and topological phases and phase transitions, also probe the emergent space-time 
structure near any QPT.", "As a byproduct, from the GT and GI point of view, we comment on the two competing theories describing the gapless compressive state at $ \\nu =1/2 $ : the HLR theory versus Son's Dirac fermion theory.", "The rest of the paper is organised as follows: In the sections II, we will discuss the three QPTs along the three path I,II,III in Fig.REF b respectively.", "Then by employing the field theory renormalization group developed to study non-relativistic QFT, we will study the universality class from the Mott to SF transition along path I in Sec.III.", "We will evaluate the currents in all the three phases in Fig.REF , then perform charge-vortex duality transformation in the moving frame along the path I in Sec.IV.", "We derive the scaling functions of various physical quantities along the three paths in Sec.V and also its classical limit $ \\hbar \\rightarrow 0 $ .", "In Sec.VI, we present the $ z=2 $ case in a moving frame and also contrast with the $ z=1 $ case discussed in the previous version.", "In Sec.VII, we put GT in a lattice, then derive the boost form in the tight binding limit.", "We explore the bare space-time encoded in the many-body wavefunctions in the first quantization and the emergent space-time encoded in the tight-binding limit in the second quantization.", "In Sec.VIII, we derive the effective actions with $ z= 1 $ and $ z=2 $ under the boost.", "The derivation not only provide the physical meanings of the order parameters and the phenomenological parameters in the effective actions, but also bring new insights to the emergent space-time structure near the two QPTs.", "In Sec.IX, we apply our formalism to the case of directly driving the SF in Helium 4 and study the roles of roton under such a driving.", "In Sec.X, we contrast our findings to the relativistic Doppler effects, Unruh effects by an accelerating observer, Helium 4 phase diagram and also comment on the lack of Lorentz invariance in $ AdS_{d+1}/CFT_d $ at $ d=1,2 $ .", "In Sec.XI, we analyze applicable experimental detections in a moving sample.", "Conclusions and perspectives are presented in Sec.XII.", "In several appendices, we perform systematic investigations on GT in various quantum and classical systems and also clarify some confusing treatments on GT in some previous literatures.", "In Appendix A, we develop the Hamiltonian formalism which is complementary to the Lagrangian formalism thoroughly used in the main text.", "In appendix B, we show that our theory in Sec.IV may also describe the dynamic classical transition between sound waves in a moving frame.", "In Appendix C, we analyze the Galileo invariance in a single particle Schrodinger equation in the first quantization and the breaking of Galileo invariance by a spin-obit coupling in the second quantization.", "In Appendix D, we study the excitation spectrum in a SF away from the integer fillings and stress the crucial differences than at the integer fillings discussed in the main text.", "The microscopic perturbation approach to Galileo transformation in a moving SF is presented in Appendix E and compared with the effective action approach in Sec.IX.", "In appendix F and G, we apply our formalism to study GT on many body systems in an external magnetic field, then fractional quantum Hall effects (FQH ).", "In appendix H, we apply the method used in Appendix E to study a moving SF to examine the propagating chiral edge mode of a bulk FQH." 
], [ " The effective phase diagram of $ z=1 $ in a moving frame ", "Before starting, we like to stress that Eq.", "is achieved just from symmetry principle, so it is not known how this effective boost $ c $ in this effective action related to the bare boost $ v $ relative to the underlying lattice.", "It is also not known if the other phenomenological parameters such as $ v_x, v_y $ also depends on $ v $ .", "This important question can only be addressed by microscopic calculations on the original boson Hubbard model Eq.REF and will be achieved in Sec.VII and VIII.", "In Sec.I-VI, we will simply take $ c $ as a given parameter and also assume all the other parameters are independent of $ c $ , then work out its phase diagram in the moving frame Fig.REF for $ z=1 $ and Fig.REF for $ z=2 $ .", "This effective action approach is interesting on its own, because it may be applied to many other systems such as the SOC system in a Zeeman field [65].", "Then in Sec.VII and VIII, by performing the GT on the lattice model, we will establish the relations between these phenomenological parameters and the microscopic parameters $ v $ and those in the boson Hubbard model Eq.REF .", "The phenomenological effective action, RG analysis, charge-vortex duality and scaling functions approach in Sec.I-VI and the microscopic calculations on a lattice in Sec.VII-VIII are complementary to each other.", "Their combination will lead to the global phase diagram of Eq.REF in a moving frame with the velocity $ v $ relative to the lattice shown in Fig.REF .", "Expanding Eq.", "leads to SM=dd2r [||2-i2c*y+vx2|x|2+(vy2-c2)|y|2 +a|y2|2 + b |y |4 + r ||2+ u ||4 + ] where the higher derivative or higher order $a,b >0$ terms are not important in the lab frame [48], but may become important in the moving frame, especially near the new quantum phase transitions in Fig.REF .", "In principle, one need also add $ d |\\psi |^2 |\\partial _y^2\\psi |^2 $ to Eq., but it will not change the physical results, so for the notational simplicity, we omit it in the following.", "When writing in terms of the metric $ g_{\\mu ,\\nu } \\partial _{\\mu } \\psi ^{*} \\partial _{\\nu } \\psi $ in $ (\\tau , y ) $ space-time in Eq., it is $ g_{\\tau , \\tau }=1, g_{\\tau , y}=g_{y, \\tau }=-i c, g_{y, y}= v^2_y-c^2 $ .", "The crossing ( or off-diagonal ) metric component $ g_{\\tau , y} $ is the only new term in the moving frame generated by the Galileo transformation which does not appear in Eq.", "in the lab frame.", "It is this new term which represents the new space-time structure near the $ z=1 $ QPT and plays important roles in the moving frame.", "Because the boost $ c $ adds a new tuning parameter, both $\\gamma =v_y^2-c^2 $ and $ \\mu $ can change sign and tune various QPTs in Fig.REF .", "Taking the mean field ansatz $ \\langle \\psi \\rangle =\\sqrt{\\rho }e^{i(\\phi +k_0y)}$ leads to the energy density E[,k0]=[(vy2-c2) k02+a k04 + r ]+ u 2 Minimizing $E[\\rho ,\\phi ,k_0]$ with respect to $\\rho $ and $k_0$ results in k02 ={ll 0, c2 < vy2 c2-vy22a, c2 > vy2 .,              0 ={ll 0, r > 0 and c2 < vy2 -r 2U, r < 0 and c2 < vy2 0, r > (c2-vy2)24a and c2 > vy2 -r2U+(c2-vy2)28aU, r < (c2-vy2)24a and c2 > vy2 .", "Figure: (a) The phase diagram of the effective action Eq.", "with z=1 z=1 as a function of r r and c c in the moving frame.", "For the boson-Hubbard model Eq., r∼(t/U) c -t/U r \\sim (t/U)_c - t/U at a fixed chemical potential μ=U/2 \\mu =U/2 .The two tuning parameters are the effective mass r r and the boost c c .", "c=0 c=0 
axis recovers that in the lab frame.The BSF is the new phase which spontaneously breaks the C C symmetry and also the only phase carrying a flowing current.The Mott, SF and Boosted SF (BSF) phase meet at the multi-critical point M M .The Goldstone modes exist in both the SF and BSF, but the Higgs mode exists only in the SF protected by the C C symmetry,disappears in the BSF phase due to its spontaneously breaking of the C C symmetry.As shown in Sec.VII-1, the order parameter to distinguish between SF and BSF can be taken as the bilinear currents Eq..The three QPTs along the three paths I with z=1 z=1 , II with z=(3/2,3) z=(3/2,3) and IIIa or IIIb with z=2 z=2 are presented in the following sections.", "All the four paths can be scanned by scattering measurements shown in Fig..(b) The Renormalization Group (RG) flow of (a).", "The arrows stand for the RG flow directions.The boost 0<c<v 0 < c < v is exactly marginal which stands for the Doppler shift inside the SF phase, so the z=1 z=1 line is a line of fixed points all in 3D XY universality class.The k 0 k_0 of BSF is also exactly marginal which stands for the ordering wavevector in the BSF,so the z=2 z=2 line is also a line of fixed points all in 2d zero density SF-Mott transition class.The RG flow inside the BSF phase is along the constant contour of k 0 k_0 .", "The RG flows lead to the finite temperature scalings in Sec.VIII.The CFT of this novel M point remains to be explored.Here we assume the boost c c and the intrinsic velocity v y v_y as independent parameters.The microscopic evaluations in Sec.VII-VIII show thatthe (3/2,3) (3/2, 3 ) transition line is not reachable for the microscopic boson Hubbard model in Fig.a,b.But as shown in Sec.IX-A, it is reachable for a boosted SF in the class-2 in Fig.c.The phase diagram is summarized in Fig.REF .", "Due to the $ C $ symmetry inside the Mott phase, $ \\omega _{+} ( \\vec{k} )= \\omega _{-} ( -\\vec{k} ) $ , one can look at the instability from either particle or hole band $\\omega _{\\pm }=\\sqrt{v_x^2k_x^2+v_y^2k_y^2}\\pm ck_y$ ." ], [ " The Mott to SF transition along the path I with $ z= 1 $ . 
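The mean-field energy density quoted just above, $E[\\rho ,k_0]=\\rho [(v_y^2-c^2)k_0^2+a k_0^4 + r ]+ u \\rho ^2$, can be minimized numerically as a cross-check of the piecewise closed-form solution for $k_0^2$ and $\\rho _0$. The short sketch below does this on a brute-force $k_0$ grid; all parameter values are illustrative placeholders rather than values used in the paper.

```python
# Minimal sketch: numerical minimization of the quoted mean-field energy density
# E[rho, k0] = rho*[(vy^2 - c^2)*k0^2 + a*k0^4 + r] + u*rho^2, checked against the
# closed-form piecewise minimizers given in the text.  Parameters are illustrative.
import numpy as np

def minimize_E(r, c, vy=1.0, a=1.0, u=1.0):
    k0 = np.linspace(0.0, 3.0, 30001)
    r_eff = (vy**2 - c**2) * k0**2 + a * k0**4 + r   # effective mass seen by rho
    rho = np.clip(-r_eff / (2.0 * u), 0.0, None)     # optimal rho for each k0
    E = r_eff * rho + u * rho**2
    i = np.argmin(E)
    return k0[i], rho[i]

for r, c in [(-0.5, 0.5), (-0.5, 1.5), (0.2, 1.5)]:
    k0, rho = minimize_E(r, c)
    print(f"r={r:+.1f}, c={c:.1f}:  k0 = {k0:.3f},  rho = {rho:.3f}")
# Expected from the closed form (vy = a = u = 1):
#   SF  (c < vy, r < 0):        k0 = 0,                 rho = -r/(2u) = 0.25
#   BSF (c > vy, r < 0):        k0^2 = (c^2-vy^2)/(2a), rho = -r/(2u) + (c^2-vy^2)^2/(8au)
#   BSF (c > vy, r > 0):        same k0, rho > 0 provided r < (c^2-vy^2)^2/(4a)
```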
", "Along the path I in Fig.REF b, at the mean-field level, we can substitute $\\psi \\rightarrow \\sqrt{\\rho _0}e^{i\\phi _0}$ into the boosted effective action Eq.", "S= r 0+ u 02 When $ r > 0$ , it is in the Mott phase with $ \\langle \\psi \\rangle =0$ .", "When $ r < 0$ , it is in the SF phase with $ \\langle \\psi \\rangle =\\sqrt{\\rho _0}e^{i\\phi _0}$ where $\\rho _0=-r/2u$ and $\\phi _0$ is a arbitrary angle due to the $ U(1)$ symmetry.", "(a) The Mott state: In the Mott phase, $ r > 0$ , one can write $\\psi =\\psi _R+i \\psi _I$ as its real part and imaginary part and expand the action upto second order SMI =dd2r =R,I [(-icy )2 +vx2(x )2 +vy2(y )2 + r ( )2] which lead to 2 degenerate gapped modes with the effective mass $ r > 0 $ : R,I= r +vx2kx2+vy2ky2-cky which indicates the dynamic exponent $ z=1 $ .", "(b) The SF phase: In the SF phase, $ r < 0 $ , we can write the fluctuations in the polar coordinates $\\psi =\\sqrt{\\rho _0+\\delta \\rho }e^{i(\\phi _0+\\phi )}$ and expand the action up to the second order in the fluctuations: $\\mathcal {S}& = & \\frac{1}{2\\rho _0}\\int d\\tau d^2r\\Big ([(\\partial _\\tau -ic\\partial _y)\\delta \\rho ]^2+[v_x^2(\\partial _x\\delta \\rho _+)+v_y^2(\\partial _y\\delta \\rho )]+4\\rho _0U(\\delta \\rho )^2 \\nonumber \\\\& + &\\rho _0^2[(\\partial _\\tau -ic\\partial _y)\\phi ]^2+\\rho _0^2[v_x^2(\\partial _x\\phi )^2+v_y^2(\\partial _y\\phi )^2] + \\cdots \\Big )$ where $ \\cdots $ means the coupling between the two modes.", "Due to the $ C $ symmetry dictating $ z=1 $ ( See Sec.VI ), one finds one gapless Goldstone $ \\phi $ mode and one gapped Higgs $ \\delta \\rho $ mode: $\\omega _{\\text{G}}& = & \\sqrt{v_x^2k_x^2+v_y^2k_y^2}-c k_y \\nonumber \\\\\\omega _{\\text{H}}& = & \\sqrt{4\\rho _0 u +v_x^2k_x^2+v_y^2k_y^2}-c k_y$ Note that it is the $ C $ symmetry which ensures the separation of the real part from the imaginary part when $ r > 0 $ in the Mott phase in Eq.REF and the separation of the Higgs mode from the Goldstone mode when $ r < 0 $ in the SF phase in Eq.REF .", "Intuitively, one can say the two degenerate gapped modes in Eq.REF turn into the Goldstone mode and the Higgs mode in Eq.REF through the $ z=1 $ QPT from the Mott phase to the SF phase.", "(c) The QCP: a SF-Mott transition still with $ z=1 $ in the moving frame: RG analysis If putting $ c=0 $ in the effective action Eq., it is nothing but a 3D XY universality class with the critical exponents $ \\nu =0.67, \\eta =0.04 $ which is emergent Lorentz invariant.", "The $ z=1 $ is protected by the Lorentz invariance at $ c=0 $ .", "The interaction $ u $ term is relevant at $ c=0 $ and controlled by the 3D XY fixed point.", "Any $ c > 0 $ breaks the Lorentz invariance explicitly.", "So the action is neither Lorentz invariant nor Galileo invariant.", "The $ c $ term is exact marginal suggesting a line of fixed points.", "The RG flow of $ u $ along the fixed line is determined by RG calculations in Sec.VI.", "It was shown that despite the lack of Lorentz invariance at $ c \\ne 0 $ , the $ C $ symmetry still detects the dynamic exponent $ z=1 $ .", "The Mott to the SF transition at $ 0 < c < v_y $ remains in the 3D XY universality class.", "One can also look at the $ z=1 $ QCP from 3d CFT point of view: at $ c=0 $ , it has the emergent pseudo-Lorentz invariance, respects the exact C, P and T separately ( then CPT ) and scale invariance with $ z=1 $ .", "It is a fixed point in the 3D XY universality class.", "Any $ 0 < c < v $ just adds a marginal direction to the 3d CFT: it 
breaks P and T separately, but still keeps C, PT ( then CPT ) and scale invariance with $ z=1 $ .", "Note that Lorentz invariance always implies $ z=1 $ , but not otherwise.", "Of course, in the CFT in materials, the LI is always a pseudo one [34].", "In the CFT in the string theory such as SUSY Yang-Mills ( see also the conclusion section ), the LI is the real one.", "Despite the $ z=1 $ line has a $ c=0 $ limit, the other two QPTs have no $ c=0 $ counter-parts, so can only be observed in the moving frame.", "As stressed in the introduction, just from the symmetry point of view, there is indeed a preference frame where the lattice is at rest, and there is an enhanced symmetry.", "In a sharp contrast, in the relativistic QFT, all the inertial frames are related by LT, all the physical laws take the same form, therefore have the same symmetry." ], [ " The SF to BSF quantum Lifshitz transition along path II with $ (z_x, z_y)=(3/2,3) $ . ", "Inside the SF phase at a fixed $ r < 0 $ , as $ c $ increases along the path II in Fig.REF b, there is a quantum Lifshitz transition from the SF phase to the BSF driven by the boost of the Goldstone mode in Eq.REF .", "Because the gapped Higgs mode remains un-critical across the transition, one can simply drop it.", "In fact, as shown below, the Higgs mode disappears in the BSF side due to the explicit $ C $ symmetry breaking inside the BSF.", "Although the Goldstone mode to the quadratic order in Eq.REF is enough inside the SF phase, when studying the transition to the BSF, one must incorporate higher derivative terms and also higher order terms in Eq.", "to the Goldstone mode in Eq.REF .", "A simple symmetry analysis leads to the following bosonic quantum Lifshitz transition from the SF to BSF in terms of the phase degree of freedom ( which can also be derived by substituting $\\psi =\\sqrt{\\rho _0+\\delta \\rho }e^{i(\\phi _0+\\phi )}$ into Eq., then integrating out the Higgs mode $ \\delta \\rho $ in Eq.REF ): $\\mathcal {S}_{SF-BSF}=\\int d\\tau d^2x[(\\partial _\\tau \\phi -ic\\partial _y\\phi )^2+v_x^2(\\partial _x\\phi )^2+v_y^2(\\partial _y\\phi )^2+a(\\partial _y^2\\phi )^2+b(\\partial _y\\phi )^4]$ where $a,b>0$ terms come from those in Eq., especially $ \\gamma = v^2_y- c^2 $ is the tuning parameter which drives the quantum Lifshitz transition from the SF phase to the BSF phase [51].", "A simple scaling shows that when $ z=1 $ inside the SF phase $ [a]=-2, [b]=-3 $ , so they are irrelevant inside the SF phase, but become important near the SF-BSF transition as to be shown in the following.", "The mean-field state can be written as $\\phi =\\phi _0+k_0 y$ .", "Substituting it to the effective action Eq.REF leads to: S0 (vy2-c2)k02+bk04 At a low boost $c^2 < v_y^2 $ , $k_0=0$ is in the SF phase.", "At a high boost $ r<0, c^2 > v_y^2 $ $k_0^2=(c^2-v_y^2)/2b$ which shows the BSF phase has the modulation $ \\pm k_0 $ along the $ y-$ axis.", "Note that the numerical value of $ k_0 $ in Eq.REF is different than that listed in Eq.. 
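For comparison, a one-line check: minimizing $\\mathcal {S}_0 \\propto (v_y^2-c^2)k_0^2+bk_0^4$ over $ k_0 $ gives $ 2(v_y^2-c^2)k_0+4bk_0^3=0 $ , i.e., $ k_0^2=\\frac{c^2-v_y^2}{2b} $ near the SF-BSF transition, while the analysis of the next section gives $ k_0^2=\\frac{c^2-v_y^2}{2a} $ near the Mott-BSF transition, so only the stiffness ( $ b $ versus $ a $ ) in the denominator differs.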
That should not be disturbing at all, because the former applies near the SF-BSF transition, while the latter applies near the $ z=2 $ Mott-SF transition to be discussed in the next section.", "(a) The excitation spectrum in the SF phase At a low boost $c^2<v_y^2$ inside the SF phase, the quantum phase fluctuation can be written as $\\phi =\\phi _0+\\phi $ .", "Expanding the action Eq.REF up to the second order leads to: $\\mathcal {S}_{SF} =\\int d\\tau d^2r [((\\partial _\\tau -ic\\partial _y)\\phi )^2 +v_x^2(\\partial _x\\phi )^2 +v_y^2(\\partial _y\\phi )^2]$ which reproduces the gapless Goldstone mode in Eq.REF inside the SF phase: $\\omega _\\mathbf {k} =\\sqrt{v_x^2 k_x^2+v_y^2k_y^2}-ck_y$ which is consistent with the Goldstone mode in Eq.REF .", "The Higgs mode was dropped at the very beginning, so it cannot be seen in Eq.REF .", "(b) The spontaneous $ C $ symmetry breaking in the BSF phase Due to the exact $ C $ symmetry, $ \\pm k_0 $ in Eq.REF are related by the $ C $ symmetry, so the ground state could take either $ \\pm $ sign or any linear combination of the two.", "To determine the ground state, we take the most general mean field ansatz: $\\langle \\psi \\rangle =\\sqrt{\\rho }(c_1e^{ik_0y}+c_2e^{-ik_0y})$ with $|c_1|^2+|c_2|^2=1$ .", "Substituting Eq.REF into Eq., the integration over space kills the oscillating parts and leads to the energy density: $E[\\rho ,k_0,c_1,c_2]=\\rho [(v_y^2-c^2)k_0^2+ak_0^4 + r ]+(1+2|c_1|^2|c_2|^2) (u+ bk_0^4) \\rho ^2$ $ u >0$ and $ b > 0 $ dictate the minimization condition $c_1c_2=0$ .", "So the ground state is either $\\langle \\psi \\rangle =\\sqrt{\\rho }e^{ik_0y}$ or $\\langle \\psi \\rangle =\\sqrt{\\rho }e^{-ik_0y}$ , which implies the spontaneous breaking of the $ C $ symmetry.", "We call such a $ C $ symmetry broken SF state the Boosted superfluid (BSF) [50].", "(c) The excitation spectrum in the BSF phase Inside the BSF phase, the quantum phase fluctuations can be written as $ \\phi =\\phi _0+k_0y+\\phi $ .", "Expanding the action up to the second order in the phase fluctuations leads to $\\mathcal {S}_{BSF} =\\int d\\tau d^2r [((\\partial _\\tau -ic\\partial _y)\\phi )^2 +v_x^2(\\partial _x\\phi )^2 +(v_y^2+6bk_0^2)(\\partial _y\\phi )^2+a(\\partial _y^2\\phi )^2]$ which leads to the gapless Goldstone mode inside the BSF phase: $\\omega _\\mathbf {k}=\\sqrt{v_x^2 k_x^2+(3c^2-2v_y^2)k_y^2+ak_y^4}-ck_y$ where one can see $3c^2-2v_y^2= 2(c^2-v_y^2)+ c^2 > c^2 $ when $c^2>v_y^2$ , thus $\\omega _\\mathbf {k}$ is stable in the BSF phase.", "Due to the spontaneous $ C $ symmetry breaking, the Higgs mode may not even exist anymore in the BSF phase.", "This result will also be confirmed further in the next section from the Mott to the BSF transition.", "(d) The exotic QCP scaling with the dynamic exponents $ (z_x=3/2, z_y=3 ) $ It is instructive to expand the first kinetic term in Eq.REF as: $\\mathcal {S}=\\int d\\tau d^2r[Z(\\partial _\\tau \\phi )^2-2i c \\partial _\\tau \\phi \\partial _y\\phi +v_x^2(\\partial _x\\phi )^2 + \\gamma (\\partial _y \\phi )^2+a(\\partial _y^2\\phi )^2+b(\\partial _y\\phi )^4]$ where $ Z $ is introduced to keep track of the renormalization of $ (\\partial _\\tau \\phi )^2 $ and $ \\gamma = v^2_y- c^2 $ is the tuning parameter.", "The scaling $ \\omega \\sim k^3_y, k_x \\sim k^2_y $ leads to the exotic dynamic exponents $ (z_x=3/2, z_y=3 ) $ .", "Then one can get the scaling dimension $ [\\gamma ]=2 $ which is relevant, as expected, to tune the transition, but $ [Z]=[b]=-2 < 0 $ , so both are leading irrelevant operators [51] which determine the finite $ T $ behaviours and the corrections to the leading scalings.", "Setting $ Z=b=0 $ in Eq.REF leads to the Gaussian fixed action at the QCP where $ \\gamma =0 $ .", "Exotically and interestingly, it is the crossing metric $ g_{\\tau , y}=g_{y, \\tau }=-i c $ in Eq.REF which dictates the quantum
dynamic scaling near the QCP.", "It is a direct reflection of the new emergent space-time near the $ z=1 $ QPT.", "As a byproduct, the results achieved here can also be applied to study the classical dynamic phase transitions in sound waves in a medium to be discussed in Appendix B." ], [ " The Mott to BSF Transition along the path III with $ z=2 $ . ", "When $k_0\\ne 0$ along the path III in Fig.REF b, it is convenient to introduce the new order parameter $\\psi =\\tilde{\\psi }e^{ik_0y}$ , then the original action Eq.", "can be expressed in terms of $\\tilde{\\psi }$ $\\mathcal {S} = \\int d\\tau d^2r& \\big ( &|\\partial _\\tau \\tilde{\\psi }|^2-2ck_0\\tilde{\\psi }^*\\partial _\\tau \\tilde{\\psi }-i2c\\partial _\\tau \\tilde{\\psi }^*\\partial _y\\tilde{\\psi }+v_x^2|\\partial _x\\tilde{\\psi }|^2+(v_y^2-c^2+6ak_0^2)|\\partial _y\\tilde{\\psi }|^2 \\nonumber \\\\& - & i2k_0(v_y^2-c^2+2ak_0^2)\\tilde{\\psi }^*\\partial _y\\tilde{\\psi }+ [r + (v_y^2-c^2) k_0^2 + a k_0^4]|\\tilde{\\psi }|^2+U|\\tilde{\\psi }|^4+ \\cdots \\big )$ where $\\cdots $ denotes all the possible higher-order derivative terms.", "Setting $i2k_0(v_y^2-c^2+2ak_0^2)\\tilde{\\psi }^*\\partial _y\\tilde{\\psi }=0$ leads to $k_0^2=\\frac{c^2 -v_y^2}{2a}$ which is the same as Eq.", "at $ c^2 > v_y^2 $ .", "As shown in the last section, due to the spontaneous $ C $ symmetry breaking in the BSF, one can only take one of the two $ \\pm k_0 $ values.", "By using $ 2ak^2_0=c^2-v_y^2$ , one can simplify the above action to $\\mathcal {S}=\\int d\\tau d^2r [ |\\partial _\\tau \\tilde{\\psi }|^2 -i2c\\partial _\\tau \\tilde{\\psi }^*\\partial _y\\tilde{\\psi } +2ck_0\\tilde{\\psi }\\partial _\\tau \\tilde{\\psi }^* +v_x^2|\\partial _x\\tilde{\\psi }|^2 +4ak_0^2|\\partial _y\\tilde{\\psi }|^2 +(r-a k_0^4)|\\tilde{\\psi }|^2+ u |\\tilde{\\psi }|^4 ]$ where one can observe that $ k_0 \\ne 0 $ leads to a linear derivative term $ \\tilde{\\psi }\\partial _\\tau \\tilde{\\psi }^* $ .", "It dictates the dynamic exponent $ z=2 $ .", "(a) Scaling analysis near the $ z=2 $ QCP Simple scaling analysis shows that the first term $|\\partial _\\tau \\tilde{\\psi }|^2$ is irrelevant with scaling dimension $ -2 $ , the second term is the metric crossing term $\\partial _\\tau \\tilde{\\psi }^*\\partial _y\\tilde{\\psi }$ which is irrelevant with the scaling dimension $ -1 $ , and the third ( linear derivative ) term $\\tilde{\\psi }^*\\partial _\\tau \\tilde{\\psi }$ leads to $ z=2 $ .", "After keeping only the leading irrelevant term, which is the metric crossing term, we arrive at the effective action: $\\mathcal {S}=\\int d\\tau d^2r ( Z_1\\tilde{\\psi }^*\\partial _\\tau \\tilde{\\psi } -iZ_2\\partial _\\tau \\tilde{\\psi }^*\\partial _y\\tilde{\\psi } +\\tilde{v}_x^2|\\partial _x\\tilde{\\psi }|^2 +\\tilde{v}_y^2|\\partial _y\\tilde{\\psi }|^2 -\\tilde{\\mu }|\\tilde{\\psi }|^2 + u|\\tilde{\\psi }|^4 )$ where $Z_1=-2ck_0 $ , $Z_2=2c$ , $\\tilde{v}_x^2=v_x^2$ , $\\tilde{v}_y^2=4ak_0^2=2(c^2-v_y^2)$ .", "It is the effective chemical potential: $\\tilde{\\mu }=-r+ak_0^4 =-(r-r_c),~~ r_c=a k^4_0= \\frac{ ( c^2- v^2_y )^2 }{4a} > 0$ which tunes the Mott to BSF transition.", "As shown by paths IIIa and IIIb, there are two independent ways to tune $\\tilde{\\mu } $ : along the vertical path-IIIa, at a fixed $ r > 0 $ , one increases $ c $ and therefore $ k_0 $ ; along the horizontal path-IIIb, at a fixed $ \\gamma < 0 $ , one increases $ -r $ .", "Now we focus on the vicinity of the $ z=2 $ quantum critical line.", "Due to $ [Z_2]=-1 $ , the $ Z_2 $ metric crossing term flows to zero quickly under the RG, so it can be treated as very small, $ Z_2 \\ll Z_1 $ .", "In the following, we will use this fact to simplify the excitation spectrum in the Mott and BSF phases and also stress the roles of the leading irrelevant operator $ Z_2 $ .", "(b) Excitations in the Mott phase: In the Mott phase, $\\tilde{\\mu } < 0$ and $\\rho _0=0$ , the excitation spectrum is $\\omega = \\frac{ -\\tilde{\\mu }+\\tilde{v}_x^2k_x^2+\\tilde{v}_y^2k_y^2 }{ Z_1-Z_2k_y } =\\frac{ \\tilde{v}_y^2k_*^2 - \\tilde{\\mu } }{ Z_1-Z_2k_* } +\\frac{ \\tilde{v}_x^2k_x^2 }{ Z_1-Z_2k_* } +\\frac{ (Z_1^2\\tilde{v}_y^2 - Z_2^2\\tilde{\\mu })(k_y-k_*)^2 }{ (Z_1-Z_2k_*)^3 } + \\cdots $ where $k_*=\\frac{Z_1}{Z_2}[1-\\sqrt{1-(Z_2^2\\tilde{\\mu
})/(v_y^2Z_1^2)}] $ .", "Consider the $Z_2\\ll Z_1 $ limit, the result is simplified as =Z-11[ - +vx2kx2+vy2(ky-k* )2 ] where $ k_{*}= \\frac{Z_2\\tilde{\\mu }}{2 Z_1 v_y^2} $ which vanishes at the phase boundary $ \\tilde{\\mu }=0 $ .", "The excitation spectrum has one minimum at $ (0,0) $ right at the QCP, but at $ (0, k_{*} ) $ away from it inside the Mott phase.", "It indicates the condensation at $ k_0 $ with the dynamic exponent $ z=2 $ .", "It can be contrasted to Eq.REF where the Doppler shift term appears outside of the square root indicating $ z=1 $ .", "It contains also both particle and hole excitations.", "Here it appears just as a shift in $ k_y $ implying $ z=2 $ , it only contains the particle excitation spectrum, while the hole excitation is at much higher energy, so can be dropped.", "It is the leading irrelevant metric crossing term $ Z_2 $ which leads to the shift of the minimum away from the origin.", "(c) Excitations in the BSF phase: In the BSF phase, $\\tilde{\\mu } > 0$ and $\\rho _0 >0$ , the excitation spectrum are: =Z-11 [ 4 u 0[vx2 kx2+( vy2+ u 0Z22)ky2] +2 u 0Z2 ky ] which is always stable inside the BSF phase.", "It shows the Doppler shift term $ \\pm $ corresponds to the P and H excitations respectively.", "Due to $ z=2 $ QCP, the magnitude and phase are conjugate to each other, so the Higgs mode inside the SF phase in Fig.REF does not exist anymore inside the BSF phase.", "As argued below Eq.REF , this fact is also due to the spontaneous $ C $ symmetry breaking inside the BSF phase.", "The Doppler shift term $ \\sim u \\rho _0 c $ inside the BSF phase near the $ z=2 $ line can be contrasted to that $ =c $ near the $ z=(3/2,3) $ line in Eq.REF , also that $ =c $ inside the SF phase in Eq.REF .", "The Doppler shift term $ = c $ in Eq.REF inside the Mott phase near the $ z=1 $ line can also be contrasted to that inside the Mott phase $ k_{*} $ in Eq.REF near the $ z=2 $ line.", "These facts show that at the effective action level, as soon as the Doppler shift in Eq.REF inside the Mott phase near the $ z=1 $ line is given, then the Doppler shifts in all the other phases and regimes are fixed and connected by the Multi-critical (M) point.", "They could be renormalized to different values.", "As shown in the microscopic calculations in Sec.VII and VIII, this value $ c $ in the Mott phase is given in Eq.REF in terms of the bare boost velocity respect to the lattice $ v $ , and also the microscopic parameters $ t_0, t_{b1}, t_{b2} $ determined by the Wannier functions.", "So it could even change sign when $ v $ moves past $ v_c=t_0/t_{b1} $ .", "Eq.REF and Eq.REF show that it is the type-II dangerously irrelevant metric crossing term $ Z_2 $ which leads to the Doppler shift term in the Mott and BSF phase respectively.", "They can be contrasted to Eq.REF along the path-I and Eq.REF along the path-II respectively.", "So we reach consistent results from the $ z=2 $ line and the $ z=(3/2, 3 ) $ line in Fig.REF .", "So far, we analyzed the effective action Eq.", "by mean field theory + Gaussian fluctuations.", "The results may be valid well inside the phases, but will surely break down near the QPTs.", "It becomes important to study the nature of the QPTs by performing renormalization group (RG) analysis.", "Unfortunately, the conventional Wilsonian momentum shell method seems in-applicable to study the RG in he moving frame.", "Very fortunately, the field theory methods developed for non-relativistic quantum field theory in [24], [10], [25] by one of the authors can be 
effectively applied in the moving frame.", "Following this method, we will perform the RG to investigate the nature of the QCP from Mott to SF in Fig.REF along the path I in Fig.REF .", "We stress the important roles played by the $ C $ symmetry.", "1.", "RG of the self-energy at two -loops Figure: The self-energy of the complex bosons to one loop (a) and two loops (b) of Eq..", "The line indicates the propagator Eq..", "The arrow gives the creation of the complex bosons.The dot stands for the interaction u u .", "k=(k 0 ,k →) k= ( k_0, \\vec{k} ) means the 3 momentum.In the following, by applying the method [10], [24], [25] developed for the field renormalization RG for non-relativistic QFT, the idea is to split the integrals to frequency and momentum, then always perform the integral over the frequency first, then perform dimensional regularization in the momentum space only.", "However, in relativistic QFT, due to the Euclidian invariance, such a splitting is not necessary, the frequency and momentum can be combined into a 4 momentum.", "The advantage of this method over the traditional Wilsonian momentum shell method is that it is systematic expansion going to any loops.", "Because the field theory method only focus on canceling the UV divergence at $ T=0 $ , so one can just look at the massless case at the QCP $ r=0 $ .", "From Eq., one can identify the bare single particle ( boson ) Green function in $ ( \\vec{k}, \\omega _n ) $ space: $G_0 ( \\vec{k}, \\omega _n )= \\langle \\psi (\\vec{k}, \\omega _n) \\psi ^{*}(\\vec{k}, \\omega _n) \\rangle =\\frac{1}{ -( i \\omega _n -c k_y )^2 + k^2_x + k^2_y }=\\frac{i}{2k}[ \\frac{1}{ \\omega _n +i \\epsilon _{+}( \\vec{k} ) } - \\frac{1}{ \\omega _n - i \\epsilon _{-}( \\vec{k} ) } ]$ where $ \\epsilon _{\\pm }( \\vec{k} )= k \\pm c k_y \\ge 0 $ is the particle-hole excitation energy respectively.", "The sum of them is $ \\epsilon _{+}( \\vec{k} ) + \\epsilon _{-}( - \\vec{k} )= 2 \\epsilon ( \\vec{k} )= 2 k $ .", "The $ C $ symmetry dictates $ \\epsilon _{+}( \\vec{k} )= \\epsilon _{-}( - \\vec{k} ) > 0 $ which plays crucial roles in the RG.", "At $ c=0 $ , it reduces to the usual PH symmetry which dictates $ \\epsilon _{+}( \\vec{k} )= \\epsilon _{-}( \\vec{k} )= \\epsilon _{-}( -\\vec{k} ) $ .", "We first look at the boson self-energy at one -loop Fig.REF a.", "$(4a)= \\int \\frac{d^d q}{ (2 \\pi )^d } \\int \\frac{d \\nu }{ 2 \\pi } G( \\vec{q}, \\nu )=\\int \\frac{d^d q}{ (2 \\pi )^d } \\int \\frac{d \\nu }{ 2 \\pi }\\frac{i}{2q}[ \\frac{1}{ \\nu +i \\epsilon _{+}( \\vec{q} ) } - \\frac{1}{ \\nu - i \\epsilon _{-}( \\vec{q} ) } ]= \\int \\frac{d^d q}{ (2 \\pi )^d }\\frac{1}{2q}=0$ where we first perform the integral over the frequency, then doing dimensional regularization in the momentum space.", "One can see that at one-loop order, $ c $ does not even appear, so it is identical to the relativistic case.", "One loop is trivial.", "To find a non-vanishing anomalous dimension for the boson field, one must get to two loops Fig.REF b : $(4b) &= & \\int \\frac{d^d q}{ (2 \\pi )^d } \\int \\frac{d q_0}{ 2 \\pi } \\int \\frac{d^d p}{ (2 \\pi )^d } \\int \\frac{d p_0}{ 2 \\pi }G( \\vec{q}, q_0 ) G( \\vec{p}, p_0 ) G( \\vec{k}+ \\vec{p} - \\vec{q}, k_0+p_0-q_0 ) =\\nonumber \\\\& & \\int \\frac{d^d q}{ (2 \\pi )^d } \\int \\frac{d q_0}{ 2 \\pi } \\int \\frac{d^d p}{ (2 \\pi )^d } \\int \\frac{d p_0}{ 2 \\pi }\\frac{i}{2q} \\frac{i}{2p} \\frac{i}{2 | \\vec{k} +\\vec{p} -\\vec{q} | }[ \\frac{1}{ q_0 +i \\epsilon _{+}( \\vec{q} ) } - \\frac{1}{ 
q_0 - i \\epsilon _{-}( \\vec{q} ) } ]\\nonumber \\\\& \\times & [ \\frac{1}{ p_0 +i \\epsilon _{+}( \\vec{p} ) } - \\frac{1}{ p_0 - i \\epsilon _{-}( \\vec{p} ) } ][ \\frac{1}{ k_0+p_0-q_0 +i \\epsilon _{+}( \\vec{k}+\\vec{p} -\\vec{q} ) } - \\frac{1}{k_0+p_0-q_0 - i \\epsilon _{-}( \\vec{k}+\\vec{p} -\\vec{q} ) } ]\\nonumber \\\\& = & \\int \\frac{d^d q}{ (2 \\pi )^d } \\int \\frac{d^d p}{ (2 \\pi )^d }\\frac{1}{2q} \\frac{1}{2p} \\frac{1}{2 | \\vec{k} +\\vec{p} -\\vec{q} | }\\frac{ q + p+ | \\vec{k} +\\vec{p} -\\vec{q} | }{ ( k_0 + i c k_y )^2 + ( q + p+ | \\vec{k} +\\vec{p} -\\vec{q} | )^2 }\\nonumber \\\\&= & \\frac{u^2}{ 4 \\pi ^2 \\epsilon } [ ( k_0 + i c k_y )^2 + k^2 ] + \\cdots $ where we only list the field renormalization UV divergence and $ \\cdots $ means the UV finite parts.", "when one perform the frequency integral, one must pick two poles at the two opposite side of the frequency integral to get a non-vanishing answer.", "Putting $ c=0 $ gives back to the relativistic case.", "Because $ c $ always appears in the combination of $ k_0 + i c k_y $ , so the UV divergency is identical to that of the $ c=0 $ case.", "It gives the identical anomalous dimension to the $ c=0 $ case.", "So the dynamic exponent $ z=1 $ at least to two loops.", "In fact, we expect that the $ C $ symmetry dictates $ z=1 $ is exact in Fig.2.", "2.", "RG of the interaction at one -loop Figure: The renormalization of the interaction vertex of Eq.", "upto one loop.Now we move to the interaction vertex Fig.REF .", "Setting the external frequency and momentum $ \\omega = \\omega _1 + \\omega _2, \\vec{k}= \\vec{k}_1 + \\vec{k}_2 $ , Fig.REF a can be written as: $(5a) & = & \\int \\frac{d^d q}{ (2 \\pi )^d } \\int \\frac{d \\nu }{ 2 \\pi } G( \\vec{q}, \\nu ) G( \\vec{k} - \\vec{q}, \\omega - \\nu )\\nonumber \\\\& = & \\int \\frac{d^d q}{ (2 \\pi )^d } \\int \\frac{d \\nu }{ 2 \\pi }\\frac{i}{2q} \\frac{i}{2 | \\vec{k} -\\vec{q} | } [ \\frac{1}{ \\nu +i \\epsilon _{+}( \\vec{q} ) } - \\frac{1}{ \\nu - i \\epsilon _{-}( \\vec{q} ) } ][ \\frac{1}{ \\omega -\\nu +i \\epsilon _{+}( \\vec{k} -\\vec{q} ) } - \\frac{1}{\\omega - \\nu - i \\epsilon _{-}( \\vec{k} -\\vec{q} ) } ]\\nonumber \\\\& = & \\int \\frac{d^d q}{ (2 \\pi )^d } \\frac{1}{2q | \\vec{k} -\\vec{q} | }\\frac{ q + | \\vec{k} -\\vec{q} | }{ ( \\omega + i c k_y )^2 + ( q + | \\vec{k} -\\vec{q} | )^2 }\\nonumber \\\\&= & \\frac{u^2}{ 8 \\pi ^2 \\epsilon } + \\cdots $ where $ \\cdots $ means the UV finite parts.", "when one perform the frequency integral, one must pick two poles at the two opposite side of the frequency integral to get a non-vanishing answer.", "Putting $ c=0 $ gives back to the relativistic case.", "Because $ c $ always appears in the combination of $ \\omega + i c k_y $ , so the UV divergency is identical to that of the $ c=0 $ case.", "One can get a similar expression in Fig.REF (b), (c) cases.", "So the $ \\beta $ function $ \\beta ( u ) $ is identical to the $ c=0 $ case.", "This shows that the boost $ c $ is exactly marginal with always the combination $ \\omega + i c k_y $ appearing in all the physical quantities ( see Fig.REF b and Eq.REF ).", "Despite the critical exponent is the same as the $ c=0 $ case, various physical quantities at a finite $ T $ still depend on $ c $ as calculated in the following.", "3.", "Finite temperature RG Following the method developed in [10], [24], [25], we can also study the RG at a finite temperature.", "The strategy is that even for a relativistic QFT at $ T=0 $ , any finite temperature breaks 
the Lorentz invariance, so the imaginary time direction has to be treated separately from the space, the summation over the imaginary frequency has to be performed first before doing the dimensional regularization in the momentum space only.", "Now we look at the boson self-energy at one-loop and a finite temperature Fig.REF a, Eq.REF becomes: $(4a) ( T >0 ) & = & u \\int \\frac{d^d q}{ (2 \\pi )^d } \\frac{1}{\\beta } \\sum _{i \\nu _n }\\frac{1}{2q}[ \\frac{1}{ i \\nu - \\epsilon _{+}( \\vec{q} ) } - \\frac{1}{ i \\nu + \\epsilon _{-}( \\vec{q} ) } ] \\nonumber \\\\&= & u [ \\int \\frac{d^d q}{ (2 \\pi )^d }\\frac{1}{2q} + 2 \\int \\frac{d^d q}{ (2 \\pi )^d }\\frac{1}{2 q} \\frac{1}{ e^{ \\beta \\epsilon _{+}( \\vec{q} ) }-1 } ] \\nonumber \\\\&= & 2 u (k_B T)^2 \\int \\frac{d^3 q}{ (2 \\pi )^3 } \\frac{1}{2 q}\\frac{1}{ e^{q+ cq_y}-1 },~~~~~~ d=3$ where we evaluate the last line at the upper critical dimension $ d_u=3 $ , drop the first term which is the $ T=0 $ result in Eq.REF .", "So $ c $ does appear at any $ T > 0 $ .", "Setting $ c=0 $ recovers the result in the lab frame $ u (k_B T)^2/2 \\pi ^2 \\int ^{\\infty }_0 \\frac{ x dx }{ e^x-1} = u (k_B T)^2/24 $ .", "The interaction $ u $ is marginally irrelevant at $ d_u=3 $ and will lead to logarithmic corrections to Eq.REF .", "For $ d=2 < d_u=3 $ , the integral in Eq.REF is IR divergent, this is expected because the Gaussian fixed point flows to the Wilson-Fisher fixed point, then it goes to the $ d=2 $ scaling analysis at Sec.VII.", "Fig.REF b can also be similarly evaluated at a finite $ T $ , It is evaluated in Eq.REF at $ T=0 $ where the pole structure in the six terms considerably simplify the final UV divergent answer.", "But at a finite $ T $ , all the six terms contribute due to the boson distribution factor.", "The cancellation of the $ T=0 $ UV divergence in Eq.REF by the counter-terms leads to a finite answer at $ T > 0 $ ." ], [ " The Noether current and the charge -vortex duality along the path I", "So far, we analyzed the effective action Eq.", "by mean field theory + Gaussian fluctuations.", "Here we will study it by non-perturbative duality transformation.", "It was well known that there is a charge-vortex duality at $ z=1 $ case in the lab frame [67], [68], [69].", "The Mott insulating phase is due to the condensation of vortices.", "The charge -vortex duality in the moving frame provides a non-perturbative proof of $ z =1 $ from the Mott to SF transition along the path I when $ c < v_y $ tuned by $ r $ .", "It also provides an exact proof that the boost $ c $ is exactly marginal with always the combination $ \\omega + i c k_y $ appearing in all the physical quantities ( see Fig.REF b ).", "But one need to study the conserved Noether current first before investigating the charge vortex duality.", "In Sec.A, we will also pay special attentions to the higher derivative terms such as the $ a, b $ terms in Eq.." 
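, "As a small aside on Eq.REF , in the scaled units used there ( the velocities are set to unity, so the boost enters only through $ |c| < 1 $ ), the remaining momentum integral can be done in closed form: $\\int \\frac{d^3 q}{ (2 \\pi )^3 } \\frac{1}{2 q}\\frac{1}{ e^{q+ cq_y}-1 }= \\frac{1}{8\\pi ^2} \\int ^{1}_{-1} du \\int ^{\\infty }_0 \\frac{ q\\, dq }{ e^{ q(1+cu) }-1 }= \\frac{1}{48} \\int ^{1}_{-1} \\frac{ du }{ (1+cu)^2 }= \\frac{1}{ 24( 1-c^2 ) }$ which reduces to $ 1/24 $ at $ c=0 $ and diverges as $ c \\rightarrow 1 $ , so the boost enhances the one-loop thermal correction by the factor $ 1/(1-c^2) $ ."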
], [ " The conserved Noether current in the moving frame ", "The global $ U(1) $ symmetry $ \\psi \\rightarrow \\psi e^{ i \\chi } $ leads to the conserved Noether current $ \\tilde{J}_\\mu =( J_\\tau , J_x, \\tilde{J}_y ) $ in Eq.", ": $J_\\tau & = & i( \\psi ^{*} \\tilde{\\partial }_\\tau \\psi - \\psi \\tilde{\\partial }_\\tau \\psi ^{*} )= i[ ( \\psi ^{*} \\partial _\\tau \\psi - \\psi \\partial _\\tau \\psi ^{*} ) -ic ( \\psi ^{*} \\partial _y \\psi - \\psi \\partial _y \\psi ^{*} ) ]\\nonumber \\\\J_x & = & i v^2_x ( \\psi ^{*} \\partial _x \\psi - \\psi \\partial _x \\psi ^{*} )\\nonumber \\\\\\tilde{J}_y & = & i v^2_y ( \\psi ^{*} \\partial _y \\psi - \\psi \\partial _y \\psi ^{*} ) -ic J_{\\tau } = J_y-ic J_{\\tau }$ which is odd under the C, but even under PT, so odd under the CPT [81].", "Both $ J_\\tau $ and $ \\tilde{J}_y $ contain the effects from the boost $ c $ .", "It also show the current along the $ y $ direction $ \\tilde{J}_y $ is the sum of the intrinsic one $ J_y $ and the one due to the boost $ -ic J_{\\tau } $ .", "They satisfy $\\partial _\\tau J_\\tau + \\partial _x J_x+ \\partial _y (J_y - ic J_\\tau ) =0$ which is equivalent to Eq.REF .", "This equivalence gives the physical meaning of the particle-hole 3-currents $ {J}_\\mu =( J_\\tau , J_x, J_y ) $ introduced in the charge-vortex duality.", "1.", "The Noether current in the Mott, SF and BSF phases Now we evaluate the mean field 3-currents in all the three phases in Fig.REF .", "Plugging in $ \\langle \\psi \\rangle =\\sqrt{\\rho _0} e^{i k_0 y } $ into Eq.REF leads to [56]: $J_\\tau & = & i 2 k_0 c \\rho _0, \\nonumber \\\\J_x & = & 0 \\nonumber \\\\\\tilde{J}_y & = & 2 k_0 \\rho _0 ( c^2-v^2_y ) ,~~~J_y= -2 k_0 \\rho _0 v^2_y$ where the factor of $ i $ is due to the imaginary time $ \\tau =i t $[36].", "Near the $ z=2 $ line in Fig.REF , $ k_0= \\pm \\sqrt{ \\frac{c^2-v^2_y}{2a} } $ listed in Eq.. 
As shown in Sec.V, due to the $ C $ symmetry breaking inside the BSF phase, one can only pick one of the $ \\pm $ .", "The Mott state with $ \\rho _0=0 $ carries nothing dictated by the $ C $ symmetry.", "The BSF near the $ z=2 $ line starts to carry both the density $ J_{\\tau } $ and the current $ \\tilde{J}_y $ which comes from both the particle and the hole.", "Note that the $ a $ term in Eq.", "also contribute to the conserved current.", "In general, any quantum field theory contains a high order derivative such as $ L( \\psi , \\partial _\\mu \\psi , \\partial _\\mu \\partial _\\nu \\psi ) $ , the high derivative term [121] also contributes to the conserved Noether current: $J^a_\\mu = \\frac{ \\partial {\\cal L} }{ \\partial ( \\partial _\\mu \\phi _i ) } \\frac{ \\delta \\phi _i }{ \\delta \\omega _a}+ \\frac{ \\partial {\\cal L} }{ \\partial (\\partial _\\mu \\partial _\\nu \\phi _i) } \\partial _\\nu ( \\frac{ \\delta \\phi _i }{ \\delta \\omega _a} )-[\\partial _\\nu \\frac{ \\partial {\\cal L} }{ \\partial (\\partial _\\mu \\partial _\\nu \\phi _i) }] \\frac{ \\delta \\phi _i }{ \\delta \\omega _a}$ where the sum over $ i $ is assumed.", "When adding back $ a $ term's contribution: $J^a_y= -i a ( \\psi ^{*} \\partial ^3_y \\psi - \\psi \\partial ^3_y \\psi ^{*} )= - 2a k^3_0 \\rho _0$ one finds $ \\tilde{J}_y =2a k^3_0 \\rho _0,~~~J_y= - k_0 \\rho _0 ( c^2 + v^2_y ) $ .", "Near the $ z=(3/2,3) $ line in Fig.REF , $ k_0= \\pm \\sqrt{ \\frac{c^2-v^2_y}{2b} } $ listed in Eq.REF .", "Again due to the $ C $ symmetry breaking inside the BSF phase, one can only pick one of the $ \\pm $ .", "The SF state with $ k_0=0 $ carries nothing dictated by the $ C $ symmetry.", "However, the $ b $ term in Eq.", "also contribute to the conserved current.", "When adding back $ b $ term's contribution: $J^b_y= i 2b | \\partial _y \\psi |^2 ( \\psi ^{*} \\partial _y \\psi - \\psi \\partial _y \\psi ^{*} )$ one finds $ \\tilde{J}_y =0,~~~J_y= -2 k_0 \\rho _0 c^2 $ .", "2.", "The Noether current serve as the order parameter to distinguish BSF from the SF In short, the bi-linear currents Eq.REF which is odd under C and PT ( still keep CPT ) can be taken as the order parameter to distinguish the SF from the BSF: in the former, the $ C $ is respected, the currents vanish.", "in the latter, the $ C $ is spontaneously broken, the currents shown in Eq.REF does not vanish.", "More precisely, it is $ J_y $ which acts as the order parameter to distinguish the BSF from the SF phase in Fig.REF .", "In fact, it contains both the magnitude $ \\rho _0 $ and the phase $ k_0 $ .", "Of course, $ \\psi $ remains the order parameter to distinguish the Mott from the SF or BSF." ], [ " The charge-vortex duality in the moving frame along the Path-I. ", "We will investigate the duality first in the boson picture, then from the dual vortex picture.", "Both lead to consistent results.", "Because we focus along the Path-I in Fig.REF , we can ignore the higher order terms such as the $ a $ and $ b $ term in Eq.. 
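Throughout this subsection we will also repeatedly use the standard Gaussian ( Hubbard-Stratonovich ) identity, quoted here up to unimportant rescalings of the fields: $ e^{ -\\frac{1}{2} (\\tilde{\\partial }_\\mu \\theta )^2 } \\propto \\int {\\cal D} J_\\mu \\; e^{ -\\frac{1}{2} J^2_\\mu - i J_\\mu \\tilde{\\partial }_\\mu \\theta } $ which follows from completing the square in $ J_\\mu $ and is what introduces the 3-currents below.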
1.", "Duality transformation in the boson picture We start from the hard-spin representation Eq.REF : $\\mathcal {L}_b=(\\partial _\\tau \\theta -ic\\partial _y\\theta )^2 + v_x^2(\\partial _x\\theta )^2 + v_y^2(\\partial _y\\theta )^2$ where the angle $ \\theta $ includes both the spin-wave and vortex excitation.", "To simplify the transformation, one can scale away $ v_x k_x \\rightarrow k_x, v_y k_y \\rightarrow k_y $ , so $ c \\rightarrow c/v_y < 1 $ near the Mott to the SF transition.", "To perform the duality transformation, one can decompose $ \\theta \\rightarrow \\theta + \\phi $ which stands for the the spin-wave and vortex respectively.", "Introducing the 3-currents $ J_\\mu =( J_\\tau , J_x,J_y ) $ to decouple the three quadratic terms leads to: $\\mathcal {L}_b= \\frac{1}{2} J^2_\\mu + i J_\\tau (\\partial _\\tau \\theta -ic\\partial _y\\theta + \\partial _\\tau \\phi -ic\\partial _y \\phi ) +i J_x (\\partial _x\\theta + \\partial _x \\phi ) + i J_y( \\partial _y\\theta + \\partial _y\\phi )$ Then integrating out $ \\theta $ leads to the conservation of the three current: $(\\partial _\\tau - i c \\partial _y ) J_\\tau + \\partial _x J_x+ \\partial _y J_y=0$ which is equivalent to Eq.REF .", "This equivalence shows that the 3-currents $ J_\\mu =( J_\\tau , J_x,J_y ) $ is nothing but the ones listed in Eq.REF directly derived from the Noether theorem.", "According to Eq.REF and REF , there are two equivalent ways to proceed the duality transformation one is to introduce the three derivatives $ \\tilde{\\partial }_\\mu =( \\partial _\\tau - i c \\partial _y, \\partial _x, \\partial _y ) $ , so Eq.REF can be written as $ \\tilde{\\partial }_\\mu J_\\mu =0 $ or introduce the three current $ \\tilde{J}_\\mu =( J_\\tau , J_x, J_y-i c J_\\tau ) $ , so Eq.REF can be written as $ \\partial _\\mu \\tilde{J}_\\mu =0 $ .", "It turns out the first way is more convenient, so we take it in the following.", "Note that the derivatives along the three directions still commute with each other in this boosted frame, so Eq.REF implies $J_{\\mu }= \\epsilon _{\\mu \\nu \\lambda } \\tilde{\\partial }_{\\nu } a_{\\lambda }$ where $ a_{\\lambda } $ is a non-compact $ U(1) $ gauge field.", "Then Eq.REF reduces to $\\mathcal {L}_v= \\frac{1}{4} \\tilde{f}^2_{\\mu \\nu } + i 2 \\pi a_\\mu \\tilde{j}^{v}_{\\mu }$ where $ \\tilde{f}_{\\mu \\nu }= \\tilde{\\partial }_\\mu a_\\nu - \\tilde{\\partial }_\\nu a_\\mu $ is the gauge invariant field strength ( see below Eq.REF ) and $ \\tilde{j}^{v}_{\\mu }= \\frac{1}{2 \\pi } \\epsilon _{\\mu \\nu \\lambda } \\tilde{\\partial }_{\\nu } \\tilde{\\partial }_{\\lambda } \\phi $ is the vortex current.", "Now we introducing the dual complex order parameter $ \\psi _v $ and considering $ \\tilde{\\partial }_\\mu =( \\partial _\\tau - i c \\partial _y, \\partial _x, \\partial _y ) $ , Eq.REF can be written in terms of $ \\psi _v $ .", "It leads to Eq.REF where we explicitly wrote $ \\tilde{\\partial }_\\mu =( \\partial _\\tau - i c \\partial _y, \\partial _x, \\partial _y ) $ out in the kinetic term, but only keep it implicitly in $ \\tilde{f}_{\\mu \\nu } $ .", "2.", "Duality transformation in the vortex picture In the lab frame, one can perform the well known charge -vortex duality on Eq.", ": Lv =|(-i a )v |2 + v2vx |(x-i ax)v|2 + v2vy |(y-i ay)v|2 + rv|v|2+ uv |v |4+ 14 f2 + When $ r_v < 0 $ , it is in the Mott phase $ \\langle \\psi _v \\rangle \\ne 0 $ , $ r_v > 0 $ , it is in the SF phase $ \\langle \\psi _v \\rangle = 0 $ .", "In addition to the emergent Lorentz 
invariance, the T, PH and P symmetry, the Global $ U(1) $ symmetry of the boson is promoted to the local ( gauge ) symmetry $ \\psi _v \\rightarrow \\psi _v e^{ i \\chi }, a_{\\mu } \\rightarrow a_{\\mu } + \\partial _\\mu \\chi $ .", "The gauge invariance is completely independent of the emergent Lorentz invariance, it is also much more robust than the emergent Lorentz invariance in materials or AMO systems.", "Then going to the moving frame by substituting $ \\partial _\\tau \\rightarrow \\partial _\\tau -ic\\partial _y $ into Eq.REF leads to: Lv =(-icy -i a )*v (-icy + i a )v + |(x-i ax)v|2 + |(y-i ay)v|2 + rv|v|2+ uv |v |4+ 14 f2 + where $ \\tilde{f}_{\\mu \\nu }= \\tilde{\\partial }_\\mu a_\\nu - \\tilde{\\partial }_\\nu a_\\mu $ .", "As stressed in [36], one can see the sign difference between the boost $ ic \\partial _y $ and the time component of the gauge field $ a_\\tau $ in the first term.", "This different structure in the boost and gauge field could be important in the lattice version of Eq.REF to be discussed in the conclusion section.", "The boost also generalize the original $ U(1) $ gauge invariance $ \\psi _v \\rightarrow \\psi _v e^{ i \\chi }, a_{\\mu } \\rightarrow a_{\\mu } + \\partial _\\mu \\chi $ in the lab frame to the $ \\tilde{U}(1) $ gauge invariance in the moving frame: v v e i ,    a a + To perform the duality transformation, it is convenient to get to the hard spin representation of Eq.REF ( For the notational convenience in the following, we replace $ a_{\\mu } $ in Eq.REF and REF by $ A_{\\mu } $ in Eq.REF ): $\\mathcal {L}_v=(\\partial _\\tau \\theta -ic\\partial _y\\theta -A_\\tau )^2 + (\\partial _x\\theta - A_x )^2 + (\\partial _y\\theta -A_y )^2+ \\frac{1}{4} \\tilde{F}^2_{\\mu \\nu }$ where the angle $ \\theta $ includes both the spin-wave and vortex excitation.", "Following the similar procedures as done in the boson representation (1) decomposing $ \\theta \\rightarrow \\theta + \\phi $ which stands for the \"spin-wave\" and \"vortex\" respectively.", "(2) Introducing the 3-currents $ J_\\mu =( J_\\tau , J_x,J_y ) $ to decouple the three \"quadratic\" terms in Eq.REF .", "(3) Integrating out $ \\theta $ leads to the conservation of the three \" boson\" current: $ \\tilde{\\partial }_\\mu J_\\mu =0 $ which implies $ J_{\\mu }= \\epsilon _{\\mu \\nu \\lambda } \\tilde{\\partial }_{\\nu } a_{\\lambda } $ where $ a_{\\lambda } $ is a non-compact $ U(1) $ gauge field.", "Then we reach: $\\mathcal {L}_b= \\frac{1}{4} \\tilde{F}^2_{\\mu \\nu }-i A_{\\mu } \\epsilon _{\\mu \\nu \\lambda } \\tilde{\\partial }_{\\nu } a_{\\lambda } + i 2 \\pi a_\\mu \\tilde{j}^{v}_{\\mu } + \\frac{1}{4} \\tilde{f}^2_{\\mu \\nu }$ where $ \\tilde{f}_{\\mu \\nu }= \\tilde{\\partial }_\\mu a_\\nu - \\tilde{\\partial }_\\nu a_\\mu $ is the $ \\tilde{U}(1) $ gauge invariant field strength and $ \\tilde{j}^{v}_{\\mu }= \\frac{1}{2 \\pi } \\epsilon _{\\mu \\nu \\lambda } \\tilde{\\partial }_{\\nu } \\tilde{\\partial }_{\\lambda } \\phi $ is the \"vortex\" current.", "Now integrating out $ A_{\\mu } $ leads to a mass term for $ a_\\nu $ .", "$\\mathcal {L}_b= i 2 \\pi a_\\mu \\tilde{j}^{v}_{\\mu } + \\frac{1}{4} \\tilde{f}^2_{\\mu \\nu } + \\frac{1}{2} ( a_\\mu )^2$ where the mass term makes the Maxwell term in-effective in the low energy limit.", "Note that this \"vortex \" current in the vortex representation is nothing but the original boson current in the boson representation.", "Now using $ \\tilde{\\partial }_\\mu =( \\partial _\\tau - i c \\partial _y, \\partial _x, \\partial 
_y ) $ , it leads back to Eq.REF .", "After introducing the dual complex order parameter $ \\psi $ which is nothing but the original boson leads back to Eq..", "It is easy to see Eq.REF has two sectors, the vortex degree of freedoms $ \\psi _v $ and the gauge field $ a_{\\mu } $ .", "Then as shown in Eq.REF , going to a moving frame adds a boost to both sectors.", "So far, our boson-vortex duality is limited to the path I with $ c < v_y $ in Fig.REF , it would be interesting to push it to path II and path III.", "So the boost could trigger instabilities in the two sectors respectively.", "Note that the artificial gauge field here is different from the Electro-Magnetism (EM), the former's intrinsic velocity is $ v \\ll c_l $ , the latter is just the speed of light $ c_l $ .", "So one can safely use the GT in the former, but must use the LT first in the latter, then keep upto the linear term in $ v/c_l $ when taking the small $ v/c_l $ limit as shown in Appendix F and G. It is worth to note that the boson-vortex duality transformation focus on only low energy sector, so the Higgs mode may not be seen in such a duality transformation.", "However,it was shown in Sec.IV, the Higgs mode is irrelevant anyway from the SF to BSF transition.", "It remains interesting to achieve Fig.REF from the vortex representation.", "3.", "Galileo transformation of the dual gauge field in the vortex picture The $ \\tilde{U}(1) $ gauge invariance Eq.REF indicates that under the GT: y = y+ c t,    t=t = -i c y,   y=y where we define the imaginary time $ \\tau =it $ .", "If one define the dual gauge field as: $\\tilde{a}_0 = a_0 + i c a_y,~~~~ \\tilde{a}_x= a_x,~~~~\\tilde{a}_y= a_y$ Then the $ \\tilde{U}(1) $ gauge invariance Eq.REF can be implemented as v v e i ,    a a + which means $ \\tilde{a}_{\\mu } $ just transforms as the $ U(1) $ dual gauge field.", "In fact, as to be shown in Appendix F, Eq.REF is nothing but the GT of the gauge field.", "This can also be seen by looking how the $ \\tilde{U}(1) $ field strength $ \\tilde{f}_{\\mu \\nu }= \\tilde{\\partial }_\\mu a_\\nu - \\tilde{\\partial }_\\nu a_\\mu $ transform under the GT.", "By using the definition Eq.REF , one can find $\\tilde{f}_{0 \\alpha }=f_{0 \\alpha } + ic f_{\\alpha y},~~~\\tilde{f}_{\\alpha \\beta }=f_{\\alpha \\beta }$ where $ f_{\\mu \\nu }= \\partial _\\mu \\tilde{a}_\\nu - \\partial _\\nu \\tilde{a}_\\mu $ are expressed in terms of $ \\tilde{a}_\\mu $ .", "The extra term $ ic f_{\\alpha y} $ is due to the fact the Maxwell term is not GI.", "It is well known that the Maxwell term is due to the SF Goldstone mode, so the SF Goldstone mode is not GI either, consistent with the results achieved in Sec.II-B.", "At $ 2+1 $ d, $ f_{\\mu \\nu } $ can be expressed as $ E_{\\alpha }=f_{0 \\alpha }, B_z = f_{xy} $ .", "Then Eq.REF can be expressed as: $\\tilde{E}_{x}=E_x + ic B_z,~~~ \\tilde{E}_{y}=E_{y },~~~\\tilde{B}_{z}=B_z$ which is nothing but how the artificial dual \" EM \" field transforms under the GT boost in the Euclidean space-time.", "The extra term $ ic B_z $ is due to the fact the Maxwell term is not GI.", "However, as shown in the appendix F, G, H, the Chern-Simon term is GI.", "The artificial dual gauge field here is described by Maxwell term.", "But its GT can also be applied to Chern-Simon gauge field in the bulk FQH and its associated edge properties in real time formalism in the appendix F,G,H." 
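, "As an explicit check of Eq.REF , take the $ x $ component and use only the definition Eq.REF , $ a_0=\\tilde{a}_0 - ic \\tilde{a}_y ,~ a_{x}=\\tilde{a}_{x},~ a_{y}=\\tilde{a}_{y} $ : $\\tilde{f}_{0 x}= (\\partial _\\tau -ic\\partial _y) a_x - \\partial _x a_0= (\\partial _\\tau \\tilde{a}_x - \\partial _x \\tilde{a}_0 ) + ic ( \\partial _x \\tilde{a}_y - \\partial _y \\tilde{a}_x )= f_{0x} + ic f_{xy}$ which is precisely the statement $ \\tilde{E}_x = E_x + ic B_z $ in Eq.REF ."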
], [ " Finite temperature properties ", "Here, we derive the finite temperature effects and quantum critical scaling functions of various physical quantizes along the three paths in Fig.1 and Fig.2 in the moving frame.", "Even we may not be able to get all the analytic expressions in some cases, we still stress the important effects of the boost $ c $ and compare to those in the lab frame." ], [ " Near $ z=1 $ QCP along the path I ", "As shown in Sec.VI and VII, the Mott to the SF transition at $ 0 < c < v $ is in the same universality class as that in the lab frame, namely in the 3D XY class with the critical exponent $ z=1, \\nu =0.67, \\eta =0.04 $ .", "The boost $ 0 < c < v $ is exactly marginal with always the combination $ \\omega + i c k_y $ appearing in all the physical quantities ( see Fig.REF b ).", "Armed with these facts, one can write down the scaling functions for the Retarded single particle Green function, the Retarded density-density correlation function, compressibility and the specific heat in the Mott side near the QCP in Fig.REF a: $& & G^R ( \\vec{k}, \\omega ) ={\\cal A} ( \\frac{\\hbar \\sqrt{v_x v_y} }{k_B T} )^2 ( \\frac{ k_B T }{\\Delta } )^{\\eta } \\Psi ( \\frac{ \\hbar \\tilde{\\omega } }{ k_B T}, \\frac{ \\hbar \\tilde{k} }{ k_B T }, \\frac{ \\Delta }{ k_B T } )\\nonumber \\\\& & \\chi ^R ( \\vec{k}, \\omega ) =\\frac{ k_B T }{ \\hbar v_x v_y } \\Phi ( \\frac{ \\hbar \\tilde{\\omega } }{ k_B T}, \\frac{ \\hbar \\tilde{k} }{ k_B T }, \\frac{ \\Delta }{ k_B T } )\\nonumber \\\\& & \\kappa = \\frac{ k_B T }{ \\hbar ^2 v_x v_y } A ( \\frac{ \\Delta }{ k_B T } ),~~~~~~~ C_v = \\frac{ T^2}{ \\hbar ^2 v_x v_y } B ( \\frac{ \\Delta }{ k_B T } )$ where $ {\\cal A } \\sim r^{ \\eta \\nu } $ is the single particle residue, $ \\Delta \\sim r^{ \\nu } $ is the Mott gap inside the Mott phase, $ \\tilde{\\omega }=\\omega - c k_y $ is the Doppler shifted frequency in the moving frame, $ \\tilde{k}= \\sqrt{ v^2_x k^2_x + v^2_y k^2_y } $ is the scaled momentum and the compressibility is defined by $ \\kappa = \\lim _{\\vec{k} \\rightarrow 0 } \\chi ^R ( \\vec{k}, \\omega =0 ) $ .", "From the two conserved quantities, one can also form the Wilson ratio $ W=\\kappa T/C_v $ .", "In the SF side near the QCP, $ \\frac{ \\Delta }{ k_B T } $ need to be replaced by $ \\frac{ \\rho _s }{ k_B T } $ where $ \\rho _s \\sim | r |^{ ( d+z-2 ) \\nu } \\sim |r |^{\\nu } $ is the SF density.", "As shown in Fig.REF a and Sec.VI-B-3, all the scaling functions in the SF side will run into singularity at a finite $ T=T_{KT} $ signaling the finite KT transition.", "In the narrow window near the KT transition, the scaling function reduce to those of classical KT transition shown in Eq.REF .", "From the effective action Eq., one can evaluate the three scaling functions explicitly.", "From the three currents Eq.REF , following the method developed in [59], one can also evaluate the retarded density-density correlation function $ \\kappa ^R ( \\vec{k}, \\omega _n ) $ .", "Note that the scaling functions $ \\Psi $ and $ \\Phi $ are identical to those in the lab frame with $ c=0 $ studied in Ref.", "[58], [59].", "The $ c $ dependence is absorbed into the scaling variable $ \\tilde{\\omega }=\\omega - c k_y $ .", "However, the scaling functions $ A $ and $ B $ do depend on $ c $ explicitly due to the frequency summation and the momentum integral ( see Eq.REF ).", "From Eq., one can identify the single particle ( boson ) Green function in $ ( \\vec{k}, \\omega _n ) $ space in the Mott side $ r > 0 $ : 
$G_0 ( \\vec{k}, \\omega _n )= \\langle \\psi (\\vec{k}, \\omega _n) \\psi ^{*}(\\vec{k}, \\omega _n) \\rangle =\\frac{1}{ -( i \\omega _n -c k_y )^2 + \\tilde{k}^2 + r }= \\frac{-1}{ ( i \\omega _n - \\epsilon _{+}( \\vec{k} ) )( i \\omega _n + \\epsilon _{-}( \\vec{k} ) ) }$ where $ \\epsilon _{\\pm }( \\vec{k} ) = \\sqrt{ \\tilde{k}^2 + r } \\pm c k_y > 0 $ well inside the Mott phase $ r > 0 $ .", "From Eq., one can also get the free energy density well inside the Mott: $f= 2 k_B T \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d } \\log ( 1-e^{- \\beta \\epsilon _{+}( \\vec{k} ) } )+ \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d }\\epsilon _{+}( \\vec{k} )$ where we have used the $ C $ symmetry $ \\epsilon _{+}( \\vec{k} )= \\epsilon _{-}( -\\vec{k} ) $ to get rid of the hole excitation spectrum in favor or that of the particle.", "The last term is the ground state energy at $ T=0 $ .", "From the free energy, one can immediately evaluate the specific heat: From Eq., one can also get the free energy well inside the Mott: $C_v= - T \\frac{ \\partial ^2 f }{ \\partial T^2} =\\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d } \\frac{ e^{ \\beta \\epsilon _{+}( \\vec{k} ) }}{ ( e^{ \\beta \\epsilon _{+}( \\vec{k} )}-1)^2 }( \\frac{ \\epsilon _{+}( \\vec{k} ) }{k_B T } )^2$ Plugging in the $ \\epsilon _{+}( \\vec{k} ) $ in the Mott phase leads to $ C_v (T) $ in Eq.REF well inside the Mott side where one can just use the mean field theory $ \\Delta = r $ .", "Eq.REF breaks down near the QCP.", "However, one can apply the simple scaling analysis $ C_v \\sim T^{d/z} \\sim T^2 $ for the $ z=1 $ QCP.", "As explained below Eq.REF , the coefficient does depend on $ c $ .", "The dynamic density-density response function is $\\chi (\\vec{k},i\\omega _n)=-T\\sum _{\\vec{q}, i \\nu _m } G_0( \\vec{k}+ \\vec{q}, i\\omega _n+i\\nu _m ) G_0( \\vec{q}, i \\nu _m )$ which can be similarly evaluated.", "So the Wilson ratio can also be obtained.", "One can even compute the momentum carried by the quasi-particle: $\\frac{\\vec{P}_{p} }{V}= \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d } \\hbar \\vec{k} f( \\epsilon _{+}( \\vec{k} ) )$ As demonstrated in Eq.REF in Sec.VIII-B, the drift velocity $ \\vec{c} $ is quite small, so Eq.REF can be simplified to: $\\frac{\\vec{P}_{p} }{V}= \\vec{c} \\frac{ \\hbar ^2 }{ 6 \\pi ^2 } \\int ^{\\infty }_{0} k^4[ - \\frac{ \\partial f( \\epsilon ( \\vec{k} ) ) }{ \\partial \\epsilon ( \\vec{k} ) }] \\sim \\vec{c} T^4$ inside a SF phase.", "It will be exponentially suppressed inside the Mott phase due to its gap." 
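, "As a quick estimate from Eq.REF and Eq.REF well inside the Mott phase, the low-$T$ thermodynamics is controlled by the minimal particle gap: minimizing $ \\epsilon _{+}( \\vec{k} ) = \\sqrt{ \\tilde{k}^2 + r } + c k_y $ over $ k_y $ at $ k_x=0 $ ( the minimum sits at a finite $ k_y $ for $ c < v_y $ ) gives the activation gap $ \\sqrt{ r ( 1- c^2/v^2_y ) } $ , so that $ C_v \\propto e^{ - \\sqrt{ r ( 1- c^2/v^2_y ) }/k_B T } $ up to power-law prefactors; the boost reduces the activation gap, which makes explicit how the coefficients depend on $ c $ in this regime."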
], [ " Near $ ( z_x=3/2, z_y=3 ) $ QCP along the path II ", "Here the fundamental degree of freedoms is the phase $ \\phi $ in Eq.REF of the boson $ \\psi = \\sqrt{\\rho _0} e^{i \\phi } $ .", "If dropping the leading irrelevant operators $ [Z]=[b]=-2 $ and neglecting the vortex excitations, it becomes a Gaussian theory.", "So we first calculate the phase-phase correlation function, then the boson-boson correlation functions.", "In contrast to the $ z=1 $ case discussed in the last subsection, this QPT only happens in the moving frame, no $ c=0 $ analog.", "1.", "Correlation functions in the Quantum regimes On both sides, the phase-phase correlation function: $\\langle \\phi ( -\\vec{k}, -\\omega _n) \\phi (\\vec{k}, \\omega _n) \\rangle = \\frac{-1}{ ( i \\omega _n - \\epsilon _{+}( \\vec{k} ) )( i \\omega _n + \\epsilon _{-}( \\vec{k} ) ) }$ where $ \\epsilon _{\\pm }( \\vec{k} ) = \\sqrt{v^2_x k^2_x + v^2_y k^2_y + a k^4_y } \\pm c k_y > 0 $ , in the SF $ v_y > c $ , at the QCP $ v_y = c $ , $ v^2_y \\rightarrow 3 c^2- 2 v^2_y $ in the BSF phase $ v_y < c $ .", "$ \\epsilon _{+}( \\vec{k} ) + \\epsilon _{-}( \\vec{k} )= 2 \\epsilon ( \\vec{k} ) = 2 \\sqrt{v^2_x k^2_x +v^2_y k^2_y + a k^4_y } $ is an even function of $ \\vec{k} $ where $ c $ drops out.", "One can find the single particle ( boson ) Green function: $G ( \\vec{x}, \\tau ) & = & \\langle \\psi ( \\vec{x}, \\tau ) \\psi ^{*} (0,0) \\rangle = \\rho _0 e^{- g( \\vec{x}, \\tau ) }\\nonumber \\\\g( \\vec{x}, \\tau ) & = & \\frac{1}{\\beta } \\sum _{ i \\omega _n } \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d }\\frac{ 1- e^{i ( \\vec{k} \\cdot \\vec{x} - \\omega _n \\tau ) } }{ 2 \\epsilon ( \\vec{k} ) }[ \\frac{1}{ i \\omega _n - \\epsilon _{+}( \\vec{k} ) } - \\frac{1}{ i \\omega _n + \\epsilon _{-}( \\vec{k} ) } ]$ The frequency integral can be done first by paying the special attention to $ \\tau =0=\\beta $ at any finite $ T $ : $g( \\vec{x}, \\tau ) = \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d } \\frac{1}{ 2 \\epsilon ( \\vec{k} ) }[ \\frac{ e^{i \\vec{k} \\cdot \\vec{x} - \\epsilon _{+}( \\vec{k} ) \\tau } - e^{ - \\beta \\epsilon _{+}( \\vec{k} ) } }{ 1- e^{ - \\beta \\epsilon _{+}( \\vec{k} ) } }-\\frac{ e^{ \\beta \\epsilon _{-}( \\vec{k} ) } - e^{i \\vec{k} \\cdot \\vec{x} + \\epsilon _{-}( \\vec{k} ) \\tau } }{ e^{ \\beta \\epsilon _{-}( \\vec{k} ) }-1 } ]$ where one can use the $ C $ symmetry $ \\epsilon _{+}( \\vec{k} )=\\epsilon _{-}( -\\vec{k} ) $ to get an expression only in terms of $ \\epsilon _{+}( \\vec{k} ) $ .", "Now we look at several special cases: (1) Putting $ T=0 $ leads to $g( \\vec{x}, \\tau ) = \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d }\\frac{ e^{i \\vec{k} \\cdot \\vec{x} - \\epsilon _{+}( \\vec{k} ) \\tau } -1 }{ 2 \\epsilon ( \\vec{k} ) }$ Its equal time at $ d=2 $ is $ g(\\vec{x}, 0) =\\int \\frac{ d^2 \\vec{k} }{ (2\\pi )^2 }\\frac{ e^{i \\vec{k} \\cdot \\vec{x} } -1 }{ 2 \\epsilon ( \\vec{k} ) }\\sim 1/ |x| $ is independent of $ c $ , $ G( \\vec{x},0 ) \\sim \\rho _0 e^{ -1/|\\vec{x}| } $ .", "but its auto- correlation ( equal-space ) $ g( 0, \\tau ) = \\int \\frac{ d^2 \\vec{k} }{ (2\\pi )^2 }\\frac{ e^{ - \\epsilon _{+}( \\vec{k} ) \\tau } -1 }{ 2 \\epsilon ( \\vec{k} ) } $ does depend on $ c $ .", "(2 ) Putting equal-time $ \\tau =\\beta $ in Eq.REF leads to the equal time : $g( \\vec{x}, 0 ) = \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d } \\frac{ e^{i \\vec{k} \\cdot \\vec{x} } - 1}{ 2 \\epsilon ( \\vec{k} ) }[ \\frac{1}{ e^{ \\beta \\epsilon _{+}( \\vec{k} ) } -1 }- \\frac{1}{ e^{ -\\beta \\epsilon 
_{-}( \\vec{k} ) } -1 } ]$ Putting $ \\epsilon _{+}( \\vec{k} )= \\epsilon _{-}( \\vec{k} )= \\epsilon ( \\vec{k} ) $ , one recovers the well-known $ c=0 $ result $ g( \\vec{x}, 0 ) = \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d } \\frac{ e^{i \\vec{k} \\cdot \\vec{x} } - 1}{ 2 \\epsilon ( \\vec{k} ) } \\coth \\frac{ \\beta \\epsilon ( \\vec{k} ) }{2} $ .", "In the BSF phase, Eq.REF should be replaced by $G ( \\vec{x}, \\tau ) = \\langle \\psi ( \\vec{x}, \\tau ) \\psi ^{*} (0,0) \\rangle = \\rho _0 e^{ i k_0 y } e^{- g( \\vec{x}, \\tau ) }$ where in the $ g( \\vec{x}, \\tau ) $ , one just replaces $ v^2_y \\rightarrow 3c^2- 2 v^2_y $ .", "It is the modulation $ e^{ i k_0 y } $ in the correlation function which distinguishes the BSF from the SF.", "2.", "The thermodynamic quantities in the quantum regimes Applying Eq.REF to the SF or BSF side, one can find the specific heat: $C_v= \\frac{T^2}{v_x v_y } \\int \\frac{ d^2 \\vec{k} }{ (2\\pi )^2 } \\frac{ e^{ k+ \\alpha k_y} ( k+ \\alpha k_y)^2 }{ ( e^{ k+ \\alpha k_y} - 1 )^2 }= f(\\alpha ) T^2$ where $ \\alpha = c/v_y < 1 $ in the SF side and $ \\alpha = c/\\sqrt{3c^2-2 v^2_y} < 1 $ in the BSF side.", "This is consistent with the scaling $ C_v \\sim T^{d/z} \\sim f(\\alpha ) T^2 $ with $ z= 1 $ .", "At the QCP $ f( \\alpha =1 ) $ diverges where we find $C_v= \\frac{T}{v_x v_y } \\int ^{\\infty }_{- \\infty } \\frac{ d k_x }{ 2\\pi } \\int ^{\\infty }_{0} \\frac{ d k_y }{ 2\\pi }\\frac{ e^{ g(k_x, k_y, \\alpha ) } g^2(k_x, k_y, \\alpha ) }{ ( e^{ g(k_x, k_y, \\alpha ) } - 1 )^2 } + O ( T^2 ) + \\cdots ,~~~~~g(k_x, k_y, \\alpha )= \\frac{ k^2_x + \\alpha k^4_y }{ 2 k_y }$ where the integral over $ k_y $ is only half of the line, the other half contributes to the subleading $ T^2 $ term.", "It is consistent with the scaling $ C_v \\sim T ^{1/z_x + 1/z_y } \\sim T $ with $ ( z_x=3/2, z_y=3 ) $ at the QCP.", "The superfluid density near the SF to BSF transition scales as: $\\rho _s \\sim v_x \\gamma \\sim | c-v_y | \\sim | \\alpha -1 |$ So all the scaling functions can be written in terms of $ \\rho _s/k_BT $ on both sides.", "3.", "In the classical regime near the KT transition: the $ \\hbar \\rightarrow 0 $ limit So far, we only look at the quantum effects of the boost $ \\partial _{\\tau } \\rightarrow \\partial _{\\tau } -ic \\partial _{y} $ .", "Now we look at its classical effects originating from such a substitution, namely, the $ \\hbar \\rightarrow 0 $ limit [98].", "At a finite temperature, setting the quantum fluctuations ( the $ \\partial _{\\tau } $ term ) vanishing, in Eq.REF or Eq.REF , then both equations reduce to SKT =1kB T d2r [ vx2(x)2 + 2 (y)2 + a (2y)2 ] where $ \\gamma ^2= v^2_y- c^2 $ inside the SF phase and $ \\gamma ^2=2(c^2-v_y^2) $ inside the BSF phase.", "By setting the $ \\partial _{\\tau } $ term vanishing, the crucial crossing metric terms also vanish.", "This facts suggest that the effects of boosts is mainly quantum effects, but still have important classical remanent effects encoded in the coefficient $ \\gamma $ in Eq.REF .", "It indicates the finite temperature phase transition is still in Kosterlize-Thouless (KT) universality class with a reduced $ T_{KT} \\sim \\rho _s \\sim | c-v_y | $ shown in Eq.REF .", "The classical boson correlation functions in the narrow window around the classical KT transition line show algebraic decay order, just like those in the classical KT transition.", "$G ( \\vec{x} ) & = & \\langle \\psi ( \\vec{x} ) \\psi ^{*} (0 ) \\rangle = \\rho _0 e^{- g( \\vec{x} ) } = ( |x|/a )^{- 
\\frac{ T}{ \\rho _s } }\\nonumber \\\\g ( \\vec{x} ) &= & k_B T \\int \\frac{ d^d \\vec{k} }{ (2\\pi )^d }\\frac{ e^{i \\vec{k} \\cdot \\vec{x} } -1 }{ v^2_x k^2_x + \\gamma ^2 k^2_y } \\sim \\frac{ T}{ \\rho _s } \\ln |x|/a$ where $ a $ is the lattice constant ( not confused with the higher order coefficient $ a $ in Eq.REF ).", "It can be contrasted to the quantum regimes listed in Eq.REF and REF .", "As shown in Eq.REF , inside the KT transition above the BSF phase has the modulation factor $ e^{i k_0 y } $ .", "Of course, the classical regime around the classical KT transition line squeezes to zero at the QCP $ \\gamma =0 $ as shown in Fig.REF ." ], [ " Near the $ z=2 $ QCP along the path III ", "As shown in the previous sections and Fig.REF b, the $ k_0 $ of BSF is also exactly marginal which stands for the ordering wavevector in the BSF, so the $ z=2 $ line is a line of fixed points all in 2d zero density SF-Mott transition class with the exact critical exponent $ z=1, \\nu =1/2, \\eta =0 $ subject to logarithmic corrections at the upper critical dimension $ d=2 $ .", "The RG flow is along the constant contour of $ k_0 $ .", "Armed with these facts, one can write down the scaling functions for the Retarded single particle Green function, the Retarded density-density correlation function, compressibility and the specific heat near the QCP in Fig.REF c, d: $& & G^R ( \\vec{k}, \\omega _n ) =e^{ i k_0 y } \\frac{\\hbar }{ k_B T} \\Psi ( \\frac{ \\hbar Z_1 \\omega }{ k_B T}, \\frac{ \\hbar \\tilde{k} }{ \\sqrt{ k_B T} },\\frac{ \\tilde{\\mu } }{ k_B T } )\\nonumber \\\\& & \\chi ^R ( \\vec{k}, \\omega _n ) =\\frac{ 1 }{ \\hbar v_x \\tilde{v}_y }\\Phi ( \\frac{ \\hbar Z_1 \\omega }{ k_B T}, \\frac{ \\hbar \\tilde{k} }{ \\sqrt{ k_B T} }, \\frac{ \\tilde{\\mu } }{ k_B T } )\\nonumber \\\\& & \\kappa = \\frac{1}{ \\hbar v_x \\tilde{v}_y } A ( \\frac{ \\tilde{\\mu } }{ k_B T } ),~~~~~~ C_v = \\frac{ T }{ v_x \\tilde{v}_y} B ( \\frac{ \\tilde{\\mu } }{ k_B T } )$ where $ Z_1 \\omega $ is the scaled frequency with $ Z_1=-2ck_0 $ in the moving frame, $ \\tilde{k}= \\sqrt{ v^2_x k^2_x + \\tilde{v}^2_y k^2_y } $ is the scaled momentum with $ \\tilde{v}^2_y= 2 (c^2-v^2_y) $ , the effective chemical potential $ \\tilde{\\mu }=-( r-r_c), r_c=a k^4_0 >0 $ is listed in Eq.REF .", "The fact that $ c $ is exactly marginal, so does $ k_0 $ is reflected in the arguments of the scaling functions.", "The characteristic frequency scales as $ \\Delta \\sim \\tilde{\\mu }^{z\\nu } \\sim \\tilde{\\mu } $ upto some logarithmic correction.", "Note the dramatic changes from the $ z=1 $ scaling sets in Eq.REF to the $ z=2 $ scaling sets in Eq.REF .", "Again, the scaling functions $ \\Psi $ and $ \\Phi $ are identical to that listed in Ref.", "[9], [58], [59].", "The $ c $ dependence is absorbed into the 3 scaling variables $ Z_1\\omega , \\tilde{k} $ and $ \\tilde{\\mu } $ .", "However, the scaling functions $ A $ and $ B $ do depend on $ c $ explicitly due to the frequency summation and the momentum integral.", "In the BSF side near the QCP, $ \\frac{ \\tilde{\\mu } }{ k_B T } $ need to be replaced by $ \\frac{ \\tilde{\\rho }_s }{ k_B T } $ where $ \\tilde{\\rho }_s \\sim \\tilde{\\mu }^{ (d+z-2) \\nu } \\sim \\tilde{\\mu }=-(r -r_c) $ upto some logarithmic correction is the SF density in the BSF.", "As shown in Fig.REF c,d, all the scaling functions in the BSF side runs into a singularity at a finite $ T=T_{KT} $ signaling the finite KT transition.", "In the narrow window near the KT transition, the scaling 
function reduce to those of classical KT transition [102] shown in Eq.REF .", "Note that it is the effective chemical potential $ \\tilde{\\mu }=-( r-r_c), r_c=a k^4_0 >0 $ listed in Eq.REF which tunes the Mott to the BSF transition.", "So one can either tune the bare mass $ r $ or the boost velocity $ c $ to tune the transition ( Fig.REF c or Fig.REF d ) respectively.", "Especially, at a fixed $ r > 0 $ inside the Mott state, one can tune it into the BSF just by increasing the boost velocity.", "The energy come from boosting the moving frame.", "The effects of the dangerously irrelevant $ Z_2 $ ( the metric crossing term ) has not been considered in the scaling function near the QCP, but become important inside the two phases as shown in Sec.IV.", "In summary, one can see the importance of the metric crossing term which represents the new space-time structure emerging from the QPT: it is marginal, dominant and irrelevant near the $ z=1 $ , $ z=(3/2, 3 ) $ and $ z=2 $ QCP respectively.", "The first has the $ c=0 $ limit in the lab frame, the latter two do not have, so only happen in the moving frame.", "The interaction $ u $ is relevant and marginally irrelevant near the $ z=1 $ and $ z=2 $ respectively.", "Of course, despite the SF to BSF transition with $ z=(3/2, 3 ) $ is a Gaussian one, it is the interaction which leads to the very existence of the SF, BSF and the QPT between the two." ], [ " The effective phase diagram of $ z=2 $ in a moving frame ", "In this section, we study the $ z=2 $ SF-Mott transitions in a lab frame and observe it in a moving frame and contrast to the $ z=1 $ ones addressed in the previous sections.", "We also analyze the intrinsic relations between the Galileo transformation and the symmetry breaking in the SF phase.", "At integer fillings and in the absence of the $ C $ ( or PH ) symmetry, the SF-Mott transition in Eq.REF in the lab frame ( Fig.REF a and Fig.REF ) can be described by the $ z=2 $ effective action SL=dd2r [Z1 * +vx2|x|2+vy2|y|2 -p ||2+ u ||4 + ] where the space time is related by $ z=2 $ , the chemical potential $ \\mu _p $ tunes the SF-Mott transition, $ \\mu _p < 0, \\langle \\psi \\rangle =0 $ is in the Mott state which respects the $ U(1) $ symmetry, $ \\mu _p > 0, \\langle \\psi \\rangle \\ne 0 $ is in the SF state which breaks the $ U(1) $ symmetry.", "It is related to the microscopic parameters in Eq.REF by $ \\mu _p \\sim t/U- (t/U)_c $ at a fixed chemical potential $ \\mu $ or $ \\mu _p \\sim \\mu - \\mu _c $ at a fixed $ t/U $ or ( see also Fig.REF ).", "In the following, for the notational simplicity, we set $ Z_1=1 $ and also drop the subscript $ p $ .", "We will put back $ Z_1 $ in the Sec.IX-B where we also consider $ |\\partial _\\tau \\psi |^2 $ term.", "Eq.", "explicitly breaks the $ C $ symmetry, but has the $ P $ and $ T $ symmetry ( therefore no such thing like CPT as in the $ z=1 $ emergent pseudo-Lorentz theory ).", "It has an emergent Galileo invariance.", "So we expect performing a Galileo transformation to a moving frame should not change its form.", "This is indeed the case as demonstrated in the following.", "Performing the Galileo transformation described in Sec.I leads to the following effective action in the moving frame ( Fig.REF b ) ( again, for the notational simplicity, we drop the $ \\prime $ in the moving frame ): SM=dd2r [*(-icy)+vx2|x|2+vy2|y|2 -||2+u||4 + ] The mean field ansatz $\\psi =\\sqrt{\\rho }e^{i(\\phi +k_0y)}$ leads to the energy density E[,k0] =(c k0+vy2k02-)+ u 2 Minimizing $E[\\rho ,k_0]$ with 
respect to $\\rho $ and $k_0$ results in k0=-c2vy2,       = {ll 0,<-c24vy2 2u+c28 u vy2,>-c24vy2 .", "It is easy to see that due to the explicit C- symmetry breaking of $ z=2 $ action Eq., the sign of $ k_0 $ is automatically given.", "This is in sharp contrast to in Eq.", "and Eq.REF in the $ z=1 $ case where the sign of $ k_0 $ is determined by the spontaneous $ C $ -symmetry breaking.", "It is convenient to introduce the new order parameter $ \\psi =\\tilde{\\psi }e^{ik_0y} $ , then $\\partial _y\\psi =e^{ik_0y}(\\partial _y+ik_0)\\tilde{\\psi }$ and S =dd2r (* -i(c+2k0vy2)*y +vx2|x|2 +vy2|y|2 -(-ck0-vy2k02)||2+u||4 + ) Setting $i(c+2k_0v_y^2)\\tilde{\\psi }^*\\partial _y\\tilde{\\psi }=0$ leads to $k_0=-\\frac{c}{2v_y^2}$ and S=dd2r (* +vx2|x|2 +vy2|y|2 -(+vy2k02)||2+ u ||4) which as expected, remains the same as original $z=2$ theory in the lab frame after identifying the chemical potential in the moving frame $ \\tilde{\\mu }=\\mu +v_y^2k_0^2$ .", "After identifying $ v^2_y=1/2m $ , then $ \\tilde{\\mu }=\\mu +v_y^2k_0^2 = \\mu + \\frac{m c^2}{2} $ precisely match the Galileo transformation shown in Eq.REF .", "Setting $ \\tilde{\\mu }=0 $ leads to the $ z=2 $ phase boundary in the moving frame: $\\mu = - \\frac{c^2}{4 v^2_y} < 0$ which gives the $ z=2 $ line in Fig.REF .", "Figure: (a) The phase diagram of action Eq.", "with z=2 z=2 observed in the moving frame Fig.bas a function of μ\\mu and cc.", "For Eq., μ p ∼t/U-(t/U) c \\mu _p \\sim t/U- (t/U)_c at a fixed chemical potential μ \\mu or μ p ∼μ-μ c \\mu _p \\sim \\mu - \\mu _c at a fixed t/U t/U in Fig..c=0 c=0 recovers to that in the lab frame.", "Due to the explicit C- symmetry breaking, no Higgs mode inside the BSF phase.The blue line with z=2 z=2 is in the same universality class as the corresponding z=2 z=2 linein Fig..", "The green dashed line with (z x ,z y )=(3/2,3) (z_x,z_y)=(3/2,3 ) differs from that in Fig.by its subjection to Logarithmic corrections from the marginally irrelevant w w termin Eq.", "or Eq..The dashed line means that it does not exist in the microscopic boson Hubbard model Eq.,just crashes onto the μ \\mu axis.", "But it exists when directly boosting the SF in the class-2 shown in Sec.IX-A and Fig.c.The BSF phase in Fig.", "spontaneouslybreaks the C- C- symmetry, while the C- C- symmetry was explicitly broken from very beginning.", "So the BSF phase here breaks the translational invariance.", "Its ordering wavevector k 0 k_0 can be viewed as the order parameter distinguishing the BSF from the SF.", "(b) The Renormalization Group (RG) flow of (a).", "The arrows stand for the RG flow directions.The RG flow inside the BSF phase is along the constant contour of k 0 k_0 .So the z=2 z=2 line is a line of fixed points all in 2d zero density SF-Mott transition class.In summary, Eq.", "is invariant under the Galileo transformation: $\\tilde{\\psi } = \\psi e^{-ik_0y},~~~~ \\tilde{\\mu }= \\mu + \\frac{m c^2}{2} = \\mu + \\frac{c^2}{4 v^2_y}$ which can be contrasted to the Lorentz transformation in Eq.REF for relativistic QFT.", "Eq.REF , Eq.REF and also Eq.REF .", "Obviously, the functional measure $ \\int D\\psi D\\psi ^{*} = \\int D \\tilde{\\psi } D \\tilde{\\psi }^{*} $ in the path-integral in Eq..", "The reason for this absorption should be traced back to the fact that the boost term $ \\psi ^*(-ic\\partial _y)\\psi = - \\frac{ ic }{2} ( \\psi ^* \\partial _y \\psi - \\partial _y \\psi ^* \\psi ) $ in Eq.", "is nothing but a conserved current along the $ \\hat{y} $ direction due to the $ U(1) $ symmetry, so 
can be absorbed by the transformation Eq.REF .", "In the Mott phase $ \mu < 0 $ with a Mott gap $ - \mu $ , increasing the boost reduces the Mott gap until it reaches zero, signifying the QPT to the BSF phase.", "The scaling analysis in Sec.II-C and Sec.V-C with $ z=2 $ also holds here.", "Compared to Fig.REF , the $ z=1 $ line is absent in Fig.REF ." ], [ " The Doppler shift in the BSF phase due to a Type-II dangerously irrelevant term ", "It is important to observe that although Eq. has an emergent Galileo invariance, its SF state breaks the Galileo invariance spontaneously.", "In fact, one could also perform another procedure: start from the $ z=2 $ action Eq. in the lab frame, first obtain the effective action inside the SF phase, and then boost the SF phase.", "For the $ z=1 $ case, boosting the effective action and then performing the symmetry breaking in the SF phase, or vice versa, reaches the same answer.", "For the $ z=2 $ case, the two procedures do not commute.", "The first procedure was carried out in the first paragraph of this section and leads to the Galileo invariant action Eq.; this is the end of the story.", "The reversed procedure is the suitable one when boosting the SF directly, as will be done in Sec.IX [49].", "When incorporating the two leading irrelevant terms listed in Eq.REF into Eq., one obtains $ {\cal L}=\tilde{\psi }^{*}\partial _\tau \tilde{\psi } +v_x^2|\partial _x\tilde{\psi }|^2 +v_y^2|\partial _y\tilde{\psi }|^2 -(\mu +v_y^2k_0^2)|\tilde{\psi }|^2+ u |\tilde{\psi }|^4 + i V |\tilde{\psi }|^2 \tilde{\psi }^{*}\partial _y\tilde{\psi } + iw\, \tilde{\psi }^{*}\partial ^3_y\tilde{\psi } + \cdots $ where $ V= 8 c \frac{ u_h }{ \Delta _h } ( \frac{ \lambda }{ \Delta _h } )^4 $ is listed below Eq.REF and $ w \propto c= \alpha t_{b1} v $ is listed in Eq.REF .", "Both terms break the C- symmetry and have scaling dimension $ [V]=[w]=-1 $ , so they are dangerously irrelevant.", "The $ \cdots $ stands for the always allowed terms $ a | \partial ^2_y \tilde{\psi }|^2 + b | \partial _y \tilde{\psi }|^4 +d |\tilde{\psi }|^2 | \partial _y \tilde{\psi }|^2 $ .", "They are less relevant than the two terms kept.", "Of course, the two C- symmetry breaking terms are excluded in the $ z=1 $ case Eq., but are allowed here in the $ z=2 $ case.", "In the following, for notational simplicity, we drop the $ \tilde{} $ .", "Plugging the mean field ansatz $\psi =\sqrt{\rho }e^{i\phi }$ into the action Eq.REF leads to the energy density $ E[\rho ,\phi ] =-\mu \rho + u \rho ^2 $ ( with the shift $ v_y^2 k_0^2 $ absorbed into $ \mu $ ).", "Minimizing $E[\rho ,\phi ]$ with respect to $\rho $ results in $ \rho _0=0 $ for $ \mu <0 $ and $ \rho _0=\frac{\mu }{2u} $ for $ \mu >0 $ .", "In the superfluid phase, $\mu >0$ and $\rho _0>0$ , then $\psi =\sqrt{\rho _0+\delta \rho }e^{i\phi }$ and $ {\cal L}_{L:SF}[ \delta \rho , \phi ] = i\delta \rho \,\partial _\tau \phi +\frac{1}{4\rho _0}\big[v_x^2(\partial _x\delta \rho )^2+v_y^2(\partial _y\delta \rho )^2\big] +\rho _0\big[v_x^2(\partial _x\phi )^2+v_y^2(\partial _y\phi )^2\big] + u (\delta \rho )^2 + 2 V \rho _0\, \delta \rho \,\partial _y \phi + w \rho _0 ( \partial _y \phi )^3 + \cdots $ where, in addition to $ i\delta \rho \partial _\tau \phi $ , the newly added last two terms also break the C- symmetry $ \phi \rightarrow -\phi $ in the $ ( \delta \rho , \phi ) $ representation.", "It is tempting to include an $ a \phi \partial ^3_y \phi $ term, but it is a total derivative, so it can be dropped.", "This shows that $ ( \delta \rho , \phi ) $ become conjugate variables, so there is no Higgs mode, in sharp contrast to the SF phase in the $ z=1 $ case presented in Eq.REF .", "Of course, as stressed in Sec.IV, despite the existence of the Higgs mode inside the SF near the $ z=1 $ line, it plays no role in the SF to BSF transition in Fig.REF .", "Due to the spontaneous $ C $ symmetry breaking, it disappears in the BSF anyway.", "Integrating out $\delta \rho $ leads to the action in the moving frame: $ {\cal L}_{M:SF}[ \phi ]= \frac{1}{2 u} \big[(\partial _\tau -i\, 2 \rho _0 V \partial _y)\phi \big]^2 +\rho _0\big[v_x^2(\partial _x\phi )^2+v_y^2(\partial _y\phi )^2\big] + w \rho _0 ( \partial _y \phi )^3 $ which takes the same form as Eq.REF and leads to the exotic superfluid Goldstone mode in the moving frame: $ \omega _{\mathbf {k}}=\sqrt{2 u \rho _0(v_x^2k_x^2+v_y^2k_y^2)}-2 \rho _0 V k_y $
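As a quick numerical illustration of the dispersion just quoted, the following minimal Python sketch ( not part of the original derivation; the parameter values are arbitrary and purely illustrative ) evaluates $ \omega_{\mathbf k} $ along $ k_y $ and checks that the dangerously irrelevant $ V $ term only tilts the Goldstone mode, $ \omega(k_y)\ne\omega(-k_y) $ , without driving it negative as long as $ 2\rho_0 V < \sqrt{2u\rho_0}\,v_y $ , consistent with the statement below that this term alone cannot cause a QPT.

```python
import numpy as np

# Doppler-shifted Goldstone mode of the z=2 superfluid in the moving frame,
#   omega(k) = sqrt(2*u*rho0*(vx**2*kx**2 + vy**2*ky**2)) - 2*rho0*V*ky,
# evaluated with illustrative (non-physical) parameter values.
u, rho0, vx, vy = 1.0, 0.1, 1.0, 1.0   # interaction, condensate density, velocities
V = 0.05                               # dangerously irrelevant coupling, assumed small

def omega(kx, ky):
    return np.sqrt(2*u*rho0*(vx**2*kx**2 + vy**2*ky**2)) - 2*rho0*V*ky

ky = np.linspace(-1.0, 1.0, 2001)
w_plus, w_minus = omega(0.0, ky), omega(0.0, -ky)

# The V term breaks ky -> -ky (the C- symmetry): a finite Doppler-like tilt ...
print("max |omega(ky) - omega(-ky)| =", np.max(np.abs(w_plus - w_minus)))
# ... but as long as 2*rho0*V < sqrt(2*u*rho0)*vy the mode stays non-negative,
# so the tilt alone cannot trigger an instability of the SF.
print("min omega along the ky axis  =", w_plus.min())
print("stability bound satisfied?   ", 2*rho0*V < np.sqrt(2*u*rho0)*vy)
```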
As shown in Sec.III, the SF to BSF transition in Fig.REF is solely driven by the Goldstone mode; the Higgs mode is irrelevant.", "Here, due to the explicit C- symmetry breaking, the Higgs mode does not exist in the first place.", "However, due to the smallness of $ \rho _0 $ near the QPT and also the smallness of $ V $ listed below in Eq.REF , Eq.REF is always stable, so no QPT is possible.", "From Eq.REF , one can also integrate out $ \phi $ to get an effective action in terms of $ \delta \rho $ : $ S_{M:SF}[ \delta \rho ]=\int d\omega\, d^2 k\, \frac{1}{2}\, \delta \rho ( - \mathbf {k}, - \omega ) \Big [ \frac{ - (i\omega -2 \rho _0 V k_y)^2}{ \rho _0 ( v^2_x k^2_x + v^2_y k^2_y ) } + U \Big ] \delta \rho ( \mathbf {k}, \omega ) $ which leads to the density-density correlation function ( or sound mode in the SF ): $ \chi ( \mathbf {k}, \omega )= \langle \delta \rho ( - \mathbf {k}, - \omega ) \delta \rho ( \mathbf {k}, \omega ) \rangle = \frac{ \rho _0 ( v^2_x k^2_x + v^2_y k^2_y ) }{ -(i\omega - 2 \rho _0 V k_y)^2 + \rho _0 U ( v^2_x k^2_x + v^2_y k^2_y ) } $ whose pole, under the analytic continuation $ i \omega \rightarrow \omega + i \delta $ , also leads to Eq.REF .", "Again, this is because $ \delta \rho $ is always conjugate to the phase $ \phi $ through the QPT from the SF to the BSF.", "This is in sharp contrast to the $ z=1 $ case where $ \delta \rho $ is the Higgs mode which simply decouples from the Goldstone mode in the long-wave-length limit." ], [ " A leading irrelevant metric-crossing term breaking the emergent Galileo invariance explicitly ", "As said in the introduction, the microscopic system Eq.REF is not Galileo invariant.", "To see the effects of the boost which break the Galileo invariance, one must add to Eq. some boost-generated terms which are irrelevant near the $ z=2 $ QCP, but break the Galileo invariance explicitly.", "This will be achieved in this section.", "In fact, in Eq.REF , we dropped the two linear derivative terms $ \frac{1}{2} \partial _\tau \delta \rho + \rho _0 i \partial _\tau \phi $ .", "This is justified because the topological vortex excitations are irrelevant inside the SF phase, so the phase windings in the phase $ \phi $ can be safely dropped.", "However, one must check if this remains so under the GT.", "Under the boost, they become $ \frac{1}{2} \partial _\tau \delta \rho + \rho _0 i \partial _\tau \phi - \frac{i}{2} c \partial _y \delta \rho + \rho _0 c \partial _y \phi $ .", "Again, the two extra terms generated by the boost are still linear in $ \partial _y $ , so they still vanish after the integration by parts.", "So the results achieved in the last section remain valid.", "Let us consider the typical irrelevant second order derivative term which breaks the Galileo invariance explicitly: $ |\partial _\tau \psi |^2 \rightarrow [(\partial _\tau -ic\partial _y)\psi ]^{*}[(\partial _\tau -ic\partial _y)\psi ] =|\partial _\tau \psi |^2-i2c\,\partial _\tau \psi ^{*}\partial _y\psi -c^2|\partial _y\psi |^2 $ where again the $ i $ in the density $ J_{\tau } $ is due to the imaginary time $ \tau =i t $ .", "Thus a more complete effective action than Eq. is $ {\cal L}_M = Z_1\psi ^{*}(\partial _\tau -ic\partial _y)\psi +Z_2[(\partial _\tau -ic\partial _y)\psi ]^{*}[(\partial _\tau -ic\partial _y)\psi ]+v_x^2|\partial _x\psi |^2+v_y^2|\partial _y\psi |^2-\mu |\psi |^2+u|\psi |^4+\cdots $ Plugging in the mean field ansatz $\psi =\sqrt{\rho _0}e^{i(\phi +k_0y)}$ leads to the energy density $ E[\rho ,k_0] =\rho \,\big[Z_1ck_0+(v_y^2-Z_2 c^2)k_0^2 -\mu \big]+ u \rho ^2 $ In the following, we will drop the $ b $ term, which can be shown to be irrelevant in the following discussions.", "Similar to Sec.IV, near the $ z=2 $ QCP, due to $ [Z_2]=-2 $ , one can study the $ Z_2 \ll Z_1 $ limit which holds away from the $ z=1 $ tip in Fig.REF .", "The minimization with respect to $k_0$ and $\rho $ leads to the saddle point solution $ k_0=-\frac{Z_1 c}{2(v_y^2-Z_2 c^2)},\qquad \rho _0=\max \Big (0,\ \frac{\mu -Z_1 ck_0-(v_y^2-Z_2 c^2)k_0^2}{2U}\Big ) $ which leads to a small correction to Eq..", "The critical line between the Mott $(\rho _0=0)$ and the BSF $(\rho _0>0)$ is given by $ \mu _c=-\frac{Z_1^2c^2}{4(v_y^2-Z_2c^2)} $ which leads to a small correction to Eq.REF .", "The
mean-field phase diagram is qualitatively the same as $Z_2=0$ case with the slight change of $ k_0 $ and the phase boundary $ \\mu _c $ listed in Eq.REF and Eq.REF .", "When $k_0\\ne 0$ , it is convenient to introduce a new order parameter $\\psi =\\tilde{\\psi }e^{ik_0y}$ , Eq.REF becomes: SM=dd2r [ (Z1-2Z2 c k0)* +Z2||2 -i2Z2 c*y+vx2|x|2 +(vy2-Z2 c2)|y|2 -(-Z1ck0-vy2k02+Z2 c2k02)||2 + u ||4-i(Z1c+2k0vy2-2Z2 k0c2)*y].", "Setting the linear in $ k_y $ term $Z_1c+2k_0v_y^2-2Z_2 k_0c^2=0$ recovers the $k_0$ in Eq.REF .", "One reach the final action in terms of $\\tilde{\\psi }$ : S=dd2r [ (Z1-2Z2 c k0)* +Z2||2 -i2Z2 c*y +vx2|x|2 +(vy2-Z2 c2)|y|2 -||2+U||4 ] where $\\tilde{\\mu } =\\mu +\\frac{Z_1^2c^2}{4(v_y^2-Z_2c^2)}$ and $Z_1-2Z_2 c k_0=\\frac{Z_1v_y^2}{v_y^2-Z_2 c^2} >0$ .", "Setting $\\tilde{\\mu }=0 $ recovers Eq.REF .", "Now we arrive at the same form as the effective action Eq.REF .", "Similar scaling analysis below Eq.REF apply here also.", "After dropping the more irrelevant term $ |\\partial _\\tau \\tilde{\\psi }|^2 $ and keep only the leading irrelevant metric crossing term, we arrive at the same action as Eq.REF : S=dd2r ( Z1* -iZ2*y +vx2|x|2 +vy2|y|2 -||2+U||4 ) where $ \\tilde{Z}_1 =Z_1-2Z_2ck_0$ , $ \\tilde{Z}_2 =-2Z_2c$ , $\\tilde{v}_x^2=v_x^2$ , $\\tilde{v}_y^2=v_y^2-Z_2c^2$ .", "Then the discussions following Eq.REF apply here also.", "Of course, setting $ \\tilde{Z}_1=1, \\tilde{Z}_2=0 $ recovers Eq..", "So we conclude that it is the metric crossing term $ \\tilde{Z}_2 $ which leads to the shift of the value $ k_0 $ and the critical chemical potential $ \\mu _c $ .", "As shown in Sec.IV, it is this metric crossing term which leads to the Doppler shift term in both the Mott and BSF phases.", "Its contribution to the conserved Noether currents will be addressed in the following sub-section.", "The metric crossing $ \\tilde{Z}_2 $ term has the same scaling dimension as the $ V $ and $ w $ term in Eq.REF with $ [\\tilde{Z}_2]=[V]=[b]-1 $ , so considering the $ \\tilde{Z}_2 $ term will not change the results achieved in Sec.A qualitatively." 
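To make the $ Z_2 $ correction concrete, here is a minimal numerical sketch ( illustrative parameter values only, not taken from the text ) that minimizes the mean-field energy density $ E[\rho ,k_0] =\rho [Z_1ck_0+(v_y^2-Z_2 c^2)k_0^2 -\mu ]+ u \rho ^2 $ on a grid and compares the location of the minimum with the saddle-point $ k_0=-Z_1 c/2(v_y^2-Z_2 c^2) $ and the phase boundary $ \mu _c=-Z_1^2c^2/4(v_y^2-Z_2c^2) $ quoted above; setting $ Z_2=0 $ recovers the emergent Galileo-invariant result.

```python
import numpy as np

# Mean-field energy density with the leading irrelevant metric-crossing term Z2:
#   E[rho, k0] = rho*(Z1*c*k0 + (vy**2 - Z2*c**2)*k0**2 - mu) + u*rho**2
# Brute-force minimization on a grid, compared with the analytic saddle point.
Z1, Z2, vy, c, u = 1.0, 0.05, 1.0, 0.6, 1.0     # illustrative values with Z2 << Z1

def E(rho, k0, mu):
    return rho*(Z1*c*k0 + (vy**2 - Z2*c**2)*k0**2 - mu) + u*rho**2

k0_grid  = np.linspace(-1.0, 1.0, 2001)
rho_grid = np.linspace(0.0, 1.0, 1001)
K0, RHO  = np.meshgrid(k0_grid, rho_grid, indexing="ij")

k0_analytic   = -Z1*c/(2*(vy**2 - Z2*c**2))
mu_c_analytic = -Z1**2*c**2/(4*(vy**2 - Z2*c**2))

for mu in (mu_c_analytic - 0.05, mu_c_analytic + 0.05):   # one point in each phase
    i, j = np.unravel_index(np.argmin(E(RHO, K0, mu)), K0.shape)
    if RHO[i, j] > 0:
        print(f"mu = {mu:+.4f}: BSF,  rho0 = {RHO[i, j]:.4f}, k0 = {K0[i, j]:+.4f}")
    else:
        print(f"mu = {mu:+.4f}: Mott, rho0 = 0 (k0 undefined at mean field)")

print("analytic k0   =", round(k0_analytic, 4))
print("analytic mu_c =", round(mu_c_analytic, 4),
      "  (Z2 = 0 value:", round(-c**2/(4*vy**2), 4), ")")
```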
], [ " The conserved Noether current with $ z=2 $ in the moving frame ", "1.", "The emergent Galileo invariant case The global $ U(1) $ symmetry $ \\psi \\rightarrow \\psi e^{ i \\chi } $ leads to the conserved Noether current $ \\tilde{J}_\\mu =( J_\\tau , J_x, \\tilde{J}_y ) $ in Eq.", ": $J_\\tau & = & -i \\psi ^{*} \\psi \\nonumber \\\\J_x & = & i v^2_x ( \\psi ^{*} \\partial _x \\psi - \\psi \\partial _x \\psi ^{*} )\\nonumber \\\\\\tilde{J}_y & = & i v^2_y ( \\psi ^{*} \\partial _y \\psi - \\psi \\partial _y \\psi ^{*} ) -c \\psi ^{*} \\psi = J_y-ic J_{\\tau }$ which also show the current along the $ y $ direction $ \\tilde{J}_y $ is the sum of the intrinsic one $ J_y $ and the one due to the boost $ -ic J_{\\tau } $ , identical to the last equation in Eq.REF .", "They are PT even [81] and satisfy $\\partial _\\tau J_\\tau + \\partial _x J_x+ \\partial _y (J_y - ic J_\\tau ) =0$ which is identical to Eq.REF .", "Now we evaluate the 3-currents in all the three phase in Fig.REF .", "Plugging in $ \\langle \\psi \\rangle =\\sqrt{\\rho _0} e^{i k_0 y } $ into Eq.REF leads to: $J_\\tau & = & -i \\rho _0, \\nonumber \\\\J_x & = & 0 \\nonumber \\\\\\tilde{J}_y & = & -2 v^2_y k_0 \\rho _0 - c \\rho _0$ where again the factor of $ i $ in $ J_{\\tau } $ is due to the imaginary time.", "The Mott state with $ \\rho _0=0 $ carries no current.", "This is expected, because the Mott has neither p- or h- carriers, so no current.", "Near the $ z=2 $ line in Fig.REF , substituting $ k_0= -\\frac{ c}{ 2 v^2_y} < 0 $ into Eq.REF , so the BSF near the $ z=2 $ line carries $ J_\\tau = -i \\rho _0, J_x=0 $ and $ \\tilde{J}_y = 0 $ which means the intrinsic current just cancels that due to the boost.", "We believe this result is exact due to the emergent Galileo invariance.", "2.", "The metric crossing $ Z_2 $ term breaking the emergent Galileo invariance It is important to stress that the current Eq.REF for the $ z=1 $ case studied in Sec.V is particle-hole current, but here for $ z=2 $ , Eq.REF is either particle or hole current ( but not both ).", "By adding the leading irrelevant $ Z_2 $ term in Eq.REF , one can also evaluate the Noether 3-currents in Eq.REF : $J_\\tau & = & -i Z_1 \\psi ^{*} \\psi + i Z_2( \\psi ^{*} \\tilde{\\partial }_\\tau \\psi - \\psi \\tilde{\\partial }_\\tau \\psi ^{*} )= -i Z_1 \\psi ^{*} \\psi + i Z_2 [ ( \\psi ^{*} \\partial _\\tau \\psi - \\psi \\partial _\\tau \\psi ^{*} ) -ic ( \\psi ^{*} \\partial _y \\psi - \\psi \\partial _y \\psi ^{*} ) ]\\nonumber \\\\J_x & = & i v^2_x ( \\psi ^{*} \\partial _x \\psi - \\psi \\partial _x \\psi ^{*} )\\nonumber \\\\\\tilde{J}_y & = & i v^2_y ( \\psi ^{*} \\partial _y \\psi - \\psi \\partial _y \\psi ^{*} ) -ic J_{\\tau } = J_y-ic J_{\\tau }$ Setting $ Z_1=0, Z_2=1 $ or $ Z_1=1, Z_2=0 $ recovers Eq.REF or Eq.REF respectively [81].", "The first equation shows that $ J_\\tau $ consists of the particle ( or hole ) current Eq.REF with the weight $ Z_1 $ and the particle-hole current Eq.REF with a small weight $ Z_2 \\ll Z_1 $ .", "The second shows that $ J_x $ remains the same.", "The third shows the current along the $ y $ direction $ \\tilde{J}_y $ is still the sum of the intrinsic one $ J_y $ and the one due to the boost $ -ic J_{\\tau } $ .", "So this feature remains the same as in Eq.REF ( or Eq.REF ).", "Eq.REF ( or Eq.REF ) follows automatically.", "Plugging in $ \\langle \\psi \\rangle =\\sqrt{\\rho _0} e^{i k_0 y } $ into Eq.REF leads to: $J_\\tau & = & -i Z_1 \\rho _0 + i Z_2 2 k_0 c \\rho _0= -i \\rho _0 [Z_1-Z_2 2 k_0 c], 
\\nonumber \\\\J_x & = & 0 \\nonumber \\\\\\tilde{J}_y & = & -2 v^2_y k_0 \\rho _0 - c \\rho _0[Z_1-Z_2 2 k_0 c]$ which need to be evaluated at the corrected $ k_0 $ in Eq.REF .", "In any case, unlike the $ z=1 $ case, the conserved current at $ z=2 $ is PT even, so can not be used as an order parameter to distinguish BSF from the SF phase.", "Then one can only resort the translational symmetry breaking in the BSF phase: its ordering wavevector $ k_0 \\ne 0 $ can do such a job.", "The Mott phase with $ \\rho _0=0 $ carries no current.", "In the BSF phase near the $ z=2 $ line, plugging in the $ k_0 $ in Eq.REF , one finds $ J_\\tau =- i \\rho _0 ( \\frac{ Z_1 v^2_y }{v^2_y-Z_2 c^2 } ), J_x=0, \\tilde{J}_y=0 $ .", "When considering all the possible terms which break the emergent Galileo invariance and become important away from the $ z=2 $ line, we conclude that the physical picture listed below Eq.REF reached simply by setting $ Z_2=0 $ remains valid.", "Similarly, as outlined in Sec.V, one can also achieve the phase diagram Fig.REF from the charge-vortex duality performed in the moving frame.", "Namely, the $ z=2 $ line is driven by the instability in the vortex degree of freedoms, while the $ z=(3/2,3) $ line is the instability in the dual gauge degree of freedoms." ], [ " Galileo transformation in a lattice ", "We study the Galileo transformation in a non-relativistic many body system in a periodic potential first in the first quantization, then we transfer it to the second quantization language and project it to the Hubbard model in a tight binding limit in a square lattice.", "Finally we describe the emergent space-time [115] in the low energy limit in the second quantization.", "1.", "Galileo transformation in the many-body wavefunction in a periodic potential: bare space-time in the first quantization Eq.REF can be easily generalized to $ N \\gg 1 $ fermionic or bosonic systems in a one-body potential $ V_1(x) $ and also a short-range or long-range two-body ( Coulomb ) interaction $ V_2( x_i- x_j ) $ .", "In the lab frame, the many-body wavefunction satisfies ( ignoring the lattice vibrations, SOC, etc ): $i \\hbar \\frac{ \\partial \\Psi (x_1,x_2,\\cdots x_N) }{ \\partial t } =[ -\\frac{\\hbar ^2}{ 2 m} \\sum _{i} \\frac{\\partial ^2 }{ \\partial x^2_i }+ \\sum _i V_1(x_i)+\\sum _{i<j} V_2(x_i-x_j ) ] \\Psi (x_1,x_2,\\cdots x_N)$ where $ V_1( \\vec{x} )=V_1 ( \\vec{x } + \\vec{a} ) $ where the $ \\vec{a} $ is a lattice vector stands for the lattice potential.", "$ V_1( \\vec{x} )= \\sum _{\\vec{R}} v( \\vec{x} -\\vec{R} ) $ where $ \\vec{R} $ stand for the positions of all the ions in a periodic lattice [108].", "For the Coulomb interaction $ v( \\vec{x} -\\vec{R} )= - \\frac{Ze^2}{| \\vec{x}_i-\\vec{R} | } $ and $ V_2(x_i-x_j )= - \\frac{e^2}{| \\vec{x}_i-\\vec{x}_j | } $ .", "Setting $ V_1(x_i)=0 $ and $ V_2(x_i-x_j ) $ as Wan der-Waals interaction recovers the charge neutral Helium 4 case in Fig.REF which is clearly Galileo invariant.", "The many-body wavefunction in the lab frame are related to that in the moving frame $ \\vec{v} $ by: ( xi,t) = e i ( k0 i xi- N E0 t/) ( xi, t ) where $ \\Psi ( x_i,t)= \\Psi ( x^{\\prime }_i- vt^{\\prime }, t^{\\prime }) $ and $ k_0= -\\frac{ m v}{\\hbar }, E_0= \\frac{ \\hbar ^2 k^2_0 }{ 2 m }= \\frac{1}{2} m v^2 $ .", "Note that, in the moving frame, the ions are also moving, so $ \\vec{R}=\\vec{R}^{\\prime }- \\vec{v}t^{\\prime } $ , then $ V_1(x_i)=\\sum _{\\vec{R}} v( \\vec{x} -\\vec{R} )= \\sum _{\\vec{R}^{\\prime }} v( \\vec{x}^{\\prime 
} -\\vec{R}^{\\prime } )=V_1(x^{\\prime }_i) $ breaks the translational invariance up to a lattice constant $ \\vec{a} $ , but is still Galileo invariant [107].", "Furthermore, $ V_2( x_1- x_2 )= e^2/|x_1-x_2|= V_2( x^{\\prime }_1- x^{\\prime }_2 ) $ .", "Note that both $ \\vec{x} -\\vec{R}=\\vec{x}^{\\prime } -\\vec{R}^{\\prime } $ and $ x_1- x_2=x^{\\prime }_1- x^{\\prime }_2 $ are GI.", "It is easy to check that Eq.", "including the sign of $ k_0 $ agrees with Eq.REF .", "In the moving frame, the many-body wavefunction $ \\Psi ^{\\prime } ( x^{\\prime }_i, t^{\\prime } ) $ satisfies : $i \\hbar \\frac{ \\partial \\Psi ^{\\prime } ( x^{\\prime }_i, t^{\\prime } ) }{ \\partial t^{\\prime } } =[ -\\frac{\\hbar ^2}{ 2 m} \\sum _{i} \\frac{\\partial ^2 }{ \\partial x^{\\prime 2}_i }+ \\sum _i V_1(x^{\\prime }_i)+ \\sum _{i<j} V_2(x^{\\prime }_i- x^{\\prime }_j ) ] \\Psi ^{\\prime } ( x^{\\prime }_i, t^{\\prime } )$ which takes the identical form as Eq.REF .", "Despite microscopic model Eq.REF is formally GI, the wavefunction exponential factor in Eq.", "in the first quantization may still have some observable experimental consequences.", "As advocated by P. W. Anderson, \"more is different \".", "In the thermodynamic limit $ N \\rightarrow \\infty $ , there could be emergent phenomena such as quantum and topological phases and phase transitions.", "It is completely possible for this exponential phase factor to even drive various quantum and topological phase transitions.", "Unfortunately, it is hard to retrieve such effects in the mathematical formal form of Eq.REF and Eq.REF in the first quantization.", "To proceed further, one may resort to second quantization and quantum field theory approach to be developed in the following.", "2.", "Galileo transformation in the effective action in the second quantization in a periodic potential Eq.REF can be written in the second quantization language: ${\\cal H} = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 + V_1( \\vec{x} ) - \\mu ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ where the single-body lattice potential $ V_1( \\vec{x} ) $ and the two-body interaction $ V_2(x_1-x_2 ) $ are automatically incorporated into the kinetic term and the interaction term respectively.", "By adding the chemical potential $ \\mu $ , we also change the canonical ensemble with a fixed number of particles $ N $ in the first quantization to the grand canonical ensemble in the second quantization.", "Following the step leading from Eq.", "to Eq., one can obtain the effective action in the moving frame ( for notational simplicity, we drop the $ \\prime $ in the moving frame ): ${\\cal H} = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 + V_1( \\vec{x} )- \\mu -iv ( \\partial _x + \\partial _y ) ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ which hold not only in the periodic lattice potential case, also random impurity case.", "It is easy to show that Eq.REF takes the same form as Eq.REF after formally making the GT listed in Eq.REF .", "The wavefunction exponential factor in Eq.", "in the first quantization is equivalent to the shift of the momentum and the chemical potential as listed in Eq.REF in the second quantization.", "Indeed, $ - \\mu \\int d^2 x \\psi ^{\\dagger 
}( \\vec{x} ) \\psi ( \\vec{x} ) = - \\mu N $ where $ N $ is the number of particles in Eq.", "in the canonical ensemble.", "For small number of $ N $ , this is the end of story.", "Unfortunately, it is still difficult to see what are the effects of such a GT in the present second quantization scheme in the thermodynamic limit $ N \\rightarrow \\infty $ .", "This should not be too surprising, because as advocated by P. W. Anderson, \" more is different \", there are many emergent quantum or topological phenomena such as quantum magnetism, superconductivity, superfluidity, etc which can nowhere be seen in such a formal model Eq.REF .", "This kind of formal model looks complete, exact to some formal level, but not effective to see any emergent phenomena.", "So only when proceeding further and deeper to get an effective model to see the signature of emergent quantum or topological phenomenon, one may start to see the real effects of such a GT on such emergent phenomena.", "In a continuum system without underlying lattice such as Helium 4, by setting $ V_1(x_i)=0 $ , one can directly proceed further from Eq.REF , see Sec.IX.", "In a periodic lattice, the field operator $ \\psi _n( \\vec{x} ) $ can be expanded either in the Wannier basis $ \\psi _n( \\vec{x} ) = \\sum _i b_i \\phi _n ( \\vec{x}-\\vec{R}_i ) $ or in the Bloch basis $ \\psi _n( \\vec{x} )= \\sum _{\\vec{k}} b_{\\vec{k}} \\phi _{n \\vec{k}} ( \\vec{x} ) $ where $ n $ is the band index, $ \\vec{k} $ is the crystal momentum in the BZ.", "The Wannier functions satisfy the Wannier equation: $\\hat{h} \\phi _n( \\vec{x}-\\vec{R}_i )= \\sum _{\\vec{R}} E_n( \\vec{R}_i- \\vec{R} ) \\phi ( \\vec{x}-\\vec{R} ), ~~~~ \\hat{h}= -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 + V_1( \\vec{x} )$ where $ E_n(\\vec{k})= \\sum _{\\vec{R}} e^{i \\vec{k} \\cdot \\vec{R} } E_n( \\vec{R} ) $ is the tight-binding energy dispersion satisfying $ \\hat{h} \\phi _{n \\vec{k}} ( \\vec{x} )=E_n(\\vec{k}) \\phi _{n \\vec{k}}( \\vec{x} ) $ .", "One can transfer between the two basis by $ \\phi _{n \\vec{k}}( \\vec{x} )= \\sum _{\\vec{R}} e^{i \\vec{k} \\cdot \\vec{R} } \\phi _n ( \\vec{x}-\\vec{R} ) $ .", "Here, in the tight-binding limit, it is convenient to use the Wannier basis confined to the $ n=s $ band and ignore all the higher bands $ l=2,3,\\cdots $ .", "Substituting $ \\psi ( \\vec{x} ) = \\sum _i b_i \\phi ( \\vec{x}-\\vec{R}_i ) $ into Eq.REF leads to: ${\\cal H} & = & \\sum _{ij} b^{\\dagger }_i b_j \\int d^2 x \\phi ^{*}( \\vec{x}-\\vec{R}_i )[ -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 + V_1( \\vec{x} )- \\mu ] \\phi ( \\vec{x}-\\vec{R}_j ) \\nonumber \\\\& + & \\sum _{ij} b^{\\dagger }_i b_j \\int d^2 x \\phi ^{*}( \\vec{x}-\\vec{R}_i )(-iv) [ \\frac{ \\partial }{\\partial x } + \\frac{ \\partial }{\\partial y } ]\\phi ( \\vec{x}-\\vec{R}_j ) \\nonumber \\\\& + & \\sum _{ij,kl} b^{\\dagger }_i b_j b^{\\dagger }_k b_l \\int d^2 x_1 d^2 x_2 \\phi ^{*}( \\vec{x}_1-\\vec{R}_i )\\phi ( \\vec{x}_1-\\vec{R}_j )V_2(x_1-x_2 ) \\phi ^{*}( \\vec{x}_2-\\vec{R}_k )\\phi ( \\vec{x}_2-\\vec{R}_l )$ By using Eq.REF , the first line can be simplified to $ \\sum _{ij} b^{\\dagger }_i b_j E( \\vec{R}_j- \\vec{R}_i) - \\mu \\sum _{i} b^{\\dagger }_i b_i $ .", "In the tight-binding limit, it maybe justified to consider only the on-site term and the NN hopping term [111].", "For the interaction term, one may just keep the on-site term.", "Then for fermions or bosons, Eq.REF reduces to the fermionic [57] or Boson Hubbard model Eq.REF respectively.", "Setting $ v=0 $ recovers back to 
the lab frame.", "Obviously, one can start to see the Mott phase, SF phase and a quantum phase transition from Mott to the SF, so this is a more effective model than the formal model Eq.REF .", "Similarly, one may start to see the effects of GT on this more effective model.", "In the following, due to the exchange symmetry between $ x $ and $ y $ , we only examine the current along the x-bond.", "The identical argument applies to the y- bond.", "Now we investigate the second ( boost ) term which can be written as the current form $ J_{ij,x}= -\\frac{i v}{2} \\int d^2 x [ \\phi ^{*}( \\vec{x}-\\vec{R}_i )\\frac{ \\partial }{\\partial x } \\phi ( \\vec{x}-\\vec{R}_j )-\\frac{ \\partial }{\\partial x } \\phi ^{*}( \\vec{x}-\\vec{R}_i )\\phi ( \\vec{x}-\\vec{R}_j ) ] $ .", "For the S-wave band, one can take $ \\phi ( \\vec{x}-\\vec{R}_j ) $ to be real and only depends on its magnitude, so that $ \\phi ( \\vec{x}-\\vec{R}_j ) = \\phi ( | \\vec{x}-\\vec{R}_j | ) $ .", "Then the boost term in Eq.REF can be written as: $H_{bx}=-iv\\sum _{ij} b^{\\dagger }_i b_j \\int d^2 x \\phi ( |\\vec{x}| )[ \\alpha \\frac{ \\partial }{\\partial x } + \\beta \\frac{ \\partial }{\\partial y } ] \\phi ( |\\vec{x}-\\vec{R}_{ij} | ), ~~~~\\vec{R}_{ij}= \\vec{R}_i-\\vec{R}_j$ where $ (\\alpha ,\\beta ) $ indicate any generic direction.", "For simplicity, we take $ x- $ direction in the following.", "A simple reflection symmetry analysis under $ \\vec{x} \\rightarrow -\\vec{x} $ shows that only when $ \\vec{R}_{ij}= n a \\hat{x}, n=1,2, \\cdots $ is along the x-bond direction, $ H_b $ is non-vanishing.", "It is easy to see that the NN term with $ n=1 $ is nothing but a conserved Noether current along the $ \\hat{x} $ direction due to the $ U(1) $ symmetry, so can be absorbed into the NN hopping term by a unitary transformation ( See Sec.VIII ).", "So one may need go to at least the $ n=2 $ NNN current term to see all the effects of the boost: $H_{bx}=-iv[ t_{b1} \\sum _{i} b^{\\dagger }_i b_{i + \\hat{x} } + t_{b2} \\sum _{i} b^{\\dagger }_i b_{i + 2 \\hat{x} }] + h. 
c$ where $ t_{b1}, t_{b2} $ are completely determined by Wannier functions, so independent of the boost velocity.", "Of course, one may also keep the NN $ t_1 $ and NNN $ t_2 $ hopping term in Eq.REF , due to the different integrands involving in the matrix elements, $ t_{b2}/t_{b1} \\ne t_2/t_1 $ .", "So adding $ t_2 $ back will not change the qualitative physics.", "Figure: Left: The well known phase diagram of the Boson Hubbard model Eq.", "in the lab frame.The Mott insulating phase resides inside the n=1,2,3,⋯ n=1,2,3,\\cdots lobes.", "The superfluid (SF) phase takes the other space.There is a particle-hole (PH) ( or charge conjugation C C ) symmetry along the horizontal dashed lines going through the tip of the lobes,the Mott-SF transition has the dynamic exponent z=1 z=1 .There is no PH symmetry away from the horizontal dashed lines, the Mott-SF transition has the dynamic exponent z=2 z=2 .The solid lines emerging from the joint point betweenn n -th and n+1 n+1 -th Mott lobe delineates the contours of constant density with n+ϵ n + \\epsilon and n+1-ϵ n +1- \\epsilon respectively.The z=1 z=1 and z=2 z=2 SF-Mott transitions have emergent \"Lorentz \" invariance and emergent Galileo invariance respectively.As shown in Fig., , they response very differently to the boost of the moving frame.Right (a) The phase diagram in a moving frame with a fixed velocity v → \\vec{v} .For simplicity, we only draw the Mott lobe at n=1 n=1 .", "n>1 n > 1 can be similarly constructed.It is achieved by combining Fig.", "and Fig.", "with the relations between the phenomenological parametersand the microscopic parameters in Eq.", "and Eq..The black dashed line just copy the n=1 n=1 lobe in the lab frame on the left.It evolves into the red solid line in the moving frame by two steps.The step (I) to the blue dashed line is due to H b1 H_{b1} with k 0L k_{0L} of the ground state in Eq..The step (II) to the red solid line from the blue dashed line is due to H b2 H_{b2} with k 0 k_0 of the ground state in Eq..The shape of the boundary is given by Eq.", "with the non-monotonic f(v) f(v) shown in Fig.b.So the shift in step (II) is also non-monotonic reflecting the underlying lattice structure.The doppler shifts in the excitation spectrum in the BSF phase is given in Eq.", "for z=1 z=1 and in Eq.", "for z=2 z=2 respectively.", "The z=(3/2,3) z=(3/2, 3) line from the SF to the BSF transition is not reachablein the boosted bosonic Hubbard model, but reachable by boosting directly the SF as analyzed in Sec.IX-A and shown in (c)." ], [ " The Global phase diagram of the Mott-SF transition in a lattice observed in a moving frame ", "In this section, we extend the derivation [116] of Eq.", "from the boson Hubbard model Eq.REF in the lab frame to the boost case and derive Eq.", "from Eq.REF plus Eq.REF in the moving frame.", "The derivation process is highly instructive and bring new insights to the emergent space-time from both the $ z=2 $ and the $ z=1 $ QCP.", "Furthermore, it also establishes the relation between phenomenological parameters in Eq.", "and and the microscopic parameters in Eq.REF .", "When combining this relation with the phase diagram Fig.REF and Fig.REF achieved by the effective actions in terms of the phenomenological parameters, one reach the global phase diagram shown in Fig.REF a,b." 
], [ " The exact non-perturbative and perturbative treatments of the NN and NNN boost terms ", "Here we will still take the \"divide and conquer \" strategy [64] to examine the NN and NNN boost terms in Eq.REF .", "For the notional simplicity, we study the boost along the x-bond.", "The $ t_{b1} $ boost term in Eq.REF can be combined with the NN hopping term as For $H_{b1}$ , since $t_0+it_{b1}=\\sqrt{t_0^2+t_{b1}^2}e^{-ik_0}$ with $ \\tan k_{0L}=- t_{b1}/t_0 $ , one can introduce a lattice version of GT, bi=biei k0L iy It maybe important to stress that the boost term $ t_{b1} $ term looks like a gauge field putting on the link.", "However, it was known any gauge field which generates a flux $ f $ through each plaquette breaks the lattice translational symmetry and generates non-commutative magnetic space group [67], [68], [69].", "However, all the boost terms keep the lattice translational symmetry, so if the current can be viewed as a gauge field on the link, it can only do so by a \"pure gauge\" which generates no flux.", "Indeed, for the particular case like $ H_{b1} $ , this \" pure gauge\" can be transformed away by the \"gauge\" transformation Eq.REF .", "Then Hamiltonian $H_1=H_0+H_{b1}$ can be rewritten in the same form as the original Bose Hubbard model Eq.REF : H1=-i (t0bibi+x +t02+tb12bibi+y+h.c.)", "+Ui bibibi bi -i bibi with just the hopping anisotropy along the $ y $ bond.", "While the interaction $ U $ and the chemical potential $ \\mu $ stay the same.", "So it does not change any symmetry of the Hamiltonian.", "Effectively it favors the SF over the Mott as one increases the boost.", "In fact, one can see the physics in Eq.REF easily when expanding: t02+tb12 ei k0L [ bibi+y +h.c]= t02+tb12[ k0L ( bibi+y +h.c) + i k0L ( bibi+y - h.c) ] where the first term is just the kinetic energy, the second is the conserved current.", "This expansion naturally explains why the first boost $ t_{b1} $ term can be absorbed by the unitary transformation Eq.REF .", "While the other form of the boost such as the $ t_{b2} $ term in Eq.REF may not.", "After the non-perturbation treatment of the $ t_{b1} $ , we discuss the possible perturbation handling of $ t_{b2} $ .", "Substituting Eq.REF to the $ t_{b2} $ term in Eq.REF leads to: Hb2 =-i tb2 ei 2 k0 i bibi+2x + h.c. =-i tb2 2 k0L bibi+2x + tb2 2 k0L bibi+2x + h.c. where $ \\cos 2 k_{0L} = \\frac{t^2_0 - t^2_{b1} }{ t^2_0 + t^2_{b1} } $ .", "The first term is odd in $ \\vec{k} $ , so plays very important role, even it could be small.", "The second term is even in $ \\vec{k} $ , so can be dropped relative to the dominant hopping term in Eq.REF .", "In the following discussions, after inserting back the boost velocity $ v $ as $ t_{b1} \\rightarrow t_{b1} v, t_{b2} \\rightarrow t_{b2} v $ , we will measure its strength relative to $ t_{b1} $ as: $t_b= t_{b2} \\cos 2 k_{0L}= \\alpha v t_{b1},~~~~ \\alpha = ( \\frac{v^2_c -v^2 }{ v^2_c + v^2 } ) \\frac{t_{b2}}{t_{b1}} \\ll 1$ where $ v_c= t_0/t_{b1} $ .", "Because $ \\frac{t_{b2}}{t_{b1}} $ is independent of the boost $ v $ , so $ \\alpha $ changes sign when $ v $ is tuned to pass $ v_c $ .", "Obviously, the discrete GT transformation can be easily extended to the boosts along both $x $ and $ y $ bonds with Hb12=-itb1xi (bibi+y-h.c.) -itb1yi (bibi+y-h.c.) 
Then the single particle spectrum takes the form k =-2t0(kx+ky)+2tb1x kx +2tb1y ky which develops a single minimum at $ K=(k_{0Lx}, k_{0Ly} ) $ where $ k_{0Lx}=-\\arctan (t_{b1x}/t_0), k_{0Ly}=-\\arctan (t_{b1y}/t_0) $ .", "Then similar to Eq.REF , the boosts can be transformed away by bi=bi ei ( k0Lx ix + k0Ly iy ) Then one may treat $ H_{b2} $ along both $ x- $ and $ y- $ bond by constructing an effective action.", "From symmetry point of view, one may also just treat $ H_{b3} $ in Eq.", "discussed in the appendix D. In the following sections, we will address $ H_1 + H_{b12} $ or simply $ H_1 + H_{b2} $ for simplicity.", "Because $ H_{b1} $ has been absorbed into $ H_0 $ , $ H_{b2} $ will be explored by effective actions in the continuum limit.", "This is the \"divide and conquer\" strategy to deal with emergent phenomena." ], [ " The derivation of Eq.", "We will do the calculations in the new basis in Eq.REF and REF .", "For notational simplicity, we will drop the tilde.", "In the strong coupling limit $ U/t \\gg 1 $ , the ground state is just the Mott state $ | Mott \\rangle = \\prod _i ( b^{\\dagger }_i )^N | 0 \\rangle $ .", "Its lowest excited state is either one particle or one hole at the site $ i $ : $|p \\rangle _i= b^{\\dagger }_i | Mott \\rangle ,~~~~~ |h \\rangle _i= b_i | Mott \\rangle $ where one can see the $ p $ and $ h $ is defined with respect to the Mott state.", "Because the Mott state is the \" vacuum \" state of both $ p $ and $ h $ , so they are very much different than the original boson creation $ b^{\\dagger }_i $ and the original boson annihilation operator $ b_i $ .", "At $ t/U =0 $ , the excitation energy of the particle and hole are $ E^0_p= U(1/2- \\alpha ) $ and $ E^0_h= U(1/2 + \\alpha ) $ respectively where $ \\alpha =\\mu -1/2 $ is the re-defined chemical potential.", "Now we turn on a weak hopping and also a weak boost Eq.REF .", "We first construct the translational invariant one particle and one hole eigen-state: $| p \\rangle = \\frac{1}{\\sqrt{N}} \\sum _i e^{i \\vec{k} \\cdot \\vec{R}_i } |p \\rangle _i,~~~~| h \\rangle = \\frac{1}{\\sqrt{N}} \\sum _i e^{i \\vec{k} \\cdot \\vec{R}_i } |h \\rangle _i$ Then by a first-order perturbation, one can find the particle and hole eigen-spectrum in the continuum limit: $E_{p/h}=E^0_{p/h}- 2t( \\cos k_x+ \\cos k_y) -2 t_b \\sin 2 k_x \\sim \\Delta _{p/h} + \\frac{k^2}{2m} -c k_x + c k^3_x + \\cdots $ where $ t_b $ is given in Eq.REF , the particle/hole excitation gap $ \\Delta _{p/h}= U(1/2 \\mp \\alpha )-4t $ .", "It is important to stress the crucial difference between the hopping and the boost in Eq.REF : the former is even in the momentum, while the latter is odd.", "So the specific form of the hopping term and the current term may not be important, only their symmetry matter and lead to the same low energy form in Eq.REF .", "We also take the low energy limit, as shown in Eq.REF , the effective mass and the boost velocity can be written as: $v^2_x= v^2_y= \\frac{1}{2m}=t = t_{b1} \\sqrt{ v^2_c + v^2 },~~~~ c= v \\alpha t_{b1}$ Extrapolating Eq.REF to larger hopping leads to the following simple physical picture: for $ \\alpha > 0 $ , the $ p $ condenses first at $ t/U \\sim 1/2- \\alpha $ , $ \\alpha < 0 $ , the $ h $ condenses first at $ t/U \\sim 1/2 + \\alpha $ , for $ \\alpha =0 $ , both condense at the same time at $ t/U \\sim 1/2 $ .", "These critical values will be slightly modified by various effects in the following.", "The first-order perturbation can be pushed to the third order [119] without 
changing the estimates qualitatively.", "We first introduce the particle and hole operator both of which annihilate the Mott state $ p | Mott \\rangle = h | Mott \\rangle =0 $ .", "In the path-integral language, one can simply write down the action to describe such a particle/hole condensation process as; ${\\cal L}_{p/h}= p^{\\dagger } ( \\partial _\\tau -ic \\partial _x + \\Delta _p-\\frac{1}{2m} \\nabla ^2 ) p+ h^{\\dagger } ( \\partial _\\tau -ic \\partial _x + \\Delta _h-\\frac{1}{2m} \\nabla ^2 ) h - \\lambda ( p^{\\dagger } h^{\\dagger } + p h )+ u_p ( p^{\\dagger } p )^2 + u_h ( h^{\\dagger } h )^2 + \\cdots $ where $ \\lambda $ creates the particle-hole pair on two neighbouring sites, $ u_p, u_h $ stand for the on-site interaction of the $ p $ and $ h $ respectively and $ \\cdots $ stand for all the possible higher order interactions among the particles and holes.", "Due to the diluteness of the p- or h- near the QPT, we expect $ u_p, u_h \\ll U $ .", "Despite it is not known what is the precise mathematical relation between $ (p, h), (p^{\\dagger }, h^{\\dagger } ) $ and the original $ ( b^{\\dagger }_i, b_i) $ , the original $ U(1) $ symmetry of the bosons in Eq.REF imply the $ U(1) $ symmetry of Eq.REF under which $ p \\rightarrow p e^{i \\theta }, h \\rightarrow h e^{-i \\theta } $ .", "It dictates the conservation of $ p^{\\dagger } p- h^{\\dagger } h $ .", "Despite the C-transformation [114] in Eq.REF can not be explicitly expressed in terms of $ ( b^{\\dagger }_i, b_i) $ , it can be easily expressed in the $ ( p, h) $ space as $p \\leftrightarrow h$ In view of the transformation in Eq.REF , it is tempesting to do the simultaneous transformation of $ p $ and $ h $ as $ p= \\tilde{p} e^{i k_0 x }, h= \\tilde{h} e^{i k_0 x } $ .", "However, the p-h pair term becomes $ - \\lambda e^{-i 2 k_0 x } \\tilde{p}^{\\dagger } \\tilde{h}^{\\dagger } + h.c $ which becomes $ x- $ dependent in the $ \\tilde{p}, \\tilde{h} $ basis, so not easy to deal with.", "If had the boost term in the hole part in Eq.REF reversed its sign, the simultaneous transformation of $ p $ and $ h $ would be $ p= \\tilde{p} e^{i k_0 x }, h= \\tilde{h} e^{-i k_0 x } $ , the p-h pair term would stay the same in the $ \\tilde{p}, \\tilde{h} $ basis, then Eq.REF would own the Galileo Invariance.", "Of course, changing the p-h pair to $ \\lambda ( p^{\\dagger } h + p h ) $ would have the same effects.", "So it is the Mott state and the p-h pair creation process in Eq.REF which breaks the GI.", "This important fact is crucial to explore the emergent space-time from both the $ z=2 $ and the $ z=1 $ QPT.", "Without losing any generality, one can always make $ \\lambda $ to be positive, so $ \\lambda \\sim t $ .", "Note that the $ t_{b2} $ term, due to its odd in $ \\vec{k} $ , making negligible contributions to $ \\lambda $ in the long wavelength limit.", "So we set $ \\lambda \\rightarrow \\lambda t $ in the following.", "1.", "The emergent space-time from the $ z=1 $ theory When $ \\alpha = 0 $ , $ \\Delta _p=\\Delta _h=\\Delta _0=U/2-4t $ dictated by the particle-hole( C-) symmetry, so the particle and hole condense at the same time, so one must treat them at equal footing.", "After performing the unitary transformation: $\\Psi = \\frac{1}{\\sqrt{2} } ( p+ h^{\\dagger } ),~~~~\\Pi =\\frac{1}{\\sqrt{2} } ( p- h^{\\dagger } )$ Eq.REF can be written as: ${\\cal L}[ \\Psi , \\Pi ] &=& \\Psi ^{\\dagger } ( \\partial _\\tau -ic \\partial _x) \\Pi + \\Pi ^{\\dagger } ( \\partial _\\tau -ic \\partial _x) \\Psi \\nonumber 
\\\\& + & \\Psi ^{\\dagger }(-\\frac{1}{2m} \\nabla ^2 + \\Delta _0 - \\lambda t ) \\Psi + \\Pi ^{\\dagger }(-\\frac{1}{2m} \\nabla ^2 + \\Delta _0 + \\lambda t ) \\Pi + \\cdots $ which indicate $ ( \\Psi ^{\\dagger }, \\Pi ) $ or $ ( \\Pi ^{\\dagger }, \\Psi ) $ become conjugate variables.", "Under the $ U(1) $ symmetry, $ \\Psi \\rightarrow \\Psi e^{ i \\theta }, \\Pi \\rightarrow \\Pi e^{ i \\theta } $ .", "Under the C-transformation $ \\Psi \\leftrightarrow \\Psi ^{\\dagger }, \\Pi \\leftrightarrow -\\Pi ^{\\dagger } $ .", "Obviously, Eq.REF enjoys both the $ U(1) $ symmetry and the C-symmetry.", "Obviously when $ \\Psi $ becomes critical, $ \\Pi $ remains massive, so integrating out $ \\Pi $ leads to ${\\cal L}[ \\Psi ]= Z_2 ( \\partial _\\tau -ic \\partial _x) \\Psi ^{\\dagger } ( \\partial _\\tau -ic \\partial _x) \\Psi + \\Psi ^{\\dagger }(-\\frac{1}{2m} \\nabla ^2 + \\Delta _0 - \\lambda t ) \\Psi + u | \\Psi |^4 + \\cdots $ where $ Z_2=1/(\\Delta _0+ \\lambda t ) $ .", "It also enjoys both the $ U(1) $ symmetry and the C-symmetry.", "After scaling out the factor $ Z_2 $ which is independent of $ c $ in the long wavelength limit $ {\\cal L}[ \\Psi ]/Z_2 $ , one can see it is nothing but the effective action Eq.", "describing the $ z=1 $ SF-Mott transition.", "To get the suitable scaling between the space and time, one need to scale out the factor $ Z_2 $ and obtain: $v^2_y(z=1) \\sim t [ U/2 + (\\lambda -4 ) t ],~~~~~~ r=\\Delta ^2_0 - \\lambda ^2 t^2$ which can be contrasted to Eq.REF with $ z=2 $ case.", "Setting $ r=0 $ leads to the critical value of $ (t/U)_c $ as $ (t/U)_c = 1/2(4+ \\lambda ) $ .", "Now we need to apply this connection to Fig.REF .", "At the QCP, $ v^2_y(z=1)=2 \\lambda ( t^2_0 + v^2 t^2_{b1} ) $ .", "While $ c^2= \\alpha ^2 v^2 t^2_{b1} $ with $ \\alpha \\ll 1 $ in Eq.REF , so $ c^2 \\ll v^2_y(z=1) $ at any $ v $ , the exotic $ z=(3/2,3) $ line is not reachable, it can only move along the Path I in Fig.REF .", "The Doppler shift also changes sign when $ v=v_c=t_0/t_{b1} $ .", "We are starting from the strong coupling limit at a fixed $ U $ , then gradually increases $ t $ to approach the QCP, then pass into the SF in Fig.REF .", "In this approach, Eq.REF indicates that at a fixed $ U $ , the intrinsic velocity $ v_y(z=1) $ increases as $ t/U $ goes through the SF phase.", "2.", "The emergent space-time from the $ z=2 $ theory When $ \\alpha > 0 $ , the particle condenses first $ \\Delta _p \\rightarrow 0 $ at the QCP at $ t/U \\sim 1/2- \\alpha $ , while the hole remains massive $ \\Delta _h > 0 $ , so can be integrated out.", "Using the expansion of the hole propagator: $\\frac{1}{ \\Delta _h + x} = \\frac{1}{\\Delta _h} - \\frac{x}{ \\Delta ^2_h} + \\frac{x^2}{ \\Delta ^3_h} + \\cdots ,~~~~~x= -\\partial _\\tau + ic \\partial _x -\\frac{1}{2m} \\nabla ^2$ where we the reversed sign of the first two terms in the $ x- $ propagator due to the integration by parts on the hole part.", "Then integrating out the $ h $ order by order in $ u_h $ leads to: ${\\cal L}_{p} & = & p^{\\dagger } ( \\partial _\\tau -ic \\partial _x + \\Delta _p-\\frac{1}{2m} \\nabla ^2 ) p + u_p |p|^4- \\frac{ \\lambda ^2}{ \\Delta _h } p^{\\dagger } p \\nonumber \\\\& + & \\frac{ \\lambda ^2 }{ \\Delta ^2_h } p^{\\dagger } ( -\\partial _\\tau + ic \\partial _x -\\frac{1}{2m} \\nabla ^2 ) p- \\frac{ \\lambda ^2 }{ \\Delta ^3_h } p^{\\dagger } ( -\\partial _\\tau + ic \\partial _x -\\frac{1}{2m} \\nabla ^2 )^2 p \\nonumber \\\\& + & 4 u_h ( \\frac{ \\lambda }{ \\Delta _h } )^4 ( 
p^{\\dagger } p )^2- 4 \\frac{ u_h }{ \\Delta _h } ( \\frac{ \\lambda }{ \\Delta _h } )^4 | p|^2p^{\\dagger } ( -\\partial _\\tau + ic \\partial _x -\\frac{1}{2m} \\nabla ^2 ) p + \\cdots $ where one can see the last term in the first line renormalize down the p- gap, the first term in the second line the particle propagator, the second is just the second derivative term in the imaginary time presented in Eq.REF in Sec.IX-C , while the third line involves the h- interaction $ u_h $ : the first term just renormalize the p- interaction, the second is the very important dangerously irrelevant term which leads to the Doppler shift term in the SF as explored in the following.", "Dropping the second and the third line leads to: ${\\cal L}_{p}= p^{\\dagger } ( \\partial _\\tau -ic \\partial _x + \\tilde{\\Delta }_p-\\frac{1}{2m} \\nabla ^2 ) p+ u |p|^4 + \\cdots $ where $ \\tilde{\\Delta }_p =\\Delta _p- \\lambda ^2 t^2/\\Delta _h $ is the renormalized p- gap.", "It is nothing but the effective action Eq.. Because the high energy hole is projected out, so the conservation of $ p^{\\dagger } p- h^{\\dagger } h $ reduces to $ p^{\\dagger } p $ .", "Even so, the $ p $ operator is still different than the original $ b_i $ , despite both have the $ U(1) $ symmetry.", "Very similarly, one can derive the identical action when $ \\alpha < 0 $ where the hole condenses first, it is just related to Eq.REF by the C- transformation.", "One can also establish the connections between the phenomenological parameters in Eq.", "and the microscopic parameter in Eq.REF as $v^2_y \\sim t ,~~~~~~ \\mu =- ( \\Delta _p- \\lambda ^2 t^2/\\Delta _h )$ which can be contrasted to Eq.REF with $ z=1 $ case.", "Setting $ \\mu =0 $ leads to the equation determining the critical value of $ (t/U)_c $ as: $( U/2t -4 )^2 - ( U/t)^2 \\alpha ^2= \\lambda ^2$ which indicates one can tune the effective chemical potential $ \\mu $ by either $ t/U $ or $ \\alpha $ .", "Now we need to apply this connection to Fig.REF .", "Plugging the microscopic value in Eq.REF into Eq.REF leads to: $k_0= -\\frac{ \\alpha v }{ 2 \\sqrt{ v^2_c + v^2 } }$ Eq.REF in the Mott side $ \\mu < 0 $ becomes: $- \\mu =\\frac{ \\alpha ^2 v^2 t^2_{b1} }{ t_{b1} \\sqrt{ v^2_c + v^2 } }= ( \\frac{t_{b2}}{t_{b1}} )^2 t_{b1}f(v),~~~~~f(v)= ( \\frac{v^2_c -v^2 }{ v^2_c + v^2 } )^2 \\frac{ v^2 }{ \\sqrt{ v^2_c + v^2 } }$ Obviously, as $ f(v) \\sim v^2, (v-v_c)^2, v $ when $ v \\ll v_c, \\sim v_c, \\gg v_c $ .", "Then when $ v < v_c=t_0/t_{b1} $ , the Doppler shift term is in the same direction as the original $ v $ .", "At $ v=v_c $ , simply no shift, $ v > v_c $ , it changes sign and becomes opposite to the original $ v $ .", "Obviously, this surprising change of sign of the Doppler shift is due to the Mott state in Eq.REF .", "Remembering the reverse sign of the boost in the second and third line in Eq.REF than the original p- propagator in the first line and performing the transformation $ p= \\tilde{p} e^{i k_0 x } $ with the $ k_0 $ listed in Eq., one obtains the $ z=2 $ effective action describing the Mott to SF transition driven by the p- BEC in the presence of the boost in the $ \\tilde{p} $ basis: ${\\cal L}_{p}= \\tilde{p}^{\\dagger } ( \\partial _\\tau + \\tilde{\\tilde{\\Delta }}_p -\\frac{1}{2m} \\nabla ^2 ) \\tilde{p}+ u |\\tilde{p}|^4 - i V |\\tilde{p}|^2 \\tilde{p}^{\\dagger } \\partial _x \\tilde{p}+ i w \\tilde{p}^{\\dagger } \\partial ^3_x \\tilde{p} + \\cdots $ where $ \\tilde{\\tilde{\\Delta }}_p =\\tilde{\\Delta }_p - \\frac{1}{2} m^2 c^2 $ ( or 
equivalently $ \\tilde{\\tilde{\\mu }}_p =\\tilde{\\mu }_p + \\frac{1}{2} m^2 c^2 $ in Eq.REF ) is the renormalized particle gap and $ V= 8 c \\frac{ u_h }{ \\Delta _h } ( \\frac{ \\lambda }{ \\Delta _h } )^4 $ is the dangerously irrelevant term which also breaks the GI explicitly.", "It is also the term leading to the Doppler shift in the SF.", "We also added the third-order $ \\partial ^3_x $ term in Eq.REF , so $ w \\propto c $ .", "Then it leads to Eq.REF .", "In fact, the derivations of Eq.", "and Eq.", "rely on just one important fact that the hopping term is P, T even, but the boost term is P, T odd, but still PT even.", "It is independent of many microscopic details such as how many terms we keep in the hopping terms in Eq.REF and the boost terms in Eq.REF .", "It also bring additional insights to the emergent space-time of $ z=1 $ and $ z=2 $ QCP.", "Furthermore, it establishes the connections between the phenomenological parameters in Eq.", "and Eq.", "and the microscopic parameters in Eq.REF plus Eq.REF .", "In short, there are two step I and step II to reach the global phase diagram Fig.REF : Step 1: These effects are due to the $ H_{b1} $ and easily captured by the direct treatments on the lattice.", "The introduction of the BEC momentum $ k_{0L} $ of the ground state in Eq.REF .", "There is also an increase of the hopping strength from $ t_0 $ to $ \\sqrt{ t^2_0 + t^2_{b1} } $ in Eq.REF which, in turn, leads to the shift of the Mott-SF QPT into the Mott side.", "It also modify the effective strength of the $ H_{b2} $ term in Eq.REF .", "Step 2.", "These effects are due to the $ H_{b2} $ and can only be resolved by constructing the effective actions in the low energy limit.", "For $ z=1 $ , there is no more BEC momentum shift and no more boundary shift either.", "The residual effect of $ H_{b2} $ is just to introduce a Doppler shift in Eq.REF to the excitation spectrum in both the Mott and the SF phase.", "For $ z=2 $ , there is one additional BEC momentum shift $ k_0 $ of the ground state in Eq.REF and Eq., one more boundary shift $ \\tilde{\\mu } $ in Eq.REF and Eq.REF .", "The shape of the boundary is given by $ f(v) $ in Fig.REF b.", "Even so, there is still the residual effect of $ H_{b2} $ which is the dangerously irrelevant $ i V $ term in Eq.REF .", "It is this term which introduces the Doppler shift in Eq.REF to the excitation spectrum in the SF phase.", "Combining the results at $ z=1 $ and near $ z=2 $ leads to Fig.REF a,b.", "Figure: The hierarchy of the energy scales and emergent space-time near a QPT.The dots on the left means that it come from the low end of Fig..The ionic model means Eq.. C- means the charge conjugation symmetry.Performing the GT at the lowest level leads to Eq.", "and Eq.", "respectively.But one must also perform the GT on the ionic and Boson Hubbard model also to establish the connections betweenthe phenomenological parameters in the two equations and those bare ones in the ionic model.The question marks in the first box means we do not even know what is the Hamiltonian at this level and ifit has GI or not.", "The emergent LI in the z=2 z=2 effective theory really means the pseudo-LI as stressed in the introduction.In general, the internal symmetry can only increase along the arrow,but the space-time symmetry like the GI may not , .Another two examples are bulk FQH and its associated edge mode to be presented in appendix G and H." 
], [ " On the hierarchy of energy scales and emergent space-time from a QCP ", "The above analysis shows that the effective tight-binding second quantization Boson-Hubbard model Eq.REF is not Galileo invariant anymore.", "This observation can also be reached intuitively: in the tight-binding limit in Eq.REF , the boson coordinate $ i $ is pinned to be the same as the lattice site $ i $ .", "The lattice just discretize the space into a square lattice which confine the bosons to hop on.", "However, only on such a discretized lattice, one can define the P-H ( C ) symmetry stressed in the introduction [114].", "The Eq.", "is an even more effective action than the boson Hubbard model Eq.REF .", "It describes the Mott-SF transition and is not GI either.", "Then in a moving frame, the effective action should be Eq..", "The GT of the lattice site $ x_i \\rightarrow x^{\\prime }_i+ c t^{\\prime } $ is automatically encoded in Eq..", "Both the $ z=1 $ and $ z=2 $ SF-Mott transition happens at integer fillings which has the C- and no C-symmetry respectively.", "It is this C-symmetry which plays the crucial roles in Figs.REF , REF , REF a and also the emergent space-time [115] encoded in Eq.", "and Eq., but it is not even defined in the bare action Eq.REF which defines the bare space-time.", "One could start from the microscopic Hamiltonian Eq.REF which, of course, is also an approximation to real materials.", "It does not include many possible other terms such as the lattice vibrations ( or phonons ), SOC, etc.", "The relevant fields are the electron or boson field operators $ \\psi (\\vec{x}), \\psi ^{\\dagger }(\\vec{x}) $ .", "It has the GI.", "But it is even hard to see if there is any QPT at this level.", "It reduces to the boson Hubbard model Eq.REF after many truncations in Eq.REF .", "The relevant fields are the boson field creation or annihilation operators $ b_i, b^{\\dagger }_i $ attached to a given lattice site $ i $ .", "It does not have the GI anymore.", "But it has the QPT from the SF to the Mott phase shown in Fig.REF .", "Then near the QPT, one can construct the effective action Eq.", "and Eq.", "to describe the QPT at the tips and away from the tips respectively.", "The relevant fields are either the particle $ p $ or the hole $ h $ for the $ z=2 $ case, or the field $ \\Psi $ in Eq.REF which is a linear combination of $ p $ and $ h^{\\dagger } $ .", "Both $ p $ and $ h $ are defined with respect to the Mott state Eq.REF , can be considered as the two emergent particles from, but not simply related to the $ b_i, b^{\\dagger }_i $ .", "It does not have the GI either, but the C- symmetry can be readily and explicitly constructed in this $ p $ or $ h $ representation ( Fig.REF ).", "It is important to observe that Eq.", "and Eq.", "are completely determined from the symmetry principle, independent of many microscopic details.", "For example, we can add many terms to the Boson Hubbard model Eq.REF consistent with the symmetries, without changing the resulting effective actions Eq.", "or and Eq.", ".", "However, to establish the connections of the phenomenological parameters with the microscopic parameters in a lattice, one must also perform the GT starting from the ionic model in the tower in Fig.REF .", "There are many classical analogs of the similar phenomenon.", "One of them is shown in Appendix B: the Newton's law Eq.REF in a fluid or a solid is Galileo invariant.", "However, the resulting wave equation Eq.REF is not.", "The wave equation in a moving frame should be Eq.REF .", "So 
the medium supporting the wave already picks up a preferred frame among the infinite number of equivalent inertial frames, so the Galileo invariance is "explicitly " broken.", "This can be contrasted with the spontaneous $ U(1) $ symmetry breaking in the SF phase to be examined in Sec.IX-A ( see also [49] ): the $ U(1) $ phase picks up a preferred value among the infinite number of equivalent phases around a $ U(1) $ circle." ], [ " Driving the superfluid in the Helium 4 or Exciton superfluid ", "As mentioned in the introduction, He4 is the oldest SF.", "So it would be instructive to contrast the phase diagram Fig.REF with that of He4: the Mott, SF and BSF correspond to the normal, SF and solid phases respectively [70], [75], [76], [77], [78].", "The boost $ c $ and the chemical potential $ \mu $ correspond to the pressure $ P $ and the temperature $ T $ respectively ( Fig.REF a, b ).", "This analogy [50] suggests that the putative supersolid (SS ) phase at $ P_c=120 bar < P < 170 bar $ seems unlikely to exist in He4.", "Indeed, despite the reported reduction of the non-classical rotational inertia (NCRI) in the putative SS phase, a later refined experiment [72] excludes the putative SS in (b).", "In Fig.REF a, the $ z=1 $ line is in the 3D XY class for any $ 0< c < v_y $ , which is exactly marginal.", "The SF to BSF transition is a continuous quantum Lifshitz transition tuned by the boost $ c $ [71].", "The Mott to BSF transition is in the $ z=2 $ class.", "(b) The liquid to solid transition is a first order Lifshitz transition driven by the peak in the density-density correlation, also tuned by the pressure.", "In Fig.REF b, the liquid to SF transition is also in the classical 3d XY class for any pressure $ 0< P < P_c $ , which is also exactly marginal.", "The SF to solid transition is a first order classical Lifshitz transition triggered by the lowering of the roton surface, tuned by the pressure.", "The roton surface is spherically symmetric.", "The He4 system has the Galileo invariance which dictates the superfluid density $ \rho _s (T=0 )= \rho $ , namely 100 % superfluid at $ T=0 $ .", "Namely, all the quasi-particles are still part of the superfluid component at $ T=0 $ instead of the normal component.", "Even so, the SF phase in He4 still breaks the Galileo invariance spontaneously.", "The solid supports the longitudinal and transverse elastic sound waves in classical physics, or the longitudinal and transverse phonons in quantum physics; it also breaks the Galileo invariance spontaneously.", "Case (1) and case (2) mentioned in Sec.II have been discussed previously for He4 in a very preliminary fashion.", "In this section, we will revisit class-2 by various effective actions in terms of suitable order parameters.", "We will also comment on class 1 and stress its difference from class 2 and class 4.", "Overall, we treat all the classes in a unified framework.", "So the results achieved here may not apply to Fig.REF directly, but they are interesting on their own.", "They could be applied to the liquid Helium 4 in Fig.REF and also to exciton superfluids in BLQH and EHBL.", "The results in Sec.A and B are taken from the original work in [55] and are naturally embedded in the current context and contrasted with other results achieved here.", "Figure: There are some similarities between (a) Fig.", "at a 2d lattice and $T=0$ and (b) the liquid He4 diagram in the 3d continuum at a finite $T$ with $T_c \sim 2.13$ K and $P_c \sim 120$ bar. The former does not have Galileo invariance, the
latter does. To facilitate the comparison, we reverse the horizontal axis in Fig.REF .", "(c) Adding a boost $ \vec{Q} $ directly to the SF. In the absence of a roton, the instability happens near the origin ( the phonon mode ) and leads to a BSF phase in Fig.REF c. If there exists a roton, such as in He4, the instability at $ k=0 $ will always be pre-empted by the one near the roton, with the roton gap $ \Delta /k_B \sim 10 K $ and $ k_0 \sim 2 Å^{-1} $ . Then the instability near the roton minimum leads to a stripe supersolid (SSS) phase in (d). The QPT from the SF to the SSS is the same as that from the Mott to the SF along the path-I in (a) with $ z=1 $ . As argued in Sec.IX-B-1, the stripe supersolid phase is due to the vacancy BEC near the $ z=2 $ QPT from the SSS to the stripe solid.", "(c) and (b) are taken from ." ], [ " The quantum Lifshitz transition from the SF to the BSF driven by the instability in the Goldstone mode ", "Setting $ 2 V \rho _0=c $ in Eq.REF and treating it as an independent tuning parameter, we study the putative SF to BSF transition tuned by this driving.", "As presented in the previous sections, this QPT will not happen in Fig.REF , but it could happen in the class 2 shown in Fig.REF d. In the frame co-moving with the SF, the SF is static, so $ \langle \psi \rangle =\sqrt{\rho _0+\delta \rho }e^{i\phi }$ .", "After integrating out the magnitude fluctuations, the quantum phase fluctuations are described by: $ {\cal L}_{M:SF}[ \phi ]= \frac{1}{2u} (\partial _\tau \phi )^2 +\rho _0[v_x^2(\partial _x\phi )^2+v_y^2(\partial _y\phi )^2] + w \rho _0 ( \partial _y \phi )^3 $ Now one gets to the lab frame just by performing a GT ( we drop the $ \prime $ again ): $ {\cal L}_{M:SF}[ \phi ]= \frac{1}{2u} (\partial _\tau \phi -ic \partial _y \phi )^2 +\rho _0[v_x^2(\partial _x\phi )^2+v_y^2(\partial _y\phi )^2] + w \rho _0 ( \partial _y \phi )^3 $ where the boost velocity is pinned to be $ \vec{c}= \vec{Q}/m $ , and the order parameter in the lab frame becomes $\psi _{SF} =\sqrt{\rho _0+\delta \rho } e^{i ( \vec{Q} \cdot \vec{x} + \phi ) }$ Now we study how the SF evolves as one increases $ \vec{Q} $ .", "The mean-field state can be written as $\phi =\phi _0+k_0 y$ .", "Substituting it into the effective action Eq.REF leads to: $ S_0 \propto ( 2 U \rho _0 v_y^2-c^2)k_0^2+ 2 w U \rho _0 k_0^3 $ At a low boost $c^2 < 2 U \rho _0 v_y^2 $ , $k_0=0$ and the system is in the SF phase.", "Its spectrum is given in Eq.REF , $ \omega _{\mathbf {k}}=\sqrt{2 u \rho _0(v_x^2k_x^2+v_y^2k_y^2)}-c k_y $ At the critical boost between the SF and the boosted SF, $ c^2=2U\rho _0 v_y^2 \equiv \tilde{v}_y^2 > 0 $ , which gives the putative $ (z_x,z_y)=(3/2,3) $ line in Fig.REF and the real case of the driven SF in Fig.REF c.
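The onset of the instability at this critical boost can be read off directly from the spectrum quoted above. The following is a minimal numerical sketch (Python/NumPy); the parameter values are illustrative assumptions only, not fits to any material:

import numpy as np

# Boosted Goldstone dispersion omega(k) = sqrt(2*U*rho0*(vx^2*kx^2 + vy^2*ky^2)) - c*ky
U, rho0, vx, vy = 1.0, 1.0, 1.0, 1.0            # assumed units
c_crit = np.sqrt(2.0 * U * rho0) * vy            # critical boost, c^2 = 2*U*rho0*vy^2

kx, ky = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))

def omega(c):
    return np.sqrt(2*U*rho0*(vx**2*kx**2 + vy**2*ky**2)) - c*ky

for c in (0.5*c_crit, 1.5*c_crit):
    print(f"c/c_crit = {c/c_crit:.2f}, min omega = {omega(c).min():+.3f}")
# Below the critical boost, min(omega) = 0 (stable SF); above it, omega turns
# negative along +ky, signalling the instability toward the BSF phase.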
At a high boost $ c^2 > 2 U \rho _0 v_y^2 $ , $k_0 =\frac{c^2-2 U \rho _0 v_y^2 }{ 3 w U \rho _0 }$ where $ w \propto c $ , so the BSF phase has an additional modulation $ k_0 $ along the $ y-$ axis on top of Eq.REF : $\psi _{BSF} =\sqrt{\rho _0+\delta \rho } e^{i [ ( \vec{Q} + \vec{k}_0 ) \cdot \vec{x} + \phi ] }$ Note the crucial difference between Eq.REF and the present case: in the former, the sign of $ k_0 $ can only be determined by the spontaneous C-symmetry breaking in the BSF phase, while here the C-symmetry was explicitly broken at the very beginning, so the sign of $ k_0 $ is completely determined by the driving.", "Inside the BSF phase, the quantum phase fluctuations can be written as $ \phi \rightarrow \phi _0+k_0y+\phi $ .", "Expanding the action in the phase fluctuations leads to $ {\cal L}_{BSF} = (\partial _\tau \phi -ic\partial _y\phi )^2 + 2 U \rho _0 v_x^2(\partial _x\phi )^2 + ( 2 c^2- 2 U \rho _0 v_y^2 )(\partial _y\phi )^2 + 2 w U \rho _0 (\partial _y\phi )^3 + b (\partial _y \phi )^4 $ which leads to the gapless Goldstone mode inside the BSF phase: $\omega _\mathbf {k}=\sqrt{ 2 U \rho _0 v_x^2 k_x^2+( 2 c^2- 2 U \rho _0 v_y^2 )k_y^2 }-ck_y$ where one can see that $ 2 c^2- 2 U \rho _0 v_y^2= c^2 + ( c^2- 2 U \rho _0 v_y^2 ) > c^2 $ when $c^2> 2 U \rho _0 v_y^2$ , thus $\omega _\mathbf {k}$ is stable ( non-negative ) in the BSF phase.", "It is instructive to expand the first kinetic term in Eq.REF as: $2 U \mathcal {L}=Z(\partial _\tau \phi )^2-2i c \partial _\tau \phi \partial _y\phi + 2 U \rho _0 v_x^2(\partial _x\phi )^2 + \gamma (\partial _y \phi )^2+a(\partial _y^2\phi )^2+ 2w U \rho _0 (\partial _y\phi )^3 + b (\partial _y \phi )^4$ where $ Z $ is introduced to keep track of the renormalization of $ (\partial _\tau \phi )^2 $ and $ \gamma =2 U \rho _0 v^2_y- c^2 $ is the tuning parameter.", "The scaling $ \omega \sim k^3_y, k_x \sim k^2_y $ leads to the exotic dynamic exponents $ (z_x=3/2, z_y=3 ) $ in Fig.REF c.
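Both the location of the BSF minimum and the stability statement for $ \omega _{\mathbf {k}} $ can be verified with a few lines of computer algebra. A minimal SymPy sketch follows; the symbol names are ours, and the overall normalization of $ S_0 $ is dropped:

import sympy as sp

c, U, rho0, vy, w = sp.symbols('c U rho0 v_y w', positive=True)
k0 = sp.symbols('k0', real=True)

# mean-field action density quoted above, up to an overall constant
S0 = (2*U*rho0*vy**2 - c**2)*k0**2 + 2*w*U*rho0*k0**3

roots = sp.solve(sp.diff(S0, k0), k0)                    # stationary points: k0 = 0 and the BSF value
k0_bsf = [r for r in roots if r != 0][0]
print(k0_bsf)                                            # (c**2 - 2*U*rho0*v_y**2)/(3*U*rho0*w)
print(sp.simplify(sp.diff(S0, k0, 2).subs(k0, k0_bsf)))  # 2*(c**2 - 2*U*rho0*v_y**2): a true minimum only above the critical boost
print(sp.simplify((2*c**2 - 2*U*rho0*vy**2) - c**2))     # c**2 - 2*U*rho0*v_y**2 > 0 in the BSF, so omega_k above stays non-negative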
Then one can get the scaling dimension $ [\gamma ]=2 $ , which is relevant, as expected, since it tunes the transition, while $ [Z]=[b]=-2 < 0 $ , so these are the two leading irrelevant operators[51] which determine the finite $ T $ behaviours and the corrections to the leading scalings.", "However, $ [w]=0 $ is marginal.", "The standard field-theory one-loop RG used in Sec.III finds: $\frac{ d w }{ d l}= \epsilon w - A w^2$ where $ \epsilon = 2 -d $ and $ A=1/v^2_x a > 0 $ .", "So $ w $ is marginally irrelevant at $ d=2 $ and simply irrelevant at $ d=3 $ ( see the numerical sketch at the end of this subsection ).", "Setting $ Z=w=0 $ in Eq.REF leads to the Gaussian fixed point action at the QCP where $ \gamma =0 $ , subject to the logarithmic correction due to the marginally irrelevant $ w $ term.", "Again it is the crossing metric $ g_{\tau , y}=g_{y, \tau }=-i c $ in Eq.REF which dictates the quantum dynamic scaling near the QCP.", "It is a direct reflection of the new emergent space-time near the $ z=(3/2,3) $ QPT.", "Note that here the QPT is a quantum Lifshitz one tuned by $ \gamma $ with $ [\gamma ]=2 $ , so despite the cubic term with $ [w]=0 $ , it could still be a 2nd-order transition in Fig.REF c, in contrast to a conventional QPT where a cubic term drives the transition first order.", "Now we evaluate the conserved currents in both the SF and the BSF phase.", "The $ U(1) $ symmetry of the normal phase acts as $ \phi \rightarrow \phi + a $ for any shift $ a $ inside the $ U(1) $ symmetry broken SF phase, so the Noether current can be derived from Eq.REF as: $J_\tau & = & 2( \partial _\tau \phi -i c \partial _y \phi ) \nonumber \\J_x & = & 4 u \rho _0 v^2_x \partial _x \phi \nonumber \\\tilde{J}_y & = & J_y-ic J_\tau =2u \rho _0[ 2 v^2_y ( \partial _y \phi ) + 3w ( \partial _y \phi )^2 ]-ic J_\tau $ In the SF phase, $ \phi = \phi _0 $ , then $ (J_\tau , J_x, \tilde{J}_y)= (0,0,0) $ and $ (J_\tau , J_x, J_y)= (0,0,0) $ also.", "In the BSF phase, $ \phi = \phi _0 + k_0 y $ where $ k_0 $ is given by Eq.REF , then $ (J_\tau , J_x, \tilde{J}_y)= (-i 2c k_0, 0, 0 ) $ , but $ (J_\tau , J_x, J_y)= (-i 2c k_0, 0, 2c^2 k_0 ) $ .", "So the conserved currents $ (J_\tau , J_x, J_y) $ can still be used to distinguish the BSF from the SF phase.", "Using the $ \phi $ representation of Eq.REF , one can also reproduce the currents listed below Eq.REF for the $ z=1 $ case.", "However, if there exists a roton as shown in Fig.REF c, then this SF to BSF transition will be preempted by the SF to solid transition triggered by the roton touchdown, as shown in the following subsection.
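As promised above, the marginal flow of $ w $ is easy to illustrate numerically. Below is a minimal sketch (Python) that integrates the one-loop equation $ dw/dl = \epsilon w - A w^2 $; the values of $ A $ and $ w(0) $ are illustrative assumptions:

# Explicit Euler integration of the one-loop RG flow dw/dl = eps*w - A*w^2
def flow(eps, A=1.0, w0=0.5, l_max=50.0, dl=1e-3):
    w, l = w0, 0.0
    while l < l_max:
        w += dl * (eps * w - A * w**2)   # one RG step
        l += dl
    return w

for d, eps in ((2, 0.0), (3, -1.0)):
    print(f"d={d}: w(l=50) = {flow(eps):.4e}")
# d=2 (eps=0): w(l) ~ w0/(1 + A*w0*l), i.e. a slow 1/l decay -> marginally irrelevant,
#              giving the logarithmic corrections mentioned above.
# d=3 (eps=-1): w decays essentially to zero exponentially fast -> simply irrelevant.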
], [ " The SF to a stripe supersolid transition driven by the instability in the roton mode ", "In Helium 4, due to the long-range Wan der-Waals interaction, the density-density interaction $ V_d(q) $ develops a roton minimum which drives the transition from the SF to a solid [75], [76], [77], [78].", "Now the density-density interaction $ U $ becomes long-ranged in Helium 4, so we adopt the notation in [77] as $ U=V_d(k)= a -b k^2 + \\alpha k^4 $ which can be written as $ r + \\alpha ( k^2- k^2_0 )^2 $ near the roton minimum ( Fig.REF c ).", "We also consider the isotropic case $ v^2_x= v^2_y $ , then $ \\rho _s= \\rho _0 v^2_x= \\rho _0 v^2_y $ is the superfluid density, $ \\kappa ^{-1} = \\lim _{k \\rightarrow 0 } V_d(k) = a $ is the compressibility and $ v^2(k)= \\rho _s V_d(k) $ .", "From Eq.REF , one can see that the density stays the same in both the lab frame and the co-moving frame.", "The dynamic structure factor is: $S^{>}_n ( \\vec{k}, \\omega )=S_n(\\vec{k}) \\delta ( \\omega - \\epsilon _{+}(\\vec{k}) ),~~~S_n(\\vec{k})= \\frac{ \\pi \\rho _s k }{ 2 v(k) }$ where $ \\epsilon _{+}(\\vec{k}) = v(k) k + c k_x $ is the quasi-particle excitation energy.", "The following 3 f-sum rules follow: $\\int ^{\\infty }_0 d \\omega S^{>}_n ( \\vec{k}, \\omega ) & = & S_n(\\vec{k}), \\nonumber \\\\\\int ^{\\infty }_0 d \\omega \\omega S^{>}_n ( \\vec{k}, \\omega ) & = & S_n(\\vec{k})\\epsilon _{+}(\\vec{k})\\nonumber \\\\\\int ^{\\infty }_0 d \\omega S^{>}_n ( \\vec{k}, \\omega )/\\omega & = & S_n(\\vec{k})/\\epsilon _{+}(\\vec{k})$ which shows the Feymann relation still holds under the driving: $\\epsilon _{+}(\\vec{k})= \\frac{\\int ^{\\infty }_0 d \\omega \\omega S^{>}_n ( \\vec{k}, \\omega ) }{ \\int ^{\\infty }_0 d \\omega S^{>}_n ( \\vec{k}, \\omega ) }$ A similar relation for the quasi-hole excitation energy $ \\epsilon _{-}(\\vec{k}) = v(k) k - c k_x $ can be derived by replacing $ S^{>}_n ( \\vec{k}, \\omega ) $ by $ S^{<}_n ( \\vec{k}, \\omega ) $ .", "So Eq.REF can be extended to describe such a transition under a driving.", "Again the cubic term in the density-density channel need to be included at the very beginning: L[ ]= 12 ( - k, - ) [ 2 + i 2 c kx s k2 + ( r- c2 k2x s k2 ) + ( k2-k20 )2 ] ( k, ) -w ( )3 + u ( )4 + where $ r $ is the roton gap near $ k=k_0 $ .", "The $ h-$ term nails down the momentum $ k $ to be in the roton ring $ k=k_0 $ , the boost term pins it to be in the $ k_x $ axis $ k_x= \\pm k_0 $ .", "So the boost term just introduce an easy-axis to the isotropic roton mode.", "So the resulting solid has only two shortest reciprocal lattice vectors $ \\vec{G}= \\pm k_0 \\hat{x} $ : $n= n_0 + ( \\psi _G e^{i k_0 x} + \\psi ^{*}_G e^{-i k_0 x} )= n_0 + 2 | \\psi _G | \\cos ( k_0 x + \\alpha )$ where $ \\psi _G $ is the complex order parameter.", "Its phase $ \\alpha $ is the gapless phonon mode due to the translational symmetry breaking.", "It is the stripe solid phase.", "In fact, as shown in [52], even without such an easy axis term which explicitly breaks the rotational symmetry, a strip solid phase is most likely to be the ground state lattice structure due to the spontaneously lattice symmetry breaking.", "In the presence of such an easy axis term, the stripe solid is the ground state.", "Then writing $ k_x= \\pm k_0 + q_x, k_y=q_y $ and expanding to the leading quadratic terms, we obtain: L[ ]= 12 ( - k, - ) [ 2 + i 2 c qx s k20 + 4 k20 q2x + c2 s k20 q2y + r ] ( k, ) -w ( )3 + u ( )4 + where $ \\tilde{r} = r- c^2/\\rho _s $ is the boosted roton gap and $ k_x= 
\\pm k_0 + q_x, k_y=q_y $ need to be summed around both regimes near $ \\pm k_0 $ .", "Using the decomposition Eq.REF , one obtain the effective action describing the SF to the stripe solid transition: L[ G ]= 12 *G ( q, ) [ 2 + i 2 c qx s k20 + 4 k20 q2x + c2 s k20 q2y + r ] G ( q, ) + u | G ( q, ) |4 + where due to the stripe structure, the cubic term plays no role.", "In fact as shown in [52], the cubic term play an important role only in a triangular lattice where the three shortest reciprocal lattice vectors form a closed triangle.", "Eq.REF is nothing but in the same universality class of QPT from the Mott to SF along the Path-I in Fig.REF .", "So it has the dynamic exponent $ z_x=z_y=1 $ , the boost $ c $ is exactly marginal.", "Setting $ \\tilde{r} = \\Delta - c^2/\\rho _s =0 $ leads to the critical velocity: $c^2= \\rho _s \\Delta $ which can be contrasted to the naive one $ c_N= \\Delta /k_0 $ .", "The correct one scales as $ \\sqrt{\\Delta } $ , the naive one as $ \\Delta $ .", "This crucial difference should be subject to experimental tests in driving He4 superfluid in Fig.REF d. The scaling function in Sec.V-A applies here also.", "The crucial difference is that this is the QPT from the SF to the strip solid tuned by $ c $ in Eq.REF .", "Eq.REF establishes the relation between the physical quantity ( the density ) $ n $ and the order parameter in the effective action Eq.REF .", "The $ U(1) $ symmetry breaking leads to the Goldstone mode which is the phonon mode $ \\alpha $ in Eq.REF due to the translational symmetry breaking to the stripe solid phase.", "Of course, the SF owns its own Goldstone mode, but it remains un-critical throughout the SF to the stripe solid transition, so was integrated out in Eq.REF .", "In short, if the instability tuned by the driving is triggered by the Goldstone mode near $ k=0 $ , then the order parameter is still the boson phase $ \\phi $ , the instability leads to a BSF phase, the QPT from the SF to the BSF has the dynamic exponent $ z=(3/2, 3) $ subject to the Logarithmic corrections from the cubic interaction term $ 2w U \\rho _0 (\\partial _y\\phi )^3 $ .", "If the instability tuned by the driving is triggered by the Roton mode near $ k=k_0 $ , then the order parameter is still the conjugate density $ \\delta \\rho $ , the instability leads to a strip solid phase, the QPT from the SF to the solid phase has the dynamic exponent $ z=(1, 1) $ .", "The cubic term $ -w ( \\delta \\rho )^3 $ drops out in such a transition, the QPT is in the same universality class as the Mott to SF transition under a boost below the critical one ( the path-I in Fig.REF ).", "Substituting Eq.REF into Eq.REF , we get the corresponding order parameter: $\\psi _{SSDW} =\\sqrt{\\rho _0} e^{i ( \\vec{Q} \\cdot \\vec{x} + \\phi ) }[1 + | \\psi _G | \\cos ( k_0 x + \\alpha )/\\rho _0 ]$ which establishes the relation between the physical quantity $ \\psi _{SSDW} $ and the order parameter $ \\psi _G $ in the effective action Eq.REF .", "The translational symmetry $ x \\rightarrow x+ a $ in Eq.REF translates into the $ U(1)_T $ symmetry of $ \\psi _G \\rightarrow \\psi _G e^{ i k_0 a } $ where its phase $ \\theta = k_0 a $ is any continuous real number [122].", "The $ U(1)_T $ symmetry breaking leads to the Goldstone mode which is the phonon mode $ \\alpha $ in Eq.REF due to the translational symmetry breaking to the SSDW phase.", "It breaks the $ U(1)_I \\times U(1)_T $ symmetry leading to the two Goldstone modes $ \\phi , \\alpha $ which are is the superfluid and lattice 
Goldstone mode respectively.", "To get a supersolid which hosts the coexistence of both the superfluid density-wave component and a solid component, both have the same reciprocal lattice vectors, this coexistence was ruled out in the pressure ( P ) driving He4 by a more refined experiment [72].", "This is because the P driving is roton dropping 1st order transition resulting a hcp solid structure (Fig.REF ).", "Here it maybe possible due to its second order transition nature and the stripe lattice structure.", "So it resembles the extended boson Hubbard model [68], [69], [49] where a stripe supersolid was shown to always exist slightly away from $ 1/2 $ filling.", "Here it may be due to the BEC of vacancies in the spontaneously formed stripe solid.", "The vacancies behave similarly as the holes on the top of the Mott state examined in Sec.VIII-B with $ z= 2 $ .", "Due to the lack of C- symmetry, the vacancies usually have lower energies than that of interstitials.", "It is their BEC which leads to the stripe SS.", "A microscopic calculation is needed to test the existence of these vacancies and if they are stable against phase separations.", "If so, it could be called a stripe supersolid.", "Then the boost becomes an effective way to generate a supersolid than the pressure.", "For a more complete survey on Fig.REF d, we refer the readers to the original work [55]." ], [ " Driving a classical object through a superfluid ", "In the last two subsections, we drive the quantum fluids, no classical objects embedded inside it.", "In this section, we study the case of driving a classical object through a continuous fluid sketched in Fig.REF d. It is a different class than that discussed in Sec.", "A and B in this section, because here has both a classical object such as an impurity and the quantum fluid, while there is only quantum fluid in the latter.", "By analyzing how the GT act differently in the two cases, we also stress the crucial differences than the problems addressed in the previous sections When above a critical velocity, these classical objects will all cause viscosities and dissipations.", "The three kinds of classical objects: a point impurity, an underlying optical lattice or a straight wall correspond to different space symmetry: a spherical, translation by a lattice constant or translational symmetry along the wall, so should lead to different critical velocities.", "Unfortunately, we are not able to provide any concrete solutions to such a different class of problems but we outline possible approach to solve such a different class of problems.", "As long as there are relative motions between the quantum fluids and the classical objects, it does not matter if it is the classical object moving or the quantum fluid moving, both need to the same physics.", "So in the following, we assume it is the classical object which is moving.", "1.", "A moving impurity In a continuous system with a translational invariance, $ V_1(x)=0 $ in Eq.REF .", "We look at the case of driving the impurity with a given velocity $ v $ in the Hamiltonian Eq.REF : ${\\cal H}^i_L = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 - \\mu - g_i \\delta ( \\vec{x}- \\vec{R}+ \\vec{v} t ) ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ where $ \\vec{R} $ is the initial position, $ \\vec{v} $ is the velocity of the impurity, $ g_i $ is the scattering potential 
strength.", "As expected, it is a time-dependent driving system in the lab frame.", "In the frame moving together with the impurity [120], one can setting $ \\vec{x}^{\\prime }= \\vec{x}+ \\vec{v} t $ .", "Then one obtain the Hamiltonian in this co-moving frame ( still drop the $ \\prime $ for the notational simplicity ): ${\\cal H}^i_{M} = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 - \\mu - g_i \\delta ( \\vec{x}- \\vec{R} ) -i v \\partial _x ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ which, as expected, becomes time-independent in the moving frame.", "Unfortunately, even so, it becomes very difficult to solve Eq.REF even in such a static frame.", "It belongs to a quantum impurity problem [117] where one need to deal with an impurity scattering on a moving fluid.", "If setting $ g_i =0 $ , the system is a gapless system describing by a CFT, then $ g_i \\ne 0 $ may act as a Boundary condition changing operator in such a boundary CFT.", "The $ g_i \\rightarrow \\infty $ limit may just directly set the Dirichlet boundary condition $ \\psi ( \\vec{x}= \\vec{R} )= 0 $ .", "After solving such a quantum impurity problem in the static frame, one need to transfer the solution back to the lab frame where it becomes a time-dependent again.", "So far, the only available theoretical treatment is Landau's original argument by treating the impurity as a heavy classical particle and the quantum fluids as classical also.", "But such a classical treatment may not be precise to describe such a quantum impurity problem.", "It may break down in the presence of lattice anyway.", "2.", "A moving optical lattice Driving the underlying ionic lattice in a solid is hard to achieve in materials, but may be implemented in cold atom systems.", "It is easy to extend a single impurity located at position $ \\vec{R} $ to a macroscopic lattice located at the ordered array of $ \\vec{R}_i $ , so $ V_1( \\vec{x} )= \\sum _{\\vec{R}} v (\\vec{x}-\\vec{R} ) $ is a single-body attractive trapping potential.", "We look at the case of driving the lattice at a given velocity $ v $ in the Hamiltonian Eq.REF : ${\\cal H}^{OL}_L = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 - \\mu - \\sum _{\\vec{R}} v( \\vec{x}- \\vec{R} + \\vec{v} t ) ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ where $ \\vec{R}_i $ are the initial positions of the ions, $ \\vec{v} $ is the driving velocity of the lattice.", "As expected, it is a time-dependent driving lattice system in the lab frame.", "In the frame moving together with the optical lattice, one can set $ \\vec{x}^{\\prime }= \\vec{x}+ \\vec{v} t $ .", "Then one obtain the Hamiltonian in this co-moving frame ( still drop the $ \\prime $ for the notational simplicity ): ${\\cal H}^{OL}_{M} = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ -\\frac{\\hbar ^2}{ 2 m} \\nabla ^2 - \\mu - \\sum _{\\vec{R}} v( \\vec{x}- \\vec{R} ) -i v \\partial _x ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ which, as expected, becomes time-independent in the moving frame.", "The most dramatic differences between Eq.REF with $ V_1( x )= \\sum _{\\vec{R}} v (\\vec{x}-\\vec{R} ) $ and the current 
driving lattice case is that in the former, one boosts both the quantum and the classical degrees of freedom at the same speed, so the relative distance between the boson and any lattice site, $ \vec{x}^{\prime }- \vec{R}^{\prime }= \vec{x}- \vec{R} $ , is invariant under the GT; hence it is a time-independent problem in any inertial frame, and this relative distance can be set to zero in the tight-binding limit $ \vec{x}^{\prime }- \vec{R}^{\prime }= \vec{x}- \vec{R} \rightarrow 0 $ in any inertial frame.", "Here, by contrast, the $ \vec{R}_i $ are just an array of constants, which are invariant under the GT, but the relative distance between the boson and the lattice site, $ \vec{x}- \vec{R} + \vec{v} t $ , is not invariant under the GT.", "It is time-independent in the co-moving frame, but becomes time-dependent in the lab frame.", "So even though one may be able to take the tight-binding limit in the former case, it breaks down in the latter.", "Due to this fact, it becomes very difficult to solve Eq.REF even in such a static frame.", "Then one needs to transfer the solution back to the lab frame, where it becomes a time-dependent problem again.", "So far, there is no controlled theoretical treatment of such a class of problems.", "3.", "A moving straight hard wall along the x-axis.", "It is easy to extend the macroscopic lattice located at $ \vec{R}_i $ to a continuous wall, so $ V_1( \vec{x} )= \int d\vec{R} v (\vec{x}-\vec{R} ) $ with the repulsive interaction $ v (\vec{x}-\vec{R} ) $ .", "${\cal H}^W_L = \int d^2 x \psi ^{\dagger }( \vec{x} )[ -\frac{\hbar ^2}{ 2 m} \nabla ^2 - \mu - \int d\vec{R} v( \vec{x}- \vec{R}+ \vec{v} t ) ] \psi ( \vec{x} )+ \int d^2 x_1 d^2 x_2 \psi ^{\dagger }( \vec{x}_1 )\psi ( \vec{x}_1 )V_2(x_1-x_2 ) \psi ^{\dagger }( \vec{x}_2 )\psi ( \vec{x}_2 )$ where the continuous $ \vec{R} $ are the initial positions of the wall and $ \vec{v} $ is the velocity of the wall.", "As expected, it is a time-dependent driven system in the lab frame.", "In the frame moving together with the wall, one can set $ \vec{x}^{\prime }= \vec{x}+ \vec{v} t $ .", "Then one obtains the Hamiltonian in this co-moving frame ( still dropping the $ \prime $ for notational simplicity ): ${\cal H}^W_{M} = \int d^2 x \psi ^{\dagger }( \vec{x} )[ -\frac{\hbar ^2}{ 2 m} \nabla ^2 - \mu - \int d\vec{R} v( \vec{x}- \vec{R} ) -i v \partial _x ] \psi ( \vec{x} )+ \int d^2 x_1 d^2 x_2 \psi ^{\dagger }( \vec{x}_1 )\psi ( \vec{x}_1 )V_2(x_1-x_2 ) \psi ^{\dagger }( \vec{x}_2 )\psi ( \vec{x}_2 )$ which, as expected, becomes time-independent in the co-moving frame.", "Now if one takes the hard-wall limit $ v (\vec{x}-\vec{R} ) \rightarrow \infty $ in the co-moving frame, then it just sets the boundary condition at the wall, $ \psi ( \vec{x}= W )=0 $ .", "One needs to solve Eq.REF with such a Dirichlet boundary condition in this static frame.", "Then one needs to transfer the solution back to the lab frame, where it becomes a time-dependent problem again.", "So far, there is no controlled theoretical treatment of such a class of problems either.
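The step shared by all three cases above, namely trading a time-dependent lab-frame Hamiltonian for a static co-moving one at the cost of the extra $ -iv\partial _x $ term, can be checked at the single-particle level. The following is a small symbolic sketch (SymPy), with $ \hbar =1 $, where V is an arbitrary potential profile standing in for the impurity, the lattice or the wall; the two-body interaction is omitted since it depends only on relative coordinates and is untouched by the shift:

import sympy as sp

x, xp, t, v, m = sp.symbols('x xp t v m', real=True)   # xp plays the role of x' = x + v*t
phi = sp.Function('phi')                               # wavefunction in the co-moving frame
V = sp.Function('V')                                   # generic single-particle potential profile

psi = phi(x + v*t, t)                                  # lab-frame wavefunction psi(x,t) = phi(x', t)

# lab-frame Schroedinger equation with the dragged potential V(x + v*t), written as (i d_t - H_L) psi
lab = sp.I*sp.diff(psi, t) - (-sp.Rational(1, 2)/m*sp.diff(psi, x, 2) + V(x + v*t)*psi)

# co-moving-frame equation claimed above: static potential V(x') plus the extra -i*v*d/dx' term
phi_p = phi(xp, t)
comoving = sp.I*sp.diff(phi_p, t) - (-sp.Rational(1, 2)/m*sp.diff(phi_p, xp, 2)
                                     + V(xp)*phi_p - sp.I*v*sp.diff(phi_p, xp))

difference = (lab.subs(x, xp - v*t)).doit() - comoving
print(sp.simplify(difference))                         # -> 0: both frames describe the same dynamics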
], [ " Contrasts to the Lorentz invariance, Unruh effects and $ AdS/CFT $ in relativistic QFT ", "In relativistic QFT, different inertial frames are just related by Lorentz transformations.", "relativistic Doppler shift is just a direct consequence of Lorentz transformations.", "For relativistic QFT, this is just the end of the story, Doppler shift will not lead to no new phases and new QPT.", "For non-relativistic quantum many body systems, Doppler shift is just a direct consequence of Galileo transformation.", "However, as shown in the previous sections, Doppler shift Eq.REF ,REF ,REF ,REF ,REF is far from being the end of the story.", "When the shift goes beyond the intrinsic velocity of the matter, it may trigger QPT and leads to new quantum phases.", "This is another explicit demonstration of P. W. Anderson's view on quantum many body systems \" More is different \" which can be expanded to include emergent space-time, so \"More is richer, more challenging, more interesting.....\" [105]." ], [ " Contrast to Doppler shift in the special relativity ", "In relativistic quantum field theory, different inertial frames are related by Lorentz transformation, so they are completely equivalent.", "Even so, changing to a different inertial frame, the frequency changes as follows [35]: $\\omega ^{\\prime }= \\gamma ( \\omega - \\vec{v} \\cdot \\vec{k} ),~~~\\gamma =1/\\sqrt{1-\\beta ^2},~~~\\beta =v/c_l$ which is the frequency in the boosted frame.", "Because $ \\omega ^2= c^2_l \\vec{k}^2 + m^2 c^4_l $ and $ v < c_l $ , so $\\omega ^{\\prime } $ always remains the same sign as $ \\omega $ .", "For the light $ m=0 $ , then $ k_1 = \\omega /c_l \\cos \\theta , k_2 = \\omega /c_l \\sin \\theta $ in the lab frame, then $\\omega ^{\\prime }= \\omega \\frac{ 1-\\beta \\cos \\theta }{ \\sqrt{1 - \\beta ^2 } }$ When $ \\theta = 0, \\pi $ , it simplifies to $ \\omega ^{\\prime }= \\omega \\sqrt{ \\frac{1- \\beta }{1+\\beta } } $ where $ \\beta =v/c_l $ takes positive ( negative ) when the observer is moving away ( towards ) from the source ( which correspond to the moving frame Fig.REF b and the lab frame Fig.REF a respectively ).", "When $ \\theta =\\pi /2 $ , it reduces to $ \\omega ^{\\prime }= \\frac{ \\omega }{ \\sqrt{1 - \\beta ^2 } } $ .", "This is nothing but well known relativistic Doppler shift [97], [98].", "Positive ( negative ) frequency stays invariant in different inertial frames, no QPT.", "Eq.REF can be contrasted with the non-relativistic Doppler shift term in Eq.REF ,REF ,REF ,REF ,REF and also Eq.REF in the appendix E ( see also [44], [79], [80] ).", "For a massive particle, when taking the non-relativistic limit $ k \\ll m c_l $ ( or $ k^2/2m \\ll m c^2_l $ ) limit, as expected, they become identical with $\\omega ^{\\prime }= \\omega - \\vec{v} \\cdot \\vec{k} $ and $ \\omega =mc^2_l + \\vec{k}^2/2m -k^4/8 m^3c^2_l+ \\cdots $ .", "Note that the rest mass $ m c^2_l $ is still important [105] to keep $\\omega ^{\\prime } $ always positive in taking the non-relativistic limit.", "However, the crucial difference between Eq.REF and our case is that: In the former, despite the frequency of a mode depends on the choice of the inertial frame, the decomposition into positive and negative frequencies is invariant.", "While, in the latter, the positive frequency can turn into a negative one driven by the boost, therefore trigger the QPTs shown in Fig.REF and Fig.REF ." 
], [ " Contrast to the Unruh effects in the general relativity ", "However, the in-equivalence may come from a non-inertial frame such as an uniformly accelerating frame.", "Even an observer at rest in the Minkowski space-time see a true vacuum $ |0\\rangle _M $ with no particles.", "An uniformly accelerating observer would see a quite different vacuum $ |0\\rangle _R $ with no particles in the Rindler space-time $ _R\\langle 0 | n_R(k)|0\\rangle _R=0 $ and with the excitations: $_M\\langle 0 | n_R(k)|0\\rangle _M = \\frac{1}{e ^{2\\pi \\omega /a} -1 }$ where $ \\omega = \\sqrt{c^2_l \\vec{k}^2 + m^2 c^4_l } $ also listed below Eq.REF .", "This is the well known Unruh effects [91], [92].", "The uniformly accelerating observer ( with a constant 4-acceleration $ a $ ) will see a thermal bath of particles with the temperature: $k_B T_U= \\frac{ \\hbar a }{ 2 \\pi c_l}$ Namely a pure state at $ T=0 $ in Minkowski space-time becomes a mixed state at $ k_B T_U $ .", "Unfortunately, the Unruh effect is so small that it is extremely difficult to detect.", "A proper acceleration of $ a= 2.47 \\times 10^{20} m/s^2 $ corresponds to $ T_U \\sim 1 K $ .", "Or conversely, $ a = 1 m/s^2 $ corresponds to $ T_U \\sim 4 \\times 10^{-21} K $ which is beyond any current available cold atom experiment.", "The observer draws a worldline ( trajectory ) $ x^2=t^2+ (1/a)^2 $ in the Minkowski space-time.", "The reference frame where an uniformly accelerating observer is at rest is called Rindler space-time which is related to the Minkowski space-time by $ x= r cosh \\eta , t= r sinh \\eta $ .", "It is confined to the wedge $ x \\ge |t| $ separated by the \" Rindler horizon \" at $ x= \\pm t, x >0 $ from the rest of the space-time.", "The uniformly accelerating observer follows the trajectory with $ r=1/a, \\eta = a \\tau $ where $ \\tau $ is the proper time.", "In relativistic QFT, the Unruh effect tells us that two different sets of observers such as inertial and Rindler will describe the same quantum state in very different ways.", "The Unruh effect is a quantum effect which vanishes as $ \\hbar \\rightarrow 0 $ .", "It is also a relativistic effect which vanishes as $ c_l \\rightarrow \\infty $ ( namely when taking a non-relativistic limit by sending $ c_l \\rightarrow \\infty $ ).", "Here in a non-relativistic quantum field theory on a lattice, we showed that even two different inertial observers will also see the same quantum system in very different ways.", "As shown in Sec.V-B-3, our effects survive not only the $ c_l \\rightarrow \\infty $ , but also the classical limit $ \\hbar \\rightarrow 0 $ , so it may be more robust than the Unruh effect.", "In fact, the lower part $ 0 < c < v_y $ of Fig.REF is quite similar to the relativistic counterpart: $ 0 < c < v_y $ is exactly marginal and causes just a non-relativistic Doppler shift term.", "This is consistent with Eq.REF in the $ \\beta \\rightarrow 0 $ limit.", "However what is \"more\" is the upper part $ c > v_y $ where there is a new quantum phase BSF and two new phase transitions.", "This shows that non-relativistic systems show much richer effects than their relativistic counter-parts, which can be much more easily detected experimentally in condensed matter or cold atom set-ups to be presented in Sec.XI.", "Figure: The hierarchy of the energy scales and emergent space-time starting from the Planck scale ∼10 19 \\sim 10^{19} Gevdown to the Standard Model (SM) scale ∼10 2 \\sim 10^{2} Gev.The question marks in the first box means we do not even know what 
", "Figure: The hierarchy of energy scales and emergent space-time, starting from the Planck scale $ \sim 10^{19} $ Gev down to the Standard Model (SM) scale $ \sim 10^{2} $ Gev. The question marks in the first box mean we do not even know what the global structure of the M theory is. The question marks in the Calabi-Yau (CY) compactifications mean we are far from knowing how to achieve such a dimensional reduction from 9+1 to 3+1 dimensions.", "The big question marks mean there are huge barriers from such a dimensional reduction to the Standard Model and the classical general relativity, plus any possible sharp corrections, denoted by the small question marks, which could face experimental tests. The arrow on the right means that it moves towards the direction of Fig.", "on non-relativistic QFT. The AdS/CFT comes from the low energy limit of open strings on D-branes and of closed strings respectively, but it is still far from a real duality between the SM and GR.", "For a possible and interesting unification of various versions of the SM under the framework of QPT, see a recent work ." ], [ " Emergent curved space-time from the boundary in low dimensional $ AdS_{d+1}/CFT_d $ . ", "High energy physics, such as String (M) theory and quantum gravity, always starts from ( built-in ) high symmetries: quite unusually large groups such as $ SO(32) $ , $ E_8 \times E_8 $ , etc., and also some usual symmetries such as the Lorentz invariance, supersymmetry, conformal invariance, etc.", "One then tries to understand how these symmetries are broken explicitly or spontaneously.", "Its low energy limit naturally leads to the $ AdS_5/CFT_4 $ duality with a $ SO(4,2) $ group structure [97], where $ CFT_4 $ stands for the 4-dimensional $ {\cal N}=4 $ SUSY $ SU(N) $ Yang-Mills gauge theory in the large $ N $ limit.", "The Planck length $ l_p = ( \hbar G/c^3_l )^{1/2}\sim 1.6 \times 10^{-33} cm $ is the natural length scale in such a low energy limit.", "Obviously $ l_p \rightarrow 0 $ by taking $ \hbar \rightarrow 0 $ or $ G \rightarrow 0 $ , and especially in the non-relativistic limit $ c_l \rightarrow \infty $ .", "This fact indicates that quantum gravity disappears not only in the $ \hbar \rightarrow 0 $ limit, but also in the non-relativistic limit $ c_l \rightarrow \infty $ .", "Of course, the $ G \rightarrow 0 $ limit simply means the gravity disappears.", "This may be called the top-down approach in Fig.REF .", "On the other hand, condensed matter physics always starts from real materials which, in most cases, live on a lattice and have much lower exact symmetries: no Lorentz invariance, no supersymmetry, no conformal symmetries, etc.", "One then tries to understand how these symmetries and other bigger symmetries, such as $ SO(8), SO(10), E_8 $ etc., emerge.", "This may be the basic ingredient of P. W.
Anderson's great insight: More is different !", "In fact, as argued in [115], even the curved space-time can be emergent from some condensed matter systems on the boundary.", "This is called Down-top approach in Fig.REF which is complementary to the above Top-down approach in Fig.REF .", "The Sachdev-Ye-Kitaev (SYK) model [94], [95], [96] is a natural product of such a down-up approach.", "The SYK is an infinite range random interacting 4-fermion model, so it is an effective zero-space dimensional model which of course, is neither Lorentz nor Galileo invariant.", "It has an emergent parametrization invariance ( which can also be called 1d conformal invariance ) which is both explicitly and spontaneously broken.", "It naturally leads to the $ NAdS_2/NCFT_1 $ duality where $ NCFT_1 $ stands for the emergent parametrization invariance with a leading irrelevant operator $ \\partial _\\tau $ .", "Obviously, the speed of light $ c_l $ does not even appear in the microscopic SYK model at the first place.", "Now we move to one dimension higher $ AdS_3/CFT_2 $ where the $ CFT_2 $ stands for the 2d CFT with a central charge $ c $ which is related to the $ AdS_3 $ gravity by $ c= 3R/2 G^{(3)}_N $ .", "Some typical $ CFT_2 $ is the 1d Luttinger liquid ( see appendix C-2 ) and 1d chiral edge state of FQH [5].", "This correspondence holds best in the central charge $ c \\rightarrow \\infty $ limit [112].", "Here again in the $ CFT_2 $ , the space-time is related by the intrinsic velocity $ v $ as $ z= x+ivt, \\bar{z}= x-ivt $ , the speed of light $ c_l $ is also irrelevant and plays no role in such a correspondence.", "This is in sharp contrast to the CFT in the worldsheet of the string theory in the second box of Fig.REF where the space-time is related by the light velocity $ c_l $ .", "Just like SUSY is not necessary, the Lorentz invariance is also not necessary in $ AdS_{d+1}/CFT_d $ at low $ d $ .", "Even the bulk space-time structure can be a low energy product emergent from the boundary [115].", "Indeed, as shown by the RT formula [118], the entanglement entropy (EE) in the boundary can be evaluated by the minimum surface in the bulk with a suitable AdS metric [121].", "This fact should be interpreted as the bulk space-time geometry emerge from the boundary." 
], [ " Experimental detections in a moving frame or a moving sample ", "In all the previous sections, we presented our results by assuming the sample is static in the lab frame $ S $ , but observed in the moving frame $ S^{\\prime } $ shown in Fig.REF .", "In practice, the sample is very small, but the detecting device is usually heavy.", "As stressed in the caption of Fig.REF , exchanging the role of the lab and moving frame does not change the results, because both are related by Galileo transformation anyway.", "So, in a practical scattering detection experiment shown in Fig.REF , it is more convenient to set the emitter and the receiver static and set the sample moving with a constant velocity $ \\vec{v} $ .", "Due to the small size of the sample, it is not easy to focus the beam on the sample when it is moving.", "To overcome this difficulty, one may just continuously shine the emitting beam, only when the sample move into its shadow, it will be scattered and collected by the detector.", "When it moves out of the shadow, there is no scattered beam anymore.", "Figure: Light, atom or neutron scattering on a moving sample with a velocity c → \\vec{c} .The multiple irradiating lines from the emitter declinate the irradiation regime where the sample enter and leave, then the scattered beamscan be detected by the receiver.If the intrinsic velocity is too high, then one can perform the experimentin a fast moving train or even a satellite and perform the scattering measurements on the ground.", "This set-upmay also a practical way to probe the emergent space-time structure near a QPT ( see Fig.", ").For $ z=1 $ in a bosonic system, in principle, only when the moving velocity $ c $ reaches the intrinsic velocity of the matter, the full phase diagram Fig.REF can be explored.", "For example, the sound velocity in He4 is about $ v\\sim 238 m/s $ .", "In a conventional lab on the earth, taking a high way ( magnetic levitated ) train moving with a velocity $ 300 km/h \\sim 83 m/s $ is still below this characteristic velocity.", "A civil air-craft flight can reach even higher $ 800 km/h \\sim 240 m/s $ which just reaches the sound velocity in the Helium4.", "The main difference between a cold atom and materials is that the former has a fixed density ( in a canonical ensemble ), the latter has a fixed chemical potential ( in a grand canonical ensemble ) in Fig.REF .", "However, as shown in Fig.REF a, in practice, just moving the sample alone can never reach the $ z=(3/2,3) $ line.", "As shown in Sec.IX, one need to get inside the sample and boost the SF directly to explore the SF to the BSF transition with $ z=(3/2,3) $ .", "Instead of going beyond the intrinsic velocity $ v $ , one can also reduce the intrinsic velocity.", "The cold atom systems with very low $ \\vec{v} $ may become good candidates.", "For example, in a weakly interacting BEC, $ v\\sim 1 cm/s $ is very small.", "So the full phase diagram Fig.REF can be easily explored just by putting the trap in a slowly moving trail or circling around a ring as shown in Fig.REF .", "For $ z=2 $ in a bosonic system, the advantages for experimental detections is that the critical velocities to probe the $ z=2 $ line and the $ (z_x,z_y)=(3/2,3) $ line is much lower than those in Fig.REF .", "1.", "Limitations and applicability of detection techniques on a moving sample When a sample is static, many experimental techniques can be applied.", "For example, various light, atom, X-ray and neutron scattering can be used to detect all the excitation spectrum 
in all the phases in Fig.REF , REF .", "The conventional 2-terminal or 4- terminal transport measurements can be applied to detect the currents in Sec.IV, The compressibility $ \\kappa $ and the specific heat $ C_v $ , the superfluid density $ \\rho _s $ in Eq.REF in Sec.V can be separately measured by various techniques [87], [88], [90], also by In-Situ measurements [89].", "However, when setting the sample moving with a constant velocity $ b $ in Fig.REF , some of the measurements such as transports may become hard to implement.", "Fortunately, all kinds of scattering experiments sketched in Fig.REF remain applicable.", "Because just by measuring the difference of the energy and momentum between the scattering beam and the incidence beam, one can map out the whole excitation spectrum inside the moving sample.", "One simply perform all the measurements in the same reference frame $ S^{\\prime } $ , so for the light scattering, there is no need to consider the Doppler shift of the photon in Eq.REF .", "In the cold atoms, the Bragg spectroscopies and photo-emissions are easily applicable when setting the trap holding the optical lattice in a constant motion.", "Then the single-particle Green functions, the density and density correlation functions investigated in Sec.V, the Goldstone and Higgs modes in Fig.REF and REF can be detected by all kinds of Bragg spectroscopies such as dynamic or elastic, energy or momentum resolved, longitudinal or transverse Bragg spectroscopies [58], [59], [60], [61], [62], [63].", "The compressibility $ \\kappa $ automatically follow.", "Unfortunately, it seems difficult to measure the free energy, therefore the specific heat in the moving sample.", "The currents carried by the BSF Eq.REF may also be hard to measure.", "2.", "Experimentally scan the four paths in Fig.REF by the scattering experiment We arrange the four paths into two groups: the first group ( IIIb, II ) tuning the velocity $ c $ and the second group (I, IIIa) tuning $ t/U $ .", "Along path IIIb, one can set $ t/U $ to be inside the Mott phase $ r > 0 $ in a static sample, then gradually increase the velocity of the sample.", "According to Eq.REF , $ \\tilde{\\mu }=-r +\\frac{ ( c^2- v^2_y )^2 }{4a} $ will tune the sample from the Mott to the BSF phase through the $ z=2 $ QPT.", "Along path II, one can set $ t/U $ to be inside the SF phase $ r < 0 $ in a static sample, then gradually increase the velocity $ c $ of the sample.", "When the velocity $ c $ reaches the intrinsic velocity $ v $ , it will tune the sample from the SF to the BSF phase through the $ z=(3/2, 3 ) $ QPT.", "In the second group, in a static sample, one tunes $ t/U $ to study the Mott to SF transition at $ c=0 $ .", "Then repeat it to scan the path I at $ c < v $ and scan the path IIIa at $ c > v $ .", "There was a report that the Higgs amplitude mode was detected in a three dimensional superfluid of strongly interacting bosons in an optical lattice by Bragg spectroscopy [73].", "The Higgs mode has also been detected [74] near the 2d $ z=1 $ SF-Mott transition with $ ^{87} Rb $ by slightly modulating the lattice depth within a linear response regime in the lab frame.", "But the Higgs mode should disappear in the BSF phase which should be easily confirmed by the Bragg spectroscopies.", "3.", "Driving the superfluid in a continuum The Mott phase in Fig.REF and REF requires the existence of the lattice at the first place.", "But the SF does not require it.", "So we dedicate the Sec.IX to study the SF in a continuum which owns the 
emergent Galileo invariance.", "Going beyond any intrinsic velocity becomes easy when one drives the SF directly.", "In addition to the well known SF in the He4, there are also excitonic SF in the electronic systems in semi-conductors.", "The Bilayer Quantum Hall systems (BLQH) [26] have a charge neutral sector which hosts the exciton SF in the charge neutral sector with the Goldstone mode velocity $ v_{BL}\\sim 1.4 \\times 10^{4} m/s $ .", "How does it change the charge sector described by a topological Chern-Simon term in a moving frame remains interesting to see.", "The ESF in a electron-hole bilayer system (EHBL) with the Goldstone mode velocity $ v_{EH}\\sim 5 \\times 10^{3} m/s $ emits photons [76], [99], [100], [101].", "It is also interesting to investigate how a running ESF emits the light ( see also Appendix E )." ], [ " Conclusions and perspectives ", "In this work, we study GT systematically in many systems.", "We start from the basic single particle Schrodinger equation, then to many body Schrodinger equation in both first and second quantization.", "Then we apply it to many body systems in a periodic lattice which may break the GI explicitly or spontaneously.", "We also investigate He4 which is a continuous system respecting GI.", "As a byproduct, we also push our systematic approach to many-body systems in an external magnetic systems such as FQH and its associated edge states.", "By using the systematic and unified approach, we explicitly show P. W. Anderson's \"More is different\" may need to be supplemented by how to perform a GT near a QPT and could be extended to \"More is richer\" ( Fig.REF ).", "We start from the microscopic model of boson-Hubbard model of interacting bosons at integer fillings in a square lattice which shows Mott to Superfluid (SF) transitions in Fig.REF .", "In the continuum limit, the low energy effective actions have either the emergent pseudo-Lorentz or the emergent Galileo invariance with the dynamic exponent $ z=1 $ and $ z=2 $ respectively.", "Then by the combination of constructing effective actions and performing calculations in the lattice in the moving frame, we obtain the effective phase diagrams Fig.REF and REF and finally the global phase diagram Fig.REF a in the moving frame.", "The phase boundary shifts by the two steps (I) and (II) in Fig.REF a show that the Mott insulating state in the regimes (I)+(II) in Fig.REF a in a lab frame may become a BSF state in the moving frame, but not the other way around.", "So in the regime near the QPT boundary, if it is a Mott insulator or a dissipation-less SF depends on which observer is taking the measurement.", "One may understand this exotic quantum phenomena from physical point of view [82]: along the $ z=2 $ vertical line in Fig.REF in the lab frame, one increases the chemical potential $ \\mu $ to drive the QPT from the Mott state to a SF state.", "Here, as shown in Eq.REF for $ z=1 $ and in Eq.REF for $ z=2 $ , one increases the kinetic energy of the fast moving train which plays the similar role as increasing $ \\mu $ to drive the QPT from the Mott state to the BSF state with $ z=2 $ , so a moving frame makes the gap of a weak Mott state vanishing and drives it into a weak BSF state.", "It will not affect the strong Mott phase deep inside the QPT, but will turn a weak Mott phase into a weak BSF [82].", "It is known that the Entanglement entropy (EE) at $ T=0 $ dramatically increases near a QCP.", "Fig.REF and REF at $ T=0 $ show that the EE at $ T=0 $ will also change in the moving 
frame.", "Indeed, the EE may also be used to describe the emergent space-time structure [115], [118].", "Despite new phases emerge in the moving frame, the $ T=0 $ remains, so pure state remains pure.", "In a sharp contrast, the Unruh effects reviewed in Sec.X-A lead to the construction of thermo-field double (TFD) state.", "The out of time correlation (OTOC) to describe the quantum information scramblings [94], [95], [96] at a finite $ T $ also dramatically increases.", "As shown in Sec.III, despite the Mott to SF transition along path I in Fig.REF is the same universality class of 3D XY with $ z=1 $ , its finite temperature properties does depend on $ c $ .", "Because any 3D CFT has only global conformal symmetry, so no local parametrization invariance in 1d SYK model or 2d CFT[94], [95], [96] are expected, so one must perform specific calculations at a finite $ T $ to study how the Lyapunov exponent $ \\lambda _{L} = f(c) T \\le 2 \\pi T $ depends on $ c $ at the QCP, especially near the critical velocity $ v_y $ ( or the M point ) in Fig.REF .", "The quantum Lifshitz transition from the SF to BSF along the path II in Fig.REF provides a rare example of a Gaussian theory with highly non-trivial dynamic exponent $ z=(3/2, 3 ) $ , the $ [Z]=-2 $ term is still quadratic, so will not lead to any quantum chaos, but the $ [b]=-2 $ term in Eq.REF or $ [w]=0 $ in Eq.REF is non-linear and will lead to quantum chaos, so it remains important to evaluate the OTOC due to this leading irrelevant operator.", "As shown in appendix B, it can also be used to describe the dynamic classical transitions between two sound waves in a medium.", "It is interesting to see if the quantum chaos at a QCP implies also classical chaos in such a dynamic classical phase transition.", "One can also add an Abelian gauge field [124] to Eq.REF , namely, the combination of Eq.REF and Eq.REF .", "One can even add Non-Abelian ( spin-orbit coupling [83], [84], [85], [86] ) gauge field ( For SOC, See appendix C-3 ) Then in going from the first quantization to the second quantization, then projecting to the tight binding limit, the matter fields go to the lattice sites, the Abelian or Non-Abelian gauge fields go to the links.", "Because the single particle Hamiltonian in Eq.REF becomes $ \\hat{h}= -\\frac{1}{ 2 m} [ -i \\hbar \\nabla -\\frac{e}{c} \\vec{A}( \\vec{x} ) ]^2 + V_1( \\vec{x} ) $ ( See Eq.REF ), the Wannier functions in Eq.REF may also depend on the Abelian or Non-Abelian gauge fields.", "It is important to derive the form of the boost on a lattice in the presence of Abelian or Non-Abelian gauge fields [125] in the tight -binding limit.", "Because the SOC is a major factor leading to various topological phases [27], [28], [29], so this form could be useful to study observing various topological phases in a moving frame [32].", "Obviously, the GT is only for the space-time, does not care about if the microscopic degree of freedoms are bosons, fermions or spins.", "The methods developed in this work can be applied to study any other QPTs such as the magnetic phase transitions [109], lattice vibrations ( phonons ) in a solid or topological phase transitions in interacting bosonic or non-interacting/interacting fermionic systems [32], FQH plateau-plateau transitions.", "The charge-vortex duality presented in Sec.VI is formulated in the continuum.", "It can also be formulated in a dual lattice where the vortices are hopping subject to a dual gauge field on the links.", "Then injecting vortex currents in the presence of the 
dual gauges field is interesting on its own right, it belongs to a new class of problems: boosting a lattice gauge theory [103], [104] with both matter on the lattice site and the dynamic gauge fields on the link.", "So it will also bring important insights to quantum phases and transitions with matter and gauge fields in the dual lattice.", "However, putting a gauge field in a lattice is not guaranteed.", "For example, it is still not known how to put topological QFT such as Chern-Simon theory to be discussed in appendix G and H in a lattice.", "In this case, the continuum limit is the only way to proceed, but runs into many technical probelms.", "This work focus on charge neutral SF, so may not be directly applied to the superconductor-insulator transition (SIT) in a thin film where the Coulomb interaction plays an important role [10].", "It leads to a time-component of the gauge field $ A_0 $ in Eq.", "which breaks its emergent Lorentz invariance [108].", "The effects of the gauge fields under a GT are given in Eq.REF Sec.IV-B-3 and also the appendix F-H. Then the time component remains the same $ \\tilde{A}_0= A_0 $ and the spatial components remain vanishing $ \\tilde{A}_x= A_x=0, \\tilde{A}_y= A_y=0 $ .", "As examined in the Appendix B, they may also be used to explore dynamic transitions in classical systems.", "How to boost a quantum spin model is also interesting to pursue.", "Adding SOC to the quantum spin model will also lead to novel physics which will be explored in future publications.", "Acknowledgements We dedicate this work to celebrate the 75th birthday of Moses Chan, a world class experimentalist, especially at low temperature physics and its implication in critical phenomena and also 60th birthday of Subir Sachdev, a world class theorist, especially at quantum/topological phase transitions and its possible implication on quantum black holes.", "In the time order during various stages of completing this work, J. Ye also thank Bo-Zheng Wang, Xinghui Ge, M. Novotny, Li Li, Z. Q. Hu, Y. S. Wu, C. Gu, Z. Q Hu, X. J. Liu, Biao Wu, Guowu Meng, Chong Wang for helpful discussions, also inspiring questions from Ph.D. students Jiahou Yang, Xiao Wang from T.D.", "Lee institute and C. L Chen from West Lake university.", "We thank C. L. Chen for very helpful discussions when writing the Appendix F,G and H [140] and also W. M Liu for long time encouragement.", "Appendix The important roles by gauge invariance has been stressed and explored in both high energy and condensed matter physics.", "The Lorentz invariance is essential ingredient in relativistic QFT.", "Somehow, the roles of Galileo transformation in the many body interacting non-relativistic system in a continuum has been much less explored.", "In fact, there exist also quite confusing and contradicting discussions in the previous literatures on Galileo invariance.", "In the several appendices, we discuss it in a systematic way and also make connections to the new results established in the main text." ], [ " Boosted Hamiltonians for $ z=1 $ and {{formula:30dc505d-5dd9-4092-8186-216157022d31}} . 
", "Here we will first derive the effective Hamiltonians from the effective actions, then we will derive them from the microscopic Boson Hubbard model.", "It is constructive to compare the two complementary approaches.", "1.", "The Boosted effective Hamiltonians of $ z=1 $ and $ z=2 $ In the main text, we only used the boosted Lagrangian approach, here we applied the corresponding Hamiltonian.", "Despite the two approaches are completely equivalent, the Hamiltonian approach may be more intuitive in the lattice approach on the boson Hubbard model Eq.REF mentioned in the conclusion section.", "Similar quantization method is useful to quantize fields in a curved space-time [93].", "(a).", "The boosted Hamiltonian in the $ z=1 $ case From Eq., one can find the conjugate momentum, $\\Pi & = &\\frac{ \\partial {\\cal L}_M}{ \\partial (\\partial _{\\tau } \\psi )}= \\partial _\\tau \\psi ^*-ic\\partial _y\\psi ^* \\nonumber \\\\\\Pi ^{*} & = &\\frac{ \\partial {\\cal L}_M }{ \\partial (\\partial _{\\tau } \\psi ^{*} )}= \\partial _\\tau \\psi -ic\\partial _y\\psi $ After imposing the equal-time commutation relations: [ ( x, t ), ( x, t ) ] = 0 ( x, t ), ( x, t ) = 0 ( x, t ), ( x, t ) = i ( x- x ) where there is also an identical set for $ \\psi ^{*} $ and $ \\Pi ^{*} $ .", "The two sets commute with each other.", "One can find the corresponding Hamiltonian: ${\\cal H}_{M,z=1}& = & - \\Pi \\partial _{\\tau } \\psi - \\Pi ^{*}\\partial _{\\tau } \\psi ^{*} + {\\cal L_M} \\nonumber \\\\& = & - \\partial _{\\tau } \\psi ^{*} \\partial _{\\tau } \\psi - c^2 \\partial _{y} \\psi ^{*} \\partial _{y} \\psi +v_x^2|\\partial _x\\psi |^2+v_y^2|\\partial _y\\psi |^2 + r |\\psi |^2+ u |\\psi |^4+ \\cdots $ Now it is important to express $ \\partial _{\\tau } \\psi ^{*}=\\Pi + ic\\partial _y\\psi ^*,\\partial _{\\tau } \\psi =\\Pi ^{*} + ic\\partial _y\\psi $ in terms of the conjugate momentum and substituting them into the Eq.REF leads to: ${\\cal H}_{M,z=1}=-\\Pi \\Pi ^{*} -ic \\Pi \\partial _y\\psi -ic \\Pi ^{*}\\partial _y\\psi ^* + v_x^2|\\partial _x\\psi |^2+v_y^2|\\partial _y\\psi |^2 + r |\\psi |^2+ u |\\psi |^4+ \\cdots $ which plus the commutation relations Eq.", "gives the complete quantized boosted Hamiltonian for $ z=1 $ .", "Putting $ c=0 $ recovers the $ {\\cal H}_{M} $ in the lab frame.", "It can be shown that it is Hermitian in the real time formalism [36].", "It is interesting to put Eq.REF in a lattice using the Hamiltonian formulation ( namely keeping the imaginary time continuous and putting both conjugate variables in a lattice ) reviewed in [103], [104].", "Eq.REF can also be derived from the lattice model in the appendix A-1 below where the physical meanings of conjugate variables $ \\psi ( \\vec{x}, t ), \\Pi ( \\vec{x}^{\\prime }, t) $ become more transparent in Eq.REF and Eq.REF .", "(b).", "The boosted Hamiltonian in the $ z=2 $ case From Eq., one can find the conjugate momentum, $\\Pi = \\frac{ \\partial {\\cal L_M}}{ \\partial (\\partial _{\\tau } \\psi )}= \\psi ^{\\dagger }$ After imposing the equal commutation relations: [ ( x, t ), ( x, t )] = 0 ( x, t ), ( x, t ) = 0 ( x, t ), ( x, t ) = i ( x- x ) One can find the corresponding Hamiltonian: ${\\cal H}_{M,z=2} & = & - \\Pi \\partial _{\\tau } \\psi + {\\cal L_M} \\nonumber \\\\& = & -ic \\psi ^{\\dagger } \\partial _y\\psi + v_x^2|\\partial _x\\psi |^2+v_y^2|\\partial _y\\psi |^2 - \\mu |\\psi |^2+ u |\\psi |^4+ \\cdots \\nonumber \\\\& = & \\psi ^{\\dagger } [ -ic \\partial _y - v_x^2 \\partial ^2_x - v_y^2\\partial ^2_y - 
\\mu ]\\psi + u (\\psi ^{\\dagger }\\psi )^2+ \\cdots $ which is automatically in terms of the conjugate momentum $ \\psi ^{\\dagger } $ .", "It, together with the commutation relations Eq., gives the complete quantized boosted Hamiltonian for $ z=2 $ .", "Putting $ c=0 $ recovers $ {\\cal H}_{M} $ in the lab frame.", "Eq.REF can be written as ${\\cal H}_{M,z=2}= \\psi ^{\\dagger } [ v_x^2 (-i \\partial _x )^2 + v_y^2 (-i \\partial _y + k_0 )^2- \\tilde{\\mu } ]\\psi + u (\\psi ^{\\dagger }\\psi )^2+ \\cdots $ where $ k_0=- \\frac{c}{ 2 v^2_y }, \\tilde{\\mu }= \\mu + v^2_y k^2_0 $ as listed in Eq.REF .", "Then, by introducing $ \\tilde{\\psi }= \\psi e^{-i k_0 y } $ , which also satisfies the commutation relations Eq., Eq.REF can be re-written in terms of $ \\tilde{\\psi } $ as: ${\\cal H}_{M,z=2}= \\tilde{\\psi }^{\\dagger } [ v_x^2 (-i \\partial _x )^2 + v_y^2 (-i \\partial _y )^2- \\tilde{\\mu } ] \\tilde{\\psi }+ u (\\tilde{\\psi }^{\\dagger } \\tilde{\\psi } )^2+ \\cdots $ which takes exactly the same form as that in the lab frame.", "It is nothing but the Hamiltonian version of the path-integral in Eq..", "Following the procedures in Appendix D-2 and combining the $ z=1 $ and $ z=2 $ cases, one can also derive the Hamiltonian corresponding to Eq.REF .", "2.", "The boosted Hamiltonians of $ z=1 $ and $ z=2 $ from the lattice models.", "The $ z=2 $ Hamiltonian corresponding to the $ z=2 $ Lagrangian Eq.REF is nothing but Eq.REF .", "Furthermore, it provides the physical meaning of the $ p $ operator.", "One can also immediately write down the $ z=1 $ Hamiltonian corresponding to the $ z=1 $ Lagrangian Eq.REF as: ${\\cal H}[ \\Psi , \\Pi ] &=& \\Psi ^{\\dagger } (-ic \\partial _x) \\Pi + \\Pi ^{\\dagger } (-ic \\partial _x) \\Psi \\nonumber \\\\& + & \\Psi ^{\\dagger }(-\\frac{1}{2m} \\nabla ^2 + \\Delta _0 - \\lambda t ) \\Psi + \\Pi ^{\\dagger }(-\\frac{1}{2m} \\nabla ^2+ \\Delta _0 + \\lambda t ) \\Pi + \\cdots $ where $ ( \\Psi ^{\\dagger }, \\Pi ) $ or $ ( \\Pi ^{\\dagger }, \\Psi ) $ are conjugate variables.", "One can see that, by exchanging $ \\Pi \\leftrightarrow \\Pi ^{\\dagger } $ and setting $ \\Pi \\rightarrow \\Pi /\\sqrt{\\Delta _0+ \\lambda t }, \\Psi \\rightarrow \\sqrt{\\Delta _0+ \\lambda t } \\Psi $ to preserve the commutation relations, it is nothing but Eq.REF .", "After this re-scaling, one gets back to Eq.REF .", "Furthermore, it provides the physical meaning of the $ ( \\Psi , \\Pi ) $ operators as written in REF .", "It is then instructive to revert back to the Hamiltonian corresponding to Eq.REF : ${\\cal H}_{p/h}= p^{\\dagger } (-ic \\partial _x + \\Delta _0-\\frac{1}{2m} \\nabla ^2 ) p+ h^{\\dagger } (-ic \\partial _x + \\Delta _0-\\frac{1}{2m} \\nabla ^2 ) h - \\lambda t ( p^{\\dagger } h^{\\dagger } + p h )+ \\cdots $ where $ ( p , p^{\\dagger } ) $ and $ ( h , h^{\\dagger } ) $ are just two pairs of conjugate variables.", "Eq.REF may be more practical than Eq.REF for performing a lattice calculation to study the boost effects." ],
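As a cross-check of the completing-the-square step quoted above, the following minimal sympy sketch (illustrative only, not part of the original derivation; the symbols c, v_x, v_y and mu are the parameters defined in the text, and the overall sign of the boost term depends on the Fourier and imaginary-time conventions) determines the momentum shift $ k_0 $ and the effective chemical potential $ \tilde{\mu } $ that absorb a term linear in $ k_y $ :

```python
# Illustrative check: absorb a boost term linear in k_y into a momentum shift k0
# and a chemical-potential shift mu_t, as in the z=2 boosted Hamiltonian above.
import sympy as sp

kx, ky, c, vx, vy, mu, k0, mu_t = sp.symbols('k_x k_y c v_x v_y mu k_0 mu_t', real=True)
eps_boosted = vx**2*kx**2 + vy**2*ky**2 + c*ky - mu       # sign of c*k_y is convention dependent
eps_shifted = vx**2*kx**2 + vy**2*(ky + k0)**2 - mu_t     # shifted form used in the text
coeffs = sp.Poly(sp.expand(eps_boosted - eps_shifted), ky).coeffs()
print(sp.solve(coeffs, [k0, mu_t], dict=True))
# -> k_0 = c/(2*v_y**2) and mu_t = mu + c**2/(4*v_y**2) = mu + v_y**2*k_0**2,
#    matching |k_0| = c/(2 v_y^2) and mu_t = mu + v_y^2 k_0^2 quoted above,
#    up to the sign convention chosen for the boost term.
```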
", "In this appendix, we show that despite Newton's law is GI, the resulting wave equation is not.", "Then by using the quantum classical correspondence, we apply the approach used in Sec.IV to study the dynamic transition tuned by the boost.", "In classical mechanics, we look at a group of $ N $ particles interacting via a two-body potential $ V( \\vec{x}_i- \\vec{x}_j ) $ .", "Their trajectories are given by the Newton equation: $m_i \\frac{ d \\vec{v}_i }{ d t} = - \\sum _j \\nabla _i V( \\vec{x}_i- \\vec{x}_j )$ Under the GT $ x^{\\prime }= x + v t, t^{\\prime }=t $ from the lab frame to the $ S^\\prime $ frame, $ \\vec{v}_i= \\vec{v}^{\\prime }_i - \\vec{v}, \\nabla _i= \\nabla ^{\\prime }_i,\\vec{x}_i- \\vec{x}_j = \\vec{x}^{\\prime }_i- \\vec{x}^{\\prime }_j $ , then Eq.REF takes the identical form in the $ S^\\prime $ frame.", "In a continuum where $ N \\rightarrow \\infty $ , things can change dramatically.", "Indeed, as advocated by P. W. Anderson, \"more is different\".", "It even applies to classical systems.", "Now we look at a sound wave in a continuous medium: $\\frac{ \\partial ^2 \\psi }{\\partial x^2} - \\frac{1}{v^2_0} \\frac{ \\partial ^2 \\psi }{\\partial t^2} =0$ where $ v_0 $ is the sound wave velocity in the medium.", "In a moving frame with velocity $ v $ , under the GT, the sound wave equation changes to ( following above Eq., we still drop $ \\prime $ in $ \\psi ( \\vec{x}^{\\prime }- \\vec{v} t^{\\prime }, t^{\\prime }) $ ): $(1- \\frac{v^2}{v^2_0} ) \\frac{ \\partial ^2 \\psi }{\\partial x^2} + \\frac{2 v}{v^2_0}\\frac{ \\partial ^2 \\psi }{\\partial x \\partial t } - \\frac{1}{v^2_0} \\frac{ \\partial ^2 \\psi }{\\partial t^2} =0$ which, due to the crossing metric term, is obviously different than Eq.REF .", "No transformation [113] on $ \\psi $ can restore it back to Eq.REF .", "It is also limited to $ v < v_0 $ .", "Its lack of GI is expected.", "This is because the sound wave are compressions or rarefaction in the air, water or other materials.", "So Eq.REF only holds in the frame where the supporting medium is at rest [113].", "Despite the Newton's law describing the motion of a point particle must be Galileo invariant, the sound wave supported by a specific medium does not.", "This fact may also be considered as the classical version of Anderson's insight \" More is different \" on quantum emergent phenomena.", "While the EM, there is no need for such a medium ( which was called ether in the history ), so the EM must be Lorentz invariant.", "Obviously, when $ v > v_0 $ , there is an instability in Eq.REF .", "Surprisingly, there is no previous literature to address what does the instability leads to.", "Fortunately, the solution to this question has been automatically included in Eq.REF .", "In the real time formalism, one can just treat Eq.REF as a classical Lagrangian ( namely, we do not do any path-integral on the field ), then its classical equation of motion precisely leads to Eq.REF with just one trivial extra dimension.", "To find the solution to the instability, one must consider both the high derivative terms and the interaction terms in Eq.REF .", "This has been achieved in Sec.", "IV.", "When changing the imaginary time to the real time, the effective action Eq.REF describing the BSF phase can be used to lead to the following new sound wave equation after $ v > v_0 $ .", "$(1- \\frac{v^2}{3 v^2-2 v^2_0} ) \\frac{ \\partial ^2 \\psi }{\\partial x^2} + \\frac{2 v}{3 v^2-2 v^2_0}\\frac{ \\partial ^2 \\psi }{\\partial x \\partial t } - \\frac{1}{3 
"It just replaces the intrinsic velocity $ v^2_0 $ in Eq.REF by $ 3 v^2-2 v^2_0 > v^2_0 $ in Eq.REF when $ v > v_0 $ .", "The dynamic phase transition from the wave Eq.REF to Eq.REF is precisely described by the dynamic exponent $ z=(3/2,3 ) $ presented in Sec.IV.", "This story provides a novel example of quantum-classical correspondence.", "The possible classical chaos due to the non-linear terms near the dynamic transition remains to be explored.", "In fact, class (1) in Sec.II also has a classical analog: for a moving object, such as a stone moving through water or an airplane moving through the air, there are also two cases, subsonic $ v < v_0 $ and supersonic $ v > v_0 $ , which may be related to, but are still different from, the above two cases in a moving frame." ], [ " The Galileo invariance and its explicit breaking in 1d Luttinger or 2d/3d SOC case ", "In this appendix, we first discuss the GT of the Schrodinger equation in the first quantization, which clarifies a number of confusions in the existing literature and even in textbooks, and then show that the Fermi surface of non-interacting fermions ( in the second quantization ) breaks Lorentz invariance but possesses Galileo invariance.", "Then we show that, in the second quantization language, any Luttinger liquid in 1d or the spin-orbit coupling [106], [125] breaks the Galileo invariance.", "1.", "Galileo transformation on the single particle Schrodinger equation: first quantization The non-relativistic Schrodinger equation ( in its first quantization form ) is $i \\hbar \\frac{ \\partial \\psi }{ \\partial t } = [ - \\frac{ \\hbar ^2}{ 2 m} \\frac{ \\partial ^2}{ \\partial x^2} + V(x, t) ] \\psi $ where the single-body potential is usually time independent in the lab frame, $ V(x,t)=V(x) $ .", "Eq.REF transforms under the Galileo transformation $ x^{\\prime }= x-vt, t^{\\prime }=t $ as: $i \\hbar \\frac{ \\partial \\psi ^{\\prime } }{ \\partial t^{\\prime } } =[ - \\frac{ \\hbar ^2}{ 2 m} \\frac{ \\partial ^2}{ \\partial x^{\\prime 2}} + V^{\\prime }( x^{\\prime },t^{\\prime } ) ] \\psi ^{\\prime }$ where the wavefunction and the potential in the moving frame are related to those in the lab frame by: $\\psi ^{\\prime }( x^{\\prime },t^{\\prime }) =e^{ i ( k_0 x^{\\prime }- E_0 t^{\\prime }/\\hbar )} \\psi ( x,t ),~~~~V^{\\prime }( x^{\\prime },t^{\\prime } )= V( x^{\\prime }+ v t^{\\prime }, t^{\\prime } ) \\ne V( x^{\\prime }, t^{\\prime } )$ where $ \\psi ( x,t) =\\psi ( x^{\\prime } + v t^{\\prime },t^{\\prime } ) $ and $ k_0= -\\frac{ m v}{\\hbar }, E_0= \\frac{ \\hbar ^2 k^2_0 }{ 2 m }= \\frac{1}{2} m v^2 $ .", "Its many-body generalization in a periodic lattice potential was shown in Eq.REF and Eq..", "In general, due to $ V^{\\prime }( x^{\\prime },t^{\\prime } ) \\ne V( x^{\\prime }, t^{\\prime } ) $ , the Schrodinger equation is not Galileo invariant.", "Of course, if $ V=0 $ , it is Galileo invariant.", "This case will be revisited in the next subsection in the second quantization.", "However, any single-body potential $ V(x) \\ne 0 $ breaks the GI, because it is time-independent in the lab frame but becomes time-dependent in the moving frame, so it cannot be GI.", "For example, take the harmonic potential $ V_h(x)= \\frac{1}{2} m \\omega ^2 x^2 $ whose center is fixed at $ x=0 $ , so it does not move under the GT.", "Then the potential becomes time dependent, $ V^{\\prime }( x^{\\prime },t^{\\prime } )= V( x^{\\prime }+ v t^{\\prime }, t^{\\prime } )=\\frac{1}{2} m \\omega ^2 ( x^{\\prime }+ v t^{\\prime } )^2 $ , in the moving frame, and therefore breaks the GI.", 
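The statement above that the $ V=0 $ case is Galileo invariant can be checked explicitly; the following sketch (illustrative only, in units $ \hbar =m=1 $ and for a single plane-wave solution) verifies that the phase factor quoted above maps a free lab-frame solution into a free moving-frame solution:

```python
# Sketch (units hbar = m = 1): the Galileo phase factor maps a free plane-wave
# solution of the lab-frame Schrodinger equation into a solution in the moving
# frame x' = x - v t, using k0 = -v and E0 = v^2/2 as quoted above.
import sympy as sp

xp, tp, p, v = sp.symbols('xp tp p v', real=True)   # xp, tp denote x', t'
k0, E0 = -v, v**2/2
psi_lab = sp.exp(sp.I*(p*(xp + v*tp) - p**2*tp/2))  # lab solution evaluated at x = x' + v t'
psi_mov = sp.exp(sp.I*(k0*xp - E0*tp))*psi_lab      # boosted wavefunction
residual = sp.I*sp.diff(psi_mov, tp) + sp.diff(psi_mov, xp, 2)/2
print(sp.simplify(residual))                        # -> 0: the V = 0 equation is GI
```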
"For a typical optical lattice potential $ V_o(x) = \\cos ^2 \\pi x/a= \\cos ^2 \\pi ( x^{\\prime }+ v t^{\\prime } )/a $ , it becomes time-dependent and is periodic not only in space, $ x^{\\prime } \\rightarrow x^{\\prime } + a $ , but also in time, $ t^{\\prime } \\rightarrow t^{\\prime } + na/v $ , i.e., there is a joint space-time symmetry.", "This is the case examined in Sec.IX-C.", "However, if the center is also moving under the GT, then it is still GI [107].", "This is the case investigated in Sec.VII-VIII.", "It is known that the electro-magnetic (EM) fields satisfy the Maxwell equations, which are intrinsically Lorentz invariant.", "Klein-Gordon (KG) equations for spin-0 bosons $ \\phi $ and Dirac equations for spin-1/2 fermions $ \\psi $ are also Lorentz invariant.", "So Klein-Gordon and Dirac equations with a rest mass $ m $ , coupled to external EM fields in the minimal coupling scheme, are both gauge invariant and Lorentz invariant.", "The intrinsic velocity is the speed of light $ c_l $ .", "Under the Lorentz transformation $ x^{\\prime } = \\Lambda x,~~\\Lambda =e^{ i \\omega _{\\alpha \\beta } L_{\\alpha \\beta } } $ , where $ L_{\\alpha \\beta }= - L_{\\beta \\alpha } $ stands for [97] the three rotations $ J_i,i=1,2,3 $ and the three boosts $ K_i,i=1,2,3 $ , the KG spin-0 bosons and Dirac spin-1/2 fermions give the scalar and spinor representations of the Lorentz group, respectively: $\\phi ^{\\prime }( x^{\\prime } )= \\phi (x),~~~~~~ \\psi ^{\\prime }( x^{\\prime } )= \\Lambda _{1/2} \\psi (x),~~\\Lambda _{1/2}= e^{ i \\omega _{\\mu \\nu } S_{\\mu \\nu } },$ where $ S_{\\mu \\nu }=\\frac{i}{4}[ \\gamma _{\\mu }, \\gamma _{\\nu } ] , \\Lambda ^{-1}_{1/2} \\gamma _{\\mu } \\Lambda _{1/2}= \\Lambda _{1/2} \\gamma _{\\mu } $ and $\\gamma _{\\mu } $ are the 4 Dirac $ \\gamma $ matrices.", "When contrasted with Eq.REF for the non-relativistic case, one can see that the chemical potential is kept at zero, $ \\mu =0 $ , in Eq.REF .", "This is because any $ \\mu \\ne 0 $ breaks Lorentz invariance.", "So the chemical potential shift in Eq.REF is a unique feature of the Galileo boost.", "Indeed, it is this shift which is responsible for the step (II) shift of the QPT boundary from the Mott to the BSF in Fig.REF .", "Taking the non-relativistic limit $ v/c_l \\ll 1 $ , where $ v=k/m $ , the KG or Dirac equation in an EM field reduces to the Schrodinger equation with mass $ m $ in the EM field, which is GI ( see Eq.REF ).", "For the Dirac equation, there is an additional Zeeman field term $ -\\mu _B \\vec{\\sigma } \\cdot \\vec{B} $ , where $ \\mu _B = \\frac{ e \\hbar }{2 m c } $ is the Bohr magneton and $ \\vec{\\sigma } $ are the Pauli matrices.", "This Zeeman term is obviously GI.", "When pushing to higher orders, the Dirac equation in a scalar potential $ A_0 $ reduces to the Schrodinger equation with mass $ m $ in the scalar field ( see Eq.REF ) plus several new terms, such as the $ p^4 $ term, the spin-orbit coupling (SOC) term and the Darwin term, which break the GI.", "Gauge invariance is always kept in such a limit [105].", "Indeed, as shown below, the SOC breaks the Galileo invariance.", "2.", "A moving FS in second quantization: Galileo invariance Using the prescription presented in the main text, we immediately find the FS observed in a frame moving with velocity $ v\\hat{y} $ in the second quantization form: $H_M =\\psi ^{\\dagger }(\\vec{k} ) [ \\frac{ \\hbar ^2 k^2 }{2m} - \\mu - c k_y] \\psi (\\vec{k} ), ~~~\\mu = \\frac{ \\hbar ^2 k^2_{0F} }{2m}$ where $ k_{0F} $ is the Fermi momentum in the lab frame.", "In a grand canonical ensemble, one also needs to 
introduce the chemical potential $ \\mu $ , which also plays the role of the energy $ E $ in the canonical ensemble.", "Eq.REF can be written as: $H_M =\\psi ^{\\dagger }(\\vec{k} ) [ \\frac{ \\hbar ^2 }{2m}[( \\vec{k}-\\vec{k}_0 )^2 - ( k^2_{0F} + k^2_0 ) ] \\psi (\\vec{k} )$ where $ \\vec{k}_0= mc/\\hbar ^2 \\hat{y} $ is the FS center shift due to the boost.", "In the fermion case, performing the same set of transformations as listed in Eq.REF , one can show that Eq.REF is also Galileo invariant, but there is no TPT.", "Of course, the GI is always emergent, because there always exist higher-order derivative terms such as $ k^4, k^6, \\cdots $ which break the GI explicitly.", "In fact, Eq.REF can also be viewed as the kinetic term of the interacting boson case in Eq.REF .", "In the boson case, one focuses on the BEC momentum, so the shift can also be transformed away by introducing the new field $ \\tilde{\\psi } $ and then the new effective chemical potential $ \\tilde{\\mu } $ in Eq.REF .", "So the boson case in Eq.REF also has Galileo invariance.", "However, in contrast to the fermion case, the effective chemical potential $ \\tilde{\\mu } $ can tune the QPT from the Mott to the SF with $ z=2 $ , as shown in Sec.IX.", "Both the boson and fermion cases are also due to the Galileo invariance of the non-relativistic Schrodinger equation at $ V=0 $ presented in the first quantization language in the last subsection.", "3.", "1d Luttinger liquid in second quantization: breaking the Galileo invariance For the 1d case, one can identify the two Fermi points in the moving frame: $k^{+}_F= k_0 + \\sqrt{ k^2_0 + k^2_{0F} } > 0 \\nonumber \\\\k^{-}_F= k_0 - \\sqrt{ k^2_0 + k^2_{0F} } < 0$ which shows that the Fermi momenta are shifted from $ \\pm k_{0F} $ .", "Because the topology of the FS does not change, there is no topological phase transition (TPT) tuned by the boost.", "Now we add the two-body interaction listed in Eq.REF , which is GI.", "Then, in the low-energy limit, it leads to the 1d Luttinger liquid: ${\\cal L}=\\frac{g}{2} \\int dx d \\tau [ \\frac{1}{v} ( \\partial _\\tau \\phi )^2 + v ( \\partial _x \\phi )^2 ]$ where $ g, v $ are the two Luttinger parameters, which may be evaluated by microscopic calculations.", "It is described by a $ c=1 $ CFT displaying an emergent pseudo-Lorentz invariance with the characteristic velocity $ v $ .", "It is the oldest example of a non-Fermi liquid, which shows spin-charge separation in 1d if there is also a spin degree of freedom.", "Its spinon and holon excitations are collective excitations of the whole system, no longer directly related to the original spin-1/2 fermions.", "It is similar to the $ z=1 $ case presented in the main text and also to the classical sound waves discussed in appendix B.", "There is also an emergent space-time shown in Fig.REF .", "Because there is no difference between hard-core bosons and fermions in 1d, 1d Luttinger liquids emerge from 1d quantum spin, fermionic or bosonic Hubbard models on a lattice.", "For example, the 1d Hubbard model away from $ n=1 $ filling in the strong ( or hard-core ) coupling limit, projected to one boson per site, is a Luttinger liquid [9].", "So, just as for the 2d Hubbard model Eq.REF , one can always start from a lattice model.", "How the 1d Luttinger liquid responds to the Galileo boost will be examined further in a future publication.", "4.", "2d/3d Spin-orbit coupling and split FS with opposite helicities in second quantization: breaking the Galileo invariance by an effective Zeeman field.", "One can add a 3d Weyl 
or 2d Rashba like SOC [83] term $ \\lambda \\vec{k} \\cdot \\vec{S} $ to Eq.REF : $H_{M,SOC} =\\psi ^{\\dagger }(\\vec{k} )[ \\frac{ \\hbar ^2 k^2 }{2m} - \\mu - c k_y + \\lambda \\vec{k} \\cdot \\vec{S}] \\psi (\\vec{k} ), ~~~\\mu = \\frac{ \\hbar ^2 k^2_{0F} }{2m}$ In the helicity basis[83], [84], [85], [86], the SOC alone [106] without the boost can also be written as a linear $ k $ term with $ \\pm $ sign ( see Eq.REF ).", "So it is interesting to look at the combined effects of SOC and the boost.", "Eq.REF can be written as $H_{M,SOC} =\\psi ^{\\dagger }(\\vec{k} )[ \\frac{ \\hbar ^2 }{2m}[( \\vec{k}-\\vec{k}_0 )^2 - ( k^2_{0F} + k^2_0 ) ]+ \\lambda \\vec{k} \\cdot \\vec{S} ] \\psi (\\vec{k} )$ where $ \\vec{k}_0= mc/\\hbar ^2 \\hat{x} $ is the FS center shift due to the boost.", "After performing the similar set of transformation as in Eq.REF , Eq.REF becomes: $H_{M,SOC} = \\psi ^{\\dagger }(\\vec{q} )[ \\frac{ \\hbar ^2 }{2m} [ q^2 - ( k^2_{0F} + k^2_0 ) ] + \\lambda \\vec{q} \\cdot \\vec{S} +\\lambda \\vec{k}_0 \\cdot \\vec{S}] \\psi (\\vec{q} )$ where the extra term $ \\lambda \\vec{k}_0 \\cdot \\vec{S} $ breaks the Galileo invariance.", "It plays the role of a Zeeman field which is due to the combination of both the SOC $ \\lambda $ and the boost $ \\vec{k}_0 $ .", "So we conclude the SOC breaks the Galileo invariance.", "It is easy to find its two eigen-energies at 2d: $E(q_x,q_y) = \\frac{ \\hbar ^2 }{2m} [ q^2 - ( k^2_{0F} + k^2_0 ) \\pm 2 k_R \\sqrt{k^2_0 + 2 k_0 q_x +q ^2 } ]$ where $ k_R= m \\lambda $ corresponds the recoil momentum [84].", "Setting $ k_0 =0 $ recovers the SOC in the lab frame $ \\frac{ \\hbar ^2 }{2m}[ q^2 \\pm 2 k_R q- k^2_{0F} ] $ which leads to one large ( and one small) FS with negative ( positive ) helicity and the Fermi momentum $k_{F\\pm }= \\sqrt{ k^2_{0F} + k^2_R } \\pm k_R$ which is due to the SOC, indeed can be contrasted with Eq.REF which is due to the boost.", "Of course, any boost breaks the joint spin-momentum rotation symmetry, so the helicity is not a good quantum number anymore.", "As shown in the last subsection, there is no TPT tuned by the boost in the absence of SOC.", "However, as $ k_0 $ increases in Eq.REF , there should be TPTs tuned by the boost in the presence of SOC which plays the role of the Zeeman field $ \\lambda \\vec{k}_0 \\cdot \\vec{S} $ .", "Similarly, one can show that the conventional $ \\lambda \\vec{S} \\cdot \\vec{L} $ in materials [106] also breaks the Galileo invariance.", "For the dramatic effects of Weyl type SOC in interacting fermionic systems in a continuum, see [83], [84], [85].", "For conventional SOC, see the review [13]." 
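The two eigen-energies quoted above can be checked numerically; the sketch below is illustrative only (it sets $ \hbar =1 $ , takes $ \vec{S} $ as the 2d Pauli matrices, and uses arbitrarily chosen parameter values), diagonalizing the boosted SOC Hamiltonian and comparing with the closed form $ E(q_x,q_y) $ :

```python
# Numerical sketch (hbar = 1, S taken as the Pauli matrices in 2d): diagonalize the
# boosted SOC Hamiltonian and compare with the closed-form E(q_x, q_y) quoted above.
import numpy as np

m, lam, k0, k0F = 1.0, 0.7, 0.4, 1.3          # illustrative parameter values
kR = m*lam                                    # recoil momentum k_R = m*lambda
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])

rng = np.random.default_rng(0)
for qx, qy in rng.uniform(-2, 2, size=(5, 2)):
    q2 = qx**2 + qy**2
    H = (q2 - (k0F**2 + k0**2))/(2*m)*np.eye(2) + lam*((qx + k0)*sx + qy*sy)
    num = np.linalg.eigvalsh(H)
    ana = np.sort([(q2 - (k0F**2 + k0**2) + s*2*kR*np.sqrt(k0**2 + 2*k0*qx + q2))/(2*m)
                   for s in (-1, 1)])
    assert np.allclose(num, ana)
print("closed-form spectrum reproduced")
```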
], [ " The excitation spectrum in the SF phase away from integer fillings ", "In the main text, we focused on the Mott-SF transition at integer fillings.", "If away from an integer filling with only onsite interaction, then the system is always in a SF phase.", "There is no Mott phase, no Mott-SF transitions anymore.", "Because the particle or hole in Eq.REF is defined with respect to the Mott phase only, so away from the integer fillings, there is no need to define $ p $ or $ h $ anymore, one just directly deal with the original bosons $ b_i $ .", "One may understand the SF phase at the weak coupling limit $ U/t \\ll 1 $ just by Bogoliubov theory.", "We expect the calculations may also apply at the joint point between $ n $ and $ n+1 $ lobe in Fig.REF where the boons number per site could be any one between $ n $ and $ n+1 $ , any small hopping also induces a SF.", "Then one can inject three different kinds of currents which stand for three different subgroups of the GT put on the square lattice: Hb1=-itb1i (bibi+y-h.c.),    Hb2=-itb2i (bibi+2y-h.c.) Hb3=-itb3i (bibi+x+y-h.c.) where stands for the current along the $ y $ bonds with $ n=1 $ , or with $ n=2 $ , along the diagonal $ (1,1) $ bond, $t_b$ stands for the strength of the injected currents.", "Note that all these currents ( or any of these linear combinations ) still keeps the translational symmetry of the lattice.", "Of course, one can add infinite number of terms which still keep the translational symmetry of the lattice.", "Eq.", "just gives the three simplest ones.", "From Eq.REF , one can see the boost $ H_{bx}= v( H_{b1}+ H_{b2} ) $ .", "But here we will study their effects separately.", "1.", "The calculations in the original basis In the absence of the injections, the BEC occurs at $k=K=0$ .", "However, the injection drives the BEC momentum to $ K $ which also means that the single particle spectrum $\\epsilon _k$ develops a single minima at $k=K$ with $ \\min _k \\epsilon _k=\\epsilon _{K} $ .", "In the following, it is convenient to define $k=K+q$ .", "Applying the Bogoliubov theory and rewriting $b_{K+q}=\\sqrt{N_0}\\delta _{q,0}+\\psi _{K+q}$ The Hamiltonian $ H= H_0 + H_b $ becomes H=E0+N0[K-+Un0](K+K) +q[(K+q-) K+qK+q +Un0(2K+qK+q +12K+qK-q +12K+qK-q)]+ Setting the linear term vanishing leads to $\\mu =n_0U+\\epsilon _K$ which also fix the total number of bosons.", "The quadratic part can be diagonalized by a Bogoliubov transformation, which gives the collective Goldstone mode: Eq= -(K;q) +[ +(K;q)-K] [+(K;q)-K+2n0U] where $ \\epsilon _{\\pm }(K;q)=\\frac{\\epsilon _{K+q}\\pm \\epsilon _{K-q}}{2} $ .", "The $ \\epsilon _{-}(K;q) $ term outside the square root mimics the Doppler shift term in the continuum ( See Eq.REF ).", "In the following, we apply the formalism to the three injections in Eq.. (a).", "Injecting $ H_{b1} $ .", "When considering $H_{b1}$ in Eq., the single particle spectrum takes the form k =-t0eikx+(-t0-itb1 )eiky+h.c.", "=-2t0(kx+ky)+2tb1 ky which develops a single minimum at $k=K=(0,k_0),\\quad $ with K=-2(t0+t02+tb12),   k0=-(tb1/t0) Note that $t_0\\sin k_0+t_{b1} \\cos k_0=0$ can be used to simplify many expressions.", "One can calculate +(K;q) =-2t0qx-2t02+tb12qy -(K;q) =2(tb1k0+t0k0)qy =0 Thus Eq.", "can be written as Eq=[K+q+2(t0+t02+tb12)][K+q+2(t0+t02+tb12)+2n0U] where the lattice Doppler shift term just vanishes.", "Why is this so will be re-examined below in the subsection 2. 
"In the long-wavelength limit, $q\\ll 1$ ( BZ size ), the Goldstone mode becomes: $E_q=\\sqrt{2n_0U(t_0 q_x^2+\\sqrt{t_0^2+t_{b1}^2}\\, q_y^2)}$ which has no associated Doppler shift term.", "Comparing this equation with Eq.REF , and considering that $ n_0 $ here is the original boson density while $ \\rho _0 $ in Eq.REF is the density of the $ p $ or $ h $ , one may identify $ v_x^2=v_y^2= \\sqrt{t_0^2+t_{b1}^2} $ when boosting along both the x- and y- bonds.", "This is the same as that listed in Eq.REF .", "(b) Injecting $H_{b2}$ The single-particle spectrum takes the form $\\epsilon _k =-t_0e^{ik_x}-t_0e^{ik_y}-it_{b2}e^{i2k_y}+h.c. =-2t_0(\\cos k_x+\\cos k_y)+2t_{b2}\\sin 2k_y$ which develops a single minimum at $k=K=(0,k_0),\\quad $ with (assuming $t_0>0$ ) $\\epsilon _K=-2t_0-\\frac{3t_0+\\sqrt{t_0^2+32t_{b2}^2}}{2}\\sqrt{\\frac{1}{2}+\\frac{t_0}{t_0+\\sqrt{t_0^2+32t_{b2}^2}}},~~~k_0=\\arcsin \\big (\\frac{t_0-\\sqrt{t_0^2+32t_{b2}^2}}{8t_{b2}}\\big )$ Note that $t_0\\sin k_0+2t_{b2}\\cos 2k_0=0$ can be used to simplify many expressions, and $|k_0|\\le \\frac{\\pi }{4}$ holds for any ratio of $t_{b2}/t_0$ .", "One can calculate $\\epsilon _+(K;q) =-2t_0(\\cos q_x+\\cos k_0\\cos q_y)+2t_{b2}\\sin 2k_0\\cos 2q_y,~~~\\epsilon _-(K;q) =t_0\\sin k_0(2\\sin q_y-\\sin 2q_y)$ In the long-wavelength limit, $q\\ll 1$ , Eq. can be simplified as $E_q=\\sqrt{2n_0U[t_0 q_x^2+(t_0\\cos k_0-4t_{b2}\\sin 2k_0)q_y^2+O(q^4)]} +t_0\\sin k_0\\, q_y^3+O(q_y^5)$ where the Doppler shift term does not vanish, but goes as $ q_y^3 $ instead of $ q_y $ as in the continuum ( see Eq.REF ).", "Again, there is no chance to drive a QPT in this special case.", "(c) Injecting $H_{b3}$ .", "The single-particle spectrum takes the form $\\epsilon _k =-t_0e^{ik_x}-t_0e^{ik_y}-it_{b3}e^{i(k_x+k_y)}+h.c. =-2t_0(\\cos k_x+\\cos k_y)+2t_{b3}\\sin (k_x+k_y)$ which develops a single minimum at $k=K=(k_0,k_0),\\quad $ with (assuming $t_0>0$ ) $\\epsilon _K=-\\frac{1}{2\\sqrt{2}t_{b3}} (3t_0+\\sqrt{t_0^2+8t_{b3}^2})\\sqrt{4t_{b3}^2-t_0^2+t_0\\sqrt{t_0^2+8t_{b3}^2}},~~~k_0=\\arcsin \\big (\\frac{t_0-\\sqrt{t_0^2+8t_{b3}^2}}{4t_{b3}}\\big )$ Note that $t_0\\sin k_0+t_{b3}\\cos 2k_0=0$ can be used to simplify many expressions, and $|k_0|\\le \\frac{\\pi }{4}$ holds for any ratio of $t_{b3}/t_0$ .", "One can calculate $\\epsilon _+(K;q) =-2t_0\\cos k_0(\\cos q_x+\\cos q_y)+2t_{b3}\\sin 2k_0\\cos (q_x+q_y),~~~\\epsilon _-(K;q) =2t_0\\sin k_0[\\sin q_x+\\sin q_y-\\sin (q_x+q_y)]$ In the long-wavelength limit $q\\ll 1$ ( BZ size ), Eq. can be simplified as $E_q=\\sqrt{2n_0U[t_0\\cos k_0(q_x^2+q_y^2)-t_{b3}\\sin 2k_0 (q_x+q_y)^2+O(q^4)]} -\\frac{1}{3}t_0\\sin k_0[q_x^3+q_y^3-(q_x+q_y)^3]+O(q^5)$ By using the constraint Eq.REF , one can show that the quantity inside the square root is positive, while that outside vanishes as $ t_{b3} $ when $ t_{b3} \\rightarrow 0 $ .", "One can see that the Doppler shift term does not vanish, but goes as $ q^3 $ instead of linearly as in the continuum ( see Eq.REF ).", "The first term remains linear.", "So the Doppler shift term is always sub-leading to the first term in the long-wavelength limit.", "There is no chance to drive a QPT in this special case either.", "2.", "The calculations in the new basis with $ H_{b1} $ .", "It is also tempting to study the BEC of the bosons in the new basis Eq.REF , where the BEC condensation momentum gets back to 0.", "Rewriting $\\tilde{b}_k=\\sqrt{N_0}\\delta _{k,0}+\\tilde{\\psi }_k$ and applying the Bogoliubov theory leads to: $H=E_0+\\sqrt{N_0}[-2(t_0+\\sqrt{t_0^2+t_{b1}^2})-\\mu +Un_0](\\tilde{\\psi }^{\\dagger }_{0}+\\tilde{\\psi }_{0}) +\\sum _k[(\\tilde{\\epsilon }_k-\\mu ) \\tilde{\\psi }^{\\dagger }_{k}\\tilde{\\psi }_{k} +Un_0(2\\tilde{\\psi }^{\\dagger }_{k}\\tilde{\\psi }_{k} +\\frac{1}{2}\\tilde{\\psi }^{\\dagger }_{k}\\tilde{\\psi }^{\\dagger }_{-k} +\\frac{1}{2}\\tilde{\\psi }_{k}\\tilde{\\psi }_{-k})]+\\cdots $ Setting the linear term to zero leads to $\\mu =n_0U-2(t_0+\\sqrt{t_0^2+t_{b1}^2})$ .", "The quadratic Hamiltonian can be diagonalized by a Bogoliubov transformation: $E_k=\\sqrt{[\\tilde{\\epsilon }_k+2(t_0+\\sqrt{t_0^2+t_{b1}^2})] [\\tilde{\\epsilon }_k+2(t_0+\\sqrt{t_0^2+t_{b1}^2})+2n_0U]}$ which has no Doppler shift term.", "This basis explains the exact vanishing of the Doppler shift term in the $ H_{b1} $ case, but not in the other two cases with $ H_{b2} $ and $ H_{b3} $ .", "Note that the above Bogoliubov calculation of the SF phase automatically breaks the C- symmetry at $ z=1 $ , so no Higgs mode can be found.", 
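The vanishing of the lattice Doppler term for $ H_{b1} $ and the quoted long-wavelength velocities can also be verified numerically; the following sketch (illustrative only, with arbitrarily chosen values of $ t_0 $ , $ t_{b1} $ and $ n_0U $ ) evaluates $ \epsilon _{\pm }(K;q) $ and the Goldstone mode directly from the lattice dispersion:

```python
# Numerical sketch of the H_b1 case: with eps_k = -2 t0 (cos kx + cos ky) + 2 tb1 sin ky
# and K = (0, k0), k0 = -arctan(tb1/t0), the Doppler term eps_-(K;q) vanishes and the
# long-wavelength velocities are sqrt(2 n0 U t0) and sqrt(2 n0 U sqrt(t0^2 + tb1^2)).
import numpy as np

t0, tb1, n0U = 1.0, 0.6, 0.8                       # illustrative values; n0U = n_0 * U
eps = lambda kx, ky: -2*t0*(np.cos(kx) + np.cos(ky)) + 2*tb1*np.sin(ky)
K = np.array([0.0, -np.arctan(tb1/t0)])

def E(q):                                          # Goldstone mode, Eq. above
    ep = (eps(*(K + q)) + eps(*(K - q)))/2 - eps(*K)
    em = (eps(*(K + q)) - eps(*(K - q)))/2
    return em + np.sqrt(ep*(ep + 2*n0U))

rng = np.random.default_rng(1)
for q in rng.uniform(-np.pi, np.pi, size=(4, 2)):  # Doppler term vanishes on the lattice
    assert abs((eps(*(K + q)) - eps(*(K - q)))/2) < 1e-12
q = 1e-4
print(E(np.array([q, 0]))/q, np.sqrt(2*n0U*t0))                       # x velocity
print(E(np.array([0, q]))/q, np.sqrt(2*n0U*np.sqrt(t0**2 + tb1**2)))  # y velocity
```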
symmetry anyway.", "But as mentioned below Eq., it may apply to the $ z=2 $ case where either $ p $ or $ h $ condenses, after considering the difference between the density of bosons $ n_0 $ and that $ \\rho _0 $ of either $ p $ or $ h $ , the interaction between the bosons $ U $ and that of $ u $ between $ p $ or $ h $ .", "This is the main difference between $ b_i $ and the order parameter $ \\psi $ in Eq.", "and Eq.." ], [ " A moving superfluid, Doppler shifts and Galileo transformation ", "In this appendix, we review the topic (2) mentioned in the introduction.", "we perform some microscopic Bogliubov calculations to quadratic order on a moving superfluid to demonstrates the known phenomena of Doppler shifts due to the Galileo transformation.", "They are consistent with the mean field + Gaussian fluctuation analysis on the effective action approach used in the main text.", "However, the main limitation of the microscopic calculations used in this appendix is that it is not able to determine what is the quantum phase beyond the critical velocity, let alone the universality class of the quantum phase transitions driven by the boost.", "One must push this Bogliubov calculations to infinite order to address this questions.", "This can only be achieved by the effective action and RG approach used in Sec.IX-A in the main text.", "1.", "Doppler shift in a moving SF at weak coupling: The Hamiltonian for weakly interacting bosons in a continuum system such as the ESF in the EHBL [76], [99], [100], [101] is $H_{B}= \\sum _{\\vec{k}} ( \\epsilon _{\\vec{k}} - \\mu ) b^{\\dagger }_{ \\vec{k} } b_{ \\vec{k} }+ \\frac{1}{2 A } \\sum _{\\vec{k},\\vec{p},\\vec{q} } V (\\vec{q} )b^{\\dagger }_{\\vec{k} - \\vec{q} } b^{\\dagger }_{\\vec{p} + \\vec{q} }b_{\\vec{p} }b_{\\vec{k} }$ where $ \\epsilon _{\\vec{k}}= \\hbar ^2 k^2/2m $ is the free boson dispersion, $ \\mu $ is the chemical potential, $ A $ is the 2d area, $ V_d (\\vec{q} ) $ is the boson-boson interaction.", "We assume it is weak, so the following Bogliubov method applies.", "Setting the SF moving with a momentum $ \\vec{Q} $ : $\\psi _0 = \\sqrt{N_0} e^{i \\vec{Q} \\cdot \\vec{x} }$ which means a finite superflow $ \\vec{v}= \\vec{Q}/m $ where $ m $ is the mass of an atom.", "Now one can write the boson operator as: $\\psi _{\\vec{Q} + \\vec{k} } = \\sqrt{N_0} \\delta _{ \\vec{k},0} + b_{\\vec{Q} + \\vec{k} }$ where $ b_{\\vec{Q} + \\vec{k} } $ stands for the quantum fluctuations with the momentum $ \\vec{k} $ measured relative to the BEC momentum $ \\vec{Q} $ .", "Substituting Eq.REF into Eq.REF and expanding it to the quadratic order, one can determine the chemical potential $ \\mu $ by eliminating the linear term of $ b_{\\vec{q}} $ in the Hamiltonian $ H_{SF} $ as $\\mu = n_0 V_d(0) + \\frac{1}{2} m v^2$ where $ n_0= N_0/A $ is the condensate density.", "Then one obtain the mean field Hamiltonian $ H_{SF} $ to the quadratic order: $H_{SF}= \\sum _{\\vec{k}} [ \\epsilon _{\\vec{k}} + n_0 V_d ( \\vec{Q}- \\vec{k} ) - \\epsilon _{\\vec{Q}} ] b^{\\dagger }_{ \\vec{k} } b_{ \\vec{k} }+ \\frac{ n_0}{2 } \\sum _{\\vec{k}} [ V_d (\\vec{k} ) b^{\\dagger }_{\\vec{Q} + \\vec{k} }b^{\\dagger }_{\\vec{Q} - \\vec{k} } + h.c. 
]$ which can be diagonalized by the Bogoliubov transformation $\\beta _{\\vec{k}} = u_{\\vec{k}} b_{\\vec{Q} + \\vec{k} } + v_{\\vec{k}} b^{\\dagger }_{\\vec{Q} - \\vec{k} }$ We obtain $ H_{SF} $ in terms of the quasi-particle creation and annihilation operators $ \\beta _{\\vec{k}} $ and $ \\beta ^{\\dagger }_{\\vec{k}} $ : $H_{SF} = E(0) + \\sum _{\\vec{k}} E_v ( \\vec{k} )\\beta ^{\\dagger }_{\\vec{k}} \\beta _{\\vec{k}}$ where $ E(0) $ is the ground state energy and $u^2_{\\vec{k}} = \\frac{ \\epsilon _{\\vec{k}} + n_0 V_d ( \\vec{k} ) }{ 2 E( \\vec{k} ) }+ \\frac{1}{2} \\nonumber \\\\v^2_{\\vec{k}} = \\frac{ \\epsilon _{\\vec{k}} + n_0 V_d ( \\vec{k} ) }{ 2 E( \\vec{k} ) } - \\frac{1}{2} \\nonumber \\\\E_v ( \\vec{k} ) = E( \\vec{k} ) + \\vec{k} \\cdot \\vec{v}~~~$ where $ E( \\vec{k} )= \\sqrt{ \\epsilon ^2_{\\vec{k}} + 2 n_0 V_d ( \\vec{k} ) \\epsilon _{\\vec{k}} } $ .", "One can see that in the moving SF, $ u_{\\vec{k}} $ and $ v_{\\vec{k}} $ are the same as in the lab frame.", "However, the energy spectrum $ E_v ( \\vec{k} ) $ contains a Doppler shift term $ \\vec{k} \\cdot \\vec{v} $ [44], [79], [80].", "In the low energy limit $ \\vec{k} \\rightarrow 0 $ limit, $ E_v ( \\vec{k} ) \\rightarrow u | \\vec{k} | + \\vec{k} \\cdot \\vec{v} $ where $ u= \\hbar \\sqrt{ \\frac{n_0 V_d(0) }{m} } $ which is identical to Eq.REF .", "If one picks up $ \\vec{k} || - \\vec{v} $ , one can identify the critical velocity $ v_c= u $ .", "Unfortunately, as stressed in the first paragraph, one is not able to tell what will happen beyond the critical velocity from this Bogliubov approach.", "One can also obtain normal and anomalous Green function: $G_n( \\vec{Q}; \\vec{k}, \\omega ) & = & i \\frac{ \\omega - \\vec{k} \\cdot \\vec{v} + \\epsilon _{\\vec{k}} + n_0 V_d ( \\vec{k} ) }{ ( \\omega - \\vec{k} \\cdot \\vec{v} )^2 -E^2 ( \\vec{k} ) } \\nonumber \\\\G_a( \\vec{Q}; \\vec{k}, \\omega ) & = & i \\frac{ n_0 V_d ( \\vec{k} ) }{ ( \\omega - \\vec{k} \\cdot \\vec{v} )^2 -E^2 ( \\vec{k} ) }$ where one can identify the excitation spectrum $ \\omega = \\pm E( \\vec{k} ) + \\vec{k} \\cdot \\vec{v} $ which is nothing but the last equation in Eq.REF .", "2.", "Galileo transformation on the SF The Galileo transformation is: $\\vec{k}^{\\prime } & = & \\vec{k} \\nonumber \\\\E^{\\prime } & = & E- \\vec{k} \\cdot \\vec{v} + \\frac{1}{2} m v^2 \\nonumber \\\\\\mu ^{\\prime } & = & \\mu + \\frac{1}{2} m v^2$ where $ \\vec{k}^{\\prime } , E^{\\prime }, \\mu ^{\\prime } $ are the momentum, energy and chemical potential in the moving frame, where $ \\vec{k}, E , \\mu $ are those in the lab frame.", "Note that the momentum does not change, because as listed in Eq.REF , it was measured from the BEC momentum $ \\vec{Q}= m \\vec{v} $ from very beginning.", "In the moving frame, the Green functions take the same form as those in the lab frame: $G_n( \\vec{k}^{\\prime }, \\omega ^{\\prime } ) & = & i \\frac{ \\omega ^{\\prime } + \\epsilon _{\\vec{k}^{\\prime }} + n_0 V_d ( \\vec{k}^{\\prime } ) }{ \\omega ^{\\prime 2 } -E^2 ( \\vec{k}^{\\prime } ) } \\nonumber \\\\G_a( \\vec{k}^{\\prime }, \\omega ^{\\prime } ) & = & i \\frac{ n_0 V_d ( \\vec{k}^{\\prime } ) }{ \\omega ^{\\prime 2 } -E^2 ( \\vec{k}^{\\prime } ) }$ By substituting $\\omega ^{\\prime } = E^{\\prime }-\\mu ^{\\prime }= E-\\mu - \\vec{k} \\cdot \\vec{v} = \\omega - \\vec{k} \\cdot \\vec{v}$ which is just the non-relativistic $ c_l \\rightarrow \\infty $ limit of Eq.REF , one can see Eq.REF recovers Eq.REF .", "It would be interesting to look at how the 
photons are emitted from the moving ESF in the EHBL systems [76], [99], [100], [101], especially across the QPT from the SF to the BSF in Fig.REF at both $ T=0 $ and a finite $ T $ .", "As warned above, the Bogliubov method here may not be applied near any QPTs.", "One may extend the effective action developed in Sec.IX-A in the main text to couple to a photon bath to achieve this goal.", "For a similar approach to the chiral edge state of a FQH phase, see appendix H." ], [ " Galileo invariance of many-body systems in an external gauge field ", "The GT in Eq.REF discussed in Sec.VII in the main text only contains the lattice potential, no external gauge field.", "Here we discuss the role of the external gauge field which shows quite different behaviours than just the lattice scalar potential in Eq.REF .", "Following the strategies used in the last section, we will first present the GI in the first quantization in a canonical ensemble, then analyze its form in the second quantization in either a canonical or a grant canonical ensemble.", "1.", "Galileo transformation on many body Schrodinger equation: the first quantization In a quantum Hall system subject to a uniform magnetic field, assuming electron spins are all polarized due to the large Zeeman splitting, so one only need to consider the orbital effects of the magnetic field for these spinless fermions, the lattice potential $ V_1(x) $ can be ignored in such an effective continuum system, then Eq.REF changes to: $i \\hbar \\frac{ \\partial \\Psi (x_1,x_2,\\cdots x_N,t) }{ \\partial t } =[ \\sum _{i} \\frac{1}{ 2 m} [ -i \\hbar \\frac{\\partial }{ \\partial x_i }- \\frac{e}{c_l} \\vec{A}( x_i) ]^2+ \\sum _i e A_0( x_i) + \\sum _{i<j} V(x_i, x_j ) ] \\Psi (x_1,x_2,\\cdots x_N,t)$ where $ \\vec{B}= \\nabla \\times \\vec{A} = B \\hat{z} $ .", "Because of the possible involvement of the time component $ A_0 $ of the gauge field, one may also need to put it into the Hamiltonian in any case.", "As shown below, even $ A_0 =0 $ at beginning, it may be generated by a GT.", "The crucial difference than Eq.REF is the appearance of the speed of light $ c_l $ in the minimal coupling to $ \\vec{A} $ which need to be considered carefully when performing the GT.", "So Eq.REF and Eq.REF are two different class of quantum MBS in Fig.1.", "Setting $ A_0=0,~~~\\vec{A}=0 $ recovers the Helium 4 case which is charge neutral and clearly Galileo invariance.", "Similarly, $ V( x^{\\prime }_1- x^{\\prime }_2 )= V( x_1- x_2 )= e^2/|x_1-x_2| $ is translational and Galileo invariant [108].", "Note that Eq.REF also applies to the charge neutral interacting bosons in a rotating trap[31] where the artificial gauge field $ \\vec{A}= \\frac{1}{2} \\vec{\\omega } \\times \\vec{r} $ comes from the rotation close to the trapping frequency $ \\omega = \\omega _t $ .", "In this artificial gauge field case, then the speed of light $ c_l $ does not appear either.", "The gauge potential $ \\vec{A}(x) $ usually breaks the translational invariance, however, the magnetic field does not.", "For example, in the Landau gauge $ A_x=0, A_y=Bx $ or $ A_x=-By, A_y=0 $ , the gauge potential breaks translational invariance along a given direction $ x $ or $ y $ .", "In the symmetric gauge $ \\vec{A}= \\frac{1}{2} \\vec{B} \\times \\vec{r} $ , namely $ A_x=-\\frac{1}{2} B y, A_y=\\frac{1}{2}Bx $ , it breaks translational invariance completely, but keeps rotational symmetry around an arbitrarily chosen origin chosen at $ z=0 $ .", "However, in a moving frame, the space-time, the many-body 
wavefunction and the gauge potential ( in the symmetric gauge ) in the lab frame are related to those in the moving frame by $x^{\\prime } = x+vt,~~~t^{\\prime }=t,~~~\\partial _t = \\partial _{t^{\\prime }} + v \\partial _{x^{\\prime }},~~~\\partial _x=\\partial _{x^{\\prime }}$ The many-body wavefunction changes in the same way as in Eq.", "$\\Psi ^{\\prime }( x^{\\prime }_i,t^{\\prime }) = e^{ i ( k_0 \\sum _i x^{\\prime }_i- N E_0 t^{\\prime }/\\hbar )} \\Psi ( x_i,t )$ where $ k_0= -\\frac{ m v}{\\hbar }, E_0= \\frac{ \\hbar ^2 k^2_0 }{ 2 m }= \\frac{1}{2} m v^2 $ .", "Meanwhile, the gauge field changes as $A_0 = 0,~~A_x( y^{\\prime } ) = - \\frac{1}{2} B y^{\\prime },~~A_y( x^{\\prime }, t^{\\prime } )= \\frac{1}{2} B x^{\\prime }-\\frac{1}{2} B v t^{\\prime },~~~~A^{\\prime }_0 = A_0 + \\frac{v}{c_l} A_x= - \\frac{v}{2c_l} B y^{\\prime },~~A^{\\prime }_x = A_x,~~A^{\\prime }_y=A_y$ In contrast, the periodic lattice potential listed below Eq. is GI.", "Then one can check the changes in the electric and magnetic fields ( being careful that $ A^0=-A_0 $ in the Lorentz signature ): $\\vec{E}^{\\prime }= -\\frac{1}{c_l} \\frac{ \\partial \\vec{A}^{\\prime } }{ \\partial t^{\\prime } } - \\nabla ^{\\prime } A^{\\prime }_0 = \\frac{v}{c_l} B \\hat{y} \\ll B,~~~~\\vec{B}^{\\prime }= \\nabla ^{\\prime } \\times \\vec{A}^{\\prime } = B \\hat{z}$ which turns out to be the LT of $ E $ and $ B $ to order $ v/c_l $ , dropping $ (v/c_l)^2 $ and higher orders.", "When taking the non-relativistic limit $ v/c_l \\ll 1 $ , naively one may simply drop this electric field generated by the GT.", "However, the above manipulations demonstrate that one must keep this small electric field to order $ v/c_l $ in order to keep the Galileo invariance.", "This should also be expected, because the speed of light $ c_l $ does appear in the minimal coupling in the many-body Schrodinger equation Eq.REF .", "Of course, it also appears in the magnetic length $ l_0= \\sqrt{ \\frac{\\hbar c_l}{eB} } $ .", "This is dramatically different from charge-neutral systems such as the boson Hubbard model and the ionic lattice model Eq.REF discussed in the main text, where there is simply no EM field, so one can safely take the $ c_l \\rightarrow \\infty $ limit.", "In the moving frame, one can perform the $ U(1) $ gauge transformation $ \\psi \\rightarrow \\psi e^{ -i \\frac{1}{2} Bv t^{\\prime } y^{\\prime } } $ , where $ \\chi = -\\frac{1}{2} Bv t^{\\prime } y^{\\prime } $ , to get rid of the time component $ A_0 $ in Eq..", "Then the gauge field in Eq. becomes: $A^{\\prime }_0 \\rightarrow A^{\\prime }_0 - \\frac{1}{c_l} \\partial _{t^{\\prime }} \\chi =0 ,~~A^{\\prime }_x \\rightarrow A^{\\prime }_x + \\partial _{x^{\\prime }} \\chi = - \\frac{1}{2} B y^{\\prime },~~A^{\\prime }_y \\rightarrow A^{\\prime }_y + \\partial _{y^{\\prime }} \\chi =\\frac{1}{2} B x^{\\prime } -B v t^{\\prime }$ One can check that $ \\vec{E}^{\\prime } $ and $ \\vec{B}^{\\prime } $ stay the same.", "In real time, one needs to consider $ A^0=-A_0 $ to get the sign right.", "Similarly, one can perform the $ U(1) $ gauge transformation $ \\psi \\rightarrow \\psi e^{ i \\frac{1}{2} Bv t^{\\prime } y^{\\prime } } $ , where $ \\chi = \\frac{1}{2} Bv t^{\\prime } y^{\\prime } $ , to recover the spatial components of the gauge field.", "Then the gauge field in Eq. becomes: $A^{\\prime }_0 \\rightarrow A^{\\prime }_0 - \\frac{1}{c_l} \\partial _{t^{\\prime }} \\chi =-\\frac{v}{c_l} B y^{\\prime } ,~~A^{\\prime }_x \\rightarrow A^{\\prime }_x + \\partial _{x^{\\prime }} \\chi = - \\frac{1}{2} B y^{\\prime },~~A^{\\prime }_y \\rightarrow A^{\\prime }_y + \\partial _{y^{\\prime }} \\chi =\\frac{1}{2} B x^{\\prime }$ So, by combining the GT Eq., Eq. and the gauge transformation Eq., one can recover the spatial components $ \\vec{A} $ , but one still generates a time component $ A^{\\prime }_0 =\\frac{v}{c_l} B y^{\\prime } $ which leads to the electric field $ \\vec{E}^{\\prime }= \\frac{ v}{c_l} B \\hat{y}^{\\prime } \\ll B $ .", "One can also see the difference between the GT and a gauge transformation: the former changes $ E $ or $ B $ , while the latter must keep both the same.", "One can repeat the calculations for the two Landau gauges $ A_x=0, A_y=Bx $ or $ A_x=-By, A_y=0 $ .", "The Laughlin ground-state wavefunction in the symmetric gauge indeed breaks translational invariance, but keeps rotational symmetry.", "In the lab frame, it takes the form: $\\Psi (z_1, z_2, \\cdots , z_N,t)=\\Pi _{i < j } ( z_i-z_j)^{m} e^{ - \\sum _i \\frac{ |z_i|^2}{4 l^2_0} }e^{- i E_G t/\\hbar }$ where $ E_{G} $ is the ground state energy, $ m $ is odd ( even ) for fermions ( bosons ) respectively, and $ l_0= \\sqrt{ \\frac{\\hbar c_l}{eB} } $ is the magnetic length.", "In the moving frame, the Laughlin wavefunctions change as Eq. in its $ N $ 
coordinate and Eq.", "under the GT.", "The Jastraw factor keeps the Galileo invariance, but the Gaussian factor does not.", "It was known it is the Jastraw factor which acts as the conformal blocks of the chiral edge theory to be discussed in the appendix H. Similar GT maybe applied to Jain's CF wavefunctions also [129].", "2.", "Galileo transformation on many-body QFT: the second quantization Eq.REF can be written in the second quantization language: ${\\cal H} = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ \\frac{1}{ 2 m} [ -i \\hbar \\frac{\\partial }{ \\partial x }- \\frac{e}{c_l} \\vec{A}( x) ]^2 - e A_0 (x ) - \\mu ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ where the single-body gauge potential $ \\vec{A}( \\vec{x} ), A_0( \\vec{x} ) $ and the two-body interaction $ V_2(x_1-x_2 ) $ are automatically incorporated into the kinetic term and the interaction term respectively.", "By adding the chemical potential $ \\mu $ , we also change the canonical ensemble with a fixed number of particles $ N $ in the first quantization to the grand canonical ensemble in the second quantization.", "Following the step leading from Eq.", "to Eq., one can obtain the effective action in the moving frame with the velocity $ v \\hat{x} $ ( for notational simplicity, we drop the $ \\prime $ in the moving frame ): ${\\cal H} = \\int d^2 x \\psi ^{\\dagger }( \\vec{x} )[ \\frac{1}{ 2 m} [ -i \\hbar \\frac{\\partial }{ \\partial x }- \\frac{e}{c_l} \\vec{A}( x) ]^2-iv \\partial _x - e A_0(x) - \\mu ] \\psi ( \\vec{x} )+ \\int d^2 x_1 d^2 x_2 \\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )V_2(x_1-x_2 ) \\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )$ It is easy to show that Eq.REF takes the same form as Eq.REF after formally making the same GT as in Eq.", "REF $\\tilde{\\psi } = \\psi e^{-ik_0x},~~~~ \\tilde{\\mu }= \\mu + \\frac{m v^2}{2}$ in a grand canonical ensemble.", "While the gauge field ( in the symmetric gauge ) changes as Eq..", "The wavefunction exponential factor in Eq.", "in the first quantization is equivalent to the shift of the momentum and the chemical potential as listed in Eq.REF in the second quantization.", "Indeed, $ - \\mu \\int d^2 x \\psi ^{\\dagger }( \\vec{x} ) \\psi ( \\vec{x} ) = - \\mu N $ where $ N $ is the number of particles in Eq.", "in the canonical ensemble.", "For a canonical ensemble with a fixed number of particles $ N $ which is more convenient for the FQH ( See Eq.REF ) to be discussed in the following appendix: $\\tilde{\\psi } = \\psi e^{-i( k_0 x - E_0 t/\\hbar ) }$ where $ k_0= -\\frac{ m v}{\\hbar }, E_0= \\frac{ \\hbar ^2 k^2_0 }{ 2 m }= \\frac{1}{2} m v^2 $ For small number of $ N $ , this is the end of story.", "Unfortunately, it is still difficult to see what are the effects of such a GT in the present second quantization scheme in the thermodynamic limit $ N \\rightarrow \\infty $ .", "This should not be too surprising, because as advocated by P. W. 
Anderson, \" more is different \", there are many emergent quantum or topological phenomena such as quantum magnetism, superconductivity, superfluidity, fractional Quantum Hall effects etc which can nowhere be seen in such a formal model Eq.REF .", "This kind of formal model looks complete, exact to some level, but not effective to see any emergent phenomena.", "So only when proceeding further and deeper to get an effective model to see the signature of emergent quantum or topological phenomenon, one may start to see the real effects of such a GT on such emergent phenomenon.", "In the present case, the emergent phenomenon is the fractional Quantum Hall effects which can be best seen by introducing the Chern-Simon gauge fields to be discussed in the following appendix." ], [ " Galileo invariance in the Chern-Simon effective action of FQH in the bulk:\ncomments on HLR theory with $ z=2 $ and Son's Dirac theory with {{formula:dbd81406-c3a2-4ab6-8ddc-81b7b0b99c00}} at {{formula:7b8e8313-9368-4129-a3ea-e1700634683a}} . ", "The new quantum phases and novel quantum phase transitions discovered in the main text still fall into Ginsburg-Landau symmetry breaking picture.", "It was known that topological transitions without accompanying symmetry breaking are beyond Ginsburg-Landau scheme.", "The Fractional Quantum Hall systems (FQH) [21], [22], [26] mentioned in the introduction have Galileo invariance.", "It dictates that the mass appearing in the Landau level spacing ( or cyclotron frequency $ \\omega _c= eB/mc_l $ ) is simply the bare mass $ m $ which is not renormalized by any inter-particle interaction $ V(x_i-x_j) $ in Eq.REF [22], [21].", "This result may be considered as the counter-part in a strongly interacting bosonic system: as said in Sec.IX, $ \\rho _s(T=0) = \\rho $ , namely, it is 100 % superfluid at $ T=0 $ despite many atoms are kicked out of the condensate at zero momentum to high momentum states due to the atom-atom interaction $ V(x_i-x_j) $ .", "As alerted in the last appendix, it is impossible to see how the FQH emerge from the microscopic Hamiltonian Eq.REF in its first quantization or Eq.REF in its second quantization.", "One may need to construct effective actions to study this emergent quantum phenomena.", "Both bosonic [21], [26], [53] and fermion CS effective action [130], [22] have been constructed to study the FQHE by performing a singular gauge transformation to attach even or odd number of fluxs to electrons.", "It is important to check if the GI has been kept in such a singular gauge transformation which despite was claimed to be exact formally, due to its singularity, could be very dangerous in practice.", "One may perform a singular gauge transformation by attaching $ \\tilde{\\phi }=2 $ flux quanta to the electron operator $ \\psi $ : $\\psi _{CF} ( \\vec{x} ) = \\psi ( \\vec{x} )e^{ i \\tilde{\\phi } \\int d^2 \\vec{x}^{\\prime } arg( \\vec{x}- \\vec{x}^{\\prime } ) \\rho (\\vec{x}^{\\prime } ) }$ where $ \\rho (\\vec{x} )= \\psi ^{\\dagger }( \\vec{x} ) \\psi ( \\vec{x} )= \\psi ^{\\dagger }_{CF}( \\vec{x} ) \\psi _{CF} ( \\vec{x} )$ is the electron or composite fermion (CF) density.", "This flux attachment to the electron operator $ \\psi $ can be formally achieved by introducing the Chern-Simon (CS) gauge field $ a_{\\mu } $ coupled to the CF.", "The Lagrangian corresponding to Eq.REF in the canonical ensemble becomes [21], [22], [132], [26] ${\\cal L}_{CF}[ \\psi ] & = & \\psi ^{\\dagger }_{CF} ( -i \\partial _t - \\frac{e}{\\hbar } A_0- a_0 ) \\psi 
_{CF}-\\frac{1}{2m} | ( \\partial _i -i \\frac{e}{\\hbar c_l} A_i -i a_i ) \\psi _{CF} |^2+ \\frac{i}{ 4 \\pi \\tilde{\\phi } } \\epsilon _{\\mu \\nu \\lambda }a_{\\mu } \\partial _\\nu a_{\\lambda } \\nonumber \\\\& + & \\int d^2 x_1 d^2 x_2 [\\psi ^{\\dagger }( \\vec{x}_1 )\\psi ( \\vec{x}_1 )- \\bar{n}]V_2(x_1-x_2 ) [\\psi ^{\\dagger }( \\vec{x}_2 )\\psi ( \\vec{x}_2 )- \\bar{n}]$ where $ ( A_0, A_i ) $ is the external magnetic field and $ ( a_0, a_i ) $ are the Chern-Simon gauge field, $ \\bar{n} $ is the average density of the electrons.", "One may also note that the former is a real EM field, so the speed of light $ c_l $ must be kept, while the latter has nothing to do with the light speed $ c_l $ .", "In the kinetic term, it may be convenient to absorbing the external gauge field into the CS field by defining $a_{\\mu } + A_{\\mu }=( a_0 + \\frac{e}{\\hbar } A_0, a_i + \\frac{e}{\\hbar c_l} A_i )$ which is a suitable combination of the dynamic CS field $ a_\\mu $ having no intrinsic velocity with the external gauge field $ A_{\\mu } $ having the speed of light as the intrinsic velocity.", "This right combination make the two different type of gauge fields transform the same under the GT.", "Considering the functional integral measure $ \\int {\\cal D } \\psi _{CF} {\\cal D } \\psi ^{\\dagger }_{CF}= \\int {\\cal D } \\psi {\\cal D } \\psi ^{\\dagger } $ , then it is straightforward to show Eq.REF is GI under the space-time Eq., the CF operator Eq.REF , the external gauge field Eq.", "and the CS gauge field Eq.REF .", "In fact, all the three terms in Eq.REF are separately GI under the GT.", "At the half filling case $ \\nu =1/2 $ , the average value of the CS field $ a_{\\mu } $ and the external gauge field $ A_{\\mu } $ just cancels each other $ \\langle a_{\\mu } + A_{\\mu } \\rangle =0 $ in Eq.REF .", "Eq.REF becomes the starting point of the HLR theory [22].", "Unfortunately, Eq.REF does not respect the particle-hole (PH) symmetry at $ \\nu =1/2 $ expected in the $ m \\rightarrow 0 $ limit [22], [133], [131], [132], [134], [135], [136], [23].", "When an effective action does not get the right symmetry inherited from the microscopic Hamiltonian, it will problems in many physical quantities [132], [133], [23].", "Some ad hoc ways could be developed to fix some of these problems [135], [136], but to fix all these problems at a fundamental level must come up with a theory which respects the fundamental PH symmetry.", "The root of such fundamental problems may just come from the singular gauge transformation Eq.REF .", "It is non-perturbative and formally exact, but mix up all the energy scales of the original Hamiltonian Eq.REF .", "This maybe problematic because the low energy sectors in the transformed Hamiltonian Eq.REF may actually correspond to some high energy levels in the original Hamiltonian Eq.REF , so screw up the PH symmetry at the LLL.", "However, the CS action can be used to re-derive all the global topological properties of the wavefunctions, anyon excitations and the fractional statistics from the second quantization, this is because these topological properties are robust against these local energy level re-distributions ( This is also the underlying mechanism of topological quantum computing which is robust against local perturbations ).", "Unfortunately, it may not be used to calculate correctly all the local properties such as the energy gaps, correlation functions etc.", "This is because these local properties are usually sensitive to these local energy level 
re-distributions, so they may only be accurately calculated in the first quantization using the wavefunctions [129], [137].", "As mentioned in the conclusion, these difficulties maybe directly related to the fact that in contrasts to the Maxwell theory, it is still unknown how to regularize a CS theory on a lattice.", "In this regard, the status of FQH maybe similar to string theory which is a first quantization theory, while the string field theory which is a second quantization theory is just too difficult to formulate so far.", "This fact motivated Son to develop an effective 2-component Dirac theory [134], [135] with a finite chemical potential also coupled to the external gauge field $ A_{\\mu } $ and the CS field $ a_{\\mu } $ , but with no CS term for $ a_{\\mu } $ .", "This Dirac fermion theory has the exact PH symmetry in the zero Dirac mass limit, therefore can be used to address several difficult problems [132] suffered in the original HLR theory [22], [136].", "However, it breaks the GI of the microscopic Hamiltonian of interaction electrons in an external magnetic field Eq.REF .", "As shown here, the HLR theory does have the GI inherited from the microscopic Hamiltonian.", "Therefore, in terms of the PH symmetry, the Son's Dirac fermion theory ( can be called $ z=1 $ ) has a clear advantage over the HLR theory.", "However, in terms of the GI, the HLR theory ( can be called $ z=2 $ ) seems has an edge over the Son's Dirac fermion theory.", "It has been quite difficult to distinguish the two competing theory in any experiments [131], [136].", "On the face value, the lack of GI in Son's Dirac theory may be its deficiency.", "In reality, it may not.", "As concluded in the caption of Fig.REF , the PH symmetry is a must for any sensible low energy theory in the LLL, but the space-time symmetry like the GI may not.", "In fact, Dirac fermion interacting with Coulomb interaction coupled the CS gauge theory has been developed to investigate the QH to QH transition [24], [10].", "However, the Dirac fermions in [24], [10] emerges from a low energy theory of a microscopic lattice theory.", "It is well known a lattice system is a common one leading to Dirac or Weyl fermions.", "Son's Dirac fermion may emerge from a low energy effective theory under the LLL projection.", "If so the LLL projection may give a completely new mechanism leading to Dirac fermion.", "In addition to the Kohn theorem on the inter-Landau level spacing dictated by the GI, the GI does not have any other implications in the LLL physics which has the exact PH symmetry at $ \\nu =1/2 $ .", "So an effective theory at the LLL may not even need to care about the GI anymore.", "Unfortunately, despite some numerical evidences to support the nature of Dirac fermion with the $ \\pi $ Berry phase, such a microscopic derivation leading to a possible Dirac fermion with a finite potential is still lacking.", "In the following appendix H, we provide another example that despite the bulk gapped CS theory has the GI, the low energy excitations along the edge does not." ], [ " Galileo transformation and Emergent space-time in the chiral edge state with $ z=1 $ . 
", "As shown in the last appendix, the bulk FQH phase does not break any symmetry, so it remains Galileo invariance.", "However, as shown in this appendix, its edge mode may behave quite differently under the GT than the bulk.", "It is still unknown if there is a deep mechanism for such an unusual bulk-edge correspondence in terms of GT.", "In the following, we take the real time formalism, so the bulk CS term describing the Abelian FQH state at $ \\nu =1/m $ is ${\\cal L}_{CS} = \\frac{m}{ 4 \\pi } \\epsilon ^{\\mu \\nu \\lambda }a_{\\mu } \\partial _\\nu a_{\\lambda }$ where $ \\nu =1/m $ is the filling factor.", "It is important to observe that the bulk CS term has no intrinsic velocity relating the space $ x $ and the time $ t $ .", "Dimensional analysis $ [ a_0 \\partial _{\\alpha } ]= [ a_{\\alpha } \\partial _{0} ] $ shows that $ [ a_0 ]= [ c a_{\\alpha } ] $ where $ c $ is any velocity.", "It can be shown that the bulk CS term is invariant under the GT with any boost velocity $ c $ Eq.", "and Eq.REF which are the real time version of Eq.REF and Eq.REF adapted to the boost along x-direction: x = x+ c t,    t=t t = t + c x,   x=x The CS field transforms as : $a^{\\prime }_0 = a_0 + c a_x,~~~~ a^{\\prime }_x= a_x,~~~~a^{\\prime }_y= a_y$ That is expected, as shown in the last appendix G, the bulk FQH systems is GI.", "When deriving the chiral edge effective action, we take a similar approach to study a moving SF as done in the appendix E. Namely, assuming the chiral edge state moves with a velocity $ v $ along the x-edge, then we take the co-moving $ S^{\\prime } $ frame moving together with the edge mode, so that $ c=v $ which is the intrinsic velocity of the edge mode.", "In this co-moving frame, we also choose the temporal gauge $a^{\\prime }_0 = 0$ which vanishes in the co- moving frame only.", "As shown in [5], [127], this gauge imposes the constraint $ f^{\\prime }_{ij}= 0 $ which can be solved by $ a^{\\prime }_i= \\partial ^{\\prime }_i \\phi $ .", "Substituting the solution into Eq.REF leads to the chiral edge action in the co-moving frame: $S_{co-}= \\frac{m}{ 4 \\pi } \\int dx^{\\prime } dt^{\\prime } \\partial ^{\\prime }_t \\phi \\partial ^{\\prime }_x \\phi $ Now one need to get back to the lab frame by performing the GT Eq.", "and Eq.REF .", "Then Eq.REF becomes $a^{\\prime }_0 = a_0 + v a_x=0,~~~~ a^{\\prime }_x= a_x,~~~~a^{\\prime }_y= a_y$ Then Eq.REF transfers back to that in the lab frame; $S_L= \\frac{m}{ 4 \\pi } \\int dx dt ( \\partial _t - v \\partial _x ) \\phi \\partial _x \\phi $ which is the chiral edge state of the Abelian QH state at the filling $ \\nu =1/m $ along the edge along X- direction.", "Then one only need to change the sign of $ v $ of the other edge.", "Eq.REF was first derived by Wen [5], [127].", "What is new here is we derive it by GT which relates Eq.REF in the co-moving frame to Eq.REF in the lab frame.", "Our derivation establishes a deep connection between the GT and the chiral edge state and also brings additional insights on the connections between the bulk theory Eq.REF which is GI to the chiral edge action Eq.REF which is not GI.", "It can be pushed further to get some new results.", "For example, one can perform the GT Eq.", "with any boost velocity $ c $ along the x-direction on the effective action Eq.REF again to reach the action in $ S^{\\prime } $ frame ( still drop the $ \\prime $ for simplicity ): $S_b= \\frac{m}{ 4 \\pi } \\int dx dt ( \\partial _t + c \\partial _x - v \\partial _x ) \\phi \\partial _x \\phi $ Setting $ c=v 
$ recovers that in the co-moving frame Eq.REF .", "When $ c < v $ , the edge moves along the same direction as in the lab frame.", "When $ c =v $ , it becomes zero in the co-moving frame.", "When $ c > v $ , it reverses the direction.", "So the chiral edge action does depend on which frame it is observed.", "The more tricky thing to study is when boosting along the normal direction to the x-edge [140].", "Then one may need to study from the bulk Chern-Simon action Eq.REF and re-derive the effective action under such a transverse boost.", "Drawing the insights to study the edge states of a Quantum Anomalous Hall under a transverse boost [32], we expect the chiral edge velocity becomes $ \\sqrt{v^2 -c^2 } $ which indicates there is a chiral edge state when $ c < v $ , but not anymore when $ c > v $ .", "It is also interesting to extend the chiral edge theory of the Abelian FQH to non-Abelian such as the Read-Moore state or Read-Rezayi state which have multi-channels[140].", "Gravitational anomaly [29] has been studied in the bulk, but not in the edge yet.", "Here, we only study the flat bulk and flat boundary, so may not need to worry about the Gravitational anomaly yet.", "In the curved space, it could be a different class of problems [140].", "Obviously, the gauge invariance of the bulk plus the edge and the associated anomaly cancellation is a must for such an edge theory, but not the GI.", "As hinted in the appendix B, the lack of GI of the chiral edge theory should be expected, because the chiral edge theory describes the electronic wave propagation along the edge, just like the sound waves in Eq.REF , it does break the GI.", "A recent work [138] pointed out a striking result: the fact that Wen's chiral edge theory is not invariant under the GT along the straight edge is an intrinsic deficiency of theory which need to be fixed in a refined theory.", "Our results show that there is probably no deficiencies in the chiral edge theory [126], [5], [128], in contrast to the claim made in [138].", "The lack of GI of Wen's edge theory is a salient feature instead of a deficiency.", "Another important example is that it was known that the Coulomb interaction is responsible for the fact that QPT from one FQH phase to another FQH phase has the dynamic exponent $ z=1 $ which seems also in-compatible with the Galileo invariance.", "The $ z=1 $ may imply that both the quasi-particle and quasi-hole excitation gaps vanish at the QCP, but may have different gaps away from it.", "Indeed, the bare mass $ m $ can be sent to 0 in the chiral edge states and near the $ z=1 $ bulk QCP.", "It hints new emergent space-time structure [115] near the TPT in the bulk and also in the chiral edge theory which may be interpreted as a TPT in the real space." ] ]
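The frame dependence worked out above is easy to check symbolically. The sketch below is our own illustration (not part of the paper): it uses sympy to verify that under the boost $x' = x + vt$, $t' = t$ the co-moving density $\partial'_t\phi\,\partial'_x\phi$ becomes the lab-frame density $(\partial_t - v\partial_x)\phi\,\partial_x\phi$ appearing in $S_L$.

```python
# Symbolic check (illustrative sketch, not from the paper) that the Galilean
# boost x' = x + v t, t' = t maps the co-moving edge Lagrangian density
# (d_t' phi)(d_x' phi) into the lab-frame density (d_t - v d_x) phi  d_x phi.
import sympy as sp

x, t, xp, tp, v = sp.symbols("x t xp tp v", real=True)
phi = sp.Function("phi")

# Field in the co-moving (primed) frame, written via the inverse boost
# x = x' - v t', t = t'.
phi_p = phi(xp - v * tp, tp)

# Co-moving Lagrangian density: (d_t' phi)(d_x' phi)
L_co = sp.diff(phi_p, tp) * sp.diff(phi_p, xp)

# Express it in lab coordinates using x' = x + v t, t' = t.
L_lab = L_co.subs({xp: x + v * t, tp: t}).doit()

# Expected lab-frame density: (d_t phi - v d_x phi)(d_x phi)
expected = (sp.diff(phi(x, t), t) - v * sp.diff(phi(x, t), x)) * sp.diff(phi(x, t), x)

print(sp.simplify(sp.expand(L_lab - expected)))  # prints 0
```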
2207.10475
[ [ "Inferring eccentricity evolution from observations of coalescing binary\n black holes" ], [ "Abstract The origin and formation of stellar-mass binary black holes remains an open question that can be addressed by precise measurements of the binary and orbital parameters from their gravitational-wave signal.", "Such binaries are expected to circularize due to the emission of gravitational waves as they approach merger.", "However, depending on their formation channel, some binaries could have a non-negligible eccentricity when entering the frequency band of current gravitational-wave detectors.", "In order to measure eccentricity in an observed gravitational-wave signal, accurate waveform models that describe binaries in eccentric orbits are necessary.", "In this work we demonstrate the efficacy of the improved TEOBResumS waveform model for eccentric coalescing binaries with aligned spins.", "We first validate the model against mock signals of aligned-spin binary black hole mergers and quantify the impact of eccentricity on the estimation of other intrinsic binary parameters.", "We then perform a fully Bayesian reanalysis of GW150914 with the eccentric waveform model.", "We find (i) that the model is reliable for aligned-spin binary black holes and (ii) that GW150914 is consistent with a non-eccentric merger although we cannot rule out small values of initial eccentricity at a reference frequency of $20$ Hz.", "Finally, we present a systematic method to measure the eccentricity and its evolution directly from the gravitational-wave posterior samples.", "Such an estimator is useful when comparing results from different analyses as the definition of eccentricity may differ between models.", "Our scheme can be applied even in the case of small eccentricities and can be adopted straightforwardly in post-processing to allow for direct comparison between models." 
], [ "Introduction", "Compact binary black holes (BBHs) emit gravitational waves (GWs) during the last stages of their coalescence.", "During this process the system loses energy and angular momentum, causing the orbit to both shrink and progressively circularize [1].", "This motivates the analysis of gravitational-wave signals with theoretical templates that are generated by waveform models using the quasi-circular approximation.", "However, recent studies highlight how accurate measurements of eccentricity can provide vital astrophysical information that could, for example, help discriminate between different binary formation channels [2], [3], [4], [5], [6], [7].", "Consequently, there has been a growing interest in analyzing the GW events detected by LIGO and Virgo with inspiral-merger-ringdown (IMR) waveform models that include eccentricity  [8], [9], [10], [11].", "For example, the GW transient GW190521 [12] has recently been analyzed under the hypothesis that it originated from a hyperbolic capture that resulted in a highly eccentric merger [13].", "One of the most promising approaches towards modelling the full GW signal emitted by compact binaries on arbitrarily eccentric orbits is the effective-one-body framework (EOB) [14], [15], [16], [17].", "Early attempts at incorporating eccentricity within the EOB framework were presented in [18], [19], [20] but have seen numerous improvements over recent years [21], [22], [23], [24], [25], [26], [27], [28], [29].", "In addition to EOB, there have also been numerous developments using alternative approaches towards modelling the complete IMR signal from eccentric binaries, including Numerical Relativity (NR) surrogates [30], [31] and hybrid models that blend post-Newtonian (PN) evolutions with NR simulations [32], [33], [34], [35].", "A key limitation of these approaches, however, is that they are often constrained by the availability of accurate numerical relativity simulations that span the full parameter space and – in the case of surrogates – by the length of the simulations themselves, which often do not cover the early inspiral of the system.", "Conversely, models based on analytical PN and scattering calculations [36], [37], [38], [39], [40], [41] can deliver representations of signals from long lasting inspirals, but they lack a description of the strong-field merger and are only valid for moderate eccentricities.", "We will be particularly interested in the TEOBResumS model [42], [43], [44] and the extension to eccentricity [21], [22], [45] that is built on the idea of dressing the circular azimuthal component of radiation reaction with the leading-order (Newtonian) non-circular correction [21].", "This approach has been subsequently extended to each multipole in the waveform and was further improved by incorporating higher order post-Newtonian information in an appropriately factorized and resummed form [23].", "In particular, [23] extended the noncircular waveform up to 2PN using results that partially build on [46].", "Whilst several proposals exist for incorporating radiation reaction, a detailed survey of these schemes was conducted in [25] concluding that the Newtonian factorization complete with 2PN corrections demonstrated the best agreement with results in the test-mass limit.", "This paradigm was further extended in [26].", "In this work we focus on TEOBResumS and study the performance of its circular and eccentric versions (TEOBResumS-GIOTTO and TEOBResumS-DALI, respectively) when applied to GW parameter estimation.", 
"We do so with the aim of validating the model and gauging possible biases due to eccentricity (or lack thereof).", "We dedicate special attention to the study of the quasi circular limit of TEOBResumS-DALI, and investigate how its structural differences with respect to TEOBResumS-GIOTTO – quantified in terms of mismatches against numerical relativity waveforms – reflect on GW data analysis of synthetic signals and GW150914.", "We then introduce a method to estimate the eccentricity directly from binary black holes observations and determine its evolution as a function of frequency.", "This procedure is efficient and suitable to be applied to any eccentric waveform model in post-processing.", "Furthermore it is advantageous for comparing different eccentric analysis of GWs events.", "The paper is organized as follows: In Sec.", "we summarize the main elements of the EOB waveform model used here.", "In Sec.", "we present a brief review of the elements of Bayesian inference needed for our analysis.", "Section  is devoted to the validation of the waveform model via specific injection and recovery analyses.", "The model is then used to analyze GW150914 data in Sec.", "and Sec.", "is dedicated to present our method to estimate the eccentricity evolution of a coalescing BBHs system in post-processing.", "Concluding remarks are reported in Sec. .", "Throughout we use $G=c=1$ unless stated otherwise." ], [ "Quasi-circular and eccentric waveform model: ", "All analyses presented in this paper are performed with TEOBResumS, either in its native quasi-circular version, TEOBResumS-GIOTTO [44], or in its eccentric avatar, TEOBResumS-DALI [22].", "In this section we describe in some detail the features of the two models, highlighting their structural differences and quantifying their (dis)-agreement as measured by the unfaithfulness." ], [ "Quasi-circular model: ", "TEOBResumS-GIOTTO is a semi-analytical state-of-the-art EOB model for spinning coalescing compact binaries [47], [48], [42], [49], [43], [44].", "The conservative sector of the model includes analytical Post-Newtonian (PN) information, resummed via Padé approximants.", "Spin-orbit effects are included in the EOB Hamiltonian via two gyro-gravitomagnetic terms [47], while even-in-spin effects are accounted for through the centrifugal radius [47].", "Numerical Relativity (NR) data is used to inform the model through an effective 5PN orbital parameter, $a_6^c$ , and a next-to-next-to-next-to leading order (NNNLO) spin-orbit parameter, $c_3$ [42].", "In the dissipative sector, waveform multipoles up to $\\ell =8$ are factorized and resummed according to the prescription of [43].", "Next-to-quasicircular (NQC) corrections ensure a robust transition from plunge to merger, and a phenomenological NR-informed ringdown model completes the model for multipoles up to $\\ell \\le 5$ .", "Although we focus here on BBH systems, we note that TEOBResumS-GIOTTO can also generate waveforms for binary neutron star coalescences, see [42] and references therein.", "Waveforms built employing only the dominant multipole $\\ell =|m|=2$ have been tested against the entire catalog of spin-aligned waveforms from the Simulating-eXtreme-Spacetimes (SXS) collaboration [50], and were shown to be consistently more than $99\\%$ faithful to NR [44].", "When higher modes are included, the EOB/NR unfaithfulness always lies below the $3\\%$ threshold for systems with total masses below $120 M_{\\odot }$  [43]." 
], [ "Eccentric model: ", "The eccentric generalization of $\\tt TEOBResumS$ , TEOBResumS-DALI [21], [22], builds on the features of the quasi-circular model detailed above but differs in few key aspects.", "First, the quasi-circular Newtonian prefactor that enters the factorized waveform multipoles is replaced by a general expression obtained by computing the time-derivatives of the Newtonian mass and current multipoles, as described in [22].", "The same approach is implemented for the azimuthal radiation reaction force.", "Second, for eccentric binaries, the radial radiation reaction force $\\mathcal {F}_r$ that contributes to the time evolution of the radial EOB momentum can no longer be neglected [21].", "Third, the initial conditions must be specified in a different manner with respect to the quasi-circular case: instead of employing the post-adiabatic procedure of [51], TEOBResumS-DALI computes adiabatic initial conditions and always start the evolution of the system at the apastron, see Appendix  for further details.", "These conservative eccentric initial conditions, however, do not reduce to the quasi-circular initial conditions in the limit of small eccentricity.", "To partially correct for this issue, the quasi-circular initial conditions are manually imposed for $e_0 < 10^{-3}$ .", "Finally, the values of $a_6$ and $c_3$ were modified in order to ensure that the model remains faithful to the quasi-circular limit [22]." ], [ "Quasi-circular limit of ", "All of the modifications above allow TEOBResumS-DALI to provide waveforms and dynamics that are faithful to mildly eccentric SXS simulations [21], [22], scattering angle calculations [22] and highly eccentric test-mass waveforms [52].", "At the same time, however, because of these structural differences, the quasi-circular limit of the eccentric model TEOBResumS-DALI does not exactly reduce to the TEOBResumS-GIOTTO model.", "In order to quantify the agreement of TEOBResumS-DALI with NR simulations and TEOBResumS-GIOTTO, respectively, we calculate the unfaithfulness $\\bar{F} = 1-F= 1 - \\underset{t_0, \\phi _0}{\\rm max} \\frac{\\langle h_1 | h_2 \\rangle }{\\sqrt{\\langle h_1 |h_1 \\rangle \\langle h_2 |h_2 \\rangle }},$ where $(t_0, \\phi _0)$ are the initial time and phase of coalescence, and $\\langle h_1 |h_2 \\rangle $ is the noise weighted inner product between two waveforms $\\langle h_1 |h_2 \\rangle = 4 \\Re \\displaystyle \\int _{f_{\\rm min}}^{f_{\\rm max}} \\frac{ \\tilde{h}_1 (f) \\tilde{h}^*_2 (f)}{S_n (f)} df,$ where $S_n (f)$ denotes the power spectral density (PSD) of the detector strain noise and $\\tilde{h}_1 (f)$ and $\\tilde{h}_2$ are the Fourier transforms of the time domain functions $h_1$ and $h_2$ .", "Figure: EOB/NR unfaithfulness using TEOBResumS-DALI with e 0 inj =10 -8 e_0^{\\rm inj} = 10^{-8}over the non-precessing and non-eccentric SXS catalog.", "See text for more details.In Fig.", "REF we show the unfaithfulness of TEOBResumS-DALI against almost allWe exclude the following simulations due to large numerical errors: SXS:BBH:0002, SXS:BBH:0649, SXS:BBH:1110, SXS:BBH:1141, SXS:BBH:1142, SXS:BBH:1145. 
non-eccentric, spin-aligned NR simulations in the SXS catalogue [53] using the designed power spectral density (PSD) of Advanced LIGO [54].", "This figure complements, with many more simulations, Fig.", "3 of [22].", "Let us remind the reader that the corresponding plot for TEOBResumS-GIOTTO is centered around $10^{-3}$ with $\\max (\\bar{F}_{\\rm EOBNR})\\le 9\\times 10^{-3}$ with only a few outliers above $3\\times 10^{-3}$ (see Fig.", "4 of [44]).", "We thus see here that TEOBResumS-DALI and TEOBResumS-GIOTTO are two EOB models, similarly informed by NR simulations, that perform differently with respect to quasi-circular NR simulations, though both are clearly below the usual threshold of $3\\%$ unfaithfulness.", "It is therefore interesting to understand how this difference translates in terms of biases on parameters.", "This will be discussed in Sec. .", "Our understanding of the properties of detected GWs is predicated on the ability to infer the parameters of the sources emitting them.", "This process is carried out within the framework of Bayesian inference and relies on Bayes' theorem [55] $p(\\theta |\\textbf {d},H)=\\frac{p(d|\\theta , H) \\, p(\\theta |H)}{p(\\textbf {d}|H)},$ where $p(\\theta |\\textbf {d},H)$ is the posterior probability of a set of parameters $\\theta $ given the data $\\textbf {d}$ assuming a specific model $H$ , $p(\\theta |H)$ is the prior, $p(\\textbf {d}|\\theta ,H)$ is the likelihood and $p(\\textbf {d}|H)$ is the evidence or marginalized likelihood.", "The evidence can be expressed as: $Z= p(\\textbf {d}|H) = \\int \\, p(d|\\theta ,H) \\, p(\\theta |H) \\textbf {d}\\theta ,$ where the integral extends over the entire parameters space.", "The evidence assumes the role of an overall normalization constant but plays an important role in Bayesian model selection.", "Given two competing hypotheses $H_A$ and $H_B$ , the Bayes' factor is defined as the ratio of evidences $\\mathcal {B}_A^B= \\frac{p(\\textbf {d}|H_B)}{p(\\textbf {d}|H_A)} \\, ,$ where the hypothesis $H_B$ is favoured by the data over $H_A$ if $\\mathcal {B}_A^B > 1$ .", "The expectation value of a parameter $\\theta _i \\in \\theta $ can be estimated through the likelihood as $E[\\theta _i] = \\int \\theta _i \\, p(\\theta _i|\\textbf {d}, H) d\\theta _i,$ where $p(\\theta _i|\\textbf {d}, H)$ is the marginalized posterior distribution for the parameter $\\theta _i$ ." 
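For concreteness, the unfaithfulness defined above can be evaluated with a short numpy sketch. This is our own illustration (not the code used in the paper); it assumes two one-sided frequency-domain waveforms sampled on a common uniform grid with spacing df and a strictly positive PSD restricted to the analysis band, and it uses the standard inverse-FFT trick to maximize over the relative time shift, with the modulus handling the phase shift.

```python
import numpy as np

def inner(a, b, psd, df):
    """Noise-weighted inner product <a|b> for one-sided frequency series."""
    return 4.0 * df * np.real(np.sum(a * np.conj(b) / psd))

def unfaithfulness(h1, h2, psd, df):
    """1 - F, with F maximized over a relative time and phase shift."""
    # Complex overlap as a function of time lag: inverse FFT of the
    # PSD-weighted cross-spectrum; the modulus maximizes over phase.
    cross = 4.0 * df * h1 * np.conj(h2) / psd
    overlap_t = np.fft.ifft(cross) * len(cross)
    match = np.max(np.abs(overlap_t))
    norm = np.sqrt(inner(h1, h1, psd, df) * inner(h2, h2, psd, df))
    return 1.0 - match / norm
```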
], [ "Gravitational Wave Parameter Estimation", "The GW signal emitted by an eccentric coalescing binary black hole system is fully described by 17 parameters: $\\theta _{\\rm CBC} = \\lbrace m_1, m_2, \\chi _1, \\chi _2, D_L, \\iota , \\alpha , \\delta , \\psi , t_0, \\phi _0, e_0, f_0\\rbrace ,$ where $m_{1,2}$ denotes the (detector-frame) masses of the two black holes such that $m_1 \\ge m_2$ , $\\chi _{1,2}$ are the dimensionless spin angular momenta vectors, $D_L$ is the luminosity distance to the source, $\\iota $ is the inclination angle, $\\lbrace \\alpha , \\delta \\rbrace $ are the right ascension and declination and define the sky location of the source, $\\psi $ is the polarization angle, $\\lbrace t_0, \\phi _0 \\rbrace $ are the reference time and phase, and $\\lbrace e_0, f_0\\rbrace $ are the initial eccentricity magnitude and the average frequency between the apastron and periastron respectively.", "In this work we utilize the bajes package for Bayesian inference [56] employing the nested sampling [57] algorithm dynesty [58] in order to extract the posterior probability density functions (PDFs) and to estimate the evidence." ], [ "Likelihood", "We are interested in the joint likelihood between $N$ detectors in a GW detector network $p(\\textbf {d}|\\theta , H_S) = \\prod _{i=1}^{N} p(\\textbf {d}_{i}|\\theta , H_S),$ where $H_S$ denotes the hypothesis that the data contains a GW signal.", "Under the assumption of Gaussian, stationary noise that is uncorrelated between each detector, and assuming a time domain signal model $h \\equiv h(t, \\theta _{\\rm CBC})$ and data set $ d \\equiv d(t)$ , the likelihood is given by $p(\\textbf {d}|\\theta _{\\rm CBC}, H_S) \\propto e^{-\\frac{1}{2} \\sum _{i=1}^N \\langle h-d_i|h - d_i\\rangle },$ where $\\langle \\cdot |\\cdot \\rangle $ is the noise-weighted inner product as defined in Eq.", "(REF ), $\\langle h -d_i|h - d_i\\rangle = 4 Re \\int _0^{\\infty } \\frac{|\\tilde{h} (f)-\\tilde{d}_i(f)|^2}{S_n (f)} df,$ where $S_n (f)$ is the PSD of the detector strain noise, and $\\tilde{h} (f)$ and $\\tilde{d}$ denote the Fourier transform of $h$ and $d$ respectively." 
], [ "Priors", "For the analyses presented in Sec.", "we adopt priors that broadly follow [59], [56] and are given as follows: The prior distribution for the masses is chosen to be flat in the components masses $\\lbrace m_1, m_2\\rbrace $ and can be written in terms of the chirp mass $M_c = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$ and the mass ratio $q = {m_1}/{m_2} \\ge 1$ as $p(M_c, q|H_S) = \\frac{M_c}{\\Pi _{M_c} {\\Pi _{q}}} \\left(\\frac{1+q}{q^3}\\right)^{2/5},$ where $\\Pi _{M_c}$ and $\\Pi _{q}$ are the prior volumes, as defined in Sec.V B of [56] delimited by the prior bounds of $M_c$ and $q$ .", "To aid the comparison with results from analyses that allow for precessing spins, we assume priors that correspond to the projection of a uniform and isotropic spin distribution along the $\\hat{z}$ -direction as proposed by Veitch [60], [56]: $p(\\chi _{i}|H_S) = \\frac{1}{2 \\chi _{\\rm max}} \\ln {\\Bigg |\\frac{\\chi _{\\rm max}}{\\chi _{i}}\\Bigg |},$ where $\\chi _i$ is the magnitude of each black hole spin and $\\chi _{\\rm max}$ is the maximum spin magnitude.", "The prior distribution for the luminosity distance $D_L$ is specified by a lower and an upper bound and its analytic form is defined by a uniform distribution over the sphere centred around the detectors: $p(D_L|H_S) = \\frac{3 D_L^2}{D^3_{\\rm max}- D^3_{\\rm min}}$ The prior distributions for $\\alpha $ and $\\delta $ , defining the sky location, are taken to be isotropic over the sky with $\\alpha \\in [0, 2 \\pi ]$ , $\\delta \\in [-\\pi /2, +\\pi /2]$ and $p(\\alpha , \\delta |H_S) = \\frac{\\cos {\\delta }}{4 \\pi } .$ Analogously, for the inclination we have $p(\\iota , H_S) = \\frac{\\sin {\\iota }}{2},$ where $\\iota \\in [0, \\pi ]$ .", "For $\\lbrace \\psi , t_0, \\psi _0\\rbrace $ , the prior distributions are taken to be uniform within the given bounds.", "The prior on $\\lbrace e_0, f_0\\rbrace $ are taken to be uniform or logarithmic-uniform within the provided bounds." ], [ "Validation of the waveform model", "In this section, we test the consistency of TEOBResumS-DALI with TEOBResumS-GIOTTO (and viceversa) by performing Bayesian inference on simulated GW signals (injections).", "The aim of this analysis is to give a more quantitative meaning to the standard EOB/NR unfaithfulness figures of merit discussed above.", "To do so, we inject mock signals into a zero noise realization with a signal-to-noise ratio (SNR) of $\\sim 42$ in the Advanced LIGO and Advanced Virgo network.", "We employ the Advanced LIGO and Advanced Virgo design sensitivity PSDs [61], [54], [62].", "All injections are performed at the same GPS time, $\\rm t_{GPS} =1126259462.4$ s. We analyze segments of 8s in duration with a sampling rate of 4096Hz.", "We use dynesty to sample the posterior distributions, using the following setting: 3000 live points to initialise the MCMC chains, a maximum of $10^4$ MCMC steps, a stopping criterion on the evidence of $\\Delta \\ln Z = 0.1$ , and we require five autocorrelation times before accepting a point.", "For all our analyses, we restrict the waveform model to only the $(2,|2|)$ -mode, allowing us to analytically marginalize over the phase." 
], [ "Quasi-circular limit of the eccentric model", "As mentioned above, TEOBResumS-DALI is structurally different to the quasi-circular TEOBResumS-GIOTTO model.", "Moreover, despite having been informed by the same NR simulations, its unfaithfulness to NR is larger than that of TEOBResumS-GIOTTO.", "To better understand how this difference in the unfaithfulness translates into parameter biases, we perform an unequal mass injection in the quasi-circular limit, as detailed in Table REF .", "More precisely, the injected waveform is generated with TEOBResumS-GIOTTO from a fixed initial frequency of 20 Hz, and it is recovered with either the same model (Prior 1) or with TEOBResumS-DALI assuming a fixed initial eccentricity of $e_0 = 10^{-8}$ at 20 Hz (Prior 2).", "In Fig.", "REF we show the one-dimensional and joint posterior distributions for $M_c$We note that we quote the detector-frame chirp mass throughout the paper., $q$ and the effective spin $\\chi _{\\rm eff} = \\frac{m_1 \\chi _{1z}+m_2 \\chi _{2z}}{m_1+m_2},$ where the two spins are taken to be aligned along the $\\hat{z}$ -direction: $\\chi _{1z}=\\chi _{1}$ and $\\chi _{2z}=\\chi _{2}$ .", "The median values of $M_c$ , $\\chi _{\\rm eff}$ and $q$ recovered with TEOBResumS-GIOTTO and TEOBResumS-DALI are shown, with their 90$\\%$ credibility interval, respectively in the first and second column of Table REF in Appendix .", "Comparing the results, we notice that the median values of the parameters recovered with TEOBResumS-GIOTTO are in good agreement with the injected ones, while those recovered with the quasi-circular limit of TEOBResumS-DALI are slightly biased towards higher values.", "This is not surprising given the different analytical structures (dissipative sectors and NR-informed parameters) of the two models and the fact that TEOBResumS-DALI is less NR-faithful than TEOBResumS-GIOTTO by, on average, one order of magnitude ($\\sim 10^{-2}$ vs. $10^{-3}$ ) (see Fig.", "REF and Fig.", "4 of [44]).", "Table: Parameters of the circular injection and two different priors.", "The prior distributions are described in Sec. .", "The sky location corresponds to the maximum sensitivity for the Advanced LIGO Hanford detector.Figure: Testing the quasi-circular limit of TEOBResumS-DALI.", "We inject a quasi-circularwaveform generated with TEOBResumS-GIOTTO and recover it with either TEOBResumS-GIOTTO (blue) orwith TEOBResumS-DALI with fixed initial eccentricity at e 0 =10 -8 e_0 = 10^{-8} (teal).", "The injected values are indicated bythe solid lines.", "We find that the parameters recovered with TEOBResumS-DALI are slightly biased.", "See text for discussion." 
], [ "Testing the eccentric model", "In the EOB framework, the dynamics of a system of coalescing binaries is evolved from initial conditions.", "For the TEOBResumS-DALI model, this is done by defining an initial eccentricity $e_0$ and an initial frequency $f_0$ and, through Eq.", "(REF )-(), determining ($r_0$ , $p_{\\varphi }^0$ , $p^0_{r*}$ ).", "The degree to which the initial frequency $f_0$ has an impact on Bayesian inference and our ability to constrain this parameter from the observations is poorly understood.", "In previous similar analyses, comparable quantities, such as the argument of the periapsis or mean anomaly, have typically been ignored.", "However, recent studies [30], [9] suggest that the mismatches can degrade as we vary these parameters for a given eccentricity.", "It is therefore useful to quantify the impact of $f_0$ on Bayesian inference.", "To do so we perform a non-eccentric injection with $e_0=0$ and $f_0=20$ Hz and recover with TEOBResumS-DALI either sampling on $e_0$ and $f_0$ (Prior 1) or only on $e_0$ (Prior 2).", "The details of the injection and the priors are listed in Table REF .", "For the other parameters, the injected values and prior ranges are the same as in Table REF .", "Figure REF shows the one-dimensional and joint posterior distributions obtained with the two different priors.", "In the two-dimensional posterior distributions assuming the first prior, shown in the upper panel of Fig.", "REF , we do not observe any significant correlation between $e_0$ and $f_0$ , as may be expected.", "In the bottom panel of Fig.", "REF , we show the posterior distributions for $M_c$ , $\\chi _{\\rm eff}$ , $q$ and $e_0$ obtained using the first prior choice (orange) and the second prior choice (teal).", "The median values, at $90 \\%$ credibility, are shown in the third (Prior 1) and forth (Prior 2) columns of Tab.", "REF in App. 
.", "We do not observe any significant differences between the two analyses and we find that the posterior on $f_0$ is weakly correlated with $e_0$ about its true value.", "This is in broad agreement with the conclusions of [10], who found that the argument of periapsis is only likely to be resolvable for the loudest events.", "However, as also discussed in [10], we could potentially see biases if we fix $f_0$ to a frequency that effectively corresponds to the argument of the periapsis being out of phase with the true value.", "In the $e_0 \\rightarrow 0$ limit, however, one may expect $f_0$ to become increasingly degenerate with the coalescence phase.", "We note that whilst the injected value for $e_0$ is not contained within the priors, we do not observe railing of the posteriors against the prior boundaries.", "This is a consequence of using a uniform prior for the eccentricity and results in posteriors that are skewed towards larger values of the eccentricity relative to the logarithmic-uniform prior.", "We would therefore expect to find no significant changes when lowering the prior boundary in the uniform-prior analysis.", "We next inject mock signals with two different values of $e_0$ and recover them using TEOBResumS-GIOTTO and TEOBResumS-DALI respectively.", "The details of the injected values for $e_0$ and $f_0$ and their priors are described in Table REF .", "The injected values and priors for the other parameters are the same as before as given in Table REF .", "Figure REF shows the one- and two-dimensional posterior distributions for $M_c$ , $\\chi _{\\rm eff}$ and $q$ (left) and the one-dimensional posterior distribution for $e_0$ (right) for a non-eccentric injection recovered with TEOBResumS-DALI with two different choices of prior distributions: logarithmic-uniform (teal), uniform (orange).", "The recovered median values corresponding to the Prior 1 (orange) are shown in the third column while the one corresponding to the Prior 2 (teal) are shown in the fifth column of Tab.", "REF in App. 
.", "We observe that for eccentricities comparable to zero, the mass and spin measurements are robust and independent of the choice of eccentricity prior.", "In the right panel of Fig.", "REF , we observe that when using a logarithmic-uniform prior for the eccentricity, the recovered median value of the eccentricity is pushed to smaller values as a result of the priors.", "In Fig.", "REF , instead, we show the posterior distributions for an injection with $e_0 = 0.05$ (TEOBResumS-DALI) and recovered with both the models, TEOBResumS-GIOTTO and TEOBResumS-DALI.", "The median values of the parameters recovered with TEOBResumS-GIOTTO (orange) are indicated in the first column of Tab.", "REF in App.", ", while the ones recovered with TEOBResumS-DALI (teal) are indicated in the second column of the same Table.", "In the left figure, we observe a stronger correlation between mass and spin parameters when we recover with TEOBResumS-GIOTTO.", "Previous studies have pointed out correlations between the chirp mass, the effective inspiral spin and the eccentricity [27].", "As our recovery model neglects eccentricity, biases in the mass and spin parameters are anticipated to compensate for this.", "Lastly, we draw our attention on the right figure of the bottom panel, where it is shown how excellently the recovery of the eccentricity is accomplished pointing out the robustness and accuracy of the model.", "In terms of model selection, we find that for the non-eccentric injection, the recovery with TEOBResumS-GIOTTO is preferred with respect to the one with TEOBResumS-DALI with an estimated logarithmic Bayes' factor of $\\ln \\mathcal {B}_{\\rm circ}^{\\rm ecc} \\sim 9$ .", "Similarly, for the eccentric injection, the eccentric model TEOBResumS-DALI is preferred with respect to the quasi-circular model TEOBResumS-GIOTTO with an estimated logarithmic Bayes' factor of $\\ln \\mathcal {B}_{\\rm ecc}^{\\rm circ} \\sim 5$ in the case of the uniform eccentricity prior, and $\\ln \\mathcal {B}_{\\rm ecc}^{\\rm circ} \\sim 10$ when using the logarithmic-uniform prior.", "These investigations demonstrate that TEOBResumS-DALI is a reliable waveform model to analyze spin-aligned, eccentric binaries.", "Table: Injected values for e 0 e_0 and f 0 f_0 and their priors.", "Two choices of recovery are made to perform this first testing analysis of TEOBResumS-DALI.", "We choose to sample in both parameters in one case (Prior 1) and only in e 0 e_0 in the other case (Prior 2).Table: Second test of TEOBResumS-DALI with an eccentric recovery.", "First column: injected values for e 0 e_0 and f 0 f_0 and their prior limits for an injection with e 0 =0e_0=0 injection recovered with TEOBResumS-DALI with two different prior choices.", "Second column: injected values for e 0 e_0 and f 0 f_0 and their prior limits for an injection with e 0 =0.05 e_0 = 0.05 recovered with TEOBResumS-DALI and TEOBResumS-GIOTTO." ], [ "Analysis of GW150914", "In this section, we reanalyse GW150914 with the TEOBResumS-DALI and TEOBResumS-GIOTTO waveform models.", "The strain data and PSDs are obtained from the GW Open Science Center [63].", "We analyse an $8s$ -long data stretch centered around the GPS time of the event $\\rm t_{GPS} =1126259462.4 \\, s$ sampled at a sampling rate of $\\rm 4096 \\, Hz$ .", "For the inference, we use DYNESTY choosing the same settings discussed in Sec.", "." 
], [ "Quasi-circular analysis of GW150914", "First, we analyse GW150914 under the assumption of a quasi-circular binary black holes system.", "To do so, we perform two analyses, either using TEOBResumS-GIOTTO or TEOBResumS-DALI, fixing initial EOB eccentricity to $e_0 = 10^{-8}$ , as described in Table REF .", "In both cases we recover a maximum likelihood SNR of $\\sim 26$ corresponding to $\\sim 20$ in LIGO-Hanford and $\\sim 18$ in LIGO-Livingston.", "In Fig.", "REF we show the marginalized one-dimensional and two-dimensional posterior distributions for $(M_c,\\chi _{\\rm eff},q)$ obtained with TEOBResumS-DALI (teal) and TEOBResumS-GIOTTO (blue).", "The recovered median values are reported in the second and third column of Table REF .", "We observe that the values recovered with TEOBResumS-GIOTTO are consistent with the values for GW150914 reported in GWTC-1 [59], while the median values for the chirp mass and effective inspiral spin found with TEOBResumS-DALI with fixed $e_0 = 10^{-8}$ are slightly higher in comparison to GWTC-1, but still consistent at the 90% credible level.", "In terms of Bayes' factors we find that the analysis with TEOBResumS-GIOTTO is favored with a $\\ln \\mathcal {B}_{\\rm ecc, 10^{-8}}^{\\rm circ} \\sim 1$ .", "Based on the results for mock signals presented in Sec.", "REF , this is not surprising because of the structural difference between the two models and the influence of initial conditions on the quasi-circular limit as discussed extensively in Sec.", "REF .", "Table: Choice of priors for the analysis of GW150914 to test the quasi-circular limit of TEOBResumS-DALI.", "The prior distributions are described in detail in Sec.", ".Figure: One-dimensional and two-dimensional posterior distributions for M c M_c, q q and χ eff \\chi _{\\rm eff} obtained with the quasi-circular model TEOBResumS-GIOTTO (blue) and the eccentric TEOBResumS-DALI in the quasi-circular limit (i.e.", "e 0 e_0 fixed to 10 -8 10^{-8} (teal)).", "The solid lines indicate the values from the quasi-circular analysis presented in GWTC-1 .Table: Results for the different analysis of GW150914 with TEOBResumS-GIOTTO or TEOBResumS-DALI.", "The prior ranges for e 0 e_0 and f 0 f_0 for each analysis are indicated.", "We give the median values and symmetric 90% credible interval for M c M_c, χ eff \\chi _{\\rm eff} and qq.", "Our results are contrasted by the values obtained from the non-eccentric, precessing analysis presented in GWTC-1  shown in the last column." 
], [ "Eccentric analysis of GW150914", "Finally, we reanalyse GW150914 with the eccentric model TEOBResumS-DALI sampling in both the initial eccentricity $e_0$ and $f_0$ (see Table REF for prior details).", "For the eccentricity we use two different priors: one uniform in $e_0$ and the other one logarithmic-uniform which occupies a larger prior volume at low eccentricities.", "All other priors and settings are identical to the quasi-circular analysis of Sec.", "REF .", "Consistently with this, we estimate a network SNR of $\\sim 26$ with $\\sim 20$ in LIGO-Hanford and $\\sim 18$ in LIGO-Livingston for the maximum likelihood parameters.", "In Fig.", "REF we show the one-dimensional and joint posterior distributions together with the median values reported in GWTC-1 [59] or calculated from [64] (solid lines).", "The median values for $(M_c,\\chi _{\\rm eff},q)$ are given in Table REF .", "The two eccentric analyses give consistent results for the mass and spin parameters, i.e.", "we do not find any appreciable difference between the results for the two different choices of the eccentricity prior.", "We do, however, find differences in the $e_0$ posterior under the two different prior assumptions as shown in the bottom panel of Fig.", "REF .", "While both posteriors are consistent with small values of initial eccentricity, we find that the $e_0$ -posterior peaks at $\\sim 0.05$ for the uniform $e_0$ -prior, which is in mild tension with other results [65], [66].", "However, we note that this may be due to the uniform prior, which may not sufficiently explore low values of eccentricity.", "By contrast, when choosing the logarithmic-uniform prior, lower values of $e_0$ are preferred in full agreement with other analyses.", "We find that the maximum 90% upper limit is $e_0 \\lesssim 0.08$ , which is consistent with the results based on NR simulations presented in [67], where it was shown that the log-likelihood drops sharply as the eccentricity grows beyond $\\sim 0.05$ at about 20 Hz.", "For the other parameters (see Figs.", "REF and REF in Appendix ) we find broad agreement with the exception of the right ascension, where a different mode is preferred.", "In comparison to the quasi-circular analysis, the eccentric analyses give slightly higher median values for $M_c$ and $\\chi _{\\rm eff}$ in agreement with [65], [66].", "In terms of model selection we find that TEOBResumS-GIOTTO is favoured over TEOBResumS-DALI with an estimated Bayes' factor of $\\ln \\mathcal {B}^{\\rm circ}_{\\rm ecc} \\sim 2$ irrespective of the prior.", "This is in agreement with the results reported in [65], but differs from the ones in [66].", "However we note that Ref.", "[66] uses higher order modes while in our analysis we only employ the dominant multipole $\\ell = |m| = 2$ in the waveform.", "We conclude that, while the hypothesis of a quasi-circular BBH merger is preferred for GW150914, we cannot exclude a small value of eccentricity at 20 Hz.", "All three analyses, however, give consistent results for the intrinsic parameters at 90$\\%$ confidence.", "Our results are in agreement with previous analyses  [59], [65], [11]." 
], [ "Model-agnostic estimate of the Eccentricity Evolution", "Bayesian inference allows us to determine the posterior distributions of binary parameters at a certain reference frequency.", "Certain parameters are, however, frequency dependent and hence change over time.", "One of these parameters is the eccentricity of the orbit, which decays due to the emission of GWs.", "In Sec.", "REF we determined the posterior distribution of the initial eccentricity $e_0$ of the EOB model measured at a (varying) reference average frequency $f_0$ .", "We now devise a scheme to determine the evolution of the eccentricity as a function of frequency using a previously introduced eccentricity estimator [68].", "Gravitational radiation at future null infinity is expected to be manifestly gauge invariant, motivating the use of an estimator based on the relative oscillations in the gravitational-wave frequency.", "This mitigates against the contamination of eccentricity measurements through the use of gauge dependent quantities [69].", "This has the additional advantage of allowing for the direct comparison between different eccentric analyses, which often use different definitions of eccentricity [70]We remind the reader that in general relativity one does not have a unique, Newtonian-like definition of orbital eccentricity: due to periastron precession elliptic orbits do not generally close, even in the absence of dissipation caused by GW.", "Moreover, and most importantly, eccentricity is not a gauge invariant quantity, but rather it depends on the specific choice of coordinates.", "A detailed discussion on this topic can be found in e.g.", "[37].. Our scheme is computationally efficient and applicable to any eccentric waveform model in post-processing.", "To calculate the eccentricity evolution, we employ the eccentricity estimator first introduced by Mora et al.", "[68]: $e_{\\omega }(t) = \\frac{\\omega _p(t)^{1/2} -\\omega _a(t)^{1/2}}{\\omega _p(t)^{1/2}+\\omega _a(t)^{1/2}},$ where $ \\omega _p(t)$ and $\\omega _a(t) $ are fits to the GW frequency of the $(2,2)$ -mode at the periastron and the apastron, respectively.", "We note that this eccentricity estimator is also used in other works, e.g.", "either based on the orbital [71], [32], [30] or the GW frequency [21], [22].", "To calculate $\\omega _p(t)$ and $\\omega _a(t)$ , we first generate the TEOBResumS-DALI waveform for each posterior sample and compute the GW frequency as $\\omega (t) = \\dot{\\phi }(t)$ , where $\\phi (t)$ is the phase of the $(2,2)$ -mode defined as $h_{22}=A(t) e^{-i\\phi (t)}$ with $A(t)$ being the amplitude of the waveform.", "We then identify the maxima (periastron) and the minima (apastron) of the second time-derivative of the GW frequency.", "We use the second derivative in order to amplify the peaks such that the identification of the maxima and minima is more robust for small eccentricities.", "Figure: Analyses of GW150914 with TEOBResumS-GIOTTO (blue) and TEOBResumS-DALI with a uniform e 0 e_0-prior (teal) and a logarithmic-uniform e 0 e_0-prior (orange).", "Upper panel: Joint posterior distributions with 90%\\% and 50%\\% credibility interval and median values reported in GWTC-1   (solid lines).", "Bottom panel: Marginalised one-dimensional posterior distributions and median values of e 0 e_0 (dashed lines) for the two eccentric analyses.Figure: Left: Illustration of the fitting procedure to determine the maxima (teal) and minima (orange) of the GW frequency (red).", "Right: Evolution of the eccentricity e ω 
(t) calculated using the method described in the text for a BBH with $M_c = 24.74$, $\\chi _{\\rm eff} = 0$ and $q = 1.5$.", "Once the minima and maxima are identified, we fit $f(t) = \\omega (t)/(2 \\pi )$ using cubic spline interpolation.", "An example of this is shown in the left panel of Fig.", "REF , where the red curve shows the GW frequency with clearly visible eccentricity-induced oscillations and the green and orange curves show the fits to the maxima and minima respectively.", "From Eq.", "(REF ) we calculate $e_{\\omega } (t)$ for each posterior sample to find the corresponding eccentricity evolution, as shown in the right panel of Fig.", "REF .", "We note that the eccentricity estimated at the initial time $e_{\\omega }(t=0)$ can differ from the initial EOB eccentricity $e_0$ defined by the EOB dynamics as the eccentricity at the average frequency between apastron and periastron, as explained by Eq.", "(REF ).", "Since we are interested in determining how the eccentricity decays as the GW frequency increases towards merger, we need to map $t\\rightarrow f$ .", "Due to the non-monotonic behavior of the GW frequency, such a mapping is not unique and hence we introduce the average GW frequency $\\bar{f}(t)$ instead: $\\bar{f}(t) = \\frac{1}{2} \\left(f_p (t) + f_a (t) \\right),$ where $f_p (t) = \\omega _p (t) / (2 \\pi )$ and $f_a (t) = \\omega _a (t) / (2 \\pi )$ , and use linear interpolation to infer the eccentricity as a function of $\\bar{f}$ throughout the inspiral.", "A benefit of this way of estimating the eccentricity in post-processing is that it can be calculated directly from the GW signal, in contrast to definitions inferred from the dynamics (nonetheless, we note that since we also have the EOB dynamics at hand, the same approach could be applied to the EOB orbital frequency).", "In addition, it also reduces to the Newtonian definition of eccentricity, even in the high-eccentricity limit [68], [32].", "A caveat to the correct calculation of $e_\\omega (t)$ is, however, that it requires the inspiral to be sufficiently long such that many periastron and apastron peaks can be resolved.", "In particular, for short waveforms where we only have one or two maxima and minima available, this method is expected to become inefficient and inaccurate [32].", "A way to circumvent this situation is to generate the EOB waveforms from a lower starting frequency, but at the cost of increasing the waveform generation time and hence the time taken for a Bayesian inference run to complete.", "Similarly, in the low-eccentricity limit, we may also expect peak-finding algorithms to become numerically unstable.", "While strategies to amplify the peaks, such as the use of the second derivative of the frequency, help to isolate the stationary points, in practice we found that the peaks can still be poorly resolved for a small subset of the samples.", "However, by cutting the frequencies at sufficiently small times ($t=0.4$ s), we found the eccentricity estimator to be numerically robust, with only a small percentage of samples $(\\lesssim 0.03 \\%)$ potentially suffering from pathologies.", "For those samples, we can adjust the cutoff time/frequency to produce an estimate of the eccentricity.", "In Fig.", "REF we show the 90% upper limit of the eccentricity evolution $e_{\\omega }(\\bar{f})$ as a function of the average frequency for the simulated eccentric signal with $e_0 = 0.05$ and $ f_0 = 20$ Hz, as discussed in Sec.", "REF .", "In addition, we also show the eccentricity evolution for the injected waveform itself (black triangles).", "We see that it is always contained within the $90\\%$ upper limit.", "Figure: Upper limit of the 90\\% credibility interval for the estimated eccentricity evolution $e_{\\omega }(\\bar{f})$ for an injection with $e_0 = 0.05$.", "The upper limit is calculated by estimating $e_{\\omega }(\\bar{f})$ for all the posterior samples, interpolating it at different values of $\\bar{f}$ and then taking the 90\\% credibility interval of the data.", "The black triangles represent the injection.", "We note that the estimated initial eccentricity is slightly lower than $e_0 = 0.05$, where $e_0$ is defined from the EOB dynamics.", "Finally, we apply the same method to calculate the eccentricity evolution for GW150914 from the posterior samples obtained using the eccentric TEOBResumS-DALI model, as outlined in Sec.", "REF .", "Figure REF shows the $90 \\%$ upper limit of $e_{\\omega }(\\bar{f})$ obtained for the uniform $e_0$ -prior distribution (blue) as well as for the log-uniform $e_0$ -prior distribution (orange).", "We obtain an upper limit of $e_{\\omega }(\\bar{f})$ at $\\sim $ 20 Hz of $\\sim 0.075$ for the analysis with the uniform $e_0$ -prior and $\\sim 0.055$ for the analysis with the logarithmic-uniform $e_0$ -prior.", "This is comparable with Fig.", "7 of [67], where it was found that GW150914 is unlikely to have an eccentricity higher than $\\sim $ 0.05 at about 20 Hz at $90 \\%$ credibility.", "We also see that while we cannot exclude small values of eccentricities at low frequencies, once an average frequency of $\\sim 30$ Hz is reached, any residual eccentricity $e_\\omega (\\bar{f})$ can no longer be distinguished from zero.", "Figure: Upper limit of the 90\\% credibility interval for the estimated eccentricity evolution $e_{\\omega }(\\bar{f})$ for the two eccentric analyses of GW150914 with TEOBResumS-DALI.", "The upper limit is calculated by estimating $e_{\\omega }(\\bar{f})$ for all the posterior samples, interpolating it at different values of $\\bar{f}$ and then taking the 90\\% credible interval of the data.", "This result is in agreement with previous results ."
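A compact post-processing sketch of the estimator described in this section is given below. It is our own illustration rather than the authors' code: it assumes a complex $(2,2)$-mode time series with the convention $h_{22} = A e^{-i\phi}$, an inspiral long enough that each family of extrema contains at least a few points, and it labels the higher-frequency spline branch as the periastron one.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

def eccentricity_evolution(t, h22, t_cut=None, n_out=512):
    """Return (t_mid, e_omega, f_bar) from a complex (2,2)-mode time series."""
    phase = np.unwrap(np.angle(h22))
    omega = np.gradient(-phase, t)        # GW angular frequency for h22 = A exp(-i phi)
    if t_cut is not None:                 # optionally drop the late, noisy portion
        keep = t <= t_cut
        t, omega = t[keep], omega[keep]

    # Extrema of d^2(omega)/dt^2 mark periastron/apastron passages; the second
    # derivative amplifies the oscillations, which helps at small eccentricity.
    d2 = np.gradient(np.gradient(omega, t), t)
    i_up, _ = find_peaks(d2)
    i_dn, _ = find_peaks(-d2)

    # Cubic-spline fits of the GW frequency through each family of extrema.
    branch_1 = CubicSpline(t[i_up], omega[i_up])
    branch_2 = CubicSpline(t[i_dn], omega[i_dn])

    t_mid = np.linspace(max(t[i_up][0], t[i_dn][0]),
                        min(t[i_up][-1], t[i_dn][-1]), n_out)
    w1, w2 = branch_1(t_mid), branch_2(t_mid)
    w_p, w_a = np.maximum(w1, w2), np.minimum(w1, w2)  # periastron = larger branch

    e_omega = (np.sqrt(w_p) - np.sqrt(w_a)) / (np.sqrt(w_p) + np.sqrt(w_a))
    f_bar = 0.5 * (w_p + w_a) / (2.0 * np.pi)          # average GW frequency in Hz
    return t_mid, e_omega, f_bar
```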
], [ "Discussion", "In this work we presented a Bayesian validation of the TEOBResumS-DALI waveform model [22] for eccentric coalescing binary black holes with aligned spins, a fully Bayesian reanalysis of GW150914 and a systematic method to estimate the eccentricity in post-processing.", "Our study explored the potential of TEOBResumS-DALI and allowed us to test its reliability.", "Our work is an extension of our previous study [22] and demonstrates the efficacy of the model in distinguishing between circular and eccentric GW signals.", "In particular, we found that the differences between the quasi-circular limit of TEOBResumS-DALI and its quasi-circular companion TEOBResumS-GIOTTO are relevant, and lead to clear (though small) biases in the recovered parameters.", "We attribute these biases to differences between the two models in both the dynamics (and especially in the radiative sector) and the waveform itself.", "When performing parameter estimation with small fixed eccentricityWe note that if the initial eccentricity is sufficiently small the setup of the initial data is identical in both models.", "this results in appreciable differences in the posteriors of numerous parameters.", "This indicates that the original TEOBResumS-DALI model needs improvements, notably to recover a quasi-circular limit that is as accurate as the one of TEOBResumS-GIOTTO.", "Some work in this direction has been done [45] (see in particular Fig.", "8 therein) but more investigations are needed to improve the model in the nearly equal-mass regimeWe also note that the TEOBResumS strategy is rather different from the one followed by the SEOBNRv4EHM model [27] that substantially limits itself at changing initial conditions, without touching the structural elements of the dynamics.", "Although this choice guarantees, by construction, an excellent quasi-circular limit, it introduces inaccuracies for eccentric dynamics, as highlighted in Ref. 
[25].", "After testing TEOBResumS-DALI for quasi-circular binaries, we validated the model on injections with nonzero initial eccentricity.", "In particular we found that TEOBResumS-DALI excellently recovers the injected value of eccentricity.", "In addition, we quantified the impact of eccentricity on the estimation of the intrinsic parameters of the binary: notably, we observed that the correlations between parameters became less strong when introducing eccentricity.", "If neglecting eccentricity, however, we see biases in the mass and spin parameters to compensate for it.", "We then performed Bayesian inference with TEOBResumS-DALI on the first GW event, GW150914.", "We found that the circular analysis was preferred with respect to the eccentric ones with $\\ln (\\mathcal {B}^{\\rm circ}_{\\rm ecc})\\sim 2$ .", "However, we also found that we cannot exclude small values of eccentricities at low frequencies, and that once an average frequency of $\\sim 30$ Hz is reached, any residual eccentricity becomes indistinguishable from zero.", "Lastly we performed the calculation of the eccentricity evolution using an eccentricity estimator deduced from the instantaneous GW frequency.", "After testing the calculation on mock signals, we applied the method to the data of GW150914 finding that, at about 20 Hz, the maximum eccentricity allowed for the system is $\\sim 0.075$ for a uniform prior and $\\sim 0.055$ for a logarithmic-uniform prior on the initial eccentricity.", "This is quantitatively comparable with the findings of [67].", "In the late stages of the preparation of this manuscript we became aware of related but independent work on eccentricity definitions [72].", "Given current BBH merger rate estimates [73] and the sensitivity of the LIGO-Virgo-KAGRA detector network [74], future detections of eccentric binaries will significantly constrain the lower limit of mergers that result from clusters and other dynamical channels [6].", "The possibility of several eccentric BBH candidates [9], [66] makes it crucial to have a reliable method to infer the eccentricity directly from observations.", "For the first time we presented a systematic method to infer the eccentricity evolution directly from observations of GWs from coalescing BBHs that can be used in the future to robustly measure the eccentricity and make meaningful comparisons between different models.", "We thank the LIGO-Virgo-KAGRA Waveforms Group and, in particular, Vijay Varma, Antoni Ramos-Buades, Md Arif Shaikh, and Harald Pfeiffer for useful discussions and comments on the manuscript.", "We also thank Alan Knee for helpful discussions during the development of this work.", "A. B.", "is supported by STFC, the School of Physics and Astronomy at the University of Birmingham and the Birmingham Institute for Gravitational Wave Astronomy.", "A. B.", "acknowledges support from the Erasmus Plus programme and Short-Term Scientific Missions (STSM) of COST Action PHAROS (CA16214) for the first part of the project when she was visiting the Theoretisch-Physikalisches Institut in Jena.", "R. G. and M. B acknowledge support from the Deutsche Forschungsgemeinschaft (DFG) under Grant No.", "406116891 within the Research Training Group RTG 2522/1.", "P. S. and G. P. acknowledge support from STFC grant No.", "ST/V005677/1.", "Part of this research was performed while G. P. and P. S. were visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation (Grant No.", "DMS-1925919).", "S. B. and M. 
B. acknowledge support by the EU H2020 under ERC Starting Grant, no. BinGraSp-714626.", "P. R. aknowledges support by the Fondazione Della Riccia.", "Computations were performed on the Bondi HPC cluster at the Birmingham Institute for Gravitational Wave Astronomy and the ARA supercomputer at Jena, supported in part by DFG grants INST 275/334-1 FUGG and INST 275/363-1 FUGG and by EU H2020 ERC Starting Grant, no. BinGraSp-714626.", "The waveform model used in this work is TEOBResumS and is publicly developed and available at https://bitbucket.org/eob_ihes/teobresums/.", "Throughout this work we employed the commit 0f19532 of the eccentric branch.", "To perform Bayesian inference we used the bajes software publicly available at https://github.com/matteobreschi/bajes.", "In this work we used the version available at https://github.com/RoxGamba/bajes/commits/dev/teob_eccentric employing the commit b3ad882.", "This manuscript has the LIGO document number P2200219." ], [ "Quasi-circular and eccentric initial conditions", "For quasi-circular binaries, TEOBResumS applies Kepler's law to the initial frequency of the orbit to compute the initial separation $r$ .", "Then, the initial values of the EOB angular and radial momenta $p_{\\varphi }, p_{r_*}$ are estimated via an iterative process (known as post-adiabatic expansion, “PA\" henceforth) in which the right-hand side of the Hamilton equations is solved analytically under the assumption that $p_{r_*} \\sim 0$  [51], [75].", "At zeroth PA order, one assumes that $p_{r_*} = 0$ exactly.", "Then, by evaluating $\\partial _r \\hat{H}_{\\rm EOB} = 0$ one can analytically find the circular angular momentum $j_0(r)$ at the requested initial separation.", "Neglecting terms of $O(p_{r_*}^2)$ , one can then use $d p_{\\varphi }/dr = \\hat{\\mathcal {F}}_{\\varphi } \\dot{r}^{-1}$ to compute $p_{r_*}$ at the first PA order.", "This procedure can then be repeated any number of times, with even (odd) PA orders providing corrections to $p_\\varphi $ ($p_{r_*}$ ).", "Correctly computing the initial conditions of the systems and having $p_{r_*}$ different from zero at the initial separation is crucial to avoid effects due to spurious eccentricity.", "For eccentric binaries, initial conditions necessarily need to be specified in a different manner.", "Let us denote with $e$ the eccentricity of the ellipse that the system would orbit along assuming no GW emission.", "Similarly, let us denote with $p$ its semilatus rectum and with $\\xi $ its anomaly.", "A generic point on the ellipse has radial coordinate $r = p/(1+e\\cos \\xi )$ .", "To find adiabatic initial conditions for our EOB dynamics we need to find a way to map $(f_0, e, \\xi )$ into $(r_0, p_{\\varphi }^0, p_{r_*}^0)$ .", "In practice, for convenience, the initial orbital frequency $\\Omega _0$ is always assumed to correspond either to the apastron ($r_0 = p_0/(1-e)$ ), periastron ($r_0 = p_0/(1+e)$ ) or to the average frequency between the two points.", "We then solve numerically $\\partial _{p_\\varphi } H(r_0(p_0), j_0(p_0), p_{r_*}=0) = \\Omega _0$ where $j_0$ is the adiabatic angular momentum computed using energy conservation $\\hat{H}_{\\rm eff}^0(p_0, j_0, \\xi =0) = \\hat{H}_{\\rm eff}^0(p_0, j_0, \\xi =\\pi ),$ and estimate the semilatus rectum of the obit $p_0$ .", "The evolution of the system is then always started at the apastron, so that $r_0 &= \\frac{p_0}{(1-e)}, \\\\p_{\\varphi }^0 &= j_0,\\\\p_{r_*}^0 &= 0.$ This adiabatic procedure can be generalized to higher PA orders1PA eccentric 
initial conditions have been implemented in the public TEOBResumS code in commit eb5208a.", "We leave a discussion of such initial conditions to future work." ], [ "Tables", "In this section we report the posteriors for $M_c$ , $\\chi _{\\rm eff}$ and $q$ for two injections and the different recoveries performed.", "Table: Posterior distribution functions for $M_c$, $\\chi _{\\rm eff}$ and $q$ for a circular injection ($e^{\\rm inj}_{\\omega } = 0$ and $f_0 = 20$ Hz) with different recoveries using TEOBResumS-GIOTTO and TEOBResumS-DALI.", "Table: Posterior distribution functions for $M_c$, $\\chi _{\\rm eff}$ and $q$ for an eccentric injection ($e^{\\rm inj}_{\\omega } = 0.05$ and $f_0 = 20$ Hz) with different recoveries using TEOBResumS-GIOTTO and TEOBResumS-DALI." ], [ "Full corner plots for the GW150914 eccentric analysis", "In this section we report the full corner plots showing the posterior distributions of the intrinsic and extrinsic parameters for the eccentric analysis of GW150914.", "Figure: One-dimensional and joint posterior distributions for the intrinsic parameters, together with $e_0$ and $f_0$, recovered with the two eccentric analyses of GW150914.", "The analysis using a uniform eccentricity prior is represented in teal; the one utilizing a logarithmic-uniform prior for the eccentricity is shown in orange.", "Figure: One-dimensional and joint posterior distributions for the extrinsic parameters, together with $e_0$ and $f_0$, recovered with the two eccentric analyses of GW150914.", "The analysis using a uniform eccentricity prior is represented in teal; the one utilizing a logarithmic-uniform prior for the eccentricity is shown in orange." ] ]
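As a closed-form illustration of the adiabatic construction described in the appendix above, the Newtonian limit (in geometric units G = M = 1) can be written down directly. The sketch below is only this Newtonian analogue, not the EOB root-finding procedure used by TEOBResumS-DALI.

```python
import numpy as np

def newtonian_apastron_ics(omega0, e):
    """Adiabatic initial conditions (r0, p_phi0, pr0) at apastron,
    Newtonian order, geometric units G = M = 1.

    omega0 : orbital angular frequency at apastron
    e      : eccentricity of the (non-radiating) orbit
    """
    # From Omega_a = (1 - e)^2 p^(-3/2), solve for the semilatus rectum p.
    p = ((1.0 - e) ** 2 / omega0) ** (2.0 / 3.0)
    r0 = p / (1.0 - e)        # apastron separation
    p_phi0 = np.sqrt(p)       # adiabatic (circular-like) angular momentum
    pr0 = 0.0                 # apastron is a radial turning point
    return r0, p_phi0, pr0

# Example: apastron frequency 0.01 (geometric units), eccentricity 0.05
print(newtonian_apastron_ics(0.01, 0.05))
```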
2207.10474
[ [ "Deep Audio Waveform Prior" ], [ "Abstract Convolutional neural networks contain strong priors for generating natural looking images [1].", "These priors enable image denoising, super resolution, and inpainting in an unsupervised manner.", "Previous attempts to demonstrate similar ideas in audio, namely deep audio priors, (i) use hand picked architectures such as harmonic convolutions, (ii) only work with spectrogram input, and (iii) have been used mostly for eliminating Gaussian noise [2].", "In this work we show that existing SOTA architectures for audio source separation contain deep priors even when working with the raw waveform.", "Deep priors can be discovered by training a neural network to generate a single corrupted signal when given white noise as input.", "A network with relevant deep priors is likely to generate a cleaner version of the signal before converging on the corrupted signal.", "We demonstrate this restoration effect with several corruptions: background noise, reverberations, and a gap in the signal (audio inpainting)." ], [ "Introduction", "It has been demonstrated that the success of deep convolutional neural networks in computer vision is due to deep priors in the convolutional architecture itself and should not be attributed to the training process alone [1].", "Leveraging these priors, it is possible to perform image denoising, super resolution, inpainting, and more in an unsupervised fashion without any pretraining [1], [3].", "These discoveries helped bridge the gap between classical methods and machine or deep learning methods, while providing insights into the inner workings of deep architectures.", "Zhang et al.", "[2] showed that classical architectures do not provide good priors for audio.", "They examined the classical WaveUnet, and showed that even working with spectrograms, regular and dilated convolutions suffer from severe limitations.", "For example, when trying to fit three noised stationary signals of 1,000, 2,000 and 3,000 Hz the networks fitted the signal and the noise at an equal pace.", "To overcome these limitations, they proposed the usage of harmonic convolutions.", "Their goal was to exploit harmonic structures as an inductive bias for auditory signal modeling.", "Despite the success of their experiments, current SOTA audio architectures do not utilize harmonic structures and tend to rely more on standard building blocks.", "Later, Tian et.", "al.", "[4], [5] demonstrated that in certain cases deep spectrogram priors can still be found in regular and dilated convolutions.", "Using varying dilation schedules and interwoven skip connections, Narayanaswamy et al.", "succeeded in strengthening these priors [6].", "Michelashvili et al.", "[7] showed that it is possible to preform speech denoising in an unsupervised manner without priors by leveraging the inherent lack of structure in noisy signals.", "In their work, they identified the pixels in the spectrogram which fluctuate most during the training of the network.", "Using the intensity of the fluctuations they created a mask over the spectrogram which can be used to differentiate between the original signal and the noise.", "Recently, Défossez et al.", "[8] proposed the Demucs model for the task of music source separation.", "The Demucs model is a neural network composed of a fully convolutional encoder followed by a sequential modeling over the encoder's output, and a reverse decoder with U-net like connections between the encoder and decoder (we describe the model later in more 
detail).", "This model reached SOTA performance in music source separation [9], [10] and speech enhancement [8].", "In this work we show that using this new architecture it is possible to perform denoising, dereverberation, and inpainting on the raw waveform in an unsupervised manner using priors inherent to the network.", "Figure: Visualization of the deep prior process for denoising.", "A neural network is trained to generate a target corrupted signal from an initial input which is pure noise.", "At 500 epochs the generated signal is substantially cleaner than at 1000 epochs, when the corrupted signal is recreated.", "The network was applied to the raw waveform, and spectrograms are used only for visualization.", "Figure: Graph of SI-SNR as a function of epochs.", "We can see that the SI-SNR(recreated, clean) goes above the SI-SNR(corrupted, clean), meaning the recreated signal becomes more similar to the clean signal than to the corrupted signal." ], [ "Deep Waveform Priors Overview", "To discern whether a network contains deep priors relevant to a specific input, we fit a generator network to a single corrupted signal.", "The network weights are randomly initialized, and fitted to the signal using gradient descent.", "Thus, the network weights serve as a parametrization of the signal.", "In this manner, the only information used to perform reconstruction is contained in the single corrupted input signal and the architecture of the generator network.", "If the architecture of the network contains relevant deep priors, it is possible that during training the generator will produce a cleaner signal before overfitting the corrupted version.", "Figure REF presents a diagram showing this process.", "In the diagram we see that after five hundred epochs the signal is cleaner than the original noisy signal.", "The graph in Figure REF shows the behavior of the SI-SNR during the process described above.", "In the graph, the line representing SI-SNR(recreated, clean) goes unexpectedly above the line representing SI-SNR(corrupted, clean) before leveling downwards.", "This essentially means that the recreated signal becomes more similar to the clean signal than to the corrupted one.", "We use the waveform Demucs architecture [9] as the basis for our analysis.", "The Demucs architecture consists of a series of down-sampling 1D convolution layers each followed by a GLU, a number of LSTM layers, and then a series of up-sampling 1D deconvolution layers each followed by a GLU.", "Following Ulyanov et al. [1] we removed the skip connections from the network.", "Intuitively, this makes space for the network's inductive priors to work by propagating the loss only from the very end of the network.", "When the skip connections remain in place, they correct the weights to overfit the noisy target in every part of the network, and do not leave the network the freedom necessary for the priors to work.", "In order to allow the network to learn despite the removal of the skip connections, it was necessary to reduce the number of convolutional and LSTM layers."
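To make the single-signal fitting procedure concrete, the sketch below shows the kind of optimization loop described above: a randomly initialized convolutional generator (a small stand-in for the stripped-down, skip-connection-free Demucs-style encoder/LSTM/decoder, not the authors' exact model) is trained by gradient descent to map a noise input to one corrupted waveform, while intermediate outputs are kept so they can later be compared against the clean reference. All layer sizes and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyWaveGenerator(nn.Module):
    """Minimal stand-in generator: two strided 1D convs with GLU, an LSTM bottleneck,
    and two transposed convs back to the waveform. No skip connections."""
    def __init__(self, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, 2 * hidden, kernel_size=8, stride=4, padding=2), nn.GLU(dim=1),
            nn.Conv1d(hidden, 2 * hidden, kernel_size=8, stride=4, padding=2), nn.GLU(dim=1),
        )
        self.lstm = nn.LSTM(hidden, hidden, num_layers=1, batch_first=True)
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(hidden, 1, kernel_size=8, stride=4, padding=2),
        )

    def forward(self, x):
        z = self.enc(x)                      # (B, hidden, T/16)
        z, _ = self.lstm(z.transpose(1, 2))  # sequence modeling over time frames
        return self.dec(z.transpose(1, 2))   # back to (B, 1, ~T)

def fit_deep_prior(corrupted, epochs=3000, lr=1e-4, noise_std=0.1):
    """Fit the generator to a single corrupted waveform of shape (1, 1, T)."""
    net = TinyWaveGenerator()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    snapshots = []
    for epoch in range(epochs):
        noise = noise_std * torch.randn_like(corrupted)   # input noise, resampled every epoch
        out = net(noise)
        T = min(out.shape[-1], corrupted.shape[-1])
        loss = (out[..., :T] - corrupted[..., :T]).abs().mean()  # L1 on the raw waveform
        opt.zero_grad(); loss.backward(); opt.step()
        if epoch % 500 == 0:
            snapshots.append(out.detach())   # intermediate, possibly partially denoised signals
    return net, snapshots

# Usage: corrupted = clean + noise, shape (1, 1, T); then track SI-SNR(snapshot, clean).
```

Monitoring the SI-SNR of the saved snapshots against the clean signal reproduces, in spirit, the curve described in the figure: if the architecture carries a useful prior, the metric peaks well before the loop converges on the corrupted target.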
], [ "Implementation details", "Different variants of the architecture were used in different experiments to achieve improved results, though a systematic evaluation of all the variants was done only in the ablation studies.", "The Adam optimizer was used throughout all the experiments with a learning rate of 1e-4.", "The networks were trained using an L1 loss computed directly on the raw waveforms.", "The input noise was sampled from a Gaussian distribution with a standard deviation of 0.1, and was resampled every epoch in all experiments excluding the audio inpainting, where it was only sampled once.", "We used a kernel of size 8 with a stride of 4, similarly to the default Demucs architecture.", "In all experiments the networks were trained until convergence (see Figure REF ), though almost all converged within 3000 epochs.", "The audio was sampled at 16 kHz in all sections, for all audio types, to allow for fair comparison.", "Figure: This figure shows the SI-SNR and PSNR of the denoised signals.", "Each bar represents the average of the metric maximum (similarly to the one marked in Figure ) over 20 randomly sampled tracks of the appropriate class, noise type and noise level.", "We can see the limits of the deep priors as the SI-SNR increases and it is harder to differentiate between the signal and the noise." ], [ "Metrics", "We use SI-SNR, PSNR and PESQ as evaluation functions.", "SI-SNR is defined as in [11], [12] using the following formula: $\text{SI-SNR}(s,\hat{s}) = 10\, \log _{10} \frac{\Vert \tilde{s} \Vert ^2}{\Vert e \Vert ^2},$ where $\tilde{s}=\frac{\langle s,\hat{s} \rangle s }{\Vert s\Vert ^2}$ and $e=\hat{s}-\tilde{s}$ .", "We consider PSNR as follows: $\text{PSNR}(s,\hat{s})=10\, \log _{10}\left(\dfrac{\text{MAX}_{I}^2}{\text{MSE}}\right),$ where $\text{MAX}_I$ is the difference between the maximum and the minimum amplitude values.", "To calculate PESQ we use the python-pesq package (https://github.com/ludlows/python-pesq)."
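The two metrics defined above translate directly into a few lines of NumPy. The sketch below follows the formulas exactly as written (zero-mean preprocessing and other conventions used by some reference implementations are deliberately omitted); the example signals are purely illustrative.

```python
import numpy as np

def si_snr(s, s_hat):
    """Scale-invariant SNR (dB) between a reference s and an estimate s_hat."""
    s_tilde = (np.dot(s, s_hat) / np.dot(s, s)) * s   # projection of s_hat onto s
    e = s_hat - s_tilde                               # residual (error) component
    return 10.0 * np.log10(np.sum(s_tilde**2) / np.sum(e**2))

def psnr(s, s_hat):
    """Peak SNR (dB); MAX_I is the reference's peak-to-peak amplitude."""
    max_i = s.max() - s.min()
    mse = np.mean((s - s_hat)**2)
    return 10.0 * np.log10(max_i**2 / mse)

# Example: a clean 440 Hz sine at 16 kHz and a noisy copy
t = np.linspace(0, 1, 16000)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * np.random.randn(t.size)
print(si_snr(clean, noisy), psnr(clean, noisy))
```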
], [ "Denoising", "The first type of corruption we attempt to correct is denoising.", "We corrupt the signal using two types of noise: Gaussian noise — sampled from a Gaussian distribution and added to every sample in the raw waveform Uniform noise — sampled from a Uniform distribution and added to every sample in the raw waveform The noise is added to a variety of sounds: Speech — taken from Valentini et al.", "'s dataset [13] Singing — taken from the MUSDB18 dataset [14] Instruments — taken from MedleyDB 2.0 [15] Throughout this paper all our clean audio is taken from these 3 datasets.", "To allow for comparisons between different noise types the noises are added at equal intensities.", "The addition of the noise was done using code published by Xia et al.", "[16].", "Thus, we create noisy samples with a number of different SI-SNRs and allow comparison between them.", "The results of our analysis can be found in Figure REF .", "The graphs report the average maximum metrics achieved by the output signal while training the neural network to generate the corrupted signal.", "While these results are not competitive with SOTA denoising methods, they do show the existence of a deep prior inherent to the architecture which is the goal of this study.", "For every level of noising, that is SI-SNR(corrupted, clean), we can see that the network usually performs some denoising before fitting the noisy signal.", "Figure REF visualizes one of our denoising results using spectrograms.", "Figure: The spectrograms of (a) Noisy target; (b) Network output; (c) Original clean source.", "As the network recreates the raw waveform, spectrograms are used only for visualization." ], [ "Dereverberation", "Reverberations are acoustical noise appearing in enclosed spaces through multiple reflections of the sound on the walls and objects of a room.", "When a speaker talks in a room, these multiple echoes add to the direct sound and blur its temporal and spectral characteristics [17].", "Reverberant speech can be described as sounding distant with noticeable echos.", "These detrimental perceptual effects generally increase with increasing distance between the source and the microphone [18].", "In this work we attempt to clean the reverberations from the signal treating them as we treated other types of noise in the previous section.", "This task is inherently more difficult then the other denoising tasks we performed since the corruption itself has the same structure as the signal to be cleaned.", "As expected, our success here was more limited although a modest amount of dereverberation was obtained.", "A summary of our experimental results can be found in Table REF .", "We used the pyroomacoustics python package to add reverberations to our audio [19] as well as Valentini et al.", "'s dataset [13].", "Every setting in the table was averaged over 20 samples, although the results were robust even when averaging over only 10-15 samples.", "To better evaluate the level of dereverberation improvement, results reported in Table  REF are the SI-SNR improvement metric (SI-SNRi) over the reverberant signal.", "We can see that the priors succeed in removing reverberations to a certain extent.", "PESQ is not reported in the table as it was very unstable and with most samples achieved very high scores which did not represent an improvement in audio quality.", "Our hypothesis to explain the results in the table is as follows.", "When the reverberation is very weak (Rt60=0.1) the audio sounds very similar to the original, so 
there is very little to improve, and as such the improvements achieved are minor.", "When the reverberations are of medium strength (Rt60=0.2) there are more reverberations to clean, yet they can still be distinguished from the original signal, so the network has a stronger effect.", "When Rt60=0.5 the reverberations are too strong for the prior to differentiate between them and the original signal, and the network recreates them both equally well.", "Table: Dereverberation results for different input signals with different Rt60s.", "Rt60 is defined as the number of seconds it takes for the reverberations to decrease by 60 dB.", "The numbers reported in the table are the average SI-SNR and PSNR improvement over the mixture that the network's output managed to achieve relative to the reverberated signal." ], [ "Audio Inpainting", "Inpainting is a classical problem in computer vision and has been used to demonstrate deep priors of convolutional networks.", "When performing inpainting, a mask is placed over part of the image and the network recreates the image behind the mask.", "In audio, speech and music signals are often subject to localized distortions, where the intervals of distorted samples are surrounded by undistorted samples.", "Examples may include noises or clicks, CD scratches, old recordings, packet loss in cellphones, etc. [20].", "Audio inpainting in the waveform domain contains two additional challenges: Dimensionality – at a standard sample rate of 16 kHz, masking even 10 ms affects 160 samples.", "Hence, the network must recreate these 160 consecutive samples.", "2D vs. 1D – When recreating an image, context can be taken from 360 degrees, as the picture is 2-dimensional.", "When recreating a waveform, context can only be taken from a single dimension.", "In this work we masked a small (1-5 ms, see Table REF ), non-silent segment of a 2 second clip, and trained the network to recreate the input signal.", "The training loss ignored the network's output under the mask.", "Hence, the network is unobstructed in recreating this part of the signal and its deep priors can come into view.", "A visualization of our results can be found in Fig. REF .", "The figure shows that the network recreates a good approximation of the original signal, despite no loss being calculated on the masked area.", "To measure the impact of the network priors we compare the recreation quality of the Demucs network to the recreation quality of WaveUnet, which has been shown to contain very weak priors [7], [2].", "A summary of the results can be found in Table REF .", "The results reported are averaged over 20 clips, each one of 2 seconds.", "The masked segment is randomly sampled within the clip.", "The table shows a number of points.", "First, the Demucs architecture consistently succeeds in recreating the signal better than the WaveUnet.", "Second, as the length of the masked signal increases, the quality of the recreation goes down.", "Third, the Demucs architecture is less successful at recreating samples of singing.", "It is interesting to note that the Demucs architecture achieved SOTA results in music separation and speech enhancement, but has not been used for singing audio, which is consistent with our results.", "Table: The table describes the metrics reached by the network when inpainting the masked signal using the Demucs architecture vs.
using the WaveUnet architecture (which contains substantially weaker priors).", "Every result is averaged over 20 clips with the masked section randomly sampled within the clip.", "We can see that the Demucs architecture is consistently superior to WaveUnet in all metrics.", "However, as the length of the mask grows the quality of the inpainting decreases.Figure: Waveforms of masked signal, recreated signal and original (un-masked) signal.", "The masked area (the horizontal line in the circle) is recreated and the waveform of the recreated signal is very close to the waveform of the original signal.Table: Ablation studies done on different variations of the Demucs architecture.", "When no activation function is mentioned a standard ReLU is used." ], [ "Ablation Studies", "To better understand which parts of the Demucs architecture cause the architecture to have deep priors we analyzed the network from the bottom up.", "At each step we added another element to the network and saw how this affects the prior.", "Our analysis was done using randomly sampled Uniform noise with an SI-SNR of 2.5dB to allow the differences between the variants of the architecture to be seen clearly.", "With each variant of the architecture we randomly sampled 20 audio clips, denoised them, and averaged the results.", "Table REF reports our results.", "When the activation is not reported a standard ReLU is used.", "The original architecture is the Conv + LSTM + GLU option, we added attention layers instead of LSTM layers as part of our ablation studies to see their possible effect.", "Additionally, we examined the effects of different amounts of convolutional layers as well as the effect of skip connections.", "Using an LSTM alone did not provide results so the option is not reported in the table.", "There are a number of points we can learn from the table: GLU improves the priors of the network when used with convolutional layers.", "When used with the LSTM layers the effect of the GLU is detrimental.", "Adding LSTM layers to the convolutional layers improves the network's prior.", "Adding attention layers instead of LSTM layers does not improve the network's priors.", "We understand these results to mean that convolutional layers, aside from containing deep priors themselves, are necessary to prepare the waveform for the sequential modeling (LSTM) and to revert the sequential model back to the waveform.", "However, the LSTM itself contains significant deep priors.", "Additionally, although this is not represented in the table, Conv + LSTM works better with only 2 layers of convolution, and not with four.", "We hypothesize, that as the network deepens the learning becomes harder due to vanishing gradients.", "Skip connections can not be used as a solution, since they do not leave the network sufficient freedom to learn." ], [ "Conclusions", "In this work we show for the first time (to the best of our knowledge) the power of deep waveform priors in a SOTA audio architecture.", "We demonstrate the strength of these priors on 3 separate tasks: audio denoising, audio dereverberation, and audio inpainting and achieve results which could not be achieved without deep priors.", "We believe, the findings presented in this study shed light on the recent success of the Demucs architecture [9] on several source separation tasks operating over the raw-waveform.", "Acknowledgements: This research was supported by grants from the Israel Science Foundation, the DFG, and the Crown Family Foundation." ] ]
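Returning to the Audio Inpainting section above: the key implementation detail is that the training loss simply ignores the masked region, leaving the generator free to fill the gap from surrounding context. The fragment below is a hedged sketch of that masked loss (assuming a boolean mask that is True over the missing samples, and a generator like the one sketched after the overview section); it is not the authors' code.

```python
import torch

def masked_l1_loss(output, corrupted, mask):
    """L1 loss computed only where the signal is observed (mask == False).
    `mask` is a boolean tensor of the same shape, True over the gap to be inpainted."""
    observed = ~mask
    return (output[observed] - corrupted[observed]).abs().mean()

# Inside the usual fitting loop (with a fixed noise input for inpainting):
#   out = net(fixed_noise)
#   loss = masked_l1_loss(out, corrupted, mask)
# After training, out[mask] is the inpainted estimate of the missing samples.
```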
2207.10441
[ [ "Correlation between bandwidth and frequency of plasmaspheric hiss uncovered with unsupervised machine learning" ], [ "Abstract Previous statistical studies of plasmaspheric hiss investigated the averaged shape of the magnetic field power spectra at various points in the magnetosphere.", "However, this approach does not consider the fact that very diverse spectral shapes exist at a given L-shell and magnetic local time.", "Averaging the data together means that important features of the spectral shapes are lost.", "In this paper, we use an unsupervised machine learning technique to categorize plasmaspheric hiss.", "In contrast to the previous studies, this technique allows us to identify power spectra that have \"similar\" shapes and study their spatial distribution without averaging together vastly different spectral shapes.", "We show that strong negative correlations exist between the hiss frequency and bandwidth, which suggests that the observed patterns are consistent with in situ wave growth." ], [ "Introduction", "Plasmaspheric hiss is an incoherent, electromagnetic whistler mode wave that has a frequency range of 20 Hz to a few kHz.", "Hiss plays a major role in the scattering of high energy electrons and creating the slot region between the inner and outer radiation belts [20].", "Observations by li2013unusual suggested that hiss can be split into two components: low ($<$ 150 Hz) and high ($>$ 150 Hz) frequency hiss waves.", "Recent analysis by malaspina2017statistical showed that the low and high frequency waves are statistically distinct populations.", "Two major differences are that low frequency hiss reaches peak amplitudes near 15 hours magnetic local time (MLT), while it is approximately 12 MLT for the high frequency one.", "Also, low frequency hiss is localized close to the plasmapause while high frequency hiss can be observed significantly farther earthward.", "Understanding the properties of low frequency hiss is particularly important because it can resonate with higher energy electrons than the high frequency part, therefore it may have a larger impact on the radiation belt dynamics by scattering electrons out of trapped orbits and into the atmosphere [12].", "Previous studies of hiss used either case studies, which were manually identified ($<$ 100 wave events, chen2014generation, ni2014resonant, li2013unusual) or analyzed the statistical shape of the magnetic field power spectra by averaging thousands of hours of data as a function of L-shell, MLT, and distance from the plasmasphere [14], [16], [15].", "While a statistical approach is necessary to obtain robust results on the spatial distribution of hiss wave activity, this method has a major disadvantage: a wide range of spectral shapes (waves with different center frequencies and bandwidths) co-exist in a given spatial domain (e.g. MLT and L-shell bin and geomagnetic activity), therefore averaging them together means that important details about the wave activity are lost.", "This is particularly problematic for accurate inclusion of the hiss wave population in predictive models of inner magnetosphere plasma dynamics, since the statistical spectral shape might be significantly distorted due to the averaging (e.g., chen2012modeling).", "In this paper, we use an unsupervised machine learning technique called Self-Organizing Map (SOM) to identify and categorize plasmaspheric hiss.", "This technique sorts electric field
power spectra into nodes where power spectra belonging to the same node have similar properties: they all display wave activity at approximately the same frequency with similar power spectra density, and bandwidth.", "This method has the advantage that large data sets ($>$ 1 million electric field power spectra) can be analyzed without spatial averaging of a broad range of spectral shapes, therefore the key properties of hiss (frequency, bandwidth and power spectra density) can be derived more accurately.", "We investigate the spatial distribution of various electric field power spectra shapes in the plasmasphere and show that from 10 hours MLT to 14 hours MLT the hiss frequency increases while the bandwidth decreases.", "We discuss the possible mechanisms that may explain the origin of the low and high frequency hiss." ], [ "Data Preparation and Methodology", "We use the Van Allen Probes data sets previously analyzed by malaspina2017statistical, which is based on measurements from the Electric Fields and Waves (EFW) instrument [22] and the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrument suite [11].", "Data outside the plasmasphere ($n_e<$ 50 cm$^{-3}$ ) and data recorded during spacecraft charging events, eclipses, thruster firings or EFW bias sweeps were excluded from the analysis.", "For details of the data cleaning see [14].", "The spin and axial electric field power spectra were measured onboard for 0.5 seconds out of 6 second intervals on a logarithmically spaced frequency grid with 50 elements between 2 and 2000 Hz.", "As opposed to several previous studies [14], [16], [15], we use the electric field data to analyze the spectral properties of hiss waves.", "We suggest that the electric field instrument is more appropriate for the analysis of low frequency hiss compared to the search coil magnetometer due to relatively lower noise floor compared to plasma wave signals at frequencies $<$ 200 Hz [22].", "The relatively high noise floor of the search coil magnetometer at frequencies $<$ 200 Hz leads to the systematic overestimation of the power spectra density in that frequency range, which was discussed by malaspina2017statistical.", "The combined (Probe A and B) data set includes over 24.6 million power spectra, which significantly exceeds the size that could be processed with our computational resources to train a Self-Organizing Map.", "Therefore, we restrict our study to 250 days of randomly selected data from Probe A (2.1 million power spectra).", "The presence of magnetosonic waves can distort the magnetic and electric field power spectra.", "We use the following filtering method to eliminate them from our analysis: the compressibility ($|\\delta B_{||}|/|\\delta B_{total}|$ , for details see malaspina2016distribution) of magnetic fluctuations is calculated in 50 frequency bins between 20-2000 Hz.", "We omit all of those electric field power spectra where the corresponding magnetic compressibility spectra had more than 6 frequency bins with $|\\delta B_{||}|/|\\delta B_{total}|>$ 0.6.", "In total 245,000 power spectra (from the initial 2.1 million) were excluded from the hiss analysis due to this criteria.", "We use the machine learning technique developed by vech2021novel, which was demonstrated with large data sets (182,000 power spectra in total) of fluxgate and search coil magnetic field data from the Magnetospheric Multiscale Mission.", "SOM is an unsupervised machine learning technique that consists of a two-dimensional grid of nodes 
where the number of nodes is typically between a few dozens and a few hundreds; in our study we use 100 nodes.", "The goal of the training process is to assign each input vector (i.e. power spectrum) to a node while ensuring that \"similar\" input vectors are assigned to the same or neighboring nodes while \"dissimilar\" input vectors are assigned to nodes far from each other.", "The similarity between input vectors can be quantified by a variety of metrics; here we use the Euclidean distance: $d(q,p)=\sqrt{\sum _i (q_i - p_i)^2}$ where $q$ and $p$ correspond to a pair of power spectra and $d$ quantifies their similarity.", "Since the power spectra density of the electric field fluctuations is highly variable, the power spectra have to be normalized before the SOM training process.", "We shift (in power spectra density) the electric field power spectra with a constant factor so they are all set to 0 (in logarithmic space) (V/m)$^2$ /Hz at 20 Hz.", "This normalization means that the differences (i.e. the value of $d(q,p)$ ) between the power spectra are determined by differences in the high frequency wave activity and therefore the effect of low ($<$ 20 Hz) frequency fluctuations is eliminated.", "We used the procedure described in vech2021novel and trained the SOM with a 10x10 grid of nodes.", "The input matrix has 2.1 million rows (number of power spectra) and 50 columns (number of frequency bins) based on the electric field power spectra from the spin axis sensors.", "We do not use the axial electric field for the SOM training because this data product is affected by artifacts due to the fact that the voltage sensor is periodically in the shadow of the spacecraft [11].", "The training process was repeated 10 times (500 iterations each time) and we found that approximately 0.1% of the input vectors were assigned to different nodes, suggesting that the trained model converged to a steady state and the node-assignment variation between iterations became small.", "In order to illustrate the power spectra assigned to a node, we plot the average power spectra for three nodes in Figure 1.", "The error bars correspond to the standard deviation of the power spectra density for each frequency bin.", "Figure 1a shows an example for a node that has no significant wave activity in the range of 20 Hz to 2000 Hz.", "Figure 1b shows a node that has high frequency hiss due to the fact that the peak power spectra density (frequency corresponding to the \"bump\" in the spectra) is approximately at 600 Hz.", "Finally, Figure 1c shows an example of a node with low frequency hiss due to the fact that the enhanced wave activity extends well below 150 Hz.", "Figure: Three examples of the nodes a) without significant wave activity, b) with high frequency hiss and c) with low frequency hiss.", "The line plots correspond to the average of all power spectra assigned to each of the three nodes.", "The error bars correspond to the standard deviation of the E-field power spectra density in each frequency bin.", "We use the following method to identify nodes that display \"significant\" wave activity.", "We integrate the node averaged power spectra (such as Figure 1a,b and c) from 20 Hz to 1000 Hz (P) and split the data into two groups as P$<$ 1.57 $V/m$ (34 nodes) and P$>$ 1.57 $V/m$ (66 nodes).", "This empirical threshold was determined after manual inspection of all the 100 nodes and was found to be an adequate point to split the nodes into \"no waves\" and \"waves\" categories.", "For those 66 nodes with wave activity,
we define the wave frequency and bandwidth with the following metrics.", "For each node, we identify the frequency of the inflection point (corresponding to the peak wave power spectra density) in the power spectra as the \"wave frequency\".", "For example, this is approximately 455 Hz in Figure 1b and 130 Hz in Figure 1c.", "The bandwidth is measured as the ratio of frequencies (below and above the \"wave frequency\") where the power spectra density drops (from the peak) by a factor of 1/e measured in logarithmic space.", "For example, this is a factor of 2.55 in Figure 1b and 1.4 in Figure 1c, respectively." ], [ "Spatial distribution of low and high frequency hiss", "In this Section, we investigate the spatial distribution of the observed hiss waves.", "We plot the distribution of hiss wave characteristics in a grid with 36 angular (MLT from 0 to 24 hr) and 7 radial (L-shell from 0 to 7 Earth radii) bins.", "Figure: Median hiss wave power spectra density in each bin for the normalized (a) and raw data (b) on log scale.", "(c) Spatial distribution of the power spectra with hiss wave activity.", "(d) Spatial distribution of all power spectra used in our study regardless of the wave activity.", "First we plot the hiss wave power spectra density at the wave frequency for the selected 66 nodes, incorporating 1.1 million power spectra, in the L-shell vs. MLT grid in Figure 2a and b, where the color code corresponds to the median power spectra density in each L-shell vs. MLT bin on log scale.", "The distribution of this normalized power spectra density is shown in Figure 2a.", "In Figure 2b, we investigate the distribution of the \"raw power spectra density\", which refers to the power spectra density without normalization.", "The two panels display different features of the data set: Figure 2a essentially shows the hiss power spectra density with respect to the power spectra density at 20 Hz (i.e. a relative power spectra density); in contrast, Figure 2b shows the \"absolute value\" of the hiss power spectra density.", "Both the normalized and raw power spectra density show some bias toward the pre-noon sector, which is consistent with the findings of previous statistical studies such as meredith2018global, meredith2021statistical.", "Figure: Spatial distribution of hiss with peak frequency a) $f < 194$ Hz, b) $194 < f < 252$ Hz, c) $252 < f < 316$ Hz, d) $f > 316$ Hz, respectively.", "Figure: Spatial distribution of hiss with bandwidth (B) in the range of a) $B < 2.05$ , b) $2.05 < B < 2.51$ , c) $2.51 < B < 2.83$ , d) $B > 2.83$ .", "In Figure 2c we investigate the rate of occurrence of hiss by plotting the spatial distribution of the 1.1 million power spectra that were assigned to the 66 nodes with wave activity.", "The color code corresponds to the probability density in each bin, which is obtained by normalizing the count number in a given bin by the sum of all power spectra in the plot.", "Previous statistical studies such as malaspina2017statistical, meredith2018global, meredith2021statistical analyzed the distribution of power spectra amplitude (or integrated wave power) in the L-shell vs. MLT space; however, it is important to note that the maps presented in previous papers do not necessarily correspond to the rate of occurrence of hiss.", "A region with very large hiss amplitude does not necessarily coincide with the region where hiss occurs most frequently.", "Determination of the rate of occurrence requires classification of the power spectra (at the minimum two categories as \"hiss\" vs.
\"no hiss\"), which was achieved with the SOM training process.", "Hiss occurrence rate is concentrated mostly in the pre-noon sector (Figure 3c), while the enhanced hiss amplitude extends to noon (Figure 2a,b).", "In Figure 2d, we plot the spatial location of each of the 2.1 million power spectra (i.e.", "all power spectra, regardless the wave activity) that we used in our study.", "The distribution suggests that the increased rate of hiss occurrence in the pre-noon sector presented in Figure 2a,b and c are not due to imbalance in the spacecraft observations (i.e.", "having more observations from 9-12 MLT compared to the rest of the spatial locations).", "Previously we determined the frequency and bandwidth of the nodes that displayed wave activity.", "Using these derived parameters, we create further sub-categories of the power spectra.", "First, we split the data (1.1 million power spectra assigned to the 66 nodes with hiss) into four groups based on the wave frequency.", "The frequency (f) thresholds are: 1) f $<$ 194 Hz, 2) 194 $<$ f $<$ 252 Hz, 3) 252 $<$ f $<$ 316 Hz, 4) 316 Hz $<$ f. These thresholds were determined as the 25, 50, 75 percentile values of the wave frequency data.", "We apply the same approach to the bandwidth (B) where the thresholds are 1) B $<$ 2.05, 2) 2.05 $<$ B $<$ 2.51, 3) 2.51 $<$ B $<$ 2.83, 4) B $>$ 2.83.", "Figures 3 and 4 show the spatial distribution of hiss for each sub-category.", "First we compare Figure 3a and Figure 4d: the overlap between these plots suggests that hiss in the pre-noon sector is characterized by narrow bandwidth and high frequency.", "The comparison of Figure 3d and Figure 4a shows that in the afternoon sector the hiss has the lowest frequency and broadest bandwidth.", "Although there is some scattering in the intermediate categories of Figure 3b,c and Figure 4b,c, they are consistent with the pattern that the hiss bandwidth increases from pre-noon to afternoon while the frequency decreases." 
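A concrete, necessarily simplified reading of the frequency and bandwidth metrics defined at the end of the methodology section: for a node-averaged spectrum on the 50-bin logarithmic frequency grid, take the frequency of the spectral peak as the "wave frequency" and measure the bandwidth as the ratio of the frequencies, below and above the peak, at which the power has dropped by a factor of 1/e. The NumPy sketch below is not the authors' code; the use of a simple argmax (rather than the inflection-point criterion) and the exact 1/e-in-log-space convention are assumptions.

```python
import numpy as np

def wave_frequency_and_bandwidth(freqs, psd):
    """freqs: 50 log-spaced frequencies (Hz); psd: node-averaged E-field PSD.
    Returns (peak frequency, bandwidth ratio f_high / f_low)."""
    log_p = np.log10(psd)
    i_pk = np.argmax(log_p)                 # spectral peak taken as the "wave frequency"
    f_pk = freqs[i_pk]
    # assumed convention: bandwidth edges where the PSD has dropped by 1/e from the peak
    thresh = log_p[i_pk] - np.log10(np.e)
    below = np.where(log_p[:i_pk] < thresh)[0]
    above = np.where(log_p[i_pk:] < thresh)[0]
    f_lo = freqs[below[-1]] if below.size else freqs[0]
    f_hi = freqs[i_pk + above[0]] if above.size else freqs[-1]
    return f_pk, f_hi / f_lo

# Example on a synthetic hiss-like bump centered near 300 Hz
freqs = np.logspace(np.log10(20), np.log10(2000), 50)
psd = 1e-12 * np.exp(-((np.log10(freqs) - np.log10(300)) / 0.25) ** 2) + 1e-14
print(wave_frequency_and_bandwidth(freqs, psd))
```

On real spectra the low-frequency background would need to be handled before the peak search, which is why the paper uses the inflection point rather than a raw maximum.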
], [ "Generation mechanisms of low and high frequency hiss", "Historically, the two main leading theories of hiss growth were 1) in situ growth due to unstable electron distributions (e.g., church1983origin) and 2) wave injection due to terrestrial lightning strikes (e.g., draganov1992magnetospherically).", "However, more recent measurements show that both of these concepts are often inconsistent with data (e.g., green2005origin).", "bortnik2008unexpected offered an alternative explanation suggesting that chorus waves may be the source of hiss.", "This model could explain several features of hiss such as distribution in L-shell (pre-noon sector) and day-night asymmetry.", "A statistical study of the correlation between hiss and chorus waves found significant overlap (in MLT) between the two wave modes [1].", "However, the chorus origin is also a matter of considerable debate.", "For example, hartley2019van suggested that the wave-vector orientation of chorus waves in the pre-noon sector is not consistent with entering the plasmasphere and in the pre-noon sector chorus waves can explain only $\approx 1\%$ of hiss wave power.", "The transition of chorus waves to hiss waves was found to be more significant near the plasmaspheric plume where large azimuthal density gradients exist.", "In that region $>80\%$ of the hiss wave power was explained by chorus waves.", "A major difficulty for the chorus wave theory is to explain the \"frequency jump\" across the plasmasphere.", "Chorus waves are typically observed between 0.1-1 $\omega _{ce}$ (corresponding to approximately 2-7 kHz, where $\omega _{ce}$ is the electron cyclotron frequency) [2] while the typical hiss frequency in the pre-noon sector is around 316-1000 Hz, meaning the required frequency change is a factor of 2 to 22.", "Moving toward the dusk region, the hiss frequency decreases ($<$ 194 Hz, Figure 3), therefore the required frequency jump for chorus to transition into hiss could be as large as a factor of 10 to 40.", "In addition to this difficulty, the rate of occurrence of chorus waves drops significantly in the $>$ 12 MLT region compared to the pre-noon sector [1].", "Therefore, even if chorus waves can enter the plasmasphere in the afternoon sector, they can only explain a small fraction ($\approx 24\% $ ) of all hiss occurrence.", "Recently several studies argued that electron injections may play an important role in hiss generation.", "For example, shi2017systematic conducted a statistical study of low frequency hiss and suggested that local amplification induced by electron injection events at higher (L$\approx $ 6) L-shell is a possible source of these waves.", "The statistical distribution of low frequency hiss (Figure 3a) is consistent with this idea.", "hikishima2020particle proposed local generation of hiss through linear and nonlinear interactions of electromagnetic field fluctuations with anisotropic energetic electrons.", "ratcliffe2017self used particle-in-cell simulations to model whistler mode wave growth with distinct warm and hot electron populations.", "They found that the growth of whistler mode waves was split into upper and lower bands approximately around 0.5$\omega _{ce}$ .", "They also found that the frequency gap sensitively depends on the temperature and anisotropy of each electron component.", "zhu2019triggered suggested that low-frequency hiss consists of parallel and antiparallel Poynting fluxes, resulting from multiple reflections inside the plasmasphere.", "We suggest that differences in
the electron populations between the pre-noon and afternoon sectors might be able to split hiss waves into two bands in a similar fashion.", "There is some evidence for this idea in Figure 3b and c, which shows that the high and low frequency hiss are strongly separated, and there is a gap in the rate of hiss occurrence at around 12 MLT in Figure 3b.", "This suggests that the different electron distributions in the pre-noon and afternoon sectors may support hiss wave growth in two separate frequency bands." ], [ "Conclusion", "The traditional approach for studying plasmaspheric hiss is based on calculating spatial averages of the magnetic field power spectra.", "This technique has a major disadvantage since it does not take into account the diverse shapes of power spectra that occur in a given L-shell vs. MLT bin.", "In this paper, we used an unsupervised machine learning technique to categorize plasmaspheric hiss and studied the spatial distribution of the various spectral shapes without averaging together vastly different spectral shapes.", "First, we categorized the power spectra as \"hiss\" vs. \"no hiss\" and studied the rate of occurrence of hiss in the L-shell vs. MLT space.", "Secondly, we created eight sub-categories of hiss based on bandwidth and frequency.", "This sophisticated classification allowed us to understand the evolution of the spectral shapes from dawn to dusk.", "We showed that hiss at around 9 MLT has the narrowest bandwidth and highest frequency.", "The frequency gradually decreases toward dusk while the bandwidth broadens.", "We discussed possible mechanisms that may generate plasmaspheric hiss and pointed out some inconsistencies between our observations and the idea that hiss originates from chorus waves.", "To explain the obtained frequency and bandwidth correlation, we favor the in situ wave growth mechanism proposed by ratcliffe2017self due to the fact that it could naturally account for the observed two bands of hiss waves.", "Further work is needed to quantify the required temperature and anisotropy of the hot and warm electron populations that create hiss waves consistent with our observations.", "Finally, some current radiation belt models operate with simple assumptions such as constant hiss frequency and amplitude (e.g., fok2011recent).", "In our study we quantified the variability of the hiss spectral shapes.", "Based on the results we suggest that parameterizing the hiss with MLT dependent frequency and bandwidth is necessary for adequate inclusion of this wave mode in predictive models of high energy electron scattering.", "D. V. was supported by NASA contract 80NSSC21K0454.", "D. M. was supported by NASA contract 80NSSC19K0305.", "The authors thank the Van Allen Probes team, especially the EFW and EMFISIS teams for their support.", "This work was funded by NASA Grant 80NSSC18K1034.", "All Van Allen Probes data used in this work are available from the EFW and EMFISIS team websites (http://rbspgway.jhuapl.edu)." ] ]
2207.10505
[ [ "A nonstationary spatial covariance model for data on graphs" ], [ "Abstract Spatial data can exhibit dependence structures more complicated than can be represented using models that rely on the traditional assumptions of stationarity and isotropy.", "Several statistical methods have been developed to relax these assumptions.", "One in particular, the \"spatial deformation approach\" defines a transformation from the geographic space in which data are observed, to a latent space in which stationarity and isotropy are assumed to hold.", "Taking inspiration from this class of models, we develop a new model for spatially dependent data observed on graphs.", "Our method implies an embedding of the graph into Euclidean space wherein the covariance can be modeled using traditional covariance functions such as those from the Matérn family.", "This is done via a class of graph metrics compatible with such covariance functions.", "By estimating the edge weights which underlie these metrics, we can recover the \"intrinsic distance\" between nodes of a graph.", "We compare our model to existing methods for spatially dependent graph data, primarily conditional autoregressive (CAR) models and their variants, and illustrate the advantages our approach has over traditional methods.", "We fit our model and competitors to bird abundance data for several species in North Carolina.", "We find that our model fits the data best, and provides insight into the interaction between species-specific spatial distributions and geography." ], [ "Keywords: spatial deformation, graph embedding, CAR model, resistance distance, quasi-Euclidean metric, intrinsic distance."
], [ "Introduction", "Models for spatially indexed data traditionally assume that observations close to one another in space are more highly correlated than observations which are far apart.", "Spatial models that assume the correlation between any two observations can be represented by a function of only the distance in space between those observations are said to be stationary, in which case the spatial autocorrelation is the same in all parts of the spatial domain, and isotropic, spatial autocorrelation is the same in all directions [1], [2].", "While these assumptions allow for a simple and interpretable mathematical representation of spatial processes, such assumptions are often not appropriate for real world data [3].", "As an example of this, refer to Figure REF .", "This plot was taken from the website of the eBird database, a project of the Cornell Lab of Ornithology that crowd-sources bird population data by allowing anyone to record and submit their birdwatching observations to the database.", "This is done in the form of user-submitted, location-indexed checklists containing the counts of each species observed by the bird watcher [4].", "Figure REF depicts the percentage of submitted checklists for each 20 by 20 kilometer gridded region within eastern North Carolina that contain an observation of at least one brown pelican, a seabird common along the coast of North Carolina.", "Dark purple rectangles correspond to regions where the bird is most commonly observed.", "It is readily apparent that the spatial distribution of the brown pelican throughout this region is highly dependent on the proximity to the coast, and as such, a model for brown pelican abundance which assumed that the spatial dependence between observations is the same for all locations in North Carolina and in all directions, would be woefully inadequate at representing the data.", "Consider for example how the frequency of observation along the northeast-southwest axis of the domain is relatively constant when compared to frequency along the northwest-southeast axis.", "Additionally, when similar data for other bird species within the same spatial domain are considered, one may find nonstationary dependence exhibiting a completely different spatial pattern.", "However, it is reasonable to imagine that for any given species (or more generally, for any nonstationary and anisotropic spatial process) there is a unique distance metric underlying the space in which data are observed, in which the covariance patterns are isotropic and stationary with respect to this metric rather than to the geographic distances between observed location.", "Using Figure REF as an example, one could say that for a brown pelican, two locations 50 kilometers apart along the coast might be viewed as closer together than two locations with one on the coast and the other 50 kilometers apart inland, perpendicular to the coastline.", "Figure: Percentage of eBird checklists containing an observation of a Brown Pelican within each 20km gridded region in eastern North Carolina.", "Image provided by eBird (www.ebird.org) and created 16 February 2022Many classes of spatial models have been developed for use with nonstationary and anisotropic data [5].", "Among these methods is the “spatial deformation approach,\" which parameterizes spatial covariance in terms of an estimated metric space corresponding to a deformation of the Euclidean space in which data are observed [6] and which leads to interpretation similar to the “intrinsic distances\" 
described in the brown pelican example from above.", "The vast majority of these methods have been developed specifically for point referenced data.", "However, areal data is also common within the world of spatial statistics, either because some random variables are observed only at a regional level, or because raw point-referenced data might be aggregated and processed into areal or gridded regions out of convenience or due to scientific or policy interest in questions at the county, state, or national level.", "Such data are representable using graphs, with a node representing each region and edges connecting each pair of adjacent regions.", "Conditional autoregressive (CAR) models, first introduced by [7], are commonly used for modeling spatially dependent data observed on a graph, but are potentially limited when faced with more complex spatial dependence patterns.", "Given a random vector $\mathbf{y}$ with $y_1,...,y_p$ being observations at each of the $p$ nodes (or vertices) of some graph $G = (V,E)$ , where $V$ contains the set of nodes, and $E$ is the set of edges between those nodes, a CAR model specifies a Markov random field in which each element of $\mathbf{y}$ has a conditional distribution determined by the values at neighboring nodes, for example $y_i|\mathbf{y}_{-i} \sim N\left( \sum _{j\sim i} \kappa w_{ij} y_j,\sigma ^2\right)$ where $j\sim i$ indicates that node $j$ is adjacent to node $i$ , $\mathbf{W}$ is a $p \times p$ adjacency or weights matrix (where $w_{ij} > 0$ if $j\sim i$ else $w_{ij} = 0$ ), $\kappa $ controls the degree of spatial correlation and $\sigma ^2$ is a variance parameter.", "This results in a marginal distribution for $\mathbf{y}$ of $\mathbf{y} \sim N(\mathbf{0}, \sigma ^2 (\mathbf{I}_p - \kappa \mathbf{W})^{-1}).$", "The CAR model may be thought of as the spatial generalization of the first order autoregressive (AR1) model often used in time series modeling.", "There are many varieties of CAR model that have been developed for use in generalized mixed models [8], with multivariate data [9], and which can utilize weighting schemes incorporating geographic and covariate information [10], [11].", "Most of the literature regarding spatially dependent graph data is limited to various forms of autoregressive models [12].", "The majority of CAR models and their variants tend to treat all first-order neighbors within a graph as equally proximate due to the use of graph adjacency structure alone in their definition of spatial dependence.", "This will rarely result in effective representation of nonstationary spatial processes.", "In instances where more complex weighting schemes have been utilized, such as in [10] and [11], edge weights are defined as linear functions of a small set of environmental covariates.", "While such approaches allow for more flexibility than those using adjacency alone, the space of possible weighting matrices under those models is restricted by the column space defined by the collection of environmental covariates being used.", "There are also instances in which covariates associated with graph edges may not be available to researchers, in which case a model must be defined using only the structure of the graph itself.", "One computational advantage of the CAR modeling framework is that the induced precision matrix will generally be sparse due to the assumption of independence for each observation conditional on the values of its neighbors.", "This assumption can come with trade-offs in terms of predictive efficiency; [13], [14], [15] all demonstrate the shortcomings of models which condition only on local
or nearest neighbor structure (as is the case in CAR models) in comparison to those which also condition on more distant observations.", "For more discussion on CAR models see [16] and [17] for review papers which provide detail on the use, development, and interpretation of these models and their variants.", "To address the concerns regarding limitations of CAR models for use with nonstationary spatial processes, we propose a model for the spatial covariance of graph data using a different set of assumptions, which we will describe in the following sections.", "We will also illustrate that our proposed method has greater capacity to represent certain correlation structures that may be of interest to statisticians and ecologists, especially in contrast to the most commonly used CAR variants.", "One approach to modeling spatial nonstationarity and anisotropy for point referenced data is the “spatial deformation\" approach introduced by [6].", "Unlike CAR models and most other spatial models which characterize covariance as a function of a known and fixed orientation of the data within some space (whether discrete as with areal data or continuous with point referenced data), the spatial deformation approach assumes instead that covariance is determined by the distances between the data within some latent space which is estimated.", "Given some spatial process $\lbrace Y(\mathbf{s}) : \mathbf{s} \in \mathcal {G} \rbrace $ over a space $\mathcal {G}$ (in most applications $\mathcal {G}$ is $\mathbb {R}^2$ and we refer to it as the “geographic-\" or “$\mathcal {G}$ \"-space), which is repeatedly observed at some finite set of locations $\mathbf{s}_1, \mathbf{s}_2, ..., \mathbf{s}_n \in \mathcal {G}$ at times $t \in 1,...,T$ , we can model the covariance between any two locations in $\mathcal {G}$ as $\text{Cov}(Y(\mathbf{s},t),Y(\mathbf{s}^{\prime },t)) = \sigma ^2 \rho (\Vert \delta (\mathbf{s})-\delta (\mathbf{s}^{\prime })\Vert ),$ where $\sigma ^2$ is a scale parameter, $\rho (\cdot )$ is a valid correlation function, and $\delta (\cdot )$ is a non-linear transformation mapping $\mathcal {G}$ to a latent space (referred to as the “deformation-\" or “$\mathcal {D}$ \"-space) in which the stationarity and isotropy assumptions hold.", "Covariance is thus a direct function of the distances between points in the $\mathcal {D}$ -space, which allows for nonstationarity in the covariance when viewed from the $\mathcal {G}$ -space.", "The challenge with the implementation of this approach lies in determining an appropriate framework to estimate the function $\delta (\cdot )$ .", "One should define $\delta (\cdot )$ such that the orientation of the points in the $\mathcal {G}$ -space is considered within the function (proximity in the $\mathcal {G}$ -space should generally imply relative proximity in the $\mathcal {D}$ -space), and in a manner that avoids producing a $\mathcal {D}$ -space on which it is impossible to obtain a positive definite covariance matrix.", "In their 1992 article, Sampson and Guttorp estimate $\delta (\cdot )$ using a two-step process in which they first estimate the latent space locations $\delta (\mathbf{s}_1),...,\delta (\mathbf{s}_n)$ via a nonmetric multi-dimensional scaling algorithm that is initialized using the geographic locations such that $\lbrace \delta (\mathbf{s}_i)\rbrace ^{(0)} = \lbrace \mathbf{s}_i\rbrace $ before converging to a set of locations in the $\mathcal {D}$ -space such that a stress criterion based on the variogram of the data within the $\mathcal {D}$ -space is minimized; they then use thin plate splines to obtain the full function $\delta (\cdot )$
which maps from the geographic locations to the latent space locations over the entirety of the $\mathcal {G}$ -space.", "Later articles further developing this method propose using a Gaussian-process prior for $\delta :\mathbb {R}^2 \rightarrow \mathbb {R}^2$ centered at the observed locations $\lbrace \mathbf{s}_i\rbrace $ [18].", "The spatial deformation approach allows for considerable flexibility in the covariance structure of the data, and provides an intuitive interpretation of model results if one views the distances between observations in the $\mathcal {D}$ -space as the “intrinsic distances\" between locations in the $\mathcal {G}$ -space.", "In this article we present a covariance model for graph data that has similar interpretation to the deformation approach of Sampson and Guttorp.", "Fitting the model involves the estimation of a distance metric over our graph which in turn corresponds to an embedding of our graph into a high-dimensional Euclidean space.", "Covariance is then assumed to be a function of the distances between nodes in the embedding, similarly to how the spatial deformation approach assumes stationarity in the $\mathcal {D}$ -space, resulting in potentially nonstationary, anisotropic covariance structure in the $\mathcal {G}$ -space.", "In the next section of this article we establish some of the necessary background information regarding covariance functions and graph metrics.", "In particular, we highlight a class of graph metrics which are compatible with most traditional covariance functions such as those from the Matérn family [19].", "In Section 3 we formally introduce our model, and provide some detail on its interpretation as well as its implementation.", "In Section 4 we proceed to demonstrate the model's performance in comparison to traditional autoregressive models through a case study using bird abundance data for several species in the state of North Carolina, and show that our model performs best in terms of WAIC among the models considered for the majority of species evaluated.", "As part of this case study, we also demonstrate the manner in which our model can provide valuable insight into the interaction between a spatial process and the underlying geography via the posterior distance matrix of the graph on which a process is observed.", "We conclude this article with a discussion of our model's strengths and limitations in comparison to existing methods, as well as potential extensions for future work."
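To make the edge-weights-to-covariance pipeline described above concrete before the formal development in the next section, here is an illustrative numerical sketch: positive edge weights on a small graph are mapped to a node-to-node distance matrix (resistance distance, one member of the family of metrics alluded to in the keywords, computed via the Laplacian pseudoinverse), and that distance matrix is then passed through an exponential covariance function. This is not the authors' estimation procedure; the specific metric and covariance choices are assumptions made only to illustrate the construction.

```python
import numpy as np

def resistance_distance(W):
    """Resistance distance matrix from a symmetric nonnegative edge-weight matrix W."""
    L = np.diag(W.sum(axis=1)) - W          # weighted graph Laplacian
    Lp = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

def exponential_covariance(D, sigma2=1.0, rho=1.0):
    """Sigma_ij = sigma^2 * exp(-d_ij / rho), applied element-wise to D."""
    return sigma2 * np.exp(-D / rho)

# Toy 4-node path graph with unequal edge weights (the quantities such a model would estimate)
W = np.zeros((4, 4))
for (i, j, w) in [(0, 1, 1.0), (1, 2, 0.2), (2, 3, 1.0)]:
    W[i, j] = W[j, i] = w

D = resistance_distance(W)
Sigma = exponential_covariance(D, sigma2=1.0, rho=2.0)
print(np.round(D, 2))
print(np.linalg.eigvalsh(Sigma))   # all positive here; PD is not guaranteed for arbitrary graph metrics
```

Shrinking the weight of the middle edge stretches the corresponding distance and weakens the implied correlation across it, which is the graph analogue of the "intrinsic distance" interpretation discussed above.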
], [ "Background", "Suppose we are given a graph $G = (V,E)$ and random variables $\\lbrace Y_v: v \\in V\\rbrace $ observed at the nodes of $G$ .", "In this article we propose a model for $\\Sigma _{vv^{\\prime }} = \\text{Cov}(Y_v,Y_{v^{\\prime }})$ which is described visually in Figure REF .", "The covariance matrix $$ is parameterized by $$ , a matrix of unknown positive edge weights, and $\\sigma ^2 > 0$ , a scale parameter.", "The parameter $$ is an element of $\\mathcal {W}_G$ , the space of all possible edge weight matrices given $G$ , and $$ is mapped to a distance matrix $$ via a graph metric $\\Delta (\\cdot )$ ; $$ is in turn mapped to a symmetric positive definite matrix $$ using a covariance function $C(\\cdot )$ which also takes the parameter $\\sigma ^2 > 0$ as input.", "In the following two subsections, we establish the background information necessary to justify our model and present it formally in Section 3.", "Figure: Visual representation of the model presented in this article.", "The edge weight parameter is mapped to a distance matrix, which in conjunction with the scale parameter σ 2 \\sigma ^2 is mapped to the covariance matrix ." ], [ "Covariance Functions and Distance", "Let $\\mathcal {D}_p$ be the space of metric distance matrices between $p$ points.", "That is, if $\\in \\mathcal {D}_p$ then $$ is a $p \\times p$ matrix where all diagonal elements of $$ are zero, the off-diagonal entries are positive, $$ is symmetric, and all elements of $$ satisfy the triangle inequality (i.e.", "for any $\\lbrace i,j,k\\rbrace \\subset \\lbrace 1...p\\rbrace $ $d_{ij} \\le d_{ik} + d_{kj}$ ).", "In our model we define the $p \\times p$ covariance matrix $$ as $= C(),$ for $\\in \\mathcal {D}_p$ , where $C(\\cdot )$ is a covariance function which maps distance matrices to $\\mathcal {S}^+_p$ , the space of $p \\times p$ symmetric positive definite matrices.", "To allow for flexibility in the covariance, we will treat the distance matrix of a graph as an unknown parameter to be estimated from the data.", "This in contrast to most spatial models, which take the distances between observations as known and fixed.", "Instead we will parameterize distance as a function of edge weights on the graph, which are estimated from the data.", "This approach allows us define covariance structures for graph data that are considerably more complex than ones obtained using the neighborhood structure of the graph alone.", "Covariance functions defined only by the distances between locations, and not the locations themselves, are said to be stationary and isotropic.", "However, as shown by the deformation approach of [6], such functions can be used to represent nonstationary and anisotropic spatial dependencies if the distances used as inputs are allowed to deviate from the fixed distances defined by the geographic space in which data are observed.", "Stationary and isotropic covariance functions are often defined such that they can be applied element-wise to a distance matrix.", "For $\\in \\mathcal {D}_p$ we can also define $$ by $\\Sigma _{ij} = c(d_{ij})$ where $c(\\cdot )$ is some function applied to a distance between two points.", "For $$ to be a valid covariance matrix it must be symmetric and positive definite, and thus $c(\\cdot )$ must be chosen carefully with respect to $$ .", "There are many well known covariance functions that can be used to produce positive definite matrices using distances as inputs.", "Functions from the Matérn covariance family, originally developed by [19] and 
given by $c_\\nu (d) = \\sigma ^2\\frac{2^{1-\\nu }}{\\Gamma (\\nu )}\\left( \\sqrt{2\\nu }\\frac{d}{\\tau }\\right)^\\nu K_\\nu \\left( \\sqrt{2\\nu }\\frac{d}{\\tau }\\right)$ are among the most commonly utilized within spatial statistics [1], [2].", "Here the input $d$ is a distance between two points, $\\sigma ^2$ is a scale parameter, $\\tau $ is a range parameter, $\\nu $ is a smoothness parameter, and $K_\\nu $ is a modified Bessel function of the second kind.", "The Matérn family includes the commonly used exponential and Gaussian (also known as the radial basis kernel or squared exponential) covariance functions [2].", "If $c(\\cdot )$ is a member of the Matérn family of covariance functions it is necessary for the the distances contained in $$ to be Euclidean in order to ensure that $$ is a positive definite matrix [20].", "Here we define the space of Euclidean distance matrices $\\mathcal {D}^E_p \\subset \\mathcal {D}_p$ such that if $\\in \\mathcal {D}^E_p$ , there exists $\\lbrace _1, ..., _p\\rbrace \\subset \\mathbb {R}^k$ for some $k$ such that $d_{ij} = \\Vert x_i - x_j\\Vert $ .", "In our model, for a given graph $G = (V,E)$ with $p$ nodes we will require that $$ , the matrix containing pairwise distances between the nodes of $G$ , satisfies $\\in \\mathcal {D}^E_p$ .", "This also implies that there exists an embedding of $G$ into a Euclidean space such that the graph distances are equal to the distances between nodes in the embedding.", "The fact that many of the most common covariance functions (including those from the Matérn family) require Euclidean distances as inputs to guarantee positive definiteness is reasonably well known, and can be confirmed when one considers Bochner's theorem, which provides a necessary and sufficient condition for functions on Euclidean vectors to be positive definite (see [21]).", "However, this requirement has at times been disregarded by researchers who were interested in utilizing common spatial modeling tools with non-Euclidean distances, which can result in non-positive definite covariance matrices and negative prediction variances in applications.", "Such scenarios arise especially often in analyses of graph and network data, where the most commonly used metrics, such as the shortest path distance, often result in non-Euclidean distances.", "[20] discusses this issue and provides several examples of published articles which failed to consider the non-compatibility of their chosen covariance functions and distance metrics." 
], [ "Graph Metrics", "To parameterize the covariance of random variables observed on a graph by applying the Matérn covariance function to a matrix of estimated distances, we must be careful that $$ is parameterized in a manner that ensures that it is Euclidean over the entirety of the parameter space.", "To do this, we must consider what graph metrics produce Euclidean distances and how their properties will influence model interpretation.", "Given an undirected, connected, simple graph $G = (V,E)$ with $p$ nodes, we define a class of distance matrices parameterized by the edge weights matrix $$ of $G$ .", "Many graph metrics are functions of $$ where $w_{ij} > 0$ if node $i$ is adjacent to node $j$ and $w_{ij} = 0$ otherwise [22].", "We define $\\mathcal {W}_G$ to be the space of all edge weights matrices possible given $G$ , and note that all information regarding the edges of $G$ is contained in each $\\in \\mathcal {W}_G$ based on whether or not an element of $$ is equal to zero.", "The distance matrix $$ for a graph $G$ is given by $= \\Delta ()$ where $\\Delta (\\cdot )$ is the function for a particular metric which maps an edge weight matrix to a distance matrix.", "For a given graph $G$ and metric $\\Delta (\\cdot )$ we define $\\mathcal {D}^\\Delta _G$ as the space of distance matrices which are possible to obtain under different edge weight configurations for $G$ , that is $\\mathcal {D}^\\Delta _G = \\lbrace =\\Delta (): \\in \\mathcal {W}_G\\rbrace $ .", "We have established that for $= C()$ to be a valid covariance matrix with $C(\\cdot )$ a Matérn family covariance function, $$ must be a Euclidean distance matrix.", "Thus, in choosing a graph metric $\\Delta (\\cdot )$ , we must ensure that $\\mathcal {D}^\\Delta _G \\subset \\mathcal {D}^E_p$ .", "In doing so, we can specify a parameterization of $$ in terms of an unobserved edge weights matrix $\\in \\mathcal {W}_G$ with the property that $= C(\\Delta ()) \\in \\mathcal {S}^+_p$ for all $\\in \\mathcal {W}_G$ .", "This also implies that because $= \\Delta () \\subset \\mathcal {D}_p^E$ , $$ characterizes an embedding of $G$ into Euclidean space.", "This embedding is conceptually similar to the mapping from the $\\mathcal {G}$ -space to the $\\mathcal {D}$ -space in the deformation approach.", "In Section 3 we establish that for appropriate choice of metric, our covariance model is identifiable, which implies that the embedding of $G$ implied by $$ is unique up to isometry.", "The most commonly used graph metric is the shortest path distance [23] (denoted here as $\\Delta ^{SP}(\\cdot )$ ) which defines the distance between nodes $i$ and $j$ as $d_{ij} = \\Delta ^{SP}()_{ij} = \\min _{p_{ij} \\in P_{ij}}\\left( \\sum _{e_{lk} \\in p_{ij}} w_{lk} \\right)$ where $P_{ij}$ is the set of all possible paths between nodes $i$ and $j$ , but can intuitively be described as the sum of the weights along the “shortest\" path connecting those nodes.", "However, this metric generally produces non-Euclidean distance matrices [24] and as such $= C()$ is not guaranteed to be positive definite if $C(\\cdot )$ is Matérn.", "A commonly used alternative graph metric is the resistance distance [23], denoted $\\Delta ^{\\Omega }(\\cdot )$ , where the distance between nodes $i$ and $j$ is defined as $d_{ij} = \\Delta ^\\Omega ()_{ij} = (_i - _j)^\\top ^+(_i-_j)$ where $^+$ is the Moore-Penrose generalized inverse of the graph Laplacian, defined as $= \\text{diag}(_p) - $ where $\\text{diag}(\\cdot )$ is a function taking a $p$ -length vector and 
returning a $p \\times p$ diagonal matrix, $_p$ is a vector of ones, and $\\lbrace _i\\rbrace _{1:p}$ are the standard basis vectors.", "This metric was introduced to the mathematical literature by [25] with roots in electrical physics and is based on the formula for calculating the effective resistance between nodes in a network of resistors.", "Attractive properties of resistance distance include the fact that the distance between two nodes is reduced by the number of paths connecting said nodes, a property that shortest path distance does not possess [25], and that it is proportional to commute time distance, i.e.", "the expected hitting time of a random walk between two nodes [26].", "Use of resistance distance is widespread within ecological settings as both of the above properties characterize a type of flow-based graph connectivity which corresponds nicely with existing models for the movement of individual organisms as well as animal populations [27], [28].", "Because $^+$ is always a positive semi-definite matrix, resistance distance may be viewed as a squared Euclidean (or Mahalanobis) distance.", "[24] highlight that a related class of metrics of the form $d_{ij} = \\Delta ^{(m)}()_{ij} = \\sqrt{(_i - _j)^\\top \\lbrace ^+\\rbrace ^m(_i-_j)}$ where $m>0$ is an exponent (i.e $\\lbrace ^+\\rbrace ^2 = ^+^+$ ), are guaranteed to produce Euclidean distance matrices for any $m>0$ and all $\\in \\mathcal {W}_G$ .", "Thus, all metrics of this form can be used in conjunction with any covariance function which requires Euclidean distances as inputs, such as those from the Matérn class.", "The square root of resistance distance is one of these metrics (when $m=1$ ), but we are particularly interested in using the second metric of this form (when $m=2$ )—often referred to as the quasi-Euclidean metric—because it shares many of the same properties regarding graph connectivity with resistance distance and has been demonstrated to be well behaved in comparison to other metrics, especially when used with planar graphs which are typically seen in spatial and ecological settings [29], [30].", "The quasi-Euclidean metric also scales linearly with the inverse of the edge weights, that is, $c= \\Delta ^{(2)}(/c)$ for $c > 0$ .", "This property is convenient for computation and enables greater parameter interpretability.", "Subsection 3.2 contains a numeric illustration of how the quasi-Euclidean metric operates within our model." 
], [ "Covariance Model", "Given a simple, undirected graph $G = (V,E)$ with $p$ nodes, we make repeated observations of the random variables $\\lbrace Y_v: v \\in V\\rbrace $ ; each repetition is recorded as $_i \\in \\mathbb {R}^p$ for $i \\in 1,...,n$ .", "These observations may be stored in the $n \\times p$ data matrix $$ , where $y_{ij}$ is the $i$ th observation at the $j$ th node of $G$ .", "We define $$ , the between-node covariance of each $_i$ , in terms of the intrinsic distances between the nodes of $G$ which are in turn a function of the known structure of the graph $(V,E)$ and unobserved edge weight parameter matrix $$ , which is estimated from the data.", "Our parameterization of $$ could be used in conjunction with any number of distributional assumptions for $$ , including temporal dependence or non-normality, but to make presentation concrete we initially define our model using the assumption that $\\lbrace _i\\rbrace _{1:n}$ are independent draws from a $N(_p,)$ distribution.", "We will discuss possible extensions later in the article.", "We parameterize $$ as follows: $\\begin{aligned}\\Sigma _{jk} &= \\sigma ^2 \\rho _\\nu (d_{jk}) \\\\d_{jk} &= \\sqrt{(_j - _k)^\\top \\lbrace ^+\\rbrace ^2(_j-_k)} \\\\&= \\text{diag}(_p) - \\\\\\sigma ^2 &> 0, \\; w_{jk} > 0 \\text{ if } j \\sim k, \\text{ else } w_{jk} = 0.\\end{aligned}$ Under this model, the data $$ comes from a mean-zero matrix-normal distribution with column covariance $$ and row covariance $_n$ .", "The parameter $$ is a $p \\times p$ edge weight matrix, $\\sigma ^2$ is a scale parameter, and $\\rho _\\nu (\\cdot )$ is the Matérn correlation function given below: $\\rho _\\nu (d) = \\frac{2^{1-\\nu }}{\\Gamma (\\nu )}\\left( \\sqrt{2\\nu }d\\right)^\\nu K_\\nu \\left( \\sqrt{2\\nu }d\\right).$ Common practice is to take the smoothness parameter $\\nu $ as fixed rather than as a parameter to be estimated.", "Within this article we will set $\\nu = 5/2$ , which implies a spatial process that is smoother than those given by an exponential covariance function, but less smooth than those defined with Gaussian covariance; however, any value of $\\nu > 0$ would be valid in this model [31].", "Note that the correlation function defined above does not include an explicit range or spatial-decay parameter.", "Equation REF , the standard form of the Matérn covariance function, contains the parameter $\\tau $ whereas the correlation function we are using, given in Equation REF , does not.", "Despite the lack of a parameter $\\tau $ , which directly scales the distances used as inputs in the correlation function, our parameterization of $$ implicitly controls for the rate of spatial decay thru the edge weight matrix $$ : Let $$ be the $p \\times p$ distance matrix produced by applying the quasi-Euclidean metric to the edge weight matrix $$ .", "The third and fourth lines of Equation REF show how $$ is obtained.", "Let $\\Delta ^{(2)}(\\cdot )$ be the function such that $= \\Delta ^{(2)}()$ .", "Recall that $c= \\Delta ^{(2)}(/c)$ for any positive constant $c$ .", "The distances in our model are directly scaled by the magnitude of the edge weights, which are themselves unrestricted above zero rendering an additional parameter $\\tau $ where correlation is a function of $d/\\tau $ unnecessary.", "To interpret our model's estimate of the spatial decay in the covariance of $$ , we recommend re-scaling the distance matrix $$ after fitting.", "If we define $^s = /\\text{max}()$ and $\\tau ^s = 1/\\text{max}()$ , then the covariance 
defined by $\\Sigma _{jk} = \\rho _\\nu (d^s_{jk}/\\tau ^s)$ is equal to the one defined $\\Sigma _{jk} = \\rho _\\nu (d_{jk})$ , and $\\tau ^s$ has equivalent interpretation to a range parameter in a covariance model for point-referenced data in which the geographic distances between observations were scaled to have a maximum distance of one.", "As parameterized, our model is identifiable: Proposition 1 (Identifiability) Using the parameterization for $$ given in Equation REF , $(\\sigma ^2_1, _1) \\ne (\\sigma ^2_2, _2)$ implies $_1 \\ne _2$ for all $\\sigma ^2_1,\\sigma ^2_2 > 0$ and all $_1, _2 \\in \\mathcal {W}_G$ .", "A proof of the model's identifiability is provided in the appendix of this article." ], [ "Interpretation", "Rather than characterizing nonstationary and anisotropic spatial covariance via the use of a complex covariance function or kernel, our model uses a relatively simple covariance function, and pushes most of the modeling complexity into the estimation of latent distances between graph nodes, which are themselves functions of a set of edge weight parameters; nodes that are close to one another are more correlated, while nodes that are far apart are less correlated.", "The metric we utilize to define distances over the graph was chosen to ensure that the resulting covariance matrix will be positive definite for any combination of edge weight parameters, but what of the edge weight parameters themselves?", "Generally speaking, larger edge weight parameters indicate greater correlation between nodes in the regions of the graph where that edge is found, while smaller edge weight parameters indicate lower correlation in those regions.", "When edge weights approach zero, the corresponding edges are effectively omitted from the graph, which may be thought of as a type of automatic model selection for the design of the graph itself.", "Figure REF contains an illustration of how edge weights impact covariance in our model.", "It depicts four sets of edge weights for the same five-node graph and shows the resulting correlation matrix when a Matérn correlation function with $\\nu = 5/2$ is applied to the distance matrix produced by the edge weights.", "Subfigures REF and REF highlight how the scale of edge weights matter, with larger edge weights producing greater between-node correlation.", "Subfigure REF demonstrates how it is possible to obtain nonstationarity when certain edge weights are larger than others, with nodes along the path from 4 to 1 to 2 exhibiting greater intercorrelation than the rest of the graph.", "Subfigure REF shows that it is possible under our model to obtain covariance structures where non-adjacent nodes (1 and 5) are more correlated (in effect closer together) than any other pair of first-order neighbors, which is possible due to the properties of the quasi-Euclidean metric, which characterizes the distances between nodes as shorter the greater the number of connecting paths between them.", "Figure: The covariance matrices for a simple graph resulting from several different edge weight configurations when using a Matérn covariance function with ν=5/2\\nu = 5/2." 
], [ "Comparison to CAR models", "The covariance model specified here is quite flexible, especially in contrast to the commonly used graph-based methods which treat all first-order neighbors as equally proximate, such as most CAR models.", "We illustrate this with a simple example.", "The correlation component of our model for $$ is determined fully by $\\in \\mathcal {W}_G$ , while $\\sigma ^2$ simply acts to scale the correlation matrix defined by $$ .", "Applying the quasi-Euclidean metric to $$ in turn characterizes $\\mathcal {D}^\\Delta _G,$ the space of distance matrices possible under the model, which in turn can be used to define $\\mathcal {S}^{(C \\circ \\Delta )}_G \\subset \\mathcal {S}^+_p$ , the subset of covariance matrices that can be obtained under our model for a given graph, metric, and covariance function.", "Fitting our model to data results in us finding the “best\" $\\hat{}$ in $\\mathcal {S}^{(C \\circ \\Delta )}$ to approximate $$ as defined by maximizing likelihood, minimizing Bayesian loss, or some other approach.", "Just as with our model, other specifications of $$ using fewer than $p(p-1)/2$ parameters (the number of free elements in a $p \\times p$ covariance matrix) characterize a proper subset of $\\mathcal {S}^+_p$ containing all covariance matrices that are possible to obtain under that model.", "Figure REF depicts a graph $G$ with five nodes, and two correlation matrices, $_1$ and $_2$ , which represent possible dependence structures for data observed on $G$ ; we note that neither $_1$ or $_2$ are elements of $\\mathcal {S}^{(C \\circ \\Delta )}_G$ .", "The matrix $_1$ depicts a correlation structure in which one side of $G$ (the path connecting nodes 4-1-2) exhibits greater correlation than the rest of the graph, while $_2$ depicts a correlation structure in which non-adjacent nodes (1 and 5) are more correlated with one another than any pair of adjacent nodes in $G$ .", "These two configurations of $$ were chosen to show types of graph nonstationarity which could be present in real world data, but are not meant to represent the full range of possible cases.", "In this example, we assume a multivariate normal, mean-zero distribution and find that best approximations of $_1$ and $_2$ from $\\mathcal {S}^{(C \\circ \\Delta )}_G$ minimize Kullback-Leibler (KL) divergence to a greater degree than the equivalent best approximations from the CAR models we considered.", "Specifically, $\\hat{}_1$ and $\\hat{}_2$ are obtained by identifying the optimal set of edge weights such that $\\hat{}_i = \\operatornamewithlimits{arg\\,min}_{} \\text{KL}(N(,),N(,_i)), \\; \\in \\mathcal {S}^{(C \\circ \\Delta )}_G, \\; i = 1,2$ Figure: A graph GG and two correlation matrices used for model fit comparisons.", "We identify the closest approximations of 1 _1 and 2 _2 from the subsets of correlation matrices defined by multiple models.The most widely used CAR variant is the first-order CAR model, which we will refer to as the CAR1 model; it incorporates only the basic adjacency structure of $G$ into its design and has a marginal distribution which defines $= \\sigma ^2(_n-\\kappa )^{-1}, \\quad \\sigma ^2 > 0, \\; \\kappa \\in (-1,1)$ where $$ is the fixed adjacency matrix of $G$ .", "Let $\\mathcal {S}^{\\text{CAR1}}_G$ be the space of covariance matrices possible to obtain under the CAR1 model.", "Given that the CAR1 model has only two parameters, it may be unfair to compare it to a more complex model such as the one proposed in this article, which has number of parameters 
equal to the number of edges in $G$ (8 in the case of $G$ from Figure REF ) plus one.", "We thus propose a second more complex CAR model, which we refer to as the weighted CAR or CARw model, which represents the maximally flexible correlation structure possible under the CAR framework, (as characterized by this article,) in which we treat all non-zero elements of the weights matrix as parameters, defining $= \\sigma ^2(\\text{diag}() - \\kappa )^{-1}, \\quad \\sigma ^2 >0, \\; \\kappa \\in (-1,1), \\; \\in \\mathcal {W}_G.$ Let $\\mathcal {S}^{\\text{CARw}}_G$ be the space of covariance matrices possible to obtain under this CAR model, which is specified using a weights matrix $$ for which edge weights are estimated individually, just as with the model described in Section 3.", "This type of autoregressive model is uncommon within the literature, though a similar specification appears in [32].", "In general, the limited number of existing CAR approaches utilizing a flexible weights matrix tend to define the elements of $$ as some function of environmental covariates, geographic distances, or the sizes of areal units rather than estimating individual edge weights directly [10], [33], [16].", "For each of the three models discussed above and the two correlation matrices from Figure REF , we approximated the correlation matrix from each model which minimizes the KL-divergence between the distributions characterized by the true $$ and the model estimate $\\hat{}$ .", "This was done by generating 100,000 samples from $N_5(_5,)$ and obtaining a Bayes estimator for $$ using the simulated data (the details for obtaining this estimator are described in the following subsection).", "The Bayes estimator has been shown to concentrate about the parameter value which minimizes KL-divergence between a model and the true distribution as sample size increases [34], [35].", "For this exercise we scaled each draw of $$ within our sampler to ensure that it was a correlation matrix; this was done to improve the fairness of the comparison between our model and the CAR models which will generally have heterogeneous diagonal variance if specified as in equations REF and REF .", "Table REF contains the KL-divergences between each model and the truth for $_1$ and $_2$ .", "$\\hat{}$ is the approximate optimal parameter under each model in terms of KL-divergence.", "Lower KL-divergence indicates that the subset of covariance matrices defined by a particular model gets closer to the true data-generating distribution.", "As can be seen, our model is able to more closely approximate both $_1$ and $_2$ than the CAR variants we considered.", "The CAR1 model performs notably worse than our method, which is unsurprising given that it uses far fewer parameters and the KL-divergence does not account for model complexity.", "The CARw model has one more parameter than our model, yet still performs worse for both $_1$ and $_2$ with the gap in performance being more apparent for $_2$ .", "This is due in part to the fact that our model has greater flexibility than most autoregressive models to produce covariance matrices for which non-adjacent nodes are more correlated with one another than to any of the intermediate nodes due how the quasi-Euclidean metric defines distance in the presence of multiple viable paths between locations.", "Table: KL(N(, i ^),N(, i ))\\text{KL}(N(,\\hat{_i}),N(,_i)) for each model and correlation matrix.", "The KL-divergence of the best performing model for each “true\" correlation matrix is in bold." 
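For completeness, the criterion used in this comparison has a closed form. The R helpers below (the names are ours) compute the KL divergence between two mean-zero multivariate normals and, for reference, the CAR1 covariance as specified above; they are sketches of the quantities described in the text rather than the exact code used to produce the table.

# KL( N(0, S_hat) || N(0, S_true) ) between mean-zero p-variate normals
kl_mvn0 <- function(S_hat, S_true) {
  p <- nrow(S_true)
  ld_true <- as.numeric(determinant(S_true, logarithm = TRUE)$modulus)
  ld_hat  <- as.numeric(determinant(S_hat,  logarithm = TRUE)$modulus)
  0.5 * (sum(diag(solve(S_true, S_hat))) - p + ld_true - ld_hat)
}

# CAR1 covariance for an adjacency matrix A, following the specification above
sigma_car1 <- function(sigma2, kappa, A) sigma2 * solve(diag(nrow(A)) - kappa * A)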
], [ "Priors and Model Fitting", "While one could obtain parameter estimates using maximum likelihood estimatation, the complexity of the relationship between edge weights and the covariance matrix makes maximizing the likelihood challenging in comparison to Bayes estimation using a Markov chain Monte Carlo (MCMC) approach.", "We suggest priors in accordance with the parameter space of our model which requires that $\\sigma ^2$ and all edge weight parameters $\\lbrace w_{jk}: j \\sim k\\rbrace $ must be positive real numbers.", "If the nodes of the graph correspond to a known orientation in physical space (as is frequently the case when dealing with areal data) one can obtain the geographic distance matrix $^{Geo}$ by calculating the physical distances between centroids of areal units.", "It is then possible to encode this geographic information into the prior for the edge weights, which may be helpful for model fitting, especially when dealing with a limited number of observations of the process over the graph.", "In such cases, we propose using the following priors: $\\begin{aligned}\\sigma ^2 &\\sim \\text{Inverse-gamma}(a_{\\sigma ^2},b_{\\sigma ^2}) \\\\w_{jk} &\\sim \\text{Gamma}(a_w/d^{Geo}_{jk},b_w) \\text{ if } j \\sim k,\\text{ else } w_{jk} = 0.\\end{aligned}$ Here $d^{Geo}_{jk}$ is the geographic distance between nodes $j$ and $k$ , and the prior expectation of the associated edge weight is $a_w/(d^{Geo}_{jk}b_w)$ which is smaller for adjacent nodes which are far apart than for adjacent nodes which are closer in physical space.", "In instances where no geographic information is available about the graph on which data is observed, it is appropriate to simply use i.i.d.", "gamma priors for the edge weights.", "The choice of inverse-gamma prior for $\\sigma ^2$ is natural when $$ has a normal likelihood due to built-in conjugacy.", "The choice to use independent gamma priors for all edge weights is somewhat simplistic, as it may be reasonable to assume some level of dependence between the weights of incident edges.", "The MCMC sampler we use to fit this model is provided in Algorithm REF , which describes how to obtain posterior samples for $\\sigma ^2$ and edge weight matrix $$ .", "In essence, we use the full conditional inverse-gamma distribution of $\\sigma ^2$ to perform a Gibbs update at each iteration, and perform a block Metropolis-Hastings update for the edge weights, in which new values for groups of edges are proposed and either accepted or rejected.", "[t!]", "MCMC Sampler for Covariance Model Input: $$ an $n \\times p$ data matrix, and $G$ a graph with $p$ nodes Output: $T$ posterior samples for model parameters $\\sigma ^2$ and $$ Initialize $\\sigma ^{2(0)}$ and $^{(0)}$ and set $^{(0)} = c_\\nu (\\sigma ^{2(0)},^{(0)})$ $t = 1 \\text{ to } T$ Update $\\sigma ^{2(t)}$ by taking draw from full conditional $\\pi (\\sigma ^{2(t)}|,^{(t-1)})$ Set $^{cur} = ^{(t-1)}$ Set $^{cur} = c_\\nu (\\sigma ^{2(t)},^{(t-1)})$ $j = 1 \\text{ to } p$ Set $^* = ^{cur}$ Define $K = \\lbrace k: j \\sim k\\rbrace $ (set of nodes adjacent to node $j$ ) $k \\in K$ Propose $w_{jk}^*$ from $g(w_{jk}^*|w_{jk}^{cur})$ and set $w_{kj}^* = w_{jk}^*$ Set $^* = c_\\nu (\\sigma ^{2(t)},^*)$ Compute $\\alpha = \\frac{ \\displaystyle f(|^*) \\prod _{k \\in K}\\pi (w_{jk}^*) g(w_{jk}^*|w_{jk}^{cur})}{ \\displaystyle f(|^{cur}) \\prod _{k \\in K}\\pi (w_{jk}^{cur}) g(w_{jk}^{cur}|w_{jk}^*)}$ Generate $u$ from $U(0,1)$ $u < \\text{min}(1,\\alpha )$ $w_{jk}^{cur} \\leftarrow w_{jk}^*$ and set 
$w_{jk}^{cur} = w_{kj}^{cur}$ for all $k \\in K$ $^{cur} \\leftarrow ^*$ $^{(t)} \\leftarrow ^{cur}$ $^{(t)} \\leftarrow ^{cur}$ The model is complex and the sampler used is somewhat computationally expensive, but we have found that it mixes well and within reasonable time periods for most small and medium sized graphs ($p < 500$ nodes).", "The data for the application described in the following section had $n = 36$ observations and $p =100$ nodes.", "It required approximately three hours to obtain 2500 iterations of the sampler using code written in R and run single threaded on the lead author's personal laptop equipped with an Intel Core i7-9750H processor.", "The code used for our analysis is included with the supplemental materials." ], [ "An Application Using NC eBird Data", "Returning to the data discussed in the introduction of this article, and which motivated the development of this method, we now fit our model to bird abundance data within the state of North Carolina.", "As alluded to in our introduction, there are varying patterns of between-site dependence among species within this spatial domain, and we are interested in obtaining the species-specific intrinsic distances which characterize that spatial dependence.", "The data were retrieved from the eBird database, a project of the Cornell Lab of Ornithology that allows bird watchers across the globe to record and submit their bird watching observations in the form of checklists that contain the specimen count for each observed species as well as the observer's location and “effort,\" or time spent bird watching [4].", "The model was fit to the data for several common species within the state of North Carolina during the time period from January 2018 to December 2020.", "The species considered were the northern cardinal, the Carolina wren, the mourning dove, the turkey vulture, the Canada goose, the laughing gull, and the mallard.", "In addition to being abundant throughout the state, these species were chosen as they represent a considerable range of genetic and habitat diversity.", "For each species in the data set, observed counts were aggregated temporally by month and spatially by county.", "The data was stored in a $36 \\times 100$ matrix $$ , in which $x_{ij}$ represents the combined number of individual birds from that species observed during month $i = 1,...,36$ within the borders of county $j = 1,...,100$ by all birdwatchers contributing to the database.", "In addition to the species-specific counts matrices, there is a $36 \\times 100$ effort matrix $$ common to all species, where $t_{ij}$ contains the total time spent bird watching during month $i$ within county $j$ , as well as a geographic distance matrix $^{Geo}$ , obtained by calculating the physical distances between the geographical centroid of all bird watching locations within each county.", "To account for non-uniformity in $$ across counties, we transform the observed counts to obtain $$ , the $36 \\times 100$ matrix of log-rates at which species were observed.", "We define $y_{ij} = \\text{log}([x_{ij}+.5]/t_{ij})$ .", "After subtracting the row means from $$ to account for temporal variation in bird abundance, we find that the residuals given by $- \\lbrace _{100}_{100}^\\top /100\\rbrace $ have histograms that are approximately bell-shaped for all seven species, leading us to the choice to model $$ as coming from a normal distribution.", "By treating the 100 counties of North Carolina as nodes, with 256 edges connecting each pair of adjacent counties, 
we obtain the graph representation of the state depicted in Figure REF .", "Figure: North Carolina's 100 counties represented as a graphFor each species, we model $$ using the following spatial random effects model with temporally-varying mean to account for seasonal changes in bird abundance: $\\begin{aligned}&= \\mu _n _p^\\top + _p^\\top + + \\\\&\\sim N_{n\\times p}(_{n \\times p}, (\\sigma ^2,) \\otimes _n) \\\\&\\sim N_{n\\times p}(_{n \\times p}, \\psi _{np}) \\\\\\mu &\\in \\mathbb {R}, \\; \\; \\in \\mathbb {R}^n, \\; \\; \\sigma ^2,\\psi >0, \\; \\; \\in \\mathcal {W}_G.\\end{aligned}$ Here $\\mu $ is an intercept term and $\\alpha _i$ is the temporally-varying mean effect for month $i$ .", "The spatial random effects matrix $$ has independent rows and between-county covariance given by $$ as defined as in equation REF ; $$ is a matrix of iid Gaussian noise.", "Of primary interest to us is $$ and the underlying distance matrix $$ , which can be viewed as containing the species-specific intrinsic distances between counties in North Carolina, with other model parameters primarily serving to enable an appropriate fit for $$ .", "Note that the model as written does not include any environmental covariates, although it would be straightforward to include them in the mean of the linear model.", "It could also be of interest to include environmental covariates in the estimation of $$ , a point which we will discuss in Section 5.", "We use the priors given in equation REF for $\\sigma ^2$ and $$ and use the following priors for the remaining model parameters: $\\begin{aligned}\\mu &\\sim N(0,\\theta _0) \\\\&\\sim N(_n, \\theta _\\alpha _n) \\\\\\psi &\\sim \\text{Inverse-gamma}(a_{\\psi },b_{\\psi }) \\\\\\theta _{\\alpha } &\\sim \\text{Inverse-gamma}(a_{\\alpha },b_{\\alpha }) \\\\\\theta _{\\beta } &\\sim \\text{Inverse-gamma}(a_{\\beta },b_{\\beta })\\end{aligned}$ To ensure identifiability of the mean effects we fix $\\alpha _1 = 0$ .", "In our implementation of this model we set prior parameters $a_{\\sigma ^2} = a_w = a_\\psi = a_\\alpha = a_\\beta = 2$ and $b_{\\sigma ^2} = b_w = b_\\psi = b_\\alpha = b_\\beta = 1$ .", "Because these priors result in fully conjugate conditional distributions, this model can be fit using the sampler provided in Algorithm REF with the addition of Gibbs steps for updating the parameters in equation REF ." 
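The preprocessing feeding this model is easy to express in code. The R sketch below uses hypothetical placeholder matrices X (counts) and TT (effort) of the stated dimensions, applies the log-rate transform $y_{ij} = \log([x_{ij}+0.5]/t_{ij})$, and removes the monthly means as described; only the transform itself is taken from the text, the simulated inputs are stand-ins for the eBird-derived matrices.

n <- 36; p <- 100
X  <- matrix(rpois(n * p, lambda = 20), n, p)            # hypothetical monthly counts
TT <- matrix(rgamma(n * p, shape = 50, rate = 1), n, p)  # hypothetical effort (hours)

Y  <- log((X + 0.5) / TT)               # log observation rates
Yc <- Y - rowMeans(Y) %o% rep(1, p)     # subtract row (monthly) means, i.e. Y - Y 1 1'/p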
], [ "Interpretation of Results", "To interpret the output of our model (which we subsequently refer to as the nonstationary graph covariance or NSGC model), it is useful to have a cursory understanding of the geography of North Carolina.", "The majority of physical geographic variability within the state of North Carolina tends to run on an east to west axis.", "Figure REF depicts the four main geographic regions of the state, with the easternmost tidewater region, followed by the inner coastal plain, the Piedmont plateau in the center of the state, with the Blue Ridge mountains running through the westernmost part of North Carolina [36], [37].", "Figure: The four main geographic regions of the state of North Carolina.", "Image retrieved from NCpedia (ncpedia.org) and accessed on 20 February 2022.We are interested in how these geographic regions may manifest themselves within the intrinsic distances across the state for each species.", "Are there species for which distances along the eastern coast of the state are greater or smaller relative to the intrinsic distances from the coast moving inland?", "Are there species for which the mountain ranges in the west of the state significantly impact their spatial distribution while other species are unaffected?", "Figure REF depicts the approximate posterior distribution for selected elements of the scaled posterior distance matrix for each species.", "Subfigure REF depicts the state of North Carolina and four line segments labeled “A\", “B\", “C\", and “D\", with these segments chosen to reflect various physical regions within the state, or the borders between them.", "For each of the seven bird species, there is a different distance associated with each of the four line segments; these distances are shorter, and the counties at their endpoints are “closer\" to one another if there is high correlation in the bird counts between locations.", "Subfigures REF and REF show the posterior distributions of the intrinsic distances associated with two pairings of those segments.", "Both of these pairings corresponds to a geographic contrast that may be useful for understanding how these species spatial distributions interact with the physical properties of the environments they inhabit.", "We see that the intrinsic distances associated with each of the segments differ from species to species, with the posterior for the laughing gull being more divergent from the other six species in both subfigures.", "This divergence makes sense when one considers that the laughing gull it is the only seabird of the seven species, having a natural range of habitats with the least overlap among the species considered [37].", "We see this in subfigure REF , which shows that for a laughing gull, the distance associated with line segment “D,\" which runs along the coast, is relatively shorter when compared to the other species, while the distance associated with “C,\" which crosses the border from the tidewater region inland to the coastal plains, is relatively longer.", "This can be interpreted to mean that locations along the coast are “closer\" together for this species than the other six, and indicates that proximity to the ocean influences the between-county covariance for laughing gulls to a greater degree than for the other species.", "Subfigure REF highlights the contrast between the distances associated with segment “A,\" which spans across a mountain range, and “B\" which runs along the relatively flat Piedmont.", "Birds with posteriors concentrated at higher 
values along the x-axis of this plot, such as the Carolina wren, may be more inhibited in movement by the changes in elevation and habitat along segment “A\" than the others.", "Figure: Visualization of the uncertainty in posterior distances associated with four line segments for each bird speciesWe would also like to visualize the entire mean posterior distance matrix for each species.", "Because this matrix corresponds to a high dimensional Euclidean embedding of the graph, it is challenging to represent with a single plot and without significant distortion.", "We have found that plots of the posterior mean edge weights are not straightforward to interpret due to issues of scale and dependence on the number of nodes within a given neighborhood.", "Instead we instead present a visualization which highlights the contrast between the posterior mean intrinsic distance and the physical distance between each pair of adjacent counties; this provides us not only a sense of the relative distances between counties, but more significantly of the deformation of the graph's geographic orientation necessary to produce the intrinsic distances.", "We have done this by calculating $z^d_{jk}$ , a score designed to represent whether the intrinsic distance between adjacent nodes $j$ and $k$ is smaller or greater than suggested by the physical distance between those nodes.", "$z^d_{jk} = Z(^{Geo}(E)) - Z(\\overline{((E)|)})_{jk}$ Here $(E)$ indicates the set of distances between nodes connected by edges of our graph, and $\\overline{((E)|)})$ is the set of posterior means for those same distances.", "The function $Z(\\cdot )$ scales a set to have mean zero and standard deviation one.", "High values of $z^d_{jk}$ indicate that nodes $j$ and $k$ are intrinsically “closer\" together relative to their physical separation, while low values indicate that they are more distant.", "Figure: Maps of d ^d for Carolina Wren and Laughing Gull.", "Warm colors indicate areas which are “closer\" than suggested by their geographic distance and cool colors more distant.Figure REF contains two plots depicting $z^d_{jk}$ for all adjacent counties.", "Subfigure REF contains a map of North Carolina for the Carolina wren, a species present throughout the state, but is less common in colder regions and at higher altitudes [37].", "We note the band of dark blue line segments running approximately along longitude $-82^\\circ E$ , which indicates that observations of Carolina wren on either side of this border are less correlated than one would expect if they believed the data generating process to be stationary.", "Significantly, that border tracks along a similar line between the mountainous and plateau regions of the state, revealing a physical explanation for what we have observed from our model's output.", "Subfigure REF depicts a similar map for the laughing gull, which is found most abundantly along the coast of the state, and becomes progressively less common as one moves inland [37].", "The edges along the coast are red and orange, indicating greater connectivity, with a blue band running parallel to the coast at approximately $-77^\\circ E$ , along the border between the coast and coastal plains regions of the state.", "This may be viewed as a sort of buffer-zone which inhibits movement or between site dependence from the coast inland.", "There is another teal band along $-79^\\circ E$ , where the coastal plain meets the Piedmont plateau, which may correspond to a second buffer zone which inhibits further westward 
movement.", "It is worth noting that laughing gulls west of that longitude are extremely rare within the data.", "For both species, these plots reveal significant interactions between their spatial distributions and the geography of where they were observed, despite the fact that no environmental covariates were included in this version the model, a point on which we will elaborate in the following section.", "Taken together, the plots (along with the plots not included for the other five species analyzed) reveal that there is a unique covariance structure and underlying distance between locations associated with each species.", "An additional quantity of interest in many spatial data applications is the ratio $\\sigma ^2/(\\sigma ^2+\\psi )$ which may be thought of as the percentage of total variability attributable to the spatial process (in comparison to the nugget effect represented by $\\psi $ ).", "Table REF contains a posterior summary of this ratio for each of the seven bird species.", "We note that it is particularly low for the Mallard, indicating a weak spatial signal in the data for that particular species, a result consistent with findings made during exploratory analysis of the empirical variograms of the data.", "Table: Posterior mean and 95% credible intervals of ratio σ 2 /(σ 2 +ψ)\\sigma ^2/(\\sigma ^2+\\psi ) for all seven analyzed species" ], [ "Fitted Model Comparison", "We are interested in evaluating the performance of our model on this dataset in comparison to existing methods for similar datasets, many of which have already been discussed within this article.", "In addition to the NSGC model, we fit the following four models for $$ to the data for each of the seven bird species.", "Other than the specification of $$ , each model was fit in the same manner according to equations REF and REF .", "$\\begin{aligned}\\text{Naive: } &= \\sigma ^2 \\rho _{5/2}(^{Geo}/ \\tau ^2) \\\\\\text{CAR1: } &= \\sigma ^2 (_p - \\kappa )^{-1} \\\\\\text{SAR1: } &= \\sigma ^2 [(_p - \\kappa )(_p - \\kappa )]^{-1} \\\\\\text{CAR4Dr: }&= \\sigma ^2 (\\text{diag}([_4 \\odot \\text{exp}(-_{Geo}/\\tau ^2)]_p) - \\kappa [_4 \\odot \\text{exp}(-_{Geo}/\\tau ^2)])^{-1}\\end{aligned}$ The notation here is as before, with $_4$ denoting the adjacency matrix for $G$ containing first thru fourth-order neighbors—that is to say $a_{4jk} = 1$ if there is a path with four or fewer edges connecting nodes $j$ and $k$ and $a_{4jk} = 0$ otherwise—and $\\odot $ indicating the Hadamard product.", "The notation $\\text{exp}(-_G/\\tau ^2)$ indicates that the exponential function is applied element-wise to the matrix $-_G/\\tau ^2$ .", "The naive model applies a Matérn covariance to the geographic distances between county centroids, treating the data as though it were point-referenced rather than areal.", "The CAR1 model is the basic CAR model described in section 3.3, while the SAR1 model is another common autoregressive model that tends to produce smoother covariances than the CAR1 [12].", "The CAR4Dr model is the best performing model from [16], a review paper which discusses and compares sixteen different autoregressive models for spatial data on graphs.", "It utilizes a higher order neighborhood structure, which is in turn weighted by the geographic distances between areal regions.", "The CAR4Dr model also uses row-standardization for its weight matrix, a common practice in the autoregressive model literature [12].", "In fitting these four models the same priors given in equation REF were used for all 
parameters not involved in the construction of $$ .", "Of the remaining parameters, and using the notation from equation REF each instance of $\\sigma ^2$ was given a diffuse inverse-gamma prior, $\\tau ^2$ was given a diffuse gamma prior, and $\\kappa $ a uniform prior on (-1,1).", "As before, the full conditional of $\\sigma ^2$ results in a Gibbs update, while the other two parameters can be sampled using a Metropolis-Hastings step.", "Upon obtaining posterior samples from each model for each species, we can analyze them to assess model performance.", "The Watanabe-Akaike information criteria (WAIC, also referred to at times as the “widely applicable information criteria\") was introduced by [38], and is designed to approximate the results of leave-one-out cross validation.", "It also tends to penalize model complexity to a greater degree than other common information criteria such as AIC and DIC.", "[35] discusses WAIC and other information criteria within a Bayesian context, indicating a preference for WAIC, finding that it is reasonably consistent with the results given by cross-validation while being considerably cheaper to compute.", "Table REF contains the WAICs for each of the five models when fit to each of the seven bird species we analyzed.", "As can be seen, our method (NSGC) performs best by this criteria for five of the seven species considered despite considerably greater model complexity, which suggests that the spatial dependence present in the data for several species is more complex than can be represented using simpler methods.", "The two birds for which our model did not perform best according to WAIC were the mourning dove and the mallard.", "In the case of the Mourning Dove, the NSGC model had the highest log likelihood, but the model complexity penalty of WAIC lowered it into second place, indicating a relatively simple dependence structure for that species.", "Our model performed the worst of the five for the mallard data, and even had the lowest log-likelihood among the five models.", "This is likely due to the relative lack of spatial structure in the Mallard data as discussed in Section 4.2.", "We note as well that the range of WAIC values across all five models for the Mallard data is extremely narrow in comparison to the ranges present for other species; this makes sense considering that the choice of how to model spatial random effects is irrelevant in cases where no meaningful spatial effect is present.", "Table: WAIC comparisons for all models and species.", "Lowest WAIC, indicating best model fit, for each species in bold." 
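For reference, WAIC can be computed from a matrix of pointwise log-likelihood evaluations over posterior draws; the helper below follows the standard formulation (log pointwise predictive density minus the variance-based complexity penalty) and is our own sketch, not code taken from [38] or [35].

# ll: S x n matrix with ll[s, i] = log p(y_i | theta^(s)) for posterior draw s
waic <- function(ll) {
  lppd   <- sum(log(colMeans(exp(ll))))   # log pointwise predictive density
  p_waic <- sum(apply(ll, 2, var))        # effective number of parameters
  -2 * (lppd - p_waic)
}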
], [ "Discussion", "We now briefly discuss a few potential applications, extensions, and questions related to the ideas presented within this article.", "One of the most significant pieces of output from our model is the posterior distance matrix.", "By itself, it can be analyzed to better understand the relationships between nodes of a graph in terms of their latent proximity to one another; but as was the case in our application with the eBird data, there are instances in which one may fit this model repeatedly for each of several variables (unique bird species in the case of the eBird example) which are observed on the same graph.", "In such cases, an analysis of how the posterior distance matrices for each individual variable compare to one another may reveal significant information about the relationships between observed variables.", "[39] presents a framework for analyzing multiple distance matrices computed on the same set of objects.", "Such a framework could be used for example to perform a clustering analysis of every bird species in North Carolina based on their unique distance matrices.", "As noted earlier, environmental covariates were not directly included within our model.", "Yet our interpretation of the model output from section 4.2 focused heavily on the physical geography of the state of North Carolina.", "In some senses our model can be thought of as a method by which one can reveal unobserved covariates which may impact between site covariance, similarly how we were able to “see\" the physical regions of North Carolina in the model output despite such information not being included in the model itself.", "While it would be relatively straightforward to obtain a collection of county-level environmental covariates which may impact bird abundance in North Carolina, it is not difficult to imagine circumstances under which one may have a data set and a spatial graph structure on which it was observed, but no covariate information or intuition regarding an appropriate set of covariates to include in a model.", "Our method is able to account for the spatial dependence that may be attributable to those missing covariates, and a posteriori analysis of the distance matrix may enable one to learn about the relationship between covariates and covariance.", "That said, in many instances we will be interested in the inclusion of environmental covariates in our model.", "We may wish to incorporate them into the mean via a linear model, and allow $$ to represent the covariance between observations after accounting for environmental covariates.", "Alternatively, we may be interested in incorporating environmental covariates directly into the specification of $$ itself.", "There are many instances of researchers including environmental covariates in their spatial covariance model as a method of representing covariate-based nonstationarity as discussed in [5].", "Such approaches have also been used in CAR models as in [10] and [33].", "A reanalysis of the eBird data using an adaptation of our method which accounts for the relationship between edge weight parameters and environmental covariates could be fascinating as a tool understand what factors most influence spatial connectivity for each species.", "Within this article we limited ourselves to cases with normally distributed data.", "There is however a wealth of literature for spatially dependant non-Gaussian models.", "The generalized CAR framework of [8] could be straightforwardly adapted for use with our covariance model.", 
"Species observation data is often zero inflated; accounting for this as in [40] could prove beneficial to model quality in applied settings.", "Other potential extensions include joint species modeling using our methodology.", "One approach could be to perform factor analysis [41], [42] on the model parameters characterizing species-specific covariance enabling one to better understand which species interact with the spatial domain similarly in terms of covariance and intrinsic distances, while providing a natural approach to modelling the intrinsic distances for each species hierarchically.", "The method described in this article provides a flexible and novel framework for modeling the covariance of data observed on a graph structure.", "Its development was inspired by the spatial deformation approach of [6] and the posterior distance matrix produced by fitting our model may be interpreted as containing the intrinsic distances between the nodes of a graph.", "We have demonstrated that our model is equipped to represent nonstationary and anisotropic spatial dependence, and that there are contexts in which it performs better than advanced versions of autoregressive models which are the standard tools for modeling spatially dependent graph data.", "We also find the specification of between-node covariance in terms of intrinsic distances a useful framing for understanding and interpreting model results, as exemplified by our discussion of model output in section 4.2 in which we found that an analysis of the posterior distance matrix revealed connections between the covariance in the eBird dataset and the physical geography of the spatial domain on which it was observed." ], [ "Appendix (Proof of Proposition 1)", "Proposition 1 (Identifiability) Using the parameterization for $$ given in equation REF , $(\\sigma ^2_1, _1) \\ne (\\sigma ^2_2, _2) \\Rightarrow _1 \\ne _2$ for all $\\sigma ^2_1,\\sigma ^2_2 > 0$ and all $_1, _2 \\in \\mathcal {W}_G$ .", "The equation relating $$ to model parameters $(\\sigma ^2,)$ is provided again for reference: $\\begin{aligned}\\Sigma _{jk} &= \\sigma ^2 \\rho _\\nu (d_{ij}) \\\\d_{jk} &= \\sqrt{(_j - _k)^\\top \\lbrace ^+\\rbrace ^2(_j-_k)} \\\\&= \\text{diag}(_p) - \\end{aligned}$ Because $\\rho _\\nu (\\cdot )$ is a correlation function, and the distance from any node to itself is zero, the diagonal elements of $$ are equal to $\\sigma ^2$ .", "Thus $\\sigma ^2_1 \\ne \\sigma ^2_2 \\Rightarrow _1 \\ne _2$ .", "For two distance matrices $_1 \\ne _2$ there exists some pair of nodes $(j,k)$ such that $d_{1jk} \\ne d_{2jk}$ .", "Because $\\rho _\\nu (\\cdot )$ is a strictly decreasing function, $d_{1jk} \\ne d_{2jk} \\Rightarrow \\Sigma _{1jk} \\ne \\Sigma _{2jk}$ .", "We now need only to prove that $_1 \\ne _2 \\Rightarrow _1 \\ne _2$ .", "The transformation from edge weights matrix to to distance matrix as defined by the quasi-Euclidean metric can be rewritten as follows: $\\begin{aligned}&= \\left( _p _{\\lbrace ^+\\rbrace ^2}^\\top + _{\\lbrace ^+\\rbrace ^2} _p^\\top - 2\\lbrace ^+\\rbrace ^2 \\right)^{\\circ \\frac{1}{2}} \\\\&= \\text{diag}(_p) - \\end{aligned}$ Here $_{\\lbrace ^+\\rbrace ^2}$ is defined to be the vector with elements equal to the diagonal entries of $\\lbrace ^+\\rbrace ^2$ and $(\\cdot )^{\\circ \\frac{1}{2}}$ denotes the Hadamard (element-wise) square root of a matrix.", "The Hadamard root is an invertible transformation ($_1 \\ne _2 \\Leftrightarrow _1^{\\circ \\frac{1}{2}} \\ne _2^{\\circ \\frac{1}{2}}$ for all matrices $_1,_2$ ) 
but the transformation $= \\Delta ()$ for any $p \\times p$ matrix $$ , given below, is not invertible.", "$= \\Delta () = ^\\top _+ _^\\top - 2$ This transformation is fairly well known in other statistical applications involving distances, and appears within the formulation of the quasi-Euclidean metric.", "A linear transformation $f(\\cdot )$ is injective iff its null space $\\mathcal {N}(f) =\\lbrace \\rbrace $ .", "The null space of $\\Delta $ is $\\mathcal {N}(\\Delta ) = \\lbrace : = ^\\top + ^\\top , \\in \\mathbb {R}^p\\rbrace $ as demonstrated below: $\\begin{aligned}&\\text{Suppose } \\Delta () = _{p \\times p}: \\\\&\\Rightarrow = ^{\\prime }_/2 + _^{\\prime }/2 \\\\&\\forall \\in \\mathbb {R}^p, \\text{ let } = ^\\top + ^\\top \\\\&\\Rightarrow _= 2\\\\&\\Rightarrow = ^{\\prime }_/2 + _^{\\prime }/2 \\\\& \\Rightarrow \\mathcal {N}(\\Delta ) = \\lbrace : = ^\\top + ^\\top , \\in \\mathbb {R}^p\\rbrace \\end{aligned}$ Let $\\mathcal {A}_p$ be the space of $p \\times p$ symmetric matrices such that $\\forall \\in \\mathcal {A}_p,$ $_p = _p$ .", "We note that for any graph with $p$ nodes, both its Laplacian matrix $$ and $\\lbrace ^+\\rbrace ^2$ are elements of $\\mathcal {A}_p$ .", "It can be seen that $\\mathcal {N}(\\Delta ) \\cap \\mathcal {A}_p = \\lbrace _{p \\times p} \\rbrace $ , which implies that $\\forall \\in \\mathcal {A}_p$ , $\\Delta () = _{p \\times p} \\Rightarrow = _{p \\times p}$ .", "We now show that this condition implies the injectivity of $\\Delta $ over $\\mathcal {A}_p$ .", "$\\begin{aligned}&\\text{Suppose } (\\forall \\in \\mathcal {A}_p, \\; \\Delta () = \\Rightarrow = ): \\\\&\\text{If } \\Delta (_1) = \\Delta (_2) \\text{ and } _1,_2 \\in \\mathcal {A}\\\\&\\Rightarrow \\Delta (_1) - \\Delta (_2) = \\\\&\\Rightarrow \\Delta (_1 - _2) = \\quad (\\text{b.c. }", "\\Delta \\text{ is linear}) \\\\&\\Rightarrow _1 -_2 = \\quad (\\text{from supposition, note that } (_1 - _2) \\in \\mathcal {A}_p)\\\\&\\Rightarrow _1 = _2\\\\&\\Rightarrow \\Delta \\text{ is injective over } \\mathcal {A}_p\\end{aligned}$ Because $\\lbrace ^+\\rbrace ^2 \\in \\mathcal {A}_p$ , $\\lbrace ^+\\rbrace ^2_1 \\ne \\lbrace ^+\\rbrace ^2_2 \\Rightarrow _1 \\ne _2$ .", "The uniqueness of the Moore-Penrose inverse in conjunction with the fact that Laplacian matrices are always positive semi-definite means that the transformation from $$ to $\\lbrace ^+\\rbrace ^2$ is injective.", "From the definition $=\\text{diag}() - $ , it is clear that $_1 \\ne _2 \\Rightarrow _1 \\ne _2$ , and thus that $(\\sigma ^2_1, _1) \\ne (\\sigma ^2_2, _2) \\Rightarrow _1 \\ne _2$ for all $\\sigma ^2_1,\\sigma ^2_2 > 0$ and all $_1, _2 \\in \\mathcal {W}_G$ ." ] ]
2207.10513
[ [ "Higher-order mean-field theory of chiral waveguide QED" ], [ "Abstract Waveguide QED with cold atoms provides a potent platform for the study of non-equilibrium, many-body, and open-system quantum dynamics.", "Even with weak coupling and strong photon loss, the collective enhancement of light-atom interactions leads to strong correlations of photons arising in transmission, as shown in recent experiments.", "Here we apply an improved mean-field theory based on higher-order cumulant expansions to describe the experimentally relevant, but theoretically elusive, regime of weak coupling and strong driving of large ensembles.", "We determine the transmitted power, squeezing spectra and the degree of second-order coherence, and systematically check the convergence of the results by comparing expansions that truncate cumulants of few-particle correlations at increasing order.", "This reveals the important role of many-body and long-range correlations between atoms in steady state.", "Our approach allows to quantify the trade-off between anti-bunching and output power in previously inaccessible parameter regimes.", "Calculated squeezing spectra show good agreement with measured data, as we present here." ], [ "Introduction", "The strong and tunable interactions among photons and atoms achievable in engineered nano-photonic structures present exciting prospects for fundamental studies in non-equilibrium many-body physics and for applications in quantum technology [1], [2], [3].", "Waveguide QED [4], [5], [6], [7], [8], specifically, offers unique opportunities to study the propagation of light in highly nonlinear media and in the realm of collective coupling with atoms [9], [10], [11], [12], [13], [14], [15].", "A distinctive feature of QED with nanophotonic waveguides is the possibility of realizing a chiral light-matter interaction in which atoms couple exclusively to photons propagating unidirectionally [16], [17], [18], [19], [20], [21].", "It was shown that pulse propagation through an ensemble of non-interacting atoms strongly and chirally coupled to a waveguide is governed by a rich structure of multi-photon states that can lead to time-ordered many-body states of light [22], [23].", "Remarkably, even in the case of weak coupling, where photons are predominantly scattered out of the waveguide, the interplay of losses with the nonlinearity of atoms results in strong correlations of the light [24].", "In recent experiments, these photon-photon correlations have been demonstrated in the form of (anti)-bunching [13] and squeezing [14] in light transmitted through an ensemble of two-level systems that were weakly and chirally coupled to a waveguide.", "Due to their non-equilibrium, many-body, and open-system dynamics, the theoretical description of such experiments is a major challenge at present.", "For lossless chiral systems, the scattering problem can be determined even in the many-body regime [25], [26].", "For the experimentally more relevant regime of weak coupling, this approach can also be applied in the subspace with few excitations, giving good agreement with measurements for small input powers [24], [13], [14].", "However, the same method cannot be applied for stronger driving fields approaching saturation, where states with larger numbers of excitations contribute significantly.", "Instead of propagating the wave function of photons by means of an expansion on scattering eigenstates, it is also possible to infer the properties of transmitted light from the dynamics of atoms 
using input-output relations, and the quantum-regression theorem [27], [28].", "In principle, this requires the solution of an open many-particle spin model [29], which in turn is only possible exactly in the subspace involving few excitations.", "An approximate treatment of the many-body dynamics of atoms for strong driving may exploit the fact that the coupling to the waveguide is weak.", "Since the dominant scattering of photons from the waveguide acts as a local decoherence channel for each atom and correlations between atoms are induced only via weak collective scattering in guided field modes, one can expect many-body correlations to be limited.", "Approximate descriptions for finitely correlated systems in terms of matrix-product-states (MPS) [30] have been applied to waveguide QED systems with good success [22], [31].", "However, because the one-dimensional geometry supports infinite-range interactions, the finite correlation length imposed by MPS may fail to capture important features, which becomes especially problematic for larger systems.", "Indeed, the long-range nature of interactions [12] and correlations [15] has been explored in recent studies.", "Here, we employ an improved mean-field theory based on a higher order cumulant expansion [32] to solve for the dynamics of a strongly-driven, weakly-coupled, chiral chain of atoms and thus determine the photon statistics of the transmitted light.", "In lowest order, this expansion reduces to ordinary mean-field theory, and essentially assumes a tensor product state of atoms.", "While this reproduces the equations of classical electrodynamics, it obviously fails to account for the collective effects due to quantum correlations [33].", "$n$ -th order mean-field expansions, accounting systematically for genuine $n$ -particle correlations, have received growing interest lately in the context of collective interactions of light with atomic ensembles [34], [35], [36], [37].", "In general, such an expansion reduces the effective dimensionality of the problem at the cost of introducing a nonlinearity in the equations of motion whose complexity grows with the order of expansion.", "Remarkably, as we show here, when applied to a chirally coupled system, the problem stays effectively linear.", "This avoids numerical issues usually arising at larger orders of truncation, and allows us to compare results for different expansions even for systems of considerable size.", "Using this method, we determine squeezing spectra and the degree of second-order coherence in parameter regimes not accessible with other methods.", "In the low power regime, we find results consistent with those found from the expansion in the two-photon subspace  [24].", "For large driving, we find that higher order correlations of atoms play a significant role for describing the correlations in transmitted light.", "We also study the spatial characteristics of two-particle correlations, and show that the system develops intriguing patterns of long-range correlations in steady state.", "Theoretical predictions for squeezing spectra are compared with measurements and show good agreement, extending the discussion in  [14] to the regime of large driving powers.", "The predictions for the antibunching achievable in a chiral waveguide system allow a discussion of the quality, in terms of the Mandel $Q$ parameter, of a stationary single photon source envisioned in [13], [38].", "The paper is structured as follows: In section we introduce the cascaded systems master equation 
governing the dynamics of the chiral system.", "We also introduce the correlation functions of light, which we aim to determine from the dynamics of atoms.", "Section deals with the lowest order approximation to the system, i.e.", "mean-field theory.", "In section , we introduce the higher order cumulant expansion, provide a systematic comparison of expansions at various orders, and discuss the role and characteristics of atomic correlations." ], [ "Cascaded system master equation", "We consider the system shown in Fig.", "REF .", "An arrangement of $N$ two-level atoms is chirally coupled to a waveguide, i.e.", "the atoms effectively couple only to photons propagating in a single direction.", "In addition, each atom couples independently to environmental modes.", "The total rate of spontaneous emission $\Gamma $ is accordingly composed of an emission rate $\beta \,\Gamma $ into the waveguide and a rate $(1-\beta )\Gamma $ for decay into the environment, where $0\le \beta \le 1$ .", "A continuous-wave coherent field is coupled into the waveguide and drives the atomic transition at frequency $\omega _0$ .", "In the following, we will focus on the case of a resonant driving field investigated also in recent experiments [13], [14].", "The strength of the coherent drive is characterized by its mean photon flux $P_\mathrm {in}$ corresponding to an input power $\hbar \omega _0P_\mathrm {in}$ .", "For the sake of simplicity we will not distinguish these quantities and refer also to $P_\mathrm {in}$ and related quantities as powers.", "In the following, we are interested in the regime of strong drive, by which we understand here an input power $P_\mathrm {in}$ approaching the saturation power $P_\mathrm {sat}=\Gamma /\beta $ .", "In this arrangement, the reduced state $\rho _N$ of the $N$ atoms (after elimination of the photon field in standard Born-Markov approximation) evolves according to a cascaded system master equation [27], [28], [39], [21] $\frac{1}{\Gamma }\frac{d\rho _N}{dt}=-i\sum _{j=1}^N\sqrt{\frac{P_\mathrm {in}}{P_\mathrm {sat}}}\Big [\sigma _j^- + \sigma _j^+,\rho _N\Big ]+ (1-\beta )\sum _{j=1}^N D\Big [\sigma _j^-\Big ]\rho _N\\+\frac{\beta }{2}\sum _{{j,l=1\\ j>l}}^N\Big [\sigma _l^+\sigma _j^- - \sigma _j^+\sigma _l^-,\rho _N\Big ]+ \beta \, D\Big [\sum _{j=1}^N\sigma _j^-\Big ]\rho _N\equiv \mathcal {L}_N\rho _N$ The master equation is written in a frame rotating at $\omega _0$ .", "The first two terms on the right hand side describe, respectively, the coherent resonant drive and decay of atoms to ambient modes.", "These two processes affect each of the atoms independently.", "$D[x]\rho =x\rho x^\dagger -\frac{1}{2}(x^\dagger x\rho +\rho x^\dagger x)$ denotes a Lindblad term with jump operator $x$ .", "The cascaded (chiral) coupling of atoms to the waveguide is accounted for by the two terms in the last line.", "These describe collective dynamics and induce correlations among atoms.", "The field coupled out of the waveguide is described by the annihilation operator $a_{\rm out}(t)$ which fulfills the input-output relation $a_{\rm out}(t) = a_{\rm in}(t)+\sqrt{P_\mathrm {in}} - i\sqrt{\beta \Gamma }\sum _{i=1}^N \sigma _i^-(t)$ .", "The first two terms on the right hand side are, respectively, vacuum noise and coherent amplitude (assumed to be real valued) of the input field.", "The last term represents the field radiated by the atomic dipoles, and accounts for all effects arising from scattering of photons on atoms (including damping of the coherent amplitude).", "The atomic operators appear as
a collective lowering operator because a photon exiting the waveguide can have been emitted by any of the emitters.", "The vacuum noise on the right hand side of Eq.", "() will be suppressed in the following since we will be exclusively interested in normally ordered quantities to which vacuum fluctuations do not contribute.", "Specifically, we want to characterise the properties of the transmitted light while varying the input power $P_\mathrm {in}/P_\mathrm {sat}$ , the atomic waveguide coupling $\beta $ , and the number of emitters $N$ .", "A crucial parameter in this system is its optical depth $OD=4\beta N$ .", "We are interested in the transmitted power $P_\mathrm {out}=\langle a_{\rm out}^\dagger a_{\rm out}\rangle =P_\mathrm {el}+P_\mathrm {ie}$ , its components due to elastically and inelastically scattered photons $P_\mathrm {el}$ and $P_\mathrm {ie}$ , the squeezing properties of the quadratures of the transmitted field, and its second-order coherence, i.e.", "the (anti)bunching of the transmitted photons.", "Via the input-output relation (), these quantities ultimately relate to correlation functions of atomic observables.", "In particular, we have $P_\mathrm {el}=|\langle a_{\rm out}\rangle |^2 =\Big |\sqrt{P_\mathrm {in}} - i\sqrt{\beta \Gamma }\sum _{i=1}^N\langle \sigma _i^-(t)\rangle \Big |^2$ and $P_\mathrm {ie}=\langle a_{\rm out}^\dagger a_{\rm out}\rangle -|\langle a_{\rm out}\rangle |^2=\beta \Gamma \sum _{i,j=1}^N \langle \langle \sigma _i^+(t)\sigma _j^-(t)\rangle \rangle $ .", "$P_\mathrm {el}$ corresponds to the power due to the interference of the coherent input and the field radiated by the average atomic dipoles.", "$P_\mathrm {ie}$ is the power radiated from dipole fluctuations as described by their covariance or (second-order) cumulant $\langle \langle \sigma _i^+\sigma _j^-\rangle \rangle $ , which is generally defined by $\langle \langle AB\rangle \rangle =\langle AB\rangle -\langle A\rangle \langle B\rangle $ .", "While Eqs.", "() and () apply for arbitrary time, we focus in the following exclusively on the stationary dynamics of the system, implicitly taking a long-time limit.", "Furthermore, the squeezing spectrum of light quadratures $X_\theta (t)=\frac{1}{2}\left(a_\mathrm {out}(t)e^{i\theta } + \mathrm {h.c.}\right)$ is quantified by the spectral density $:S_\theta (\omega ):\,=\int _0^\infty d\tau \left(e^{i\omega \tau }+e^{-i\omega \tau }\right)\langle :\delta X_\theta (\tau )\,\delta X_\theta (0):\rangle $ of quadrature fluctuations $\delta X_\theta (t)=X_\theta (t)- \langle X_\theta (t)\rangle $ .", "The colons in $:A:$ denote normal and time ordering of $A$ .", "The angle $\theta $ is the local oscillator phase with respect to the coherent drive.", "With the input-output relation () the two-time correlations of quadrature fluctuations can be related to those of atomic operators, $\langle :\delta X_\theta (\tau )\,\delta X_\theta (0):\rangle =\frac{\beta \Gamma }{4}\sum _{i,j=1}^N\Big [\langle \langle \sigma ^+_j(0)\sigma ^-_i(\tau )\rangle \rangle -e^{2i\theta }\langle \langle \sigma ^-_i(\tau )\sigma ^-_j(0)\rangle \rangle +\mathrm {h.c.}\Big ]$ .", "The squeezing spectra in Eq.", "() with regard to two conjugate quadratures, say $\theta =0,\,\pi /2$ , can be combined to determine the spectrum of inelastically scattered photons, $S_\mathrm {ie}(\omega )=\int _{-\infty }^{\infty }d\tau \,e^{-i\omega \tau }\langle \langle a_{\rm out}^\dagger (\tau )a_{\rm out}(0)\rangle \rangle =\,:S_0(\omega ):+:S_{\pi /2}(\omega ):$ .", "The frequency integral of this spectrum in turn corresponds to the power in the inelastically scattered field in Eq.", "(), $P_\mathrm {ie}=\frac{1}{2\pi }\int _{-\infty }^{\infty }d\omega \,S_\mathrm {ie}(\omega )=\langle :\delta X_0(0)\,\delta X_0(0):\rangle +\langle :\delta X_{\pi /2}(0)\,\delta X_{\pi /2}(0):\rangle $ , where we introduced the integrated quadrature fluctuations, $\langle :\delta X_\theta (0)\,\delta X_\theta (0):\rangle =\frac{1}{2\pi }\int _{-\infty }^{\infty }d\omega \,:S_\theta (\omega ):$ .", "Finally, we will also explore
the normalized second order correlation function of the output field at equal times, $g^{(2)}(0)=\frac{\langle a_{\rm out}^\dagger (0)a_{\rm out}^\dagger (0)a_{\rm out}(0)a_{\rm out}(0)\rangle }{P_\mathrm {out}^2}$ .", "Its expression in terms of atomic operators follows from the direct application of the input-output relation in Eq.", "(), but turns out to be rather lengthy and is therefore moved to Eq.", "() in Appendix .", "Evidently, $g^{(2)}(0)$ involves atomic moments among up to four atoms.", "In summary, all quantities of interest ultimately depend on mean values and two-body, as well as higher order, correlations of atomic observables.", "These can, in principle, be calculated from the master equation (REF ), in combination with the quantum regression theorem as necessary.", "However, owing to the exponential scaling of the dimension of $\rho _N$ with the number of atoms, and due to the correlations that arise between them during collective decay through the waveguide, an exact solution is unfeasible.", "This is the case even in the region of weak coupling to the waveguide, $\beta \ll 1$ , when the optical depth is large, $OD=4\beta N>1$ .", "However, in this regime the atoms mostly decay non-collectively, and it is therefore to be expected that correlations between atoms remain weak, at least in the sense that many-particle correlations are less pronounced than correlations between few particles.", "On this basis, we construct in the following approximate solutions of the steady state of the master equation in a mean-field approach and, in systematic extensions of this, in higher-order cumulant expansions." ], [ "Transmitted power", "The equations of motion for the expectation values of the $x$ , $y$ , and $z$ Pauli operators are $\frac{1}{\Gamma }\frac{d\langle \sigma _j^x\rangle }{dt}=-\frac{1}{2}\langle \sigma _j^x\rangle $ , $\frac{1}{\Gamma }\frac{d\langle \sigma _j^y\rangle }{dt}=-\frac{1}{2}\langle \sigma _j^y\rangle - 2\alpha _j\langle \sigma _j^z\rangle + \beta \sum _{l=1}^{j-1}\langle \langle \sigma _j^z\sigma _l^y\rangle \rangle $ , and $\frac{1}{\Gamma }\frac{d\langle \sigma _j^z\rangle }{dt}=2\alpha _j \langle \sigma _j^y\rangle -\langle \sigma _j^z\rangle -1 - \beta \sum _{l=1}^{j-1}\left(\langle \langle \sigma _j^x\sigma _l^x\rangle \rangle +\langle \langle \sigma _j^y\sigma _l^y\rangle \rangle \right)$ , as follows from Eq.", "(REF ) without approximation.", "For simplicity, we have already omitted quantities of the form $\langle \langle \sigma _i^x\sigma _j^{y,z}\rangle \rangle $ , anticipating that these vanish for resonant drive.", "In these equations we already expressed two-body correlations through second order cumulants using Eq.
().", "In this way, an effective field driving the $j$ –th atom naturally emerges, j=1-il=1j-1*l-, 1=PinPsat, where $\\alpha _1$ is the amplitude experienced by the first atom.", "The sum in the expression for $\\alpha _j$ accounts for the field radiated coherently by all atoms to the left of the $j$ –th one.", "For resonant driving, the expectation values of the out-of-phase atomic dipoles vanish in steady state, $*{\\sigma _j^x}=0$ , and thus the effective driving field $\\alpha _j=\\alpha _1-\\frac{\\beta }{2}\\sum _{l=1}^{j-1}*{\\sigma _l^y}$ is real .", "It is also useful to note that the output power corresponding to the elastically scattered photons in Eq.", "() can be considered as the effective driving field that would be seen by a hypothetical $(N+1)$th atom, $P_\\mathrm {el}/P_\\mathrm {sat}=\\alpha _{N+1}^2$ .", "Figure: Normalized power of the elastically scattered field P el /P in P_\\mathrm {el}/P_\\mathrm {in} versus number of atoms (lower xx-axes) and optical depth (upper xx-axes).", "Top row corresponds to β=0.01\\beta =0.01, bottom row to β=0.1\\beta =0.1.", "Columns refer to input powers P in /P sat =0.01,0.1,0.3P_\\mathrm {in}/P_\\mathrm {sat}=0.01,\\,0.1,\\,0.3 from left to right, respectively.", "Lines show results from mean-field (MF) theory Eq.", "() (dotted red line), Lambert law Eq.", "() (solid green line) and Beer-Lambert law Eq.", "() (dashed blue line).", "Results from higher order cumulant expansions at truncation order (TO) 2 and 3 are shown as dashed black and dash-dotted black lines, respectively.", "Mean-field theory gives good results for low β\\beta and low input power.", "For P in /P sat ≳1/8P_\\mathrm {in}/P_\\mathrm {sat}\\gtrsim 1/8 power decay is initially non-exponential, as expressed in Eq.", "().To close the system of Eqs.", "(REF ), it would have to be supplemented by corresponding equations for all correlations up to $N$ particles, which would amount to the exact solution of the master equation.", "The lowest-order approximation that yields a closed system of equations corresponds to neglecting all second-order cumulants $\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{\\sigma _j^\\mu \\sigma _l^\\nu }\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}$ in Eqs.", "(REF ) where $\\mu ,\\nu =x,y,z$ .", "In view of Eq.", "(), this is tantamount to approximate ABAB where $A$ and $B$ are operators referring to different atoms.", "This approach corresponds to mean-field theory, where a product state ansatz is made for the density matrix $\\rho _N=\\bigotimes _i\\rho ^{(i)}$ with single particle states $\\rho ^{(i)}$ .", "Since the system considered here is not translationally invariant, the single particle states will not be identical.", "In this approximation and in stationary state, Eqs.", "(REF ) are solved by *zj=-11+8j2, *jy=4j1+8j2.", "Substituting in Eq.", "(REF ) yields a recurrence relation for the effective driving field in mean-field theory $\\alpha _j=\\alpha _1-2\\beta \\sum _{l=1}^{j-1}\\alpha _l/(1+8\\alpha _l^2)$ .", "An approximate solution can be constructed by considering the difference equation $\\Delta \\alpha _j=\\alpha _{j+1}-\\alpha _{j}=-2\\beta \\alpha _{j}/(1+8\\alpha _{j})$ .", "In the continuous limit (replacing the index $j$ by a continuous variable $z\\in [0,N]$ ), the solution of the corresponding differential equation for $\\alpha (z)$ yields the Lambert law for the elastically scattered power, Pel(z)=Psat (z)2=Pinw(812e812-4z)812. 
where $w(\\cdot )$ is the Lambert function Here, $w(x)$ denotes the solution of $w\\exp (w)=x$ for $x\\ge 0$ , and fulfills the identities $w(x\\exp (x))=x$ and $\\ln w(x)=x-w(x)$ ..", "It is instructive to express this as 4z=8PinPsat(1-Pel(z)Pin)-(Pel(z)Pin), which reveals a scaling behaviour of the particle number (here, $z$ ) with $\\beta $ .", "For low input power $(8P_\\mathrm {in}\\ll P_\\mathrm {sat})$ , the first term can be neglected with respect to the second one, and one recovers the Beer-Lambert law Pel(z)Pin(-4z).", "For large input powers $(8P_\\mathrm {in}\\gg P_\\mathrm {sat})$ one finds instead an (initial) nonexponential decay Pel(z)Pin-z2.", "In Fig.", "REF we illustrate the normalized power of the elastically scattered field $P_\\mathrm {el}/P_\\mathrm {in}=\\alpha ^2_{N+1}/\\alpha ^2_1$ versus number of atoms (optical depth).", "We compare results from mean-field theory where $\\alpha _{N+1}$ is determined from Eq.", "(REF ), with predictions according to the Lambert law (REF ) and the Beer-Lambert law (REF ).", "In the regime of weak coupling and small input power (up to $\\beta \\lesssim 0.1$ and $P_\\mathrm {in}\\lesssim P_\\mathrm {sat}$ ) the Lambert law provides a good approximation for the decay of $P_\\mathrm {el}$ , while the Beer-Lambert law is clearly violated.", "In Fig.", "REF , the results of the basic mean-field theory relying on the product state ansatz are compared to and confirmed by those of improved mean-field theories, which will be introduced in Section .", "Figure: Spectra of inelastically scattered field S ie (ω)S_\\mathrm {ie}(\\omega ) and of quadrature fluctuations :S θ (ω):\\mathrel {\\mathop {:}}\\mathrel {S_{\\theta }(\\omega )}\\mathrel {\\mathop {:}} for β=0.01\\beta =0.01 and P in P sat =0.1\\frac{P_\\mathrm {in}}{P_\\mathrm {sat}}=0.1 calculated in mean-field theory, that is TO 1 (top row), and in cumulant expansion at TO 2 (bottom row).", "Normally ordered spectra :S θ (ω):\\mathrel {\\mathop {:}}\\mathrel {S_{\\theta }(\\omega )}\\mathrel {\\mathop {:}} are bounded from below by -1/4-1/4.", "Blue regions indicate squeezing.", "Mean-field theory predicts an unphysical saturation at large optical depth OD =4βN\\mathrm {OD}=4\\beta N (upper xx-axes) which is due to the fact that correlations among atoms and re-scattering of photons are not reflected in mean-field approximation.", "Dashed lines correspond to a point of maximal atomic correlations discussed in Sec.", "and defined there in Eq.", "()." 
], [ "Squeezing spectra", "We next consider the spectra of quadrature fluctuations which is an important quantity that directly reveal quantum features of light.", "It can be experimentally readily accessed using a balanced homodyne detection scheme, as in [14].", "In the following we will first discuss the squeezing spectrum in different truncation order and compare it to experimental data later in Sec REF .", "In the mean-field approach the product state ansatz implies that in Eq.", "() only the one-particle moments contribute, $\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{\\sigma ^\\mu _i(\\tau )\\sigma ^\\nu _j(0)}\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}=\\delta _{ij}\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{\\sigma ^\\mu _i(\\tau )\\sigma ^\\nu _i(0)}\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}$ with $\\mu ,\\nu $ being any $x,y,z$ .", "Therefore, the squeezing spectrum of light after the $j$ -th atom is given by the sum of the individual spectra of all $i\\le j$ atoms (cf.", "()), where the respective effective driving power is given by (REF ).", "That is, in mean-field treatment the problem of computing the squeezing spectrum after $j$ atoms reduces to the problem of resonance fluorescence of $j$ independent atoms, each driven with different power.", "Squeezing in the resonance fluorescence of single two level atoms has been covered in classic papers by Collet, Walls and Zoller [40], [41].", "It is shown there that with resonant drive, squeezing only occurs at moderate drive strength $8P_\\mathrm {in}/P_\\mathrm {sat} < 1$ , i.e.", "well below the threshold of $P_\\mathrm {in}\\simeq P_\\mathrm {sat}$ at which the Mollow triplet occurs.", "The spectra of quadrature fluctuations $\\mathrel {\\mathop {:}}\\mathrel {S_\\theta (\\omega )}\\mathrel {\\mathop {:}}$ predicted by mean-field theory are shown in the top row of Fig.", "REF for $\\beta =0.01$ and $P_\\mathrm {in}/P_\\mathrm {sat}=0.1$ .", "Beyond an optical depth of about 4, mean-field theory implies saturation in the spectra of the in-phase (in-quadrature) components ($\\theta =0,\\pi /2$ ), as well as in the spectrum of the total inelastically scattered field ().", "This is clearly unphysical, since due to the dominant scattering of photons out of the waveguide, the transmitted power must eventually decrease to zero.", "This artifact arises from the assumption implicit to the mean-field ansatz that each photon can scatter only once at one of the atoms.", "The repeated scattering of photons would cause correlations between the atoms that cannot be represented in a mean-field approach.", "We conclude that mean-field theory, while providing satisfactory results for describing the mean-field amplitude and hence the elastically scattered power, is clearly inadequate for determining squeezing spectra and the inelastically scattered power." 
], [ "Higher order cumulant expansions", "To incorporate correlations into the description of the system, we use improved mean-field approximations based on a systematic extension of cumulant expansions [32], [37], [35], [34], [42].", "The basic mean-field theory described earlier, which neglects all second and higher-order cumulants, corresponds in this framework to a cumulant expansion with truncation order 1 (TO 1).", "In the following, we will use the cumulant expansions at TO 2, 3, and 4, which account for correlations involving up to two, three and four particles respectively.", "Figure: Power of inelastically scattered photons P ie P_\\mathrm {ie} (left column) and integrated fluctuations 〈:δX θ (0)δX θ (0):〉/Γ\\langle \\mathrel {\\mathop {:}}\\mathrel {\\delta X_{\\theta }(0) \\delta X_{\\theta }(0)}\\mathrel {\\mathop {:}} \\rangle /\\Gamma for amplitude and phase quadratures (middle and right column), scaled to the atomic line width Γ\\Gamma , for β=0.01\\beta =0.01, P in /P sat =0.1P_{\\mathrm {in}}/P_{\\mathrm {sat}}=0.1 (top) and β=0.1\\beta =0.1, P in /P sat =0.3P_{\\mathrm {in}}/P_{\\mathrm {sat}}=0.3 (bottom).", "The top row corresponds to the frequency integration of the data shown in Fig. .", "Mean-field theory (TO 1, red dotted line) deviates from expansions at TO 2,3,42,3,4 (black lines) for both sets of parameters.", "For lower β\\beta and input power, results of truncation order 2 and higher agree.", "For higher β\\beta and power even TO 2 and TO 3,4 deviate, indicating that higher order correlations start to play a non-negligible role.", "Including them raises the computational complexity, which is the reason why the lines for different TO's do not extend equally far.In a cumulant expansion at TO 2, all three-body cumulants $\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{ ABC}\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}={ABC}-{AB}{C}-{AC}{B} -{BC}{A}\\nonumber +2{A}{B}{C}$ are discarded, which effectively expresses three-body moments by those of lower order, ABCABC+ACB+BCA-2ABC.", "Here, $A$ , $B$ , and $C$ refer to different atoms.", "This generalizes the approximation in Eq.", "(REF ) from TO 1 to TO 2.", "By means of Eq.", "(), the master equation (REF ) is approximated by a closed set of differential equations comprised of Eqs.", "(REF ) and corresponding equations for two-body-cumulants which we defer to the Appendix  in Eqs. ().", "This procedure has a systematic extension to higher TO which is discussed in the Appendix.", "In particular, the generalization of Eqs.", "(REF ) and () to arbitrary higher order is given in Eq.", "() of the Appendix.", "On this basis it is in principle straightforward to derive the corresponding equations for TO 3 and 4 from the master equation (REF ), but the results are too unwieldy to state explicitly here.", "The effective dimensionality of the resulting system of equations grows rapidly with increasing TO, which limits the treatment to progressively smaller numbers of particles $N$ .", "In Appendix  we also give a proof for the effective linearity of the system of equations, which is a special feature of cascaded systems at arbitrary order of trunction, and provide further comments and caveats on the method of cumulant expansions." 
], [ "Transmitted power and squeezing spectra", "The results of TO 2 and TO 3 for the power in the elastically scattered field confirm the predictions of mean-field theory and the Lambert law, discussed earlier, for the parameter regime in Fig.", "REF .", "However, Fig.", "REF shows that the predictions for the squeezing spectra and the power spectrum of the inelastically scattered field differ significantly.", "In contrast to mean-field theory, a treatment in TO 2 predicts a – physically expected – decay of the spectra, which occurs in particular first at resonant frequencies.", "The same behaviour is observed in the third truncation order, the results of which we show in Fig.", "REF in Appendix .", "There, a larger range of powers is considered, covering the transition from squeezing to the Mollow triplet.", "In the following Section REF we compare the results for squeezing spectra in TO2 to experimental data and find good agreement.", "The discrepancy between mean-field theory and higher-order cumulant expansions is further illustrated in Fig.", "REF , which shows the power in the inelastically transmitted field and its components, the integrated spectra of the quadrature fluctuations defined in Eq. ().", "For low $\\beta $ and input powers, the higher order truncation results appear to be converged already at TO 2, indicating that it is sufficient to account for two-body correlations.", "For higher $\\beta $ and input powers, it can be seen that the results for TO 2 and TO 3 or TO 4 are in qualitative agreement, but convergence is not yet achieved for TO 2.", "This testifies the role of atomic correlations, also beyond pair correlations, and the collective nature of the light-atom interaction in the waveguide even at the low coupling strengths considered here.", "Fig.", "REF clearly demonstrates that atomic correlations are essential to obtain a physically meaningful behaviour for increasing optical depths and to evade the artificial saturation that occurs in the mean-field approach.", "Atomic correlations will be explored in more detail in Section REF .", "Figure: Total output light field P out =P el +P ie P_{\\mathrm {out}}=P_{\\mathrm {el}}+P_{\\mathrm {ie}} for β=0.01\\beta =0.01, P in P sat =0.1\\frac{P_{\\mathrm {in}}}{P_{\\mathrm {sat}}}=0.1 and β=0.1\\beta =0.1, P in P sat =0.3\\frac{P_{\\mathrm {in}}}{P_{\\mathrm {sat}}}=0.3 .", "For the first set of parameters the elastically scattered part dominates as the fraction of inelastically scattered photons is relatively small.", "Expressed in other words this means that the amount of higher order correlations is small and can be truncated without any loss of accuracy.", "For the second set of parameters the elastically scattered part dominates in the beginning of the chain as well.", "But with higher β\\beta there are more correlations among atoms, i.e.", "more inelastically scattered light.", "Here, higher orders of truncation are necessary for higher accuracy.", "The blue line denotes the high-particle limit of the 2-photon theory of .It is instructive to combine the results from Figs.", "REF and REF and examine the total output power $P_\\mathrm {out}=P_\\mathrm {el}+P_\\mathrm {ie}$ , cf.", "Fig.", "REF .", "For low optical depth the total transmitted power is dominated by its elastically scattered component.", "For larger $\\beta $ and input powers, a crossover becomes visible at higher optical depths, where the inelastically scattered field becomes dominant.", "In Fig.", "REF we include also the results from the theory 
developed in [24] where the photon wave function is expanded in the subspace including up to two excitations.", "This approach is limited to a regime of low (effective) driving power, but appears to be consistent at large optical depths with the asymptotic result of cumulant expansions beyond mean-field theory." ], [ "Comparison with experimental data", "In the following, we compare the previously obtained results for the squeezing spectrum with experimental data.", "The waveguide QED platform consists of laser cooled Cesium atoms coupled to a single mode optical nanofiber [14].", "The atoms couple weakly to the evanescent field part of the waveguided mode with $\\beta = 0.0070(5)$ and yield a total optical depth of $OD\\approx 5$ .", "The atoms are probed with a resonant field that is launched through the fiber with different input powers $P_\\mathrm {in}= [25 - 300]{pW}$ .", "For comparison to experimental data, it is useful to quantify the input power in terms of the saturation paramater $s = \\frac{8 P_\\mathrm {in}}{P_\\mathrm {sat}}$ , which is also consistent with the nomenclature of [14].", "The output light is analyzed with a balanced homodyne detection scheme from which we deduce the normally ordered squeezing spectrum $\\mathrel {\\mathop {:}}\\mathrel {S_\\theta (\\omega )}\\mathrel {\\mathop {:}}$ .", "A more detailed description of the setup and the measurement method can be found in [14].", "While the study in [14] was limited to weak excitation regime ($s\\ll 1$ ), the datasets presented here include higher input power but have elsewise been taken under the same conditions.", "Figure: Comparison between experiment and theory for different input saturation parameters ss for the amplitude (θ=0,π\\theta = 0,\\pi ) and phase quadrature (θ=π/2,3π/2\\theta = \\pi /2,3\\pi /2) at OD≈5OD\\approx 5.", "The theoretical prediction based on TO2 are shown by the solid lines.At low saturation, as shown in a) and b), the squeezing spectra of the two quadratures are symmetric around S θ (ω)=0S_\\theta (\\omega )=0.", "In this regime the theory predicts a crossing of both quadratures at ω≈0\\omega \\approx 0, which we attribute to an approximation error due to the high OD \\mathrm {OD}.", "For larger ss, as shown shown in c) - f), additional noise appears close to resonance, which breaks the symmetry, and eventually also leads to anti-squeezing for θ=0,π\\theta =0,\\pi .For comparison we show the prediction in the weak saturation regime s≪1s\\ll 1 , in dotted lines.Fig.", "REF shows $\\mathrel {\\mathop {:}}\\mathrel {S_\\theta (\\omega )}\\mathrel {\\mathop {:}}$ for different values of $s$ together with the corresponding theoretical prediction for TO 2.", "The amplitude ($\\theta =0,\\pi $ ) and the phase quadrature ($\\theta = \\pi /2, 3\\pi /2$ ) are displayed in blue and orange respectively.", "For low saturation $s\\ll 1$ , the amplitude of the squeezing spectrum scales with the input power and is mirror-symmetric with respect to the horizontal axis $\\mathrel {\\mathop {:}}\\mathrel {S_\\theta (\\omega )}\\mathrel {\\mathop {:}}= 0$ , i.e.", "the noise reduction in one quadrature leads to an increased noise in the conjugate quadrature, c.f.", "Fig.", "REF a) and b).", "Each spectrum consists of two sidebands which results from the interplay of coherent build-up and absorption.", "For $s \\gtrsim 1$ , the atomic transition starts to saturate, which adds additional noise to each quadrature.", "This behaviour is similar to a single emitter [43], [40] and breaks the symmetry between the 
two quadratures, as is apparent from Fig.", "REF c) onward.", "For larger input powers, the experimental conditions are less controlled, resulting in some deviation between theory and experiment.", "In particular, at higher input powers, photon scattering leads to recoil heating of the atoms, which modifies both $N$ and $\\beta $ , see Appendix  for details.", "Still, the experimental data exhibits the characteristics predicted by theory: As $s$ increases, additional noise first appears around resonance, breaks the symmetry between the quadratures and eventually, anti-squeezing appears in the amplitude quadrature.", "We note that in the relevant range, $\\beta \\ll 1$ , the squeezing spectra $S_\\theta (\\omega )$ do not directly depend on $\\beta $ , therefore we calculated the spectra at a slightly higher $\\beta $ ($=0.01$ ), for the same $\\mathrm {OD}$ , gaining an numerical advantage in terms of a lowered number of atoms.", "Figure: Cumulants 〈σ i μ σ j μ 〉\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{ \\sigma _i^{\\mu }\\sigma _j^{\\mu } }\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}} with μ=x,y,z\\mu = x,y,z for β=0.01\\beta = 0.01 and P in P sat =0.01,0.1,0.6,1\\frac{P_{\\mathrm {in}}}{P_{\\mathrm {sat}}}=0.01, 0.1, 0.6, 1 determined in TO 2.", "The top and right axis show the optical depth OD =4βj\\mathrm {OD}=4\\beta j.", "In stationary state, a complex correlation pattern emerges, which also exhibits long range correlations among atoms.", "For the top row, the maximal correlation among the first and the jj–th atom at j * j_* (OD * OD_*) given in Eq.", "() is marked by a yellow circle.", "As a guide for the eye, the black contour line indicates 70%70\\% of the maximal correlation." 
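For small atom numbers, pair correlations of the kind shown in the figure above can be cross-checked against the exact steady state of the cascaded master equation. The sketch below uses QuTiP (assumed available); the split of the cascaded Liouvillian into a Hermitian part and jump operators is our reading of the master equation given earlier, and the parameters $N=6$, $\beta=0.1$, $P_\mathrm{in}/P_\mathrm{sat}=0.3$ are chosen only to keep the Hilbert space small.

```python
import numpy as np
import qutip as qt

N, beta, Gamma = 6, 0.1, 1.0
alpha1 = np.sqrt(0.3)                      # sqrt(P_in / P_sat), resonant drive

def site(op_single, j):                    # embed a single-qubit operator at site j
    ops = [qt.qeye(2)] * N
    ops[j] = op_single
    return qt.tensor(ops)

sm = [site(qt.sigmam(), j) for j in range(N)]
sy = [site(qt.sigmay(), j) for j in range(N)]

# Hermitian part: resonant drive plus the coherent (unidirectional) exchange term
H = Gamma * alpha1 * sum(s + s.dag() for s in sm)
H += 0.5j * beta * Gamma * sum(sm[l].dag() * sm[j] - sm[j].dag() * sm[l]
                               for j in range(N) for l in range(j))
# jump operators: independent loss into free space and collective guided emission
c_ops = [np.sqrt((1.0 - beta) * Gamma) * s for s in sm]
c_ops.append(np.sqrt(beta * Gamma) * sum(sm))

rho = qt.steadystate(H, c_ops)

# pair cumulants <<sigma_i^y sigma_j^y>> for i != j (diagonal set to zero for display)
y = np.array([qt.expect(s, rho) for s in sy])
cyy = np.array([[qt.expect(sy[i] * sy[j], rho) - y[i] * y[j] if i != j else 0.0
                 for j in range(N)] for i in range(N)])
print(np.round(np.real(cyy), 4))
```

For such small $N$ the long-range features of the correlation pattern cannot fully develop, but the exact result provides a useful benchmark for the truncated expansions.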
], [ "Atomic correlations", "Since correlations among atoms play a crucial role, it is is worth studying them more closely.", "Fig.", "REF shows the pair correlations $\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{\\sigma ^\\mu _i\\sigma ^\\mu _j}\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}$ in steady state determined in TO 2 for various levels of input power in the regime of weak coupling $\\beta =0.01$ .", "For power levels approaching saturation, a rather complex spin correlation develop along the atoms.", "Remarkably, even long-range correlations occur, where the pair correlations feature an extremum for a certain distance ${i-j}$ .", "For the case of a resonant input field considered here, the equation of motion for $\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{\\sigma ^x_i\\sigma ^x_j}\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}$ correlations, cf.", "Eq.", "() in Appendix , is simple enough to gain some analytical insight regarding this characteristic distance of maximum correlation.", "In particular, for $i=1$ one finds the stationary correlation between the first and the $j$ –th atom $\\m@th {\\langle }$$\\m@th {\\langle }$x1xj$\\m@th {\\rangle }$$\\m@th {\\rangle }$ =(1+z1)zjl=2j-1(1+zl ).", "Substituting the mean-field solution in (REF ) for $\\langle \\sigma _l^z\\rangle $ , it is possible to approximately determine the index $j_*$ where these correlations become extremal.", "One finds that this is the case at an optical depth OD*=4j*(24 PinPsat) +8 PinPsat+2-13.", "We note that this formula holds in the limit of small $\\beta $ and does not cover the limit $P_\\mathrm {in}\\rightarrow 0$ where correlations decay monotonically.", "A comparison of this formula to numerical results is given in Fig.", "REF and shows good agreement.", "We also mark the optical depth $OD_*$ in the squeezing spectra shown in Figs.", "REF and REF .", "We observe that it correlates with the cross over of $\\mathrel {\\mathop {:}}\\mathrel {S_0(0)}\\mathrel {\\mathop {:}}$ from antisqueezing to squeezing.", "It would be highly interesting to have a similar characterisation of the maximal correlations of $\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{\\sigma ^y_i\\sigma ^y_j}\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}$ , since these determine – for resonant driving – at which optical depths maximum squeezing occurs in $\\mathrel {\\mathop {:}}\\mathrel {S_0(0)}\\mathrel {\\mathop {:}}$ .", "Unfortunately, the equations of motion for this case are more complex, cf.", "Eqs.", "(), and cannot be solved in the same way.", "Figure: g 2 (0)g^2(0) correlation function for β=0.05\\beta =0.05 and different input powers and truncation orders 1 (MF), 3 and 4." 
], [ "Second-order coherence", "Finally, we extend our treatment further to a cumulant expansion at TO 4.", "This enables us to discuss the second-order coherence function $g^2(0)$ and the antibunching of transmitted photons.", "This was recently investigated and experimentally demonstrated in [13].", "The experimental results were compared to the two-photon theory of [24] showing good agreement after taking into account an uncertainty in OD.", "The experiments were conducted for low coupling strength $\\beta =0.0083\\pm 0.0003$ and low input power $s=0.02$ ($P_\\mathrm {in}/P_\\mathrm {sat}=0.0025$ ).", "One can expect to see a rising discrepancy between the experimental results and 2-photon theory for higher input power.", "Our work is complementary in the sense that we can study the system at higher powers.", "However, the scaling of the effective dimensionality restricts our treatment to moderate particle numbers.", "This ensues that in the regime of low coupling strength ($\\beta \\le 0.01$ ) it becomes unfeasible to investigate the optical depth at which antibunching is maximal.", "Nevertheless, for slightly larger coupling strengths $\\beta \\ge 0.05$ we are able to treat optical depths of interest showing good agreement between our approach in low-power (black line at TO 4) and the 2-photon theory (red line), cf.", "Fig.", "REF .", "$g^2(0)$ is given in Eq.", "() and expressed in terms of atomic moments in Eq. ().", "Since $g^2(0)$ depends on correlations up to fourth order, low-order truncations can be expected not to yield reliable results.", "In Fig.", "REF we compare the results of mean-field theory and cumulant expansions at higher order for $\\beta =0.05$ and various levels of input power.", "Surprisingly, a mean-field approximation does give qualitatively similar results as higher order truncations.", "However, it is quantitatively wrong in the sense that it predicts too strong antibunching at too small optical depth.", "Results at TO 2 turned out to be nonphysical (predicting negative values for $g^2(0)$ ), and are therefore not shown in Fig REF .", "We attribute this unphysicality to the fact that one needs to apply a nested cumulant expansion in order to compute the 4-body moments in () at TO 2.", "Higher order expansions at TO 3 and 4 do not require nested expansions, and give physical results.", "They show good agreement among each other and, at low powers, with the predictions of [24].", "Thus, in order to get a quantitatively correct description, the inclusion of higher order correlations is essential.", "As we saw in Fig.", "REF and REF , correlations account for the initial collectively enhanced build up of the inelastically scattered part $P_\\mathrm {ie}$ and its subsequent loss.", "Antibunching can be understood as a delicate interference between the elastically and inelastically scattered components.", "Therefore, including correlations alters the prediction of occurrence and magnitude of antibunching strongly.", "Figure: The left panel shows the output power P out /ΓP_\\mathrm {out}/\\Gamma in dependence of the input power at maximal antibunching, i.e.", "N=19N=19 for β=0.05\\beta =0.05 and N=7N=7 for β=0.1\\beta =0.1, illustrating the trade-off between large anti-bunching and large output-power.", "The right panel compares the performance of single-photon sources based on single emitters (grey), with β=0.05\\beta =0.05 (dashed) ,1 (solid) and collective sources (blue) for β=0.05\\beta =0.05 calculated in TO 4\\mathrm {TO} 4 (solid) and using the 2-photon 
theory (dashed).Fig.", "REF shows the output power $P_\\mathrm {out}/\\Gamma $ (black line) in dependence of the input power at maximal antibunching, i.e.", "$N=19$ for $\\beta =0.05$ and $N=7$ for $\\beta =0.1$ .", "In the same plot, the green line shows $1-g^2(0)$ illustrating the amount of antibunching in the output field at different input powers.", "Evidently, there is a trade-off between the level of antibunching and output power, which is important to grasp if the system is considered as a single-photon source [38].", "Following up on this idea, it is worth comparing the performance of such a collective single photon source with that of a reference source based on a continuously driven single atom with a linewidth-limited transition whose emitted photons are collected into a given optical mode.", "In principle, one has to compare two quantities: indistinguishability of the photons and the achievable photon output rate, or brightness.", "Absent inhomogeneous spectral broadening, the photon current generated with the collective scheme has a similar spectral width and yields a similar temporal shape of $g^2(\\tau )$ as the fluorescence of a single atom.", "When transforming the photon current into a train of pulses containing at most one photon, both approaches thus yield the same performance in terms of photon indistinguishability.", "In order to quantify the brightness of both types of sources, we require a quantity that depends on the photon output rate $P_\\mathrm {out}$ and temporal width $\\tau $ of the anti-bunching dip.", "The latter defines the timescale over which one can be sure that, given a photon detection event, no second photon detection will occur.", "Therefore, we define the quantity Q =Pout(g(2)(0)-1 ), where $\\tau $ defines the full width of the anti-bunching dip where it reaches $85\\%$ of its maximum depth.", "In this region, we approximate $g^{(2)}(t)$ to be constant.", "For a single atom at low driving one can show that this definition of $\\tau $ yields $\\tau = 1/\\Gamma $ .", "The quantity $Q$ is equivalent to the continuous wave version of the Mandel Q parameter [44] which quantifies the deviation of the photon statistics of the light from a Poissonian distribution in the time interval $2\\tau $ .", "For a single-atom-based source with photon collection efficiency $\\beta $ , we obtain the analytical expression $Q=-\\beta s/2(1+s)^{3/2}$ with a minimum value of $Q_\\textrm {min}=-0.19\\beta $ .", "For the collective source, the formalism presented in this manuscript opens up the path to an investigation of the rate–quality trade-off also in the regime of strong driving.", "As the performance of the collective source is in first approximation independent of $\\beta $ , we calculate the expected Q-parameter as a function of the input power for the experimental parameters underlying Fig.", "REF , see section .", "The result is shown in the second panel of Fig.", "REF .", "The minimum value of $Q$ for this type of source is $Q_\\textrm {min}=-0.013$ .", "This is about $6.5\\%$ of that of a perfect single photon source, i.e.", "a single emitter-based source with unit collection efficiency.", "In other words, this means that at the optimal working point the performance of a collective single photon source is equivalent to that of a single emitter based source with $\\beta =6.5\\%$ .", "This shows that such a collective single photon source outperforms single quantum emitter-based photon sources in situations where $\\beta $ factors larger than $0.065$ cannot be 
realized." ], [ "Conclusions and Outlook", "We employed an improved mean-field theory based on a higher order cumulant expansion to determine the stationary state of a strongly-driven, weakly-coupled, chiral chain of atoms.", "We inferred the power of the transmitted light, its elastic and inelastic component, as well as squeezing spectra and the degree of second-order coherence.", "Our treatment evidences the important role of atomic correlations of growing order for larger input powers.", "Thanks to the linearity of the effective equations of motion, we are able to compare different order of cumulant expansions, and in this sense investigate systematically the deviations from a classical, mean-field description.", "We find that the system develops intriguing long-range correlations in steady state.", "Our theoretical predictions regarding squeezing spectra agree well with experimental results, even for large powers that could not be captured in previous descriptions.", "Our approach can be extended in various directions.", "Firstly, the assumption of resonant drive can be easily dropped, without changing our treatment conceptually.", "Secondly, the trade-off between anti-bunching and photon flux can be investigated more systematically.", "In order to do so for lower coupling, our approach need to be made more efficient in terms of the scaling with particle number, at least at TO 3.", "This could be done by restricting the descriptions to those three-particle correlations which are making a relevant contribution to $g^2(0)$ .", "Thirdly, while we focused here on a perfectly unidirectional system, it would be interesting to consider also systems of mixed chirality.", "This would, however, come at the cost of an unavoidable nonlinearity in the equations to be solved." ], [ "General idea of cumulant expansions", "We first review the general idea behind a cumulant expansion, for which we also refer to [34] and references in there.", "For $N$ particles, an $(\\ell +1)$ -body moment is given by $\\langle \\otimes _{m=1}^{\\ell +1} \\sigma _{j_m}^{\\beta _m}\\rangle $ where we take $\\ell +1\\le N$ , $\\beta _m\\in {x,y,z}$ , $j_m\\in [1,N]$ and $j_m\\ne j_n$ for $m\\ne n$ .", "The corresponding $(\\ell +1)$ -body cumulant is defined by [45] $\\m@th {\\langle }$$\\m@th {\\langle }$m=1+1 jmm$\\m@th {\\rangle }$$\\m@th {\\rangle }$=PP+1f(P)MPnMjnn, where $P_{\\ell +1}$ denotes the set of all partitions of the interval $[1,\\ell +1]$ , and $f(n)=(-1)^{n-1}(n-1)!$ .", "Note that one of the elements in $P_{\\ell +1}$ is the trivial partition given by ${[1,\\ell +1]}$ .", "This is the only partition with ${P}=1$ , and contributes the $(\\ell +1)$ -body moment on the right hand side of Eq. 
().", "In an expansion at truncation order (TO) $\\ell $ , cumulants of order $\\ell +1$ are set to zero.", "This is equivalent to setting m=1+1 jmm=-PP+1 P>1f(p)MPnMjnn, which effectively replaces correlations of order $\\ell +1$ by a nonlinear function of correlations of lower order.", "In this way, the master equation is approximated by a system of differential equations of lower dimension (depending on the TO), which is closed but generally nonlinear.", "In the next section we will explain that this is not the case for cascaded systems.", "Before that, we comment on some well-known problems with cumulant expansions, and explain how we are dealing with these issues in this work.", "As was explained Sec.", ", a cumulant expansion at TO 1 corresponds to a product state ansatz for the density matrix, which is a physically meaningful state by construction.", "Cumulant expansions at higher order have no such analog on the level of the density matrix.", "Thus, the correlations determined in this way do not necessarily correspond to a physical state.", "Whether or not a given set $\\mathcal {C}_\\ell ^n$ of correlations up to a certain order $\\ell $ among $n$ particles can be due to a density operator $\\rho _n$ corresponds to the quantum marginal problem [46], [47], [48], which is NP-hard and even QMA-complete.", "In the present context the unphysicality of the set of correlations $\\mathcal {C}_\\ell ^n$ can give rise to unphysicalities in the properties of the transmitted field.", "For examples, it can give rise to negative expectation values of positive quantities such as transmitted power, or to violations of Heisenberg uncertainty relations for quadrature fluctuations, which reads $(\\mathrel {\\mathop {:}}\\mathrel {S_0(\\omega )}\\mathrel {\\mathop {:}})(\\mathrel {\\mathop {:}}\\mathrel {S_{\\pi /2}(\\omega )}\\mathrel {\\mathop {:}})\\ge \\frac{1}{16}$ .", "In our truncations up to TO 4 such violations do occur for large $\\beta $ and input powers, that is, in regimes where correlations of even higher order become important.", "Comparing results from different expansions confirms that unphysicalities become less severe or disappear at higher TO.", "In all cases shown and discussed here, we have ensured that the power spectrum is positive everywhere and that the Heisenberg uncertainty of the squeezing spectra is satisfied.", "For example, in TO 2, the master equation (REF ) implies the equations of motion for second order cumulants 1 ddt $\\m@th {\\langle }$$\\m@th {\\langle }$ix jx $\\m@th {\\rangle }$$\\m@th {\\rangle }$=-$\\m@th {\\langle }$$\\m@th {\\langle }$ixjx$\\m@th {\\rangle }$$\\m@th {\\rangle }$+ *iz*jz + l=1i-1$\\m@th {\\langle }$$\\m@th {\\langle }$lxjx$\\m@th {\\rangle }$$\\m@th {\\rangle }$*iz + l=1j-1$\\m@th {\\langle }$$\\m@th {\\langle }$lxix$\\m@th {\\rangle }$$\\m@th {\\rangle }$*jz 1 ddt $\\m@th {\\langle }$$\\m@th {\\langle }$ iy jy $\\m@th {\\rangle }$$\\m@th {\\rangle }$= -$\\m@th {\\langle }$$\\m@th {\\langle }$iyjy$\\m@th {\\rangle }$$\\m@th {\\rangle }$- 2j$\\m@th {\\langle }$$\\m@th {\\langle }$iyjz$\\m@th {\\rangle }$$\\m@th {\\rangle }$- 2i$\\m@th {\\langle }$$\\m@th {\\langle }$izjy$\\m@th {\\rangle }$$\\m@th {\\rangle }$+ l=1i-1$\\m@th {\\langle }$$\\m@th {\\langle }$lyjy$\\m@th {\\rangle }$$\\m@th {\\rangle }$*iz + l=1j-1$\\m@th {\\langle }$$\\m@th {\\langle }$lyiy$\\m@th {\\rangle }$$\\m@th {\\rangle }$*jz 1 ddt $\\m@th {\\langle }$$\\m@th {\\langle }$ iy jz $\\m@th {\\rangle }$$\\m@th {\\rangle }$= -32$\\m@th {\\langle }$$\\m@th {\\langle 
}$iyjz$\\m@th {\\rangle }$$\\m@th {\\rangle }$-2i$\\m@th {\\langle }$$\\m@th {\\langle }$izjz$\\m@th {\\rangle }$$\\m@th {\\rangle }$+ 2j$\\m@th {\\langle }$$\\m@th {\\langle }$iyjy$\\m@th {\\rangle }$$\\m@th {\\rangle }$+ l=1i-1$\\m@th {\\langle }$$\\m@th {\\langle }$lyjz$\\m@th {\\rangle }$$\\m@th {\\rangle }$*iz - l=1j-1$\\m@th {\\langle }$$\\m@th {\\langle }$lyiy$\\m@th {\\rangle }$$\\m@th {\\rangle }$*jy 1 ddt $\\m@th {\\langle }$$\\m@th {\\langle }$ iz jz $\\m@th {\\rangle }$$\\m@th {\\rangle }$= -2$\\m@th {\\langle }$$\\m@th {\\langle }$izjz$\\m@th {\\rangle }$$\\m@th {\\rangle }$+2i$\\m@th {\\langle }$$\\m@th {\\langle }$iyjz$\\m@th {\\rangle }$$\\m@th {\\rangle }$+ 2j$\\m@th {\\langle }$$\\m@th {\\langle }$izjy$\\m@th {\\rangle }$$\\m@th {\\rangle }$-l=1j-1$\\m@th {\\langle }$$\\m@th {\\langle }$lyiz$\\m@th {\\rangle }$$\\m@th {\\rangle }$*jy - l=1i-1$\\m@th {\\langle }$$\\m@th {\\langle }$lyjz$\\m@th {\\rangle }$$\\m@th {\\rangle }$*iy.", "for $i<j$ .", "Here, as in (REF ), we left out quantities of the form $\\savebox {}{\\m@th {\\langle }}\\mathopen {\\copy \\hspace{0.0pt}\\usebox {}}{\\sigma _i^x\\sigma _j^{y,z}}\\savebox {}{\\m@th {\\rangle }}\\mathclose {\\copy \\hspace{0.0pt}\\usebox {}}$ , anticipating that these vanish for resonant drive.", "These equations complement and close those in Eq.", "(REF ).", "We call attention to the nonlinearity on the right hand side which arises from the cumulant expansion.", "The cascaded nature of the dynamics implies that the state $\\rho _n$ of the first $n$ atoms evolves autonomously and independently of the $N-n$ atoms to the right.", "This can be seen by taking the partial trace with respect to these $N-n$ atoms in the master equation (REF ), which gives, for all $n\\le N$ , 1dndt=Lnn.", "An important consequence of this property, which holds generally for any cascaded system, is the following: The cumulant expansion at any order of a cascaded system yields a nonlinear system of differential equations whose structure corresponds to a hierarchy of nested systems of actually linear differential equations.", "This feature prevents certain numerical difficulties in solving the truncated differential equation systems associated with the normally occurring nonlinearity, which becomes pertinent especially for higher TOs.", "In order to show this, we denote by $\\mathcal {C}_\\ell ^{n}$ the set of correlations of the type () involving up to (and including) $\\ell $ particles among the leftmost $n$ atoms, assuming $\\ell \\le n$ .", "Thus, $\\mathcal {C}_\\ell ^n$ contains at most $\\ell $ -body moments and it is a subset of $\\mathcal {C}_\\ell ^{n+1}$ .", "$\\mathcal {C}_\\ell ^{n+1}$ contains additionally all correlations up to order $\\ell $ involving the $(n+1)$ -th particle.", "Our argument proceeds inductively: The correlations $\\mathcal {C}_\\ell ^\\ell $ up to order $\\ell $ among the first $\\ell $ atoms follow without approximation from Eq.", "() for $n=\\ell $ .", "Thus, $\\mathcal {C}_\\ell ^\\ell $ can be determined by solving linear equations.", "Now suppose that in a cumulant expansion of order $\\ell $ the correlations $\\mathcal {C}_\\ell ^n$ can be determined by solving linear equations, which, as we have just seen, holds for $n=\\ell $ .", "To determine the correlations in $\\mathcal {C}_\\ell ^{n+1}$ , we additionally need all correlations involving the $(n+1)$ -th particle.", "These satisfy linear equations of motion involving all correlations in $\\mathcal {C}_{\\ell +1}^{n+1}$ involving up to $\\ell +1$ particles.", 
"In a cumulant expansion at order $\\ell $ , $(\\ell +1)$ -particle correlations are replaced by Eq. ().", "In the right hand side of Eq.", "() each term in the sum contains exactly one factor involving the $(n+1)$ -th particle.", "All other factors are elements of $\\mathcal {C}_\\ell ^n$ , which are known.", "This implies that the correlations involving the $(n+1)$ -th particle follow from linear equations, and so does $\\mathcal {C}_\\ell ^{n+1}$ .", "Inserting the input-output relation () in the definition of $g^2(0)$ in Eq.", "() one arrives at g2(0) =Pin2Pout2(1 - 2Pin/Psatjyj+ 42Pin/Psatj eej    +22Pin/Psatij ij(xixj+ 3yiyj)- 432(Pin/Psat)3/2ijk ijk (yixjxk+ yiyjyk)    -23(Pin/Psat)3/2ij ijeeiyj+ 24Pin2/Psat2ij ijeeieej    +42Pin2/Psat2ijk ijk(xixj+ yiyj+ zixjxk+ ziyjyk)    + 416Pin2/Psat2ijkl ijkl (xixjxkxl+ 2xiyjxkyl+ yiyjykyl)), where we used $\\sigma _{ee}=(1+\\sigma _z)/2$ to denote the projector on the excited state.", "In mean-field approximation, every moment of order higher than one factorizes into products of first moments, and the expression for $g^2(0)$ simplifies considerably due to $\\langle \\sigma _x^i\\rangle =0$ for resonant drive.", "In treatments at truncation order $\\ell $ , all correlations involving more than $\\ell $ -particles are approximated by those of lower order by means of Eq. ().", "Here, we complement Fig.", "REF and show results for squeezing spectra in cumulant expansion at TO 3, which evidences good convergence at TO 2.", "We also extend the analysis to a larger set of input powers approaching saturation, where the Mollow triplet emerges in the spectrum of the inelastically scattered field.", "The results are shown in Fig.", "REF .", "Figure: S ie (ω)S_\\mathrm {ie}(\\omega ), :S 0 (ω):\\mathrel {\\mathop {:}}\\mathrel {S_0(\\omega )}\\mathrel {\\mathop {:}}, :S π/2 (ω):\\mathrel {\\mathop {:}}\\mathrel {S_{\\pi /2}(\\omega )}\\mathrel {\\mathop {:}} for β=0.01\\beta =0.01 and P in /P sat =0.03,0.06,0.1,0.3,1P_\\mathrm {in}/P_\\mathrm {sat}=0.03\\,, 0.06\\,, 0.1\\,, 0.3\\,, 1 at TO 3.", "The dashed line is computed via equation ().", "As observed earlier in Fig.", ", in the low power regime the spectra are symmetric.", "Approaching a regime where P in ≃P sat P_\\mathrm {in}\\simeq P_\\mathrm {sat} we see the transition to and, eventually, in the last row, the manifestation of the Mollow triplet." ], [ "The saturation parameter $s$ vs. {{formula:63cf0733-0159-4929-aa78-6a93f25cbe8f}}", "We point out that we characterize the saturation of the emitters by two quantities, $s$ and $P_\\mathrm {in}/P_\\mathrm {sat}$ , depending on the context.", "Both quantities are linked by $s=8 P_\\mathrm {in}/P_\\mathrm {sat}$ .", "The saturation parameter $s$ is more convenient when referring to experimental data.", "For $s=1$ , an emitter is subject to a light intensity $I_\\mathrm {sat}$ and scatters ${\\Gamma }/{4}$ photons [49].", "The saturation power $P_\\mathrm {sat} = \\frac{ \\Gamma }{\\beta }$ is more suitable for writing formulas, such as equations of motion etc., where it simplifies notations [24].", "We remind the reader that we scale powers to photon flux by $\\hbar \\omega _0$ ." 
], [ "Heating and probing time", "We probe the atoms with input powers ranging from [20-300]pW during $[10-100]{\\mu s}$ and repeat the experiment $10\\,000$ - $100\\,000$ times.", "For larger input power, heating of the atoms during probing becomes important.", "To avoid a too large temperature, we decrease the probing time for larger input power: Up to $s=0.6$ we probe for $[100]{\\mu s}$ and then gradually decrease the probing time to keep the OD approximately constant.", "For the largest saturation parameter $s=2.19$ , we probe for $[10]{\\mu s}$ and the OD changes by up to 20%.", "Even with reduced probing times, we expect that heating is the main source for the discrepancy between theory and experiment at high input power since it affects both $\\beta $ and $N$ .", "First, the atoms are confined in anharmonic traps, such that the average coupling constant $\\beta $ decreases for atoms with larger energy.", "Second, atoms can be lost from the trap, which has a finite depth of about $\\simeq [100]{\\mu K}$ .", "Modeling how this modifies the squeezing spectrum is beyond the scope of this work." ], [ "Squeezing angle", "On resonace ($\\Delta =0$ ), the interesting squeezing angles are at $\\theta = 0$ and $\\theta = \\pi /2$ for which the largest squeezing and anti-squeezing occurs.", "In order to increase the signal to noise in $S_\\theta (\\omega )$ , we use the $\\pi $ -periodicity of $S_\\theta (\\omega )$ and average the data over $\\theta = 0$ and $\\pi $ as well as $\\theta =\\pi /2$ and $ 3\\pi /2$ respectively.", "For each value of $\\theta $ , we average over a range of $\\pm 18^\\circ $  [14]." ], [ "Quantifying the performance of source of a antibunched light", "For practical implementations of single photon sources based on a stream of antibunched light, it is crucial to provide a physical parameter that quantifies the latter's performance.", "This parameter should be linear in the output photon flux $P_\\textrm {out}$ .", "Furthermore, it should quantify how much the photon statistics is different from a Poissonian distribution, i.e.", "the larger the temporal width of the antibunching dip of $g^{(2)}(t)$ , the longer the output fields remains non-classical after the detection of a single photon and the higher the quality of the source.", "Here, we chose the Mandel-Q factor that quantifies the deviation of the photon statistics of a light field from a Poissonian distribution [44] Q=Pout-dt(1-|t|)(g(2)(t)-1) where $Q<0$ indicates a sub-Poissonian photon statistics.", "In the following, we consder a sufficiently short time interval after the detection of a photon, so that $g^{(2)}(t)\\approx \\mathrm {const}$ .", "For this, we define $\\tau $ as the $ 85\\%$ width of the anti-bunching dip.", "In this approximation one obtains QPout(g(2)(0)-1) For the fluorescence of a single atom, $\\tau $ is given by $\\tau \\approx \\Gamma ^{-1}\\cdot (1+s)^{-1/2}$ .", "For the $Q$ parameter one thus obtains Qsingle atom-Pout(1-g(2)(t))=-s2(1+s)3/2 with a minimum value of $Q_{min}=-0.19\\beta $ at $s=2$ .", "For the source based on collective forward scattering, we calculate the same quantity.", "The exact temporal shape of the $g^{(2)}(t)$ function in the high-power limit is hard to calculate.", "Therefore, we make the simplifying assumption that for the considered power regime the only change of $g^{(2)}(t)$ is the reduction of the depth of the antibunching dip (see Fig.", "REF ) while the overall temporal shape does not significantly change.", "That is, we approximate the temporal width 
to be independent of $s$ with $\\tau _\\textrm {coll}=0.41 \\Gamma ^{-1}$ , which we obtain from the 2-photon theory prediction.", "In this way, we obtain for the saturation-dependent $Q$ parameter the solid blue curve shown in Fig.", "REF which reaches a minimum value of $Q_\\mathrm {min}=-0.013$ at a saturation parameter of $s\\approx 0.8$ .", "For comparison, the solid gray curve is the Q parameter achievable for a perfect single photon source, i.e.", "a single atom with unit collection efficiency $\\beta =1$ .", "In contrast, the dashed grey curve depicts the Q parameter of a single atom with a coupling strength of $\\beta =0.05$ ." ] ]
2207.10439
[ [ "Real-Time Elderly Monitoring for Senior Safety by Lightweight Human\n Action Recognition" ], [ "Abstract With an increasing number of elders living alone, care-giving from a distance becomes a compelling need, particularly for safety.", "Real-time monitoring and action recognition are essential to raise an alert timely when abnormal behaviors or unusual activities occur.", "While wearable sensors are widely recognized as a promising solution, highly depending on user's ability and willingness makes them inefficient.", "In contrast, video streams collected through non-contact optical cameras provide richer information and release the burden on elders.", "In this paper, leveraging the Independently-Recurrent neural Network (IndRNN) we propose a novel Real-time Elderly Monitoring for senior Safety (REMS) based on lightweight human action recognition (HAR) technology.", "Using captured skeleton images, the REMS scheme is able to recognize abnormal behaviors or actions and preserve the user's privacy.", "To achieve high accuracy, the HAR module is trained and fine-tuned using multiple databases.", "An extensive experimental study verified that REMS system performs action recognition accurately and timely.", "REMS meets the design goals as a privacy-preserving elderly safety monitoring system and possesses the potential to be adopted in various smart monitoring systems." ], [ "Introduction", "According to the U.S. Department of Health and Human Services (HHS), the population age 65 and over has increased from 37.2 million in 2006 to 49.2 million in 2016, and is is expected to nearly double to 98 million in 2060 [19].", "Of these, about 28% (13.8 million) of non-institutional seniors live alone.", "With the unprecedented increasing of population aging and more elders living alone, care-giving from a distance becomes a compelling need, particularly for safety.", "A major threat to elders living alone is health problems such as falls and unconscious.", "Other chronic diseases, such as hypertension and hypoglycemia, also have certain behavioral manifestations which usually manifest as headache, chest pain, staggering, vomiting, etc [8].", "Timely detection and alarming of these abnormal situations are essential for their safety and health.", "Nowadays, there are many technologies applicable for remote safety monitoring and action recognition in healthcare services.", "Autonomous wearable sensors are often considered to detect dangerous actions like suddenly falls [18].", "However, wearable devices are inconvenient to put on and take off, and users often forget to wear them [10].", "Limited battery life is also put extra burden to users.", "Actually the effectiveness of wearable sensors is highly depending on the capability and willingness of the user.", "Meanwhile, video streams collected through non-contact optical cameras provide a richer information and release the burdens on elders.", "But the convenience is accompanied by the risk of privacy violations.", "Human skeleton is widely used in action recognition [13].", "In contrast to raw video streams, skeleton images are privacy-preserving by nature since a human body is represented as a coordinate of joints and background distractions are removed.", "Even if a skeleton image is leaked, it does not spill much personal information into the wild cyber-space.", "Skeleton-based human action recognition (HAR) is generally considered as a time series processing problem [14].", "Human actions in a period of time can be represented as a matrix with 
high auto-correlation inside.", "Therefore, deep learning (DL) methods that take advantage of the auto-correlation have achieved good results.", "The skeleton joints can be encoded into multiple 2D pseudo-images and then feed to Conventional Neural Networks (CNNs) to learn useful features [29], [32].", "But CNN itself is suffering from the problem of size and speed.", "Graph Conventional Networks (GCN) [31] firstly built a saptio-temporal graph, in which the joints are represented as graph vertices and the natural connectivity of human body structure and time is mapped to the graph edges.", "However, GCN-based HAR systems face the problem of data transforming.", "Time Pyramid performs well in analysing three-dimension (3D) geometric relationships but it is generally restricted by the width of the time windows [16].", "A novel two-stream recurrent neural network (RNN) is proposed to model both temporal dynamics and spatial configurations for skeleton data [26].", "RNNs have advantages in time-consuming processes when dealing with long sequences.", "But due to the recurrent connections with repeat multiplication of the recurrent weight matrix, the training of RNNs suffers from the gradient vanishing and exploding problem.", "Recently, Independently-Recurrent Neural Network (IndRNN) has been proposed to solve the problem of exploding gradient retains the advantages of RNN in processing auto-correlated data[11].", "In addition, IndRNN is able to achieve approximate recognition result with more concise structure compared with others[15].", "In this paper, a novel Real-time Elderly Monitoring for senior Safety (REMS) scheme is proposed leveraging a real-time skeleton image action recognition system based on IndRNNs.", "The major contribution of this work includes the following: An IndRNN based action recognition system is proposed that is able to sound alarm timely when emergency action occurs.", "Continuous real-time monitoring is enabled using a sliding window method that divides the input video stream into a sequence of overlapped short video segments to reduce the processing time for each segment.", "REMS achieved a better performance with a lightweight design, which makes REMS affordable for household use with very constrained computational resources.", "The rest of this paper is organized as follows.", "Section provides necessary background knowledge and gives a brief overview of related work.", "Section introduces the methodology and the system framework of REMS.", "Section reports experimental results and a comparison study with several state-of-the-art methods.", "Section concludes this paper with some discussions on the on-going efforts, specifically in terms of security and privacy.", "According to the technical approaches, recent human activity monitoring solutions roughly belong to two categories, wearable device-based and image processing-based [30].", "Wearable devices based HAR systems use variant types of sensors to collect human data for analysis, such as inertial sensors (e.g.", "accelerometers, gyroscopes, magnetometers), GPS, heart rate sensors, etc.", "[3].", "For instance, five dual-axis accelerometers are installed on arms, hips, knees, and ankles to identify daily behaviors including walking, jumping, an overall accuracy rate of 84% is achieved using a decision tree algorithm [2].", "Falling can be detected in real-time by collected fall and non-fall data-sets [1].", "Five younger and 19 elder persons went on their everyday work by wearing accelerometers.", "Ten 
unexpected falling is collected during total 500 hours of data acquiring.", "These HAR methods using single or multiple acceleration sensors have achieved encouraging results.", "However, the recognition accuracy is highly correlated with the sensor positions, and most of the recognition is performed offline, which does not meet the real-time requirements.", "In addition, as only few recognition actions can be discriminated in real-time, their utility is limited.", "With the widespread application of mobile robots in indoor scenes, ultra-wideband impulse-radar technology has also been applied to indoor positioning and motion capture, especially to capture dynamics in indoor mobile traffic scenes, such as hospitals and IoT factories.", "However, indoor UWB positioning technology can lead to inaccurate positioning due to measurement errors caused by obstacles and non-line-of-sight (NLOS)[17].", "Besides the applications in healthcare area, video surveillance has been widely adopted to serve many different purposes such as public safety [28], defense [5], and smart transportation [4].", "HAR based on visual processing normally uses image devices installed on predetermined points of interest.", "For example, a camera installed in the living room where an elder spends most of the time each day and a computer for intelligently analysis on the captured human actions in video or image sequences.", "By replacing the dense sampling with random sampling, the number of sampled patches to be processed is reduced such that the efficiency is increased to recognize human action in real-time [21].", "Although there are numbers of research that focuses on the RGB images acquired though cameras and achieves good recognition results, there are still shortcomings.", "The cameras must be installed in a specific environment, hence the viewing edge is usually limited.", "Also, image acquisition occupies a large amount of memory, therefore the hardware device is required to have large storage space and sufficient processing capabilities.", "The most important issue is, the data collected by cameras contains a lot of personal information while most computing intensive image/video processing tasks are outsourced to remote servers.", "The high risk of personal information leakage in the transmission of the raw images poses a non-negligible threat to privacy preservation [9]." 
], [ "Skeleton Features Extraction", "With the development of 3D depth camera, the 3D node positions of a human skeleton can be quickly obtained from the depth image.", "At each moment, the skeleton corresponds to the positions of the joint points of a human body.", "Recent years have witnessed a fast development of HAR techniques based on depth information in multiple fields including smart home, games, and human-computer interaction [12].", "Compared with RGB images, skeleton image is less susceptible to appearance factors.", "The human skeleton is defined as a schematic model describing torso, head and limbs.", "The parameters and motion information can be used to represent gesture and pose.", "In 3D skeleton-based action representation, an action is a collection of 3D position-time series.", "However, differences in reference coordinate systems and recording environments, plus the differences among variant human motion styles, lead to errors if the HAR processing depends only on joint coordinate representing.Therefore, many different implementations on representation method of skeleton matrix are introduced, such as the pose of human body is normalized by length of shoulders and torso [27], the skeleton data is centered on the hip joint to establish coordinates [24] such that the data can be rewritten according to a new coordinate system." ], [ "REMS Methodology and System Architecture", "Figure REF shows the complete algorithm flow of REMS system.", "The Kinect V2 camera is adopted to establish the human body 3D-skeleton image.", "The untrimmed 3D skeleton videos are cut into a overlapped continuous video stream using a sliding window method.", "The processed 3D skeleton video stream will fed to the IndRNN based action recognition system, which will analyze the video, identify suspicious activities, and sound an alarm when an emergency situation is detected.", "Figure: Configuration of the 20 joint in skeleton image, The labels of the joints are: 1.hip center, 2.middle-spine, 3.shoulder center, 4.Head, 5.Left-shoulder, 6.Left-elbow, 7.Left-wrist, 8.Left-hand, 9.Right-shoulder, 10.Right-elbow, 11.Right-wrist, 12.Right-hand, 13.Left-hip, 14.Left-knee, 15.Left-ankle, 16.Left-root, 17.Right-hip, 18.Right-knee, 19.Right-ankle, 20.Right-foot." 
], [ "3D Skeleton Image Feature Extraction", "Kinect sensor is able to construct simplified human skeleton model by using 20 key points information instead of using total 206 bones.", "Figure REF presents the 20 key skeletal joints that sufficiently describe the process of general human movement.", "In this model, the spatial coordination information of each joint point is set as $P(x,y,z)$ , where $x$ and $y$ are the abscissa and ordinate respectively, and the $z$ dimension represents the distance from the human body to the camera.", "During the movement, the relative positions among the joints change.", "For example, in the waving action, the joint of wrist is initially under the joint of shoulder.", "But as the movement progresses, the wrist joint moves above the shoulder joint.", "At the same time, the joints represent torso are always relatively stable.", "Therefore, to better represent the offset of the joint points of the limbs relative to the hip and remove the effect of distance from the camera, taking the central node of the hip as the central origin, the calculation formula for obtaining the initial spatial position feature is as follows: $f = p_{n}-p_{hip}(n = 2,3,....N).$ where $p_{n}$ represents other nodes except the hip joint, and $p_{hip}$ is the hip-center joint.", "The representation in a 3D space is: $\\left\\lbrace \\begin{matrix}\\Delta x_{n}^{m} = x_{n}^{m} - x_{h}^{m}\\\\\\Delta y_{n}^{m} = y_{n}^{m} - y_{h}^{m}\\\\\\Delta z_{n}^{m} = z_{n}^{m} - z_{h}^{m}\\end{matrix}\\right.$ $f_{x}^{m} = [\\Delta x_{1}^{m}, \\Delta x_{2}^{m}, .....\\Delta x_{n}^{m}]$ The difference between the $X$ coordinates of the remaining 19 points in the $m$ _th frame and the center point can be obtained by Eq.", "REF .", "In the same way, the difference in $Y$ axis $f_{y}^{m}$ and the difference in $Z$ axis $f_{z}^{m}$ can be obtained.", "The three dimensional coordinates can be connected to the feature vector of the current frame: $f_{m} = [f_{x}^{m},f_{y}^{m},f_{z}^{m}]$ with the size of $19\\times 3$ .", "An action can be represented as a set of feature vector of all image as it is shown in Eq.", "REF .", "$F = [f_{1},f_{2}, ......f_{M}]$ It should be noted that due the the different heights of human bodies, the numerical value of the coordinate of skeleton will ranges.", "The taller a person is, the longer the skeletal segment will be.", "This difference may cause misidentifying of the same action but made by different people.", "Considering the skeletal segment size, a normalization function is used.", "As shown in Fig.", "REF , the point 1 is the hip center, and the point 2 is the middle of spine.", "The final action space feature vector is shown by the Eq.", "7.", "$&\\bar{f} = f*Scale \\\\&Scale = distance(P_{1},P_{2})/(length\\_of\\_spine/2) \\\\&\\bar{F} = [\\bar{f_{1}},\\bar{f_{2}}, ......\\bar{f_{M}}]$" ], [ "Model Creating", "RNN is a class of deep neural networks which take not only the current input but also the previous hidden state as the true input.", "Depending upon the number of time steps, RNN can efficiently retain information about the past.", "Therefore, RNN is more suitable for time-series analysis.", "However, RNN and its variants Long short-term memory (LSTM) network are easily prone to long-range dependency problem, such as gradient explode and vanishes during back-propagation operations.", "To overcome such issues, IndRNN is proposed [15] as an enhanced version of simple RNNs.", "In IndRNNs the neurons in the same layer are independent of each other while 
connected across layers.", "Figure REF shows the architecture differences between simple RNNs and IndRNNs.", "In a particular hidden layer of IndRNN as shown in Fig.", "REF , each neuron only receive its own past context information instead of having full connectivity among all neurons in the same layer as the simple RNN does as shown in Fig.", "REF .", "Figure: Architecture of Simple RNN and IndRNN unfloded in time.Figure: Basic architecture of IndRNN.In this work, an IndRNN architecture [15] is adopted to handle the HAR task.", "The basic architecture is represent in Fig.", "REF .", "The IndRec+ReLU block represents the input and recurrent process carried out at each time step with ReLU activation function.", "BN donates the batch normalization preformed before and after the activation function [15].", "The Hadamard product is used to process the recurrent inputs in the hidden layer of the processed architecture.", "For each time step $t$ , the $n_{th}$ hidden state $h_{n,t}$ can be updated with Eq.", "REF , $h_{n,t} = \\sigma (W_{n}x_{t}+u_{n} \\odot h_{n,t-1}+b_{n})$ where $W_{n}$ is a vector represents the input weight, $u_{n}$ is the recurrent weight, $\\sigma $ is the ReLU activation function, $\\odot $ denotes the hadamard product, and $b_{n}$ is the bias.", "After being trained, the IndRNN-based HAR model is able to handle the extracted skeleton features.", "The 4-layer IndRNN structure has achieved comparable experimental results [15].", "Particularly, the IndRNN is set to be a 4-layer simple structure to meet the requirement of smart-home environment." ], [ "Real-Time Testing and Sliding Windows", "Most work on continuous HAR assumes that pre-segmented video clips are used in recognition part of the task.", "However, the information of the start and ending time of an observed action are import in order to provide a credible recognition result for untrimmed action streams.", "In this paper, we apply a sliding window method to divide the input skeleton video into short overlapped skeleton segments as shown in Fig.", "REF .", "Specifically, the input skeleton video streams are sampled every five frames to construct a sequence of frames for processing by a sliding window of 20 frames as time elapses.", "After sliding window video trim, the short sequence is fed into the action recognition system to estimate the result.", "But the sliding window method greatly increases the result number, which consequently will reduce the final accuracy and cause confusion at the junction of two actions.", "Besides, in real-world applications, such a high frequency result report is unnecessary.", "Therefore, Non-maximum suppression (NMS) method is adopted to increase the robustness of the system.", "In every five sliding windows, the sliding results are utilized to find the real action result and provide the start and stop time.", "The result is annotated at the Starting time of 6-th sliding window.", "Figure: Sliding window method for action recognition." ], [ "Experimental Setups", "Experimental study has been conducted to evaluate the proposed REMS scheme,and the results are analyzed in terms of the training and testing accuracy using IndRNN architecture and mean Average Precision of the instance-level action recognition.", "The model is trained on an Intel i7-8700k 3.7GHz and two NVIDIA GeForce GTX1080Ti graphic cards under Windows 10.", "The experiments are conducted under python 3.8 that utilizes Pytorch backend and NVIDIA CUDA 11.6 library for parallel computing." 
], [ "Dataset: NTU_RBG+D", "In this study, the training process is conducted on the NTU_RGB+D dataset, which has 60 action classes performed by 40 subjects from 80 views [20].", "Each action is captured by three Microsoft Kinect V2 places on different positions, the available data for each frame includes RGB image, depth map sequence, 3D skeleton data, and infrared (IR) data.", "The skeleton data contains 3D coordination of 25 body joints for each frame.", "In order to satisfy the feature extraction requirement, the last five joints are ignored, which are 21 (top of spine), 22 (tip of left hand), 23 (left thumb), 24 (tip of right hand), and 25 (right thumb).", "It contains 56880 video samples from 60 types of different classes including 2 segmentation methods: Cross-Subject 40 subjects are splited in to training and testing sets based on the subject id, each group consists of 20 subjects.", "40320 and 16560 samples for training and testing, respectively) and Cross-View (All samples from camera 1 are picked for testing and other samples of camera 2 and 3 for training.", "37920 and 18960 samples for training and testing, respectively).", "Specifically, there are nine types of medical conditions as shown in Table REF .", "Also, there are 11 types of actions that are interaction among multiple people.", "Therefore, 48419 video sample of 49 types of action in total are used for training.", "20 frames were sampled from each sequence as one input [15].", "After sampling, the skeleton data used to represent one action is only 10KB, while the RGB image and depth image representing the same action are 1.88MB and 878KB respectively.", "The low memory consumption makes skeleton image more suitable for action recognition conducted by resource constrained edge devices in smart-home environments.", "Table: Class of actions related to medical conditionsTable: Results on NTU RGB+D datasetTable REF presents the comparisons of REMS with other existing methods.", "In [23], the joint coordinates of two subject skeletons are set as the input.", "If only one person is presented, the other is set as zero.", "With a different feature extraction method, our REMS system is more focusing on single skeleton analysis in purpose to accommodate the situation of elders who live alone.", "The accuracy of REMS is slightly lower than the ST-GCN based models [31], in which the input is spatially processed through Graph convolution.", "However, in the context of smart home monitoring at the network edge, the available computational resources are limited.", "Consequently, a lightweight action recognition model with less complexity especially in feature extraction step is more affordable and the trade-off is acceptable." 
], [ "Continuous Action Recognition Results", "Table REF reports the Average Precision (AP) of REMS on activity classification.", "Beside NTU_RGB+D, the Toyota Smarthome Untrimmed (TSU) dataset is newly published and contains 536 video streams with an average of 21 minutes, which annotated with 51 activities.", "The subjects are seniors in the age range from 60-80 years old.", "Therefore, this dataset is a qualified to represent the environment of elderly living alone.", "The sliding window overlapping is chosen to be 75% while the window length is 20 as the training data size as suggested in [22].", "The use of two different datasets is also purposed to be closer to the real-world situations.", "In real life, human actions are often unpredictable and actions that are not included in the training set are very likely encountered.", "There are total eight types of actions that occur in both dataset at the same time.", "Table: Average Precision (AP) of REMS on activity classification using TSU and NTU_RGB+D datasetThe start-of-art mAP of the TSU is 0.327 [6] while the mAP over selected action types is 0.308, which shows that our REMS scheme achieved a comparable accuracy.", "However, it is worthy to mention that only eight types of actions are contained in TSU and NTU_RGB+D dataset both, and REMS is trained using NTU_RGB+D dataset only.", "This experiment is conducted to evaluate the performance of REMS in case it is deployed to handle actions that are not in the training data set.", "This study leads to some interesting observations.", "Although there are identically labeled actions in both dataset, the results vary widely due to multiple factors, e.g.", "different camera settings.", "Actions that do not appear in the training set are very mislabeled, which lead to a question on how to avoid that situation in real-life applications.", "Finally, the healthcare-related actions are not given in this dataset and there is very few untrimmed dataset related to healthcare.", "Hence, introducing artificially generated action data using tools such as GANs might be feasible in the next step." 
], [ "Conclusions", "In this work, we propose REMS, a real-time elderly monitoring for senior safety by applying lightweight human action recognition system.", "Leveraging a plain IndRNN structure as the main action recognition core, the REMS system is efficient and is ready to be transplanted to edge devices that are affordable in smart-home environments.", "There are time step delays after the implementation of using NMS.", "Considering that the Kinect sensor works around 30 frames per second (FPS), the time delay is around five seconds.", "This delay does not significantly impact the healthcare monitoring because the time that emergency personnel arrives after receiving the alarm is measured in minutes.", "Certainly shorter delay time will be more approving, which is also included in our future work.", "Privacy and security are among the top concerns that need to be considered in tracking and monitoring process especially in the application scenarios like healthcare monitoring [7].", "In our REMS system, only the skeleton information is used.", "REMS does not involve any privacy sensitive information, such as the living environments, the color and style of clothing, items held in the person's hand, etc..", "Regarding the data that is closely related to personal information such as height, the personal tag can be weakened after the normalization in feature extraction step.", "Therefore, even if the data is leaked into cyberspace, it will not bring any impact on personal privacy." ] ]
2207.10519
[ [ "Room geometry blind inference based on the localization of real sound\n source and first order reflections" ], [ "Abstract The conventional room geometry blind inference techniques with acoustic signals are conducted based on the prior knowledge of the environment, such as the room impulse response (RIR) or the sound source position, which will limit its application under unknown scenarios.", "To solve this problem, we have proposed a room geometry reconstruction method in this paper by using the geometric relation between the direct signal and first-order reflections.", "In addition to the information of the compact microphone array itself, this method does not need any precognition of the environmental parameters.", "Besides, the learning-based DNN models are designed and used to improve the accuracy and integrity of the localization results of the direct source and first-order reflections.", "The direction of arrival (DOA) and time difference of arrival (TDOA) information of the direct and reflected signals are firstly estimated using the proposed DCNN and TD-CNN models, which have higher sensitivity and accuracy than the conventional methods.", "Then the position of the sound source is inferred by integrating the DOA, TDOA and array height using the proposed DNN model.", "After that, the positions of image sources and corresponding boundaries are derived based on the geometric relation.", "Experimental results of both simulations and real measurements verify the effectiveness and accuracy of the proposed techniques compared with the conventional methods under different reverberant environments." ], [ "INTRODUCTION", "Room geometry inference is a process of estimating the reflective boundaries, allowing the geometry of the room to be inferences [1], [2], [3].", "Unlike visual-based techniques, audio-based room geometry inference methods are conducted based on the recorded acoustic signals and play an increasingly important role in numerous applications, such as sound source localization [4], [5], [6], signal enhancement [7], immersive audio and augmented reality (AR) [8].", "The traditional methods for inferencing the room geometry are based on the measurement of Room impulse response (RIR), whose temporal structure is subsequently analyzed [9], [10], [11].", "The RIR is composed of direct sound, early reflections and later reverberation.", "The time delay between the direct sound and early reflections is determined by the sound source and the receiver's locations and the geometry of the room.", "Therefore, it is possible to derive the boundary information from the measured RIRs.", "Dokmanic et al.", "demonstrated that the room can be reconstruected from a single RIR by using the information of direct sound and reflections up to second order [7].", "Considering that the second-order reflections are hard to obtain, some researchers achieve this target by using the mobile receiver or a large microphone array that uses the first-order reflection information in RIR measured at different positions to estimate the boundary parameters [12].", "However, the RIR-based room geometry estimation methods need to activate the sound source with specific signals, which can not be used in the scenarios such as the sound source is unknown.", "Unlike the above approaches based on the measurement of multiple room impulse responses, Mabanda et al.", "proposed a room geometry inference method by introducing the beamforming techniques for a compact spherical microphone array [13].", "This kind of method 
does not need the prior knowledge of the source signals but only the position of the sound source, which makes it more widely used.", "Nevertheless, the room geometry is challenging to reference based on one-point measurement because of the limitation of the current DOA and TDOA estimation algorithms.", "Therefore, multiple measurements of the first-order reflections corresponding to different boundaries are required to estimate the room geometry thoroughly.", "With the development of machine learning algorithms, researchers have proposed many room geometry estimation techniques by using the deep neural network model.", "For example, Yu et al.", "realized the room acoustic parameters estimation from a single RIR using the proposed DNN model [3].", "Multiple RIRs are used to increase estimation precision by averaging estimates.", "Furthermore, Niles et al.", "investigated the performance of a convolutional-recurrent neural network for blind room geometry estimation, which is conducted based on higher-order Ambisonics (HOA) signals [14].", "Although this end-to-end method has proved to be effective under specific circumstances, it ignores the position information of the sound source and the receiver in the room environment, which also determine the spatial characteristics of reverberation signals.", "This setting will result in insufficient constraints in solving room geometry estimation problems and affect the network model's reliability and the accuracy of the results.", "In this paper, we proposed a novel room geometry estimation method based on the localization of the real sound source and first-order image sources.", "This work is inspired by the related work proposed by Mabande et al.", "in Ref.", "[13] and further improved, which can be embodied in the following points.", "First, we have realized the sound source localization based on the geometric relationship between the real and image source.", "Therefore, the location of the sound source does not need to be given in advance.", "Besides, the direction of arrival (DOA) and time difference of arrival (TDOA) of direct signal and reflections are estimated using the proposed DNN models, so the perception of boundary position can be realized through one-point measurement.", "In contrast to most approaches found in the literature, the proposed method does not involve measuring RIRs and any other prior knowledge of the source.", "Therefore, it can be applied to many applications, e.g., speech enhancement or immersive audio recording.", "This paper is organized as follows: an overview of the proposed room geometry blind inference method is given in Sec.", "II.", "The proposed DOA and TDOA estimation methods that are processed in the Eigen beam (EB) domain are briefly reviewed in Sec.", "III.", "Sec.", "IV and Sec.", "V have analyzed the geometric relation between the direct source and first-order reflections, based on which the sound source localization methods are proposed, and then the room geometry is inferred.", "The experiments and results are described in Sec.", "VI, which proves the effectiveness and robustness of our method, and the conclusions are given in Sec.", "VII." 
], [ "Room geometry estimation method", "In this section, a room geometry estimation method is proposed, which can be performed under enclosure rooms with the shape of a convex polyhedron.", "This work is conducted based on the assumption that the impinging waves are reflected from the boundaries in a specular fashion, which is justified for most room boundaries with a wide frequency range [15].", "In this case, the first-order image sources can be viewed as the real source's mirror point concerning the corresponding boundaries.", "Therefore, the estimation of room boundaries can be achieved by the localization of the real sound source and first-order image sources based on the recorded microphone array signals.", "The proposed method does not involve measuring RIRs and prior knowledge of the sound source.", "So, it can be applied in the process of the other sound source processing algorithms, such as speech enhancement or immersive audio recording.", "Figure: The framework of the proposed method.The procedure of the proposed techniques is expressed in Fig.REF .", "The DOA of the direct signal and reflections are firstly estimated, based on which the signals from the corresponding directions are then extracted, and the TDOAs between the direct signals and first-order reflections are estimated.", "After that, the DOA and TDOA information is combined to local the sound source and the image sources based on the geometric criterion.", "In the last, the desired geometric boundary parameters are inferred.", "Due to the absorption of the boundaries and the attenuation during propagation, the power of the reflected signals is lower than that of the direct signal.", "Therefore, signal extraction of first-order reflections is difficult because of their low signal to interference-noise ratio (SINR) and their coherence with the direct signals and among each other.", "To alleviate the disturbance of the no-target signals and background noise, the DNN-based methods for TDOA estimation and source localization are proposed in Sec.", "III" ], [ "DOA and TDOA estimation", "In our work, the reverberant signals are recorded using a spherical microphone array.", "And then, the recorded multiple-channel microphone array signals are transformed to the Eigen beam (EB) domain based on the spherical harmonics’ decomposition theory [16].", "So, the spatial sound field is represented by a set of orthogonal bases, which provides an elegant mathematical framework for spatial signal processing and facilitates the localization and extraction of target sources.", "To reduce noise amplification and suppress spatial aliasing error while retaining the spatial characteristic of of estimation EB domain signals under the reverberation environment, this process can be realized using the sparse DNN model [17].", "The obtained spherical harmonics signals can be expressed as ${B}(kr) = [B_0^0(kr), B_1^{-1}(kr), B_1^0(kr), \\ \\cdots , B_N^N(kr)]^T,$ which is also regarded as the HOA signals.", "In the above equation, $k$ is the wave number and $r$ is the radius of the microphone array.", "$B_n^m (\\cdot )$ is the EB domain signal of order $n$ degree $m$ , and $N$ is the truncation order.", "The following DOA and TDOA estimation are all performed based on the HOA signals." 
], [ "DOA estimation", "DOA estimation of room reflections is challenging due to the high coherence and low energy of the reflected signals related to the direct signal.", "To solve this problem, researchers have given a variety of solutions.", "The steered beamformer-based and subspace-based reflections localization techniques have been compared in Ref.", "[18], and the results prove that the EB domain MVDR (EB-MVDR) is more suitable for the DOA estimation of the reflections.", "To further improve the accuracy of reflected signals localization results, a DNN-based DOA estimation model (DCNN) is conducted for the DOA estimation of the direct and first-order reflected signals.", "This model uses the correlation matrix of HOA signals as the input feature and the deconvolutional network to construct the spatial pseudo-spectrum [19], which performs better than the EB-MVDR algorithm and previous DNN-based method.", "Therefore, the DCNN model is used here for DOA estimation and is briefly reviewed in the following.", "Given the HOA signals in the time domain $B_t$ , the covariance matrix of broadband EB signals can be calculated as $R_t=B_t B_t^H$ .", "Using $R_t$ as the input of the neural network model, the spatial pseudo-spectrum (SPS) map correlated with the directions in the spherical coordinate system can be reconstructed.", "The peaks of the SPS determine the DOAs $\\Omega _i,i=0,1,2,…,I$ , where $I+1$ is the total number of the estimated DOAs, and thus $i=0$ denotes the direct path and $i=1, …, I$ denotes the first-order reflections.", "To ensure this, the supervision of the DCNN model is the SPS map with the peaks corresponding to the direct path and first-order reflections, and the initial microphone array signals are processed with the weighted prediction error (WPE) algorithms to eliminate the late reverberation [20]." 
], [ "TDOA estimation", "Having estimated the DOAs of the direct path and first-order reflections, the signals from the localized directions are firstly extracted using the robustness beamforming techniques.", "Then a statistical analysis of cross-correlations is performed to estimate the TDOA between the direct signal and each reflection.", "Since the accurate estimation of the covariance matrix used in the MVDR algorithms is disturbed by the coherent signals, a fixed beamformer is used here for the signals’ extraction.", "According to the analysis in Ref.", "[21] and Ref.", "[22], the pure phase mode beamformer in the EB domain can obtain the maximum white noise gain (WNG) for the case of isotropic noise, which corresponds to the reverberation environment.", "Therefore, to realize the optimal signal extraction under reverberant environment, given the direction of the sound source $\\Omega _i$ , the coefficients vector of the beamformer can be written as the ${w}(\\Omega _i) = [Y_0^0(\\Omega _i), Y_1^{-1}(\\Omega _i), Y_1^0(\\Omega _i), \\ \\cdots , Y_N^N(\\Omega _i)]^T,$ where $Y_n^m (\\cdot )$ is the spherical harmonics function of order $n$ and degree $m$ .", "Due to the limitation of the main lobe width in the beamforming process, the extracted reflected signal often contains the direct signal, which will decrease the accuracy of the traditional generalized cross-correlation (GCC) algorithm for TDOA estimation.", "To alleviate the influence of background noise or interference signals, traditional GCC-based TDOA estimation methods are always conducted with specific weight coefficients, e.g., GCC with Phase Transform (GCC-PHAT) [23].", "This section proposed a convolutional neural network for the time delay estimation (TD-CNN) of the reflect signals related to the direct signal.", "The motivation is as follows: the convolutional neural network can extract and compare the phase difference of the input dual-channel signals, based on which the time delay is estimated.", "Besides, the weight coefficients of different frequencies are learned using the supervision method, which will make the proposed model more robust under reverberant and noisy environments.", "We use the direct signal and reflect signal in the frequency domain as the input of the network.", "The real and imaginary parts of the dual-channel input signals are spliced and fed into a multiple-layer convolutional neural network to estimate the time delay.", "In order to ensure the accuracy and robustness of the network model, the length of the input signals should be much longer than the estimated time delay.", "Considering that the maximum delay of direct sound and early reflection under room reverberation environments is about 50ms [1] and the signal sampling rate $f_s=16k$ Hz, we set the length of the network input signal as 5000 sampling points, that is, 313ms.", "The output layer contains 1000 nodes to represent the time delay with the exact time resolution as the reciprocal sampling rate.", "Ideally, only one node is activated, and its position represents the time delay.", "The enormous time delay obtained is about 62ms, which is sufficient for the TDOA estimation of the first-order signal.", "Therefore, we formalize the time delay estimation problem into a classification problem, with the input signal $x\\in R^{5000\\times 2}$ and the output signal $y\\in R^{1000}$ The proposed neural network architecture is shown in Fig.REF , and the specific parameters are depicted in Table., which are set according to our 
pre-experiments.", "We use the cross between the network output and accuracy time delay representation as the loss function and use the Adam optimizer for the training of the network.", "Figure: Flow diagram of the TD-CNN model.", "TC-Conv denotes the convolutional layer followed by the batch normalization (BN) and drop out processing, and FC denotes the fully-connected layer for time delay classification with the nodes number.Table: Architecture of TD-CNN.After the network is The TDOAs of the input dual-channel signals can be obtained by searching the maxima of the output layer $\\tau = \\lambda /f_s,$ where $\\lambda $ is the lag index of the maximum value, and $f_s$ is the sampling rate." ], [ "Sound source localization", "Since the impact microphone array can hardly distinguish the wavefront curvature difference of the sources at different distances in the same direction, it is challenging to realize the sound source localization using such a microphone array.", "However, the existence of reverberant signals makes it possible since different sound source positions lead to different reverberation characteristics.", "The Gaussian mixture model or neural networks are conducted to realize sound source location.", "The input features of these models include amplitude or phase difference, binaural cross-correlation [24], [25], direct sound reverberation energy ratio [26] and others [27].", "Considering that the mapping relationship between the sound source location and the above features is not fixed in different room environments, the current models trained in a specific room are hard to generalize in other rooms.", "In this section, we propose a novel sound source location method based on the first-order reflection of the sound source.", "Because the distance and orientation of the vertical walls are not fixed in different rooms, the corresponding reflection information is difficult to be directly used as a robust feature for estimating the sound source distance.", "However, the floor reflection always contains fixed characteristics, that is, the floor is generally horizontal and vertical to the microphone array, which can be used in the sound source localization process.", "Figure: The side view of the sound source and its reflection corresponding to the floor.Fig.REF is the side view of the propagation mode of the sound source and its reflection corresponding to the floor, based on which the proposed sound source localization method is illustrated.", "The microphone array $M$ is vertical to the floor with the height of $h$ , which is reasonable in applications and easy to obtain.", "The sound source signal propagates directly to the microphone array with the elevation $\\alpha _1$ .", "At the same time, it also the receiver through the reflection point $R$ on the floor with the elevation $\\alpha _2$ and the corresponding image source is shown as $S^{^{\\prime }}$ .", "All of the angles can be estimated with the DCNN model.", "Then the included angle between MS and SR is $\\alpha _2-\\alpha _1$ , and the included angle between MR and SR is $\\pi -2\\alpha _2$ , all of which can be derived.", "Using the above geometric information, the sound source distance is obtained according to the following clues.", "Sound source distance estimation based on array height (D-height) According to sine theorem, given the height of the microphone array, the sound source distance $d_1$ can be estimated with the following formulation: $\\frac{d_2}{sin(\\alpha _2-\\alpha _1)} = \\frac{d_1}{sin(\\pi 
-2\\alpha _2)},$ where $d_2$ is the distance between the microphone array M and reflection point R, and can be calculated as $d_2=h/sin(\\alpha _2$ ).", "Then the sound source distance can be estimated as $d_1=\\frac{d_2sin(\\pi -2\\alpha _2)}{sin(\\alpha _2-\\alpha _1)} = \\frac{h sin(\\pi -2\\alpha _2)}{sin(\\alpha _2)sin(\\pi -2\\alpha _2)},$ Sound source distance estimation based on TDOA (D-TDOA) Given the TDOA $\\Delta T$ between the direct signal and the reflection from the floor, the distance from the microphone array to the image source S’ can be derived as $d_4 = c\\cdot \\Delta T+d_1,$ Since the projection distance between the image source and the real source is the same in the horizontal plane, the following equation can be established $d_4cos(\\alpha _2) = d_1cos(\\alpha _1),$ Then the sound source distance can be calculated as $d_1 = \\frac{c\\cdot \\Delta T cos(\\alpha _2)}{cos(\\alpha _1-cos(\\alpha _2)},$ Sound source distance estimation based on DNN (D-DNN) When the DOAs and TDOAs of the real and image source can be estimated, the sound source distance can be derived based on the above two criteria.", "However, due to the existence of the estimation error of above spatial clues, the use of a single feature is easily to cause a large sound source distance estimation error.", "In practice, effectively integrating different distance cues will reduce the impact of these estimation errors and improve the accuracy of the localization results.", "In order to achieve the above objectives, we construct a sound source distance estimation model based on a multi-layer feedforward network in this section.", "The height of the microphone array, the directions of the direct signal and floor reflection, as well as the TDOA information are used as the input of the model.", "We use three layers fully-connected neural network to extract the high dimensional information of the input feature.", "The sound source distance is obtained using the nonlinear character of the network with the regression method.", "The architecture and activation functions of the network are depicted in Fig.REF .", "We use the mean square error between the network output and accuracy sound source distance as the loss function and use the Adam optimizer for the training of the network.", "In order to ensure the robustness of the proposed network model in applications, the zero-mean white Gaussian noise interference is added to the network input feature in the training process to simulate the input error that may occur in practice.", "The variance of the Gaussian noise is determined based on the trial method.", "Figure: The architecture of the sound source distance estimation DNN model.", "The coefficient in each fully connected layer denotes the nodes number." 
], [ "Room geometry estimation", "After obtaining the real sound source location information, the position of the boundaries can be estimated according to the image source direction and time delay information of each boundary to obtain the room geometry.", "Since the real sound source and its first-order image source are symmetrically distributed relative to the reflecting surface, the connecting line between the above two points can be viewed as the vertical line of the corresponding boundary.", "Therefore, the position of the vertical point and the direction of the vertical line can be used to uniquely determine the boundary, so as to realize the geometric reconstruction of the room.", "Here the center point of the microphone array is settled as the origin of the coordinate system.", "Assuming that the sound source position has been obtained, the DOA and TDOA of the first-order reflected signal of boundary $j$ are expressed as $\\Omega _j$ and $T_j$ , respectively.", "The distance between the first-order image source $S_j$ and the center point of the microphone array can be calculated by the following formula: $d_{S_j} = d_{S_0} + c\\cdot T_j,$ where $c$ is the sound speed, $d_{S_0}$ is the distance between the real source and the microphone array.", "The location of image source $S_j$ can be easily derived using $d_{S_j}$ and $\\Omega _j$ , which can be expressed as $(d_{S_j} cos(\\Omega _j),\\; d_{S_j} sin(\\Omega _j))$ .", "Having obtained the real source position ${P}_S$ and image source position ${P}_{\\hat{S}_j}$ , the position of the vertical point ${P}_{\\hat{b}_j}$ and the direction of the vertical line ${n}_{\\hat{b}_j}$ can be expressed as ${P}_{\\hat{b}_j} = \\frac{{P}_S+{P}_{\\hat{S}_j}}{2},$ ${n}_{\\hat{b}_j} = \\frac{{P}_S-{P}_{\\hat{S}_j}}{||{P}_S-{P}_{\\hat{S}_j}||},$" ], [ "EXPERIMENTS", "To verify the effectiveness and quantity the accuracy of the proposed methods, the network models are firstly trained based on the simulated data, and then, its performance with both simulations and real measurements is evaluated.", "Both speech and white Gaussian noise are used as the sound source signals in the test process, which ensure that the room modes across all frequencies are sufficiently excited.", "Note that in experiments with speech signals, the DOAs and TDOAs are only estimated during periods of speech activity.", "A simple energy-based voice activity detector (VAD) is used to ensure the accuracy of estimated results.", "The microphone array signals are simulated or recorded at a sampling rate of $16\\,k$ Hz with a frame length of 5000 samples and then processed offline.", "The angular difference between the look direction of two neighboring beams for DOA estimation and signal extraction is set to 3° both along azimuth and elevation.", "The effectiveness of the proposed TD-CNN model and sound source distance estimation method is firstly evaluated, and then the results of room geometry reconstruction are analyzed." 
], [ "Database", "For the training and testing of the proposed network, we create a simulation database under different room reverberant scenarios based on the image-source model [15].", "The length, width and height of rectangular rooms range from $3\\,m\\times 3\\,m\\times 2\\,m$ to $10\\,m\\times 10\\,m\\times 4\\,m$ , and the reverberation time is randomly settled in the range from $300\\,ms$ to $1000\\,ms$ .", "In each room, the reverberant signals are recorded using the spherical microphone array, which consists 32 microphones placed on a rigid sphere with a radius of $4.2\\,cm$ , which has exactly the same geometry as the Eigenmike (EM32) [28].", "It can decompose the sound field for up to fourth-order spherical harmonics.", "Both the microphone array and the sound source are randomly distributed within the room with the minimum distance as $0.5\\,m$ .", "The total number of rooms is 10000, corresponding to 10000 different simulated room impulse responses (RIRs), 80$\\%$ for training ,10$\\%$ for validation and 10$\\%$ for testing.", "The speech signals from the LibriSpeech database are used as the sources signals with the sampling rate as $16\\,k$ Hz.", "The length of the recorded signals in each room is about 10 seconds.", "Since the frame length is set to 5000 sampling point in the training and testing process, we have generated about 30 frames signals in each reverberant environment." ], [ "Evaluation metric", "The mean value $T_{mean}$ and standard deviation $T_{sd}$ of the time delay error with unit $ms$ are used for evaluation of the proposed TD-CNN model.", "Although the direct signals and the first-order reflections are extracted based on the localization results, the time delay estimation might also fail because of the low SNR of extracted signals.", "Therefore, the accuracy estimation rate of the time delay $R_{dect}$ is also used as the metric.", "To ensure the effectiveness of the model while reducing the impact of the interference on the results, a threshold value $\\sigma =0.3$ , which is settled based on the experimental method.", "Only the maximum peak that exceeds $\\sigma $ is regarded as the valid estimation.", "Having obtain the DOAs and TDOAs of the direct and first order reflections, the mean value $S_{mean}$ and standard deviation $S_{sd}$ of sound source distance error with unit $m$ are calculated for the evaluation of the proposed sound source distance estimation method.", "Besides, for evaluating the accuracy of the estimated room boundaries, two parameters are calculated here [13].", "The first parameter $D_{S,\\hat{S}_j}=D_{S}-\\hat{D}_{S_j}$ is the difference between the distance $\\hat{D}_{S_j}$ from the estimated plane to the origin and the distance $D_{S}$ from the 'ground truth' plane to the origin, while the second parameter $\\Theta _{{n}_{j},\\hat{{n}}_j}$ is the angle between both planes' normal vectors.", "$\\Theta _{{n}_{j},\\hat{{n}}_j}=arccos({n}_{j}^T\\cdot \\hat{{n}}_j),$ where ${n}_j$ and $\\hat{{n}}_j$ is the normal vector of 'ground truth' plane and estimated plane." 
], [ "DOA and TDOA estimation", "We first conduct our experiments based on simulated data to evaluate the performance of the proposed methods under different scenarios.", "For insight into the DOA and TDOA estimation results, A specified simulated room with the dimension of $4\\,m\\times 5\\,m\\times 2.6\\,m$ is used here.", "Set the lower-left corner of the room as the origin of the coordinate system, and the coordinates (unit: $m$ ) of the sound source and microphone array are (3.0, 3.0, 1.5)and (2.0, 2.0, 1.5), respectively.", "The reverberant time is set to $800\\,ms$ .", "Figure: The estimated spatial pseudo-spectrums of the DCNN model and EB-MVDR with frequency smoothing.The exemplary spatial pseudo-spectrums (SPS) for DOA estimation are depicted in Fig.REF .", "Out of all peaks found in the SPS, only those that exceed the threshold $\\beta =-3\\,dB$ are selected as the direct and first-order reflected signals’ directions.", "The value of $\\beta $ is determined experimentally to be a good compromise as to provide directions of direct signal and all first-order reflections while suppressing the disturbance of higher-order reflections and background noise.", "In all sub-figures depicting acoustic SPS, the ground truth DOAs for the direct source and first-order reflections are denoted with asterisks.", "Compared with the result of the MVDR method conducted in the Eigen beam domain with frequency smoothing (EB-MVDR), the results of the DCNN model has higher resolution and has the ability to obtain all first-order reflection directions by single point observation.", "Figure: The time delay estimation results of GCC-PHAT and TD-CNN model, and the black line denotes the real time delay.Figure: Statistical result on the TDOA results under different rooms with different T 60 T_{60}.The estimated DOAs are used for signals extraction and TDOAs estimation.", "The time delay estimation results between the extracted direct signals and floor reflection are depicted in Fig.REF .", "Through both the results of GCC-PHAT and TD-CNN model present peaks at the right time delay position, the former contains several distinct peaks corresponding to the TDOAs of the direct signal and early reflections.", "In addition to the peak corresponding to the correct time delay, the interference of direct or non-target reflected signals will also lead to the peaks at the wrong position, which will confuse the estimation.", "In contrast, the proposed TD-CNN can effectively suppress the disturbance and makes the target time delay value more significant.", "To ensure the effectiveness of the model while reducing the impact of the interference on the results, a threshold value $\\alpha =0.3$ , which is set based on the experimental method.", "Only the maximum peak that exceeds $\\alpha $ is regraded as the valid estimation.", "Besides, the peaks around the zeros-delay point (within 10 sampling points) is ignored to eliminate the influence of direct signal in the beamforming result of the reflected signals.", "To verify the effectiveness of the proposed model under different reverberant scenarios, we make statistics on the TDOA results under different rooms with different $T_{60}$ , as shown in Fig.REF .", "In order to improve the results' differential of different algorithms, we have added different displacements to the error curves on the horizontal axis.", "Multiple groups of results near the same abscissa correspond to the same test environment, and the same processing is added to the following figures.", "It can be 
seen that the TD-CNN model has higher accuracy, as well as more minor angle error compared with the GCC-PHAT algorithm, which proves the robustness and effectiveness of the network model under different scenarios." ], [ "Sound source Localization", "Having obtained the DOAs and TDOAs of the direct and reflected signals, the sound source distance can be estimated based on the geometric derivations or the proposed neural network model.", "To evaluate the accuracy of these methods under different situations, we make statistics on the variation trend of the localization error with the distance of the sound source and the reverberation time $T_{60}$ , as shown in Fig.REF and Fig.REF .", "From the overall results, we can see that both the reflected sound delay information and array height information can be used as effective sound source localization features under different reverberant environments.", "Compared with the ground reflection delay information, sound source localization based on the height information of the array itself is much more accurate because there is no measurement error.", "The method based on the network model can effectively integrate the above features and obtain the optimal localization results under different cases.", "When the sound source distance is within $8\\,m$ , the distance determination error can be basically maintained within 1m." ], [ "Room geometry inference", "Given the estimated sound source location, the DCNN and TD-CNN models are used for DOA and TDOA estimation of the reflected signals corresponding to each boundary, and the EB-MVDR and GCC-PHAT algorithms are presented as the baseline.", "In this process, we have trained a specific sound source distance estimation DNN model based on the estimated TDOA and DOA values from conventional methods.", "The detection rate and estimation errors of room boundaries under different tested reverberant scenarios are depicted in Fig.REF .", "Figure: The proposed neural network.From the results of the traditional method we can see that with the increasing of the reverberation time, the position and angle estimation error of the boundaries also increase gradually.", "Compared with the baseline system, the proposed method can effectively increase the sensitivity and accuracy of the boundaries detection, and is relatively robust in most reverberation environments.", "When $T_{60}=800\\,ms$ , the average boundary detection rate in different environments is about 0.85, which means that the complete observation of room boundary can be realized in most cases.", "The exception may be caused by the excessive coincidence of the directions between direct and reflected signal.", "In these cases, the spatial resolution of DOA algorithms and the beamformer does not suffice to discriminate them well.", "This problem can be solved by moving the microphone array to observe the sound field from different angles.", "It should be noted that the position error of the proposed method is slightly higher than the traditional method when $T_{60}=300\\,ms$ , which might be caused by the high detection rate of the reflections.", "The proposed method can derive more boundaries corresponding to the reflected signals with weak energy, which will bring much high localization and time delay estimation error." 
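To make the peak-selection rule used in the TDOA experiments above concrete, a minimal sketch of the validity check (the threshold $\alpha =0.3$ on the normalized output and the exclusion of peaks within 10 samples of zero delay) is given below; the normalization and the assumption that zero delay sits at the centre of the correlation vector are ours:

```python
import numpy as np

def pick_tdoa(corr, fs, alpha=0.3, zero_exclude=10):
    """corr: GCC-PHAT or TD-CNN output with zero delay at index len(corr)//2.
    Returns the TDOA in seconds, or None if no valid peak is found."""
    y = np.abs(corr) / (np.max(np.abs(corr)) + 1e-12)        # normalize to [0, 1]
    centre = len(y) // 2
    y[centre - zero_exclude:centre + zero_exclude + 1] = 0   # ignore peaks around zero delay
    k = int(np.argmax(y))
    if y[k] < alpha:                                         # reject weak maxima
        return None
    return (k - centre) / fs
```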
], [ "Experimental setting", "The measured signals are recorded using Eigenmike spherical microphone array under different rectangular rooms, as shown in Fig.REF .", "The size of the rooms, as well as the location of the microphone array and the loudspeaker are shown in Table.REF .", "Each room has four walls with a smooth lime surface, one of which contains glass windows, the floor is a smooth stone floor (room 1) or a wood floor (room 2), and the ceiling is a porous gypsum board.", "Note that the inference results for the real room were compared to the “ground truth” obtained through manual measurements of the respective distances, and thus can be considered accurate up to manual measurement errors.", "The recording is performed at $16k$ Hz sampling rate.", "Figure: The picture of measurement rooms.Table: Geometric parameters of the measurement rooms" ], [ "DOAs and TDOAs estimation", "Fig.REF and Fig.REF depict the estimated acoustic SPS and time delay information of room 2.", "In Fig.REF , seven peaks can be obviously found from the output of DCNN, corresponding to the directions of the direct signals and first-order reflections.", "It proves that the DCNN model has excellent generalization performance, and a better spatial resolution than the EB-MVDR algorithm in the actual environments.", "From Fig.REF we can see that both traditional methods and network models can achieve an approximate estimation of peak position.", "However, due to the interference of direct sound in the beamforming process of the reflected signal, the GCC-PHAT algorithm also has a significant peak near the delay of 0, which will cause the confuse in TDOA estimation.", "The TD-CNN output results can clearly indicate the delay information and suppress the interference of non-target signals.", "Figure: The estimated SPS of DCNN and EB-MVDR methods using measurement data.Figure: The time delay estimation results of GCC-PHAT and TD-CNN model using measurement data.Table.REF shows the statistical results of delay estimation error and accuracy rate of the measurement signals.", "The average detection rate and mean square error of the TD-CNN model for related signals are 0.86 and 1.47$ms$ , which are significantly improved compared with the 0.79 detection rate and 2.01 $ms$ of the GCC-PHAT method.", "Compared with the simulation results, the error of the measurement data increases significantly, which is mainly caused by the measurement errors of room size, microphone array, and source position.", "Table: Statistical results of time delay estimation methods using the measurement data" ], [ "Sound source localization", "Table.REF depicted the sound source distance estimation results using the above three methods.", "It can be seen that our method is able to estimate the distance of the source at about $2\\,m$ with an accuracy of about $33\\,cm$ .", "To the best of the author’s knowledge, the best sound source distance estimation method based on DNN model is introduced in the literature [4], which shows an accuracy of about $54\\,cm$ by using binaural amplitude and phase difference as model inputs.", "By comparison, the proposed method significantly improves the accuracy of the sound source location.", "Table: Statistical results of sound source distance estimation methods using the measurement data" ], [ "Room geometry inference", "Based on the above results, we have reconstructed room 1 and room 2, and the results are shown in Fig.REF .", "The red image represents the actual rectangular room structure, and the gray image 
represents the reconstructed result.", "The origin position of the coordinate system is the center of the microphone array.", "The average distance error and angle error of room1 are $0.53\\,m$ and 4.81°, respectively; The average distance error and angle error of room 2 are $0.34\\,m$ and 4.86 °.", "Compared with the reconstruction result of room 1, the result of room 2 is more accurate, mainly due to the smaller size of room 2 and the more minor distance error of image sources.", "In general, the method proposed in this paper can effectively use the single point observation of the sound field to estimate the room boundary in the case of unknown sound source location in the actual environments.", "Figure: Reconstructed results of measured rooms." ], [ "CONCLUSION", "In this paper, we proposed a room geometry blind inference method based on estimating the direct source and first-order reflections in the acoustic enclosure.", "In this process, the room reverberation information is calculated and used for the localization of the sound source and the estimation of the boundaries with a compact microphone array.", "Besides, the DNN-based models are designed to realize the time delay estimation and sound source localization with high precision and robustness.", "The proposed method has two advantages over the conventional room geometry inference techniques.", "One is that it does not need any prior information about the environment or the measuring of RIR.", "Apart from that, by using DNN models, the accuracy and integrity of reflection information estimation in the sound field are improved, which is helpful to realize room boundary estimation based on single-point measurement.", "Experimental results of both simulations and real measurements verify the effectiveness and accuracy of the proposed techniques compared with the conventional methods under different reverberant environments.", "According to the measurement data results, the sound source localization error of the proposed DNN model is about 17%, which is much better than the best results of traditional methods.", "Besides, the proposed DNN-based models can reduce the distance and angle error of the boundaries estimation results by about 10% and 25%, respectively." ], [ "ACKNOWLEDGMENT", "This work is supported by the National Key Research and Development Program (No.2019YFC1408501), the National Natural Science Foundation of China (No.U1713217, No.61175043, No.61421062), and the High-performance Computing Platform of Peking University." ] ]
2207.10478
[ [ "Polarization-sensitive Compton scattering by accelerated electrons" ], [ "Abstract We describe upgrades to a numerical code which computes synchrotron and inverse-Compton emission from relativistic plasma including full polarization.", "The introduced upgrades concern scattering kernel which is now capable of scattering the polarized and unpolarized photons on non-thermal population of electrons.", "We describe the scheme to approach this problem and we test the numerical code against known analytic solution.", "Finally, using the upgraded code, we predict polarization of light that is scattered off sub-relativistic thermal or relativistic thermal and non-thermal free electrons.", "The upgraded code enables more realistic simulations of emissions from plasma jets associated with accreting compact objects." ], [ "Introduction", "Accreting black holes in Active Galactic Nuclei, X-ray binaries or $\\gamma $ -ray bursts often produce relativistic jets.", "Depending on the system size, jets are usually observed in radio and infra-red wavelengths.", "Interestingly, the radio emission is often correlated with the X-rays ([13], [6]).", "The latter suggests that some of the X-ray emission observed in accreting black holes may be produced by jets as well.", "In such picture, the radio and the X-ray photons are produced by electrons which experience acceleration.", "New insights into black hole accretion and jet emission may be soon provided by simultaneous spectral-timing-polarimetry at keV energies by missions such as NASA's X-ray polarimetry mission Imaging X-ray Polarimetry Explorer (IXPE) ([20]) and Chinese/European Enhanced X-ray Timing and Polarization mission (eXTP) ([23]) (and a few other similar experiments).", "The first results from IXPE have been recently reported [12].", "We are therefore motivated to find out what information about electron acceleration in accretion flows or jets can be carried by polarization of light, with a particular focus on the inverse-Compton scattered light.", "Polarization of X-ray emission (or more generally, higher energy emission) produced by plasma in strong gravity depends on whether the high energy emission is of synchrotron origin (direct emission) or arises in the inverse-Compton process (scattered emission).", "In the latter case the polarization of scattered light may be due to transfer of polarization of synchrotron emission in the inverse-Compton process or may be due to scattering process itself [4].", "Hence the polarization of scattered emission depends on many factors: on magnetic field configuration in the plasma (which impacts polarization of synchrotron radiation), energy distribution of synchrotron emitting plasma electrons, Faraday effects, opacity of the plasma for scatterings or whether the scattering in the electron frame occurs in Thomson (TH) or Klein-Nishina (KN) regime.", "In addition, photon emission and propagation depends on spacetime curvature and on overall geometry and dynamics of the accretion flow.", "The complexity of the theoretical predictions for polarimetric properties of high energy radiation is large (for complete overview see [11]).", "To enable theoretical studies of polarimetric properties of emission from complex systems, we developed radpol radpol code is an extention of grmonty which originally assumed unpolarized emission and emission and scattering off thermal population of electrons [5].", "Notice that most of the polarization-insensitive algorithms in radpol are inherited from grmonty.", "- a covariant 
Monte Carlo scheme for calculating multiwavelenght polarized spectral energy distributions (SEDs) of three-dimentional General Relativistic Magnetohydrodynamic (3-D GRMHD) simulations of black hole accretion [14].", "The code samples a large number of polarized synchrotron photons, propagates them in curved spacetime, simulates their inverse-Compton scatterings and collects information about outgoing spectrum in a spherically shaped detector at large distance from the center of the model grid.", "In our modeling we include synchrotron emission, synchrotron self-absorption in all Stokes parameters and Faraday effects To integrate radiative transfer equations radpol is using the numerical scheme of another code, ipole, ray-tracing scheme for making polarimetric images of black holes, developed by [15]., and inverse-Compton process and takes into account all effects that are important in relativistic plasma in strong gravitational fields of e.g., black holes.", "Our numerical code, until now, assumed that electron in plasma have thermal distribution function.", "In this work we overcome this major over-simplification.", "Here we present a new scattering kernel for radpol code to permit emission and polarization from plasma in which electrons are accelerating.", "Our model for scattering is completely covariant and allows us to build more realistic models of emission from relativistic jets.", "The structure of the paper is as follows.", "In Section  we write basic equations which describe inverse-Compton scattering of polarized and unpolarized photons off an electron at rest.", "We then show how scattering is computed for an ensemble of electrons with four energy distribution functions.", "We show that our numerical method recovers some well known theoretical expectations.", "In Section  we present examples of scattering in Minkowski spacetime that can be used to understand results from more complex simulations.", "Section  list other code developments carried out to calculate polarized non-thermal spectra of complex accretion models.", "We conclude in Section .", "We begin with improving the original radpol polarization-sensitive inverse-Compton scattering kernel by converting it from an average intensity conserving one (originally implemented in radpol) into a photon conserving one [19].", "The latter make the scheme more robust and enables us to include scattering off accelerated electrons with greater precision.", "We first re-consider the inverse-Compton scattering of polarized photon beam in the rest frame of an electron.", "The differential cross-section for the Compton scattering of polarized photons on free electrons is given by the general KN formula ([1]): $\\frac{d\\sigma ^{KN}}{d\\Omega } = \\frac{1}{4} r_e^2\\left(\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e}\\right)^2 [ F_{00}+ F_{11} \\xi _1 \\xi _1^{\\prime } + F_1 (\\xi _1 + \\xi _1^{\\prime })+ F_{22} \\xi _2 \\xi _2^{\\prime } + F_{33} \\xi _3 \\xi _3^{\\prime } ],$ where $r_e=e^2/(4\\pi \\epsilon _0 m_e c^2)$ is the electron classical radius, $\\epsilon _e$ and $\\epsilon _e^{\\prime }$ are incident and scattered energy of photon in units of $m_ec^2$ , $\\xi _{1,2,3}$ ($\\xi _{1,2,3}^{\\prime }$ ) are normalized polarizations of incident (scattered) photon, which are defined as follows: $\\xi _1 \\equiv {\\mathcal {Q}}/{\\mathcal {I}}$ , $\\xi _2\\equiv {\\mathcal {U}}/{\\mathcal {I}}$ , and $\\xi _3 \\equiv {\\mathcal {V}}/{\\mathcal {I}}$ .", "In Equation REF , Stokes ${\\mathcal {Q}}$ and ${\\mathcal {U}}$ (or their 
fractions $\\xi _{1,2}$ ) are measured with respect to tetrad defined by $\\vec{k}$ and the scattering plane, i.e., plane normal to $\\vec{k}\\times \\vec{k}^{\\prime }$ where $\\vec{k}$ ($\\vec{k}^{\\prime }$ ) is an incident (scattered) photon four-vector in the rest frame of an electron.", "The coefficients $F$ are elements of the following scattering matrix ([7], [8]): ${\\bf F}=\\frac{1}{2} r_e^2 \\left(\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e}\\right)^2\\left(\\begin{array}{cccc}F_{00} & F_1 & 0 & 0\\\\F_1 & F_{11} & 0 & 0\\\\0 & 0 & F_{22} & 0\\\\0 & 0 & 0 & F_{33}\\end{array}\\right)=\\frac{1}{2} r_e^2 \\left(\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e}\\right)^2\\left(\\begin{array}{cccc}\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e} + \\frac{\\epsilon _e}{\\epsilon _e^{\\prime }} -\\sin ^2\\theta ^{\\prime } & \\sin ^2\\theta ^{\\prime } & 0 & 0 \\\\\\sin ^2\\theta ^{\\prime } & 1+\\cos ^2\\theta ^{\\prime } & 0 & 0\\\\0&0& 2 \\cos \\theta ^{\\prime } &0\\\\0&0&0& \\left( \\frac{\\epsilon _e^{\\prime }}{\\epsilon _e} + \\frac{\\epsilon _e}{\\epsilon _e^{\\prime }} \\right) \\cos \\theta ^{\\prime }\\end{array}\\right)$ where $\\theta ^{\\prime }$ is the polar scattering angle.", "In the TH regime ($\\epsilon _e^{\\prime }=\\epsilon _e$ ), $F$ becomes phase matrix for Rayleigh scattering of Stokes parameters [4].", "Equation REF summed over all possible polarizations of the scattered photon ($\\xi _{123}^{\\prime }$ ) gives the scattering cross-section as a function of the incident light linear polarization: $\\frac{d\\sigma ^{KN} (\\xi _{123}) }{d\\Omega } = \\frac{1}{2} r_e^2\\left(\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e}\\right)^2\\left( \\frac{\\epsilon _e}{\\epsilon _e^{\\prime }} + \\frac{\\epsilon _e^{\\prime }}{\\epsilon _e} - (1-\\xi _1) \\sin ^2\\theta ^{\\prime } \\right).$ Since $\\xi _1$ is defined with respect to scattering plane one can rewrite Equation REF into: $\\frac{d\\sigma ^{KN} (\\xi _{123}) }{d\\Omega } = \\frac{1}{2} r_e^2\\left(\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e}\\right)^2\\left( \\frac{\\epsilon _e}{\\epsilon _e^{\\prime }} + \\frac{\\epsilon _e^{\\prime }}{\\epsilon _e} -\\sin ^2\\theta ^{\\prime } - \\delta \\sin ^2\\theta ^{\\prime } cos(2 \\phi ^{\\prime }) \\right),$ where $\\xi _1={\\mathcal {Q}}/{\\mathcal {I}}= - \\delta cos(2\\phi ^{\\prime })$  The minus sign appears because of the conventions used in this paper and in our numerical code: for fully polarized light, $\\delta =1$ , $EVPA=0\\deg $ means ${\\mathcal {Q}}$ =+1 and $\\phi ^{\\prime }=90\\deg $ measured from x axis, $EVPA=90\\deg $ corresponds with ${\\mathcal {Q}}$ =-1 and $\\phi ^{\\prime }=0$ or $180 \\deg $ .", "and where $\\phi ^{\\prime }$ is the azimuthal scattering angle.", "The fractional linear polarization of incident light $\\delta = \\sqrt{{\\mathcal {Q}}^2+{\\mathcal {U}}^2}/{\\mathcal {I}}$ is invariant to rotations and the azimuthal scattering angle $\\phi ^{\\prime }$ is measured with respect to x axis which is chosen arbitrarily.", "Sampling of $\\theta ^{\\prime }$ scattering angle and $\\epsilon _e^{\\prime }$ is carried out using azimuthal angle integrated differential crosssection and kinematic relation for scattering energy and $\\theta ^{\\prime }$ angle ($\\cos \\theta = 1+ 1/\\epsilon _e - 1/\\epsilon _e^{\\prime }$ ).", "This step is polarization independent.", "Figure: Angular histograms showing that our Monte Carlo scheme(marked with points) recovers the assumed differential crosssection forCompton scattering (marked with dashed 
lines).", "Left and right panels display results for azimuthal scattering angles φ ' =0\\phi ^{\\prime }=0 and φ ' =90 ∘ \\phi ^{\\prime }=90^\\circ , respectively.", "Notice that all angles are measured in the electron rest frame.The colors encode the scattered light fractional polarizations.", "When the incident beam is unpolarized (unpol, scattered light marked with circles) then the scattering angle has no azimuthal dependency and light scattered in the direction perpendicular to the incident beam is 100% polarized (as expected).", "When the incident beam is fully polarized (pol, scattered light marked with diamonds) the preferred azimuthal scattering angle is that one that is perpendicular to the incident beam polarization direction.", "Scattered light polarization is then 100% independently of the scattering angle.For unpolarized light $\\phi ^{\\prime } \\in (0,2\\pi )$ angle can be randomly chosen from a uniform distribution function, however, if the incident light is polarized, $\\phi ^{\\prime }$ cannot be random.", "The $\\phi ^{\\prime }$ angle is sampled from the conditional probability distribution function [24]: $p(\\phi ^{\\prime }|\\epsilon _e^{\\prime })=\\frac{1}{2\\pi }-\\frac{\\delta \\sin ^2\\theta ^{\\prime }\\cos 2\\phi ^{\\prime }}{2\\pi (\\frac{\\epsilon _e}{\\epsilon _e^{\\prime }}+\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e} - \\sin ^2 \\theta ^{\\prime })}.$ The $\\phi ^{\\prime }$ sampling is carried out via inversion of the cumulative distribution function of the equation above which is: $\\mathrm {CDF}(\\phi ^{\\prime })=\\frac{\\phi ^{\\prime }}{2\\pi } -\\frac{\\delta \\sin ^2\\theta \\sin 2\\phi ^{\\prime }}{4\\pi (\\frac{\\epsilon _e}{\\epsilon _e^{\\prime }}+\\frac{\\epsilon _e^{\\prime }}{\\epsilon _e} - \\sin ^2 \\theta ^{\\prime })}$ In the limit of $\\delta =0$ or in the limit of $\\cos (\\theta ^{\\prime }) = \\pm 1$ the formula reduces to sampling $\\phi ^{\\prime }$ angle from the uniform distribution.", "Given two scattering angles one can construct $\\vec{k}^{\\prime }$ and define the scattering plane.", "The fractional Stokes parameters of the scattered photon, $\\xi ^{\\prime }_{123}$ , can be finally computed using: $\\xi _1^{\\prime }=\\frac{F_1 + \\xi _1 F_{11}}{F_{00}+\\xi _1 F_1},\\, \\,\\,\\xi _2^{\\prime }=\\frac{\\xi _2 F_{22}}{F_{00}+\\xi _1 F_1}, \\, \\,\\,\\xi _3^{\\prime }=\\frac{\\xi _3 F_{33}}{F_{00}+\\xi _1 F_1}.$ where $\\xi _{1,2,3}$ are measured with respect to the scattering plane.", "The scattering kernel defined this way is photon-conserving so Stokes ${\\mathcal {I}}$ does not have to be changed in the scattering event.", "In the originally published version of radpol, we sampled $\\phi ^{\\prime }$ angle from uniform distribution function so transformation of polarization included transformation of all Stokes parameters, including Stokes ${\\mathcal {I}}$ , using Equation REF .", "Hence, the original scheme was not photon conserving but only averaged intensity conserving [19].", "We have tested the new implementation of the Compton scattering in electron rest-frame.", "If we reconsider scattering of photons in the electron rest-frame, the scattering angle $\\phi ^{\\prime }$ depends on the polarization degree and angle of the incident light.", "For fully polarized light, i.e., $\\delta =1$ , the scattering of polarized light is favored in the direction perpendicular to the polarization angle.", "In Figure REF we show that the outcome of our numerical calculations are consistent with these theoretical expectations 
(marked in the figure as dashed line).", "Scattering an unpolarized light off an electron at rest can produce polarized emission for scattering angles $\\theta ^{\\prime }= 90 \\deg $ ." ], [ "Electron Acceleration Models", "Next we consider scattering off a population of electrons.", "We assume the following electron energy distribution functions (eDFs) that are usually considered for astrophysical applications.", "relativistic thermal eDF: $\\frac{1}{n_e}\\frac{dn_e}{d\\gamma } = \\frac{\\gamma ^2 \\beta }{\\Theta _e K_2(1/\\Theta _e)} \\exp ^{(-\\gamma /\\Theta _e)}$ where $\\beta \\equiv \\sqrt{1-1/\\gamma ^2}$ and $\\Theta _e=k_b T_e/m_e c^2$ is the dimensionless electron temperature, purely power-law eDF: $\\frac{1}{n_e}\\frac{d n_e}{d\\gamma } = \\frac{ (p-1)}{(\\gamma _{min}^{1-p} - \\gamma _{max}^{1-p})} \\gamma ^{-p}$ where $p$ , $\\eta $ , $\\gamma _{min}$ and $\\gamma _{max}$ are parameters, hybrid eDF where we assume that the electrons are accelerated from a thermal eDF.", "Accelerated electrons energies are described by a power-law distribution: $\\frac{1}{n_{pl}}\\frac{d n_{pl}}{d\\gamma }=\\frac{(p-1)}{(\\gamma _{min}^{1-p} - \\gamma _{max}^{1-p})} \\gamma ^{-p},$ where $\\gamma _{\\rm min}$ , $\\gamma _{\\rm max}$ , and $p$ are parameters of the acceleration model (we will assume that $\\gamma _{\\rm max} \\gg 1$ , in practice we assume $\\gamma _{\\rm max}=10^6$ ).", "The power-law function is “stitched” to the thermal eDF as follows (the same methodology is presented by [16] and [22]).", "The energy density of the thermal electrons is $u_{th}=n_{th} \\Theta _e a(\\Theta _e) m_e c^2$ where $a(\\Theta _e)\\approx (6+15\\Theta _e)/(4+5\\Theta _e)$ [10] while the energy density of the accelerated electrons is $u_{pl}=n_{pl} \\frac{p-1}{p-2} \\gamma _{min} m_e c^2.$ where the simple form of $u_{pl}$ is due to normalization of the power-law function.", "We assume that $u_{pl}=\\eta u_{th}$ where $\\eta $ is a fourth free parameter of the acceleration model indicating the fraction of thermal energy transferred to the non-thermal tail.", "Using Equations REF and REF we calculate the resulting number density of accelerated electrons, $n_{pl}$ : $n_{pl}=\\frac{p-2}{p-1} \\gamma _{min}^{-1} \\eta a(\\Theta _e) \\Theta _e n_{th}.$ In this model the power-law eDF should smoothly connect with the thermal distribution so we require that: $n_{th}(\\gamma _{min})=n_{pl}(\\gamma _{min}).$ For a set of $p$ , $\\eta $ and $\\Theta _e$ , we solve $\\gamma _{min}^4 \\beta _{min} \\exp (-\\gamma _{min}/\\Theta _e)= 2 (p-2) \\eta a(\\Theta _e) \\Theta _e^4$ to find the $\\gamma _{\\rm min}$ .", "$\\kappa $ eDF is a more natural eDF inspired by kinetic studies of relativistic plasmas: $\\frac{1}{n_e} \\frac{d n_e}{d\\gamma }= \\gamma \\sqrt{\\gamma ^2-1} \\left(1+\\frac{\\gamma +1}{\\kappa w}\\right)^{-(\\kappa +1)}$ where $\\kappa $ and $w$ are parameters.", "For $\\kappa \\rightarrow \\infty $ , $\\kappa $ distribution function becomes Maxwell-Jüttner distribution." 
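To make the acceleration models concrete, the sketch below evaluates the thermal and power-law eDFs and solves the stitching condition for $\gamma _{min}$ of the hybrid model; it is a minimal illustration that uses SciPy's Bessel function and a Brent root finder instead of the Regula-Falsi scheme used in the code, and the bracketing interval is an assumption that works for typical parameters:

```python
import numpy as np
from scipy.special import kv            # modified Bessel function of the second kind
from scipy.optimize import brentq

def dn_dgamma_thermal(gamma, theta_e):
    """Relativistic thermal (Maxwell-Juettner) eDF, normalized to unity."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return gamma**2 * beta * np.exp(-gamma / theta_e) / (theta_e * kv(2, 1.0 / theta_e))

def dn_dgamma_powerlaw(gamma, p, gmin, gmax):
    """Pure power-law eDF, normalized on [gmin, gmax]."""
    norm = (p - 1.0) / (gmin**(1.0 - p) - gmax**(1.0 - p))
    return np.where((gamma >= gmin) & (gamma <= gmax), norm * gamma**(-p), 0.0)

def gamma_min_hybrid(theta_e, p, eta):
    """Solve gamma^4 beta exp(-gamma/theta_e) = 2 (p-2) eta a(theta_e) theta_e^4
    for gamma_min of the hybrid eDF (assumes eta is small enough that a root
    exists on the high-energy tail)."""
    a = (6.0 + 15.0 * theta_e) / (4.0 + 5.0 * theta_e)
    rhs = 2.0 * (p - 2.0) * eta * a * theta_e**4
    f = lambda g: g**4 * np.sqrt(1.0 - 1.0 / g**2) * np.exp(-g / theta_e) - rhs
    # bracket on the decreasing, high-gamma side of the thermal distribution
    return brentq(f, 4.0 * theta_e + 1.0, 1.0e3 * theta_e + 10.0)
```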
], [ "Thermal and non-thermal electron energy sampling", "In upgraded radpol, the scattering kernel is sampling electron four momentum $p^\\mu $ from thermal and non-thermal distribution functions above assuming that the spatial parts of electron four-momentum are isotropic in the fluid co-moving frame.", "Isotropic eDF model limits the discussion to energy sampling.", "To sample electron Lorentz factor $\\gamma _e$ in thermal distribution function we use the sampling procedure introduced by [3] (implemented in grmonty and radpol codes).", "In case of pure power-law distribution function the electron Lorentz factor is sampled using inversion of cumulative distribution function where the inversion has analytic form: $\\gamma _e= \\left(\\gamma _{min}^{1-p}(1-r) + \\gamma _{max}^{1-p}r \\right)^{1/(1-p)}$ where $r\\in (0,1)$ is a random number and $\\gamma _{min},\\gamma _{max},p$ are the eDF parameters.", "To sample Lorentz factor from hybrid and $\\kappa $ distribution functions we re-write these two eDFs as a product of two probability functions $p_1$ and $p_2$ , where $p_1$ is used for tentative sampling and $p_2$ is used for rejection sampling (the procedure closely follows [3] but differs in details of tentative sampling).", "For both hybrid and $\\kappa $ DF: $p_1=\\frac{1}{n_e} \\frac{dn_e(\\gamma )}{d\\gamma _e} \\frac{1}{\\beta _e}$ and $p_2=\\beta _e$ where $\\beta _e=\\sqrt{1-1/\\gamma _e^2}$ .", "In our model the tentative sampling of $\\gamma _e$ from $p_1$ is carried out by inversion of cumulative distribution function.", "We found analytic forms of cumulative distribution function of $p_1$ (hereafter modified cumulative distribution function, MCDF) for hybrid and $\\kappa $ eDFs.", "For hybrid distribution function it is: $\\mathrm {MCDF}_{\\rm hybrid}(\\gamma _e)= 1-\\frac{\\exp {(-\\frac{\\gamma _e}{\\Theta _e})}}{\\exp (-\\frac{1}{\\Theta _e})}\\frac{(2\\Theta _e^2+2\\Theta _e\\gamma _e+\\gamma _e^2)}{(2\\Theta _e^2+2\\Theta _e+1)} (1-f) +\\\\{\\left\\lbrace \\begin{array}{ll}0 & \\mathrm {for} \\,\\, \\gamma _e<\\gamma _{min} \\\\f \\frac{(p-1)}{(\\gamma _{min}^{1-p}-\\gamma _{max}^{1-p})}\\left(g_{pl}(\\gamma _e,p)-g_{pl}(\\gamma _{min},p)\\right) & \\mathrm {for} \\,\\,\\gamma _e>\\gamma _{min}\\end{array}\\right.", "}$ where the third term is added only for $\\gamma _e>\\gamma _{min}$ , where $f=n_{pl}/n_{th}$ (given by Equation REF ) and where: $g_{pl}(\\gamma _e)={\\left\\lbrace \\begin{array}{ll}\\sqrt{\\gamma _e^2-1}\\left(\\frac{1}{\\gamma _e}\\right) & \\mathrm {for} \\,p=3 \\\\\\sqrt{\\gamma _e^2-1}\\left(\\frac{1}{2\\gamma _e^2}\\right)-\\frac{1}{2}\\arcsin (\\frac{1}{\\gamma _e}) & \\mathrm {for} \\,p=4 \\\\\\sqrt{\\gamma _e^2-1}\\left(\\frac{2}{3\\gamma _e}+\\frac{1}{3\\gamma _e^3}\\right) & \\mathrm {for} \\,p=5 \\\\\\sqrt{\\gamma _e^2-1}\\left(\\frac{3}{8\\gamma _e^2}+\\frac{1}{4\\gamma _e^4}\\right)-\\frac{3}{8}\\arcsin (\\frac{1}{\\gamma _e}) & \\mathrm {for} \\,p=6.\\end{array}\\right.", "}$ For $\\kappa $ eDF the $p_1$ cumulative distribution function for sampling $\\gamma _e$ is: $\\mathrm {MCDF}_{\\kappa }(\\gamma _e)= f_{\\kappa ,n}\\left( f_{\\kappa ,1} e^{(\\kappa log(\\gamma _e+\\kappa w+1))}+f_{\\kappa ,2} e^{(\\kappa log(\\kappa w+2))}\\right)/(\\kappa ^2-3\\kappa +2)\\\\e^{(-\\kappa \\log (\\gamma _e+\\kappa w+1) -\\kappa \\log (\\kappa w+2))}$ where $f_{\\kappa ,1}=w^{\\kappa +1} \\left(2\\kappa ^{\\kappa +2}w^2+(2\\kappa ^{\\kappa +2}+4 \\kappa ^{\\kappa +1})w+(\\kappa ^{\\kappa +2}+\\kappa ^{\\kappa +1}+2 \\kappa ^\\kappa 
)\\right),\\\\f_{\\kappa ,2}=\\kappa ^\\kappa (\\kappa -\\kappa ^2) w^{\\kappa +1} \\gamma _e^2+w^{\\kappa } (-2\\kappa ^{\\kappa +2}w^2 - 2 \\kappa ^{\\kappa +1}w) \\gamma _e+\\\\w^{\\kappa } (-2\\kappa ^{\\kappa +2}w^3 - 4 \\kappa ^{\\kappa +1}w^2-2\\kappa ^\\kappa w),$ and the distribution normalizing factor $f_{\\kappa ,n}$ is given in [17] (see their Equation 19).", "For fast and accurate numerical MCDF inversion, we use the Regula-Falsi root finder [9].", "Since $\\beta $ is close to one for relativistic electrons, the rejection sampling is efficient." ], [ "Test of the numerical scheme against analytic model", "To test the numerical code, we consider a single scattering of a beam of monochromatic polarized photons off an ensemble of electrons with the four eDFs introduced in the previous sub-sections.", "[2] provided a semi-analytic solution to this problem as long as the electron-frame scattering is in the TH limit ($\\epsilon ^{\\prime }=\\epsilon $ ).", "The analytic model has already been briefly described in Appendix A of our previous work [14] and recently also reproduced in more detail by [21].", "Our numerical results for the light intensity and polarization can therefore be confronted with the predictions of [2].", "In Figure REF we show the agreement between the theoretical predictions and the calculations with our new, updated scattering kernel for thermal, power-law, hybrid and $\\kappa $ electron distribution functions at a single scattering angle.", "The Monte Carlo simulations with the radpol scattering kernel converge to the predicted values.", "Our results are also consistent with those presented in [21] (see their Figure 25), who carried out the same tests using an independent numerical scheme.", "In all cases the fractional linear polarization increases with frequency.", "In particular, for eDFs with a power-law component (power-law, hybrid, and $\\kappa $ eDFs) the fractional linear polarization converges to a constant value at high energies ($\\epsilon ^{\\prime } \\gg \\epsilon $ ), in analogy to the fractional linear polarization of the optically thin synchrotron emission (which can also be thought of as a scattering process) from electrons distributed into a power-law eDF."
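The electron-energy sampling described above can be summarized in a few lines; the analytic inversion for the pure power-law eDF follows the equation given earlier, while for the hybrid and $\kappa $ eDFs the numerical MCDF inverter is left as a user-supplied callable (a sketch, not the code's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gamma_powerlaw(p, gmin, gmax, size=1):
    """Sample Lorentz factors from the pure power-law eDF by analytic
    inversion of its cumulative distribution function."""
    r = rng.random(size)
    return (gmin**(1.0 - p) * (1.0 - r) + gmax**(1.0 - p) * r)**(1.0 / (1.0 - p))

def sample_gamma_rejection(invert_mcdf):
    """Tentative + rejection sampling used for the hybrid and kappa eDFs:
    draw a tentative gamma from p1 by inverting its MCDF (invert_mcdf is a
    numerical inverter, e.g. a Regula-Falsi root finder applied to the MCDF),
    then accept with probability p2 = beta = sqrt(1 - 1/gamma^2)."""
    while True:
        gamma = invert_mcdf(rng.random())
        if rng.random() < np.sqrt(1.0 - 1.0 / gamma**2):
            return gamma
```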
], [ "Scattering off low- and high-energy thermal and\nnon-thermal electrons", "Next we simulate a single inverse-Compton scattering of monochromatic beam as a function of the incident light polarization, eDF, and scattering regime (TH and KN).", "It is expected that scattering of unpolarized photon beam of hot relativistic plasma should produce no polarization (e.g., [18]), here we can test our code against this expectation.", "Otherwise, the results presented in this section can be used as a guiding line for analysis of more complex models (e.g., radiation produced in accretion disks and jets in GRMHD simulations), keeping in mind that in realistic accretion flows and jets scatterings may be multiple.", "Notice that here we neglect circular polarization of the incident beam because the circular polarization cannot be generated in the scattering process.", "In Figure REF we show intensity (upper panels) and fractional polarization (lower panels) spectra of scattered light when the scattering occurs in the TH regime (i.e.", "the energy of the incident beam is low compared to the electron rest mass energy, $\\epsilon = 2.5\\times 10^{-11}$ ).", "Panels left to right display results for scattering on sub-relativistic (characterized by the dimensionless temperature $\\Theta _e=0.1$ ) and relativistic electrons distributed into thermal (with $\\Theta _e=100$ ) and $\\kappa $ (with $w=100$ and $\\kappa =4.5$ ) eDF.", "Initially unpolarized light ($S_{in}=(1,0,0,0)$ ) scattering off an ensemble of subrelativistic electrons becomes polarized and the degree of polarization depends on the angle of scattering and on the scattered photons frequency.", "Initially polarized light ($S_{in}=(1,1,0,0)$ ) scattering off cold electrons will stay polarized only for certain scattering angles.", "Scattering unpolarized beam off hot electrons (characterized by dimensionless temperature $\\Theta _e=100$ ) does not produce polarization as expected.", "(The residual polarization seen in the high energies in this case is a Monte Carlo noise.)", "The latter is valid for thermal and non-thermal electron distribution function.", "For initially polarized beam scattering off hot electrons, the scattered radiation is partially polarized with fractional polarization increasing with frequency.", "Only for certain scattering angle ($(\\theta ^{\\prime },\\phi ^{\\prime })=(90^\\circ ,90^\\circ )$ ) the polarization cancels out to zero.", "In Figure REF we display results of the same numerical tests as shown in Figure REF but with scatterings in KN regime (i.e.", "the energy of the incident beam is comparable to the electron rest mass energy, $\\epsilon = 1$ ).", "The conclusions are similar as for TH scattering however for scattering of polarized radiation on the hot electrons the linear polarization of the scattered light is sharply decreasing with frequency." 
], [ "Polarimetric properties of scattered light in complex models of\naccretion", "Our upgraded scattering kernel in radpol code is now well tested and produces results consistent with theoretical expectations for variety of electron distribution functions.", "Simulating polarized emission and scattering off non-thermal electrons in complex models of accretion (for example in GRMHD simulations of accreting black holes) requires modifications of the photon sampling routines as well as scattering cross-sections.", "Manufacturing photons in radpol is carried out just like in its unpolarized version grmonty (see method paper by [5]) with a difference that now all angle averaged synchrotron emissivities incorporate thermal and non-thermal eDF.", "Once a photon wavevector, $k^\\mu $ , is build in the fluid frame, the photon polarization is assigned to it using corresponding thermal/non-thermal synchrotron emissivities.", "Finally, to determine the place of scattering along a ray path in radpol simulation, an optical depth for scattering is calculated in each step on geodesic path.", "The so called “hot crosssection” is calculated to estimate cross-section for a photon interaction with an ensemble of free electrons.", "This requires integrating KN (or TH) cross-section over assumed electron distribution function that can be now also non-thermal.", "In radpol such integrations are done numerically and tabulated.", "Full exploration of polarization of high energy emission produced in complex models of accretion flows with electron acceleration is beyond the scope of this work and will be presented in the forthcoming publication." ], [ "Conclusion", "In [14] we have introduced a Monte Carlo code radpol, which is capable of tracing light polarization of synchrotron emission and polarization-sensitive inverse-Compton scattering processes in full general relativity.", "In the current work we describe a major extension of the code to compute emission and scattering when electrons are non-thermal.", "The numerical scheme tests converge to the theoretical expectations.", "Updated code enables more realistic fully relativistic and covariant models of emission for jets produced by accreting objects of any kind." ], [ "Acknowledgements", "The author thanks Hector Olivares for comments on Regula-Falsi root finder.", "The author acknowledges support by the NWO grant no.", "OCENW.KLEIN.113." ] ]
2207.10487
[ [ "Mining Relations among Cross-Frame Affinities for Video Semantic\n Segmentation" ], [ "Abstract The essence of video semantic segmentation (VSS) is how to leverage temporal information for prediction.", "Previous efforts are mainly devoted to developing new techniques to calculate the cross-frame affinities such as optical flow and attention.", "Instead, this paper contributes from a different angle by mining relations among cross-frame affinities, upon which better temporal information aggregation could be achieved.", "We explore relations among affinities in two aspects: single-scale intrinsic correlations and multi-scale relations.", "Inspired by traditional feature processing, we propose Single-scale Affinity Refinement (SAR) and Multi-scale Affinity Aggregation (MAA).", "To make it feasible to execute MAA, we propose a Selective Token Masking (STM) strategy to select a subset of consistent reference tokens for different scales when calculating affinities, which also improves the efficiency of our method.", "At last, the cross-frame affinities strengthened by SAR and MAA are adopted for adaptively aggregating temporal information.", "Our experiments demonstrate that the proposed method performs favorably against state-of-the-art VSS methods.", "The code is publicly available at https://github.com/GuoleiSun/VSS-MRCFA" ], [ "Introduction", "Image semantic segmentation aims at classifying each pixel of the input image to one of the predefined class labels, which is one of the most fundamental tasks in visual intelligence.", "Deep neural networks have made tremendous progresses in this field [41], [52], [5], [21], [55], [17], [18], [58], [30], [50], [24], [25], [11], [10], benefiting from the availability of large-scale image datasets [9], [54], [3], [35] for semantic segmentation.", "However, in real life, we usually confront more complex scenarios in which a series of successive video frames need to be segmented.", "Thus, it is desirable to explore video semantic segmentation (VSS) by exploiting the temporal information.", "Figure: Left: recent VSS methods , for which the affinity is directly forwarded to the next step (feature retrieval).", "The affinity is shown in a series of 2D maps.", "Right: We propose to mine the relations within the affinities before outputting the affinity, by Single-scale Affinity Refinement (SAR) and Multi-scale Affinity Aggregation (MAA).The core of VSS is how to leverage temporal information.", "Most of the existing VSS works rely on the optical flow to model the temporal information.", "Specifically, they first compute the optical flow [14] that is further used to warp the features from neighboring video frames for feature alignment [56], [16], [48], [36], [22], [33], [28].", "Then, the warped features can be simply aggregated.", "Although workable in certain scenarios, those methods are still unsatisfactory because i) the optical flow is error-prone and thus the error could be accumulated; ii) directly warping features may yield inevitable loss on the spatial correlations [31], [20].", "Hence, other approaches [37], [29] directly aggregate the temporal information in the feature level using attention techniques, as shown in Fig.", "REF .", "Since they are conceptually simple and avoid the problems incurred by optical flow, we follow this way to exploit temporal information.", "In general, those methods first calculate the attentions/affinities between the target and the references, which are then used to generate the refined features.", "Though 
promising, they only consider the single-scale attention.", "What's more, they do not mine the relations within the affinities.", "In this paper, we propose a novel approach MRCFA by Mining Relations among Cross-Frame Affinities for VSS.", "Specifically, we compute the Cross-Frame Affinities (CFA) between the features of the target frame and the reference frame.", "Hence, CFA is expected to have large activation for informative features and small activation for useless features.", "When aggregating the CFA-based temporal features, the informative features are highlighted and useless features are suppressed.", "As a result, the segmentation of the target frame would be improved by embedding temporal contexts.", "With the above analysis, the main focus of this paper is mining relations among CFA to improve the representation capability of CFA.", "Since deep neural networks usually generate multi-scale features and CFA can be calculated at different scales, we can obtain multi-scale CFA accordingly.", "Intuitively, the relations among CFA are twofold: single-scale intrinsic correlations and multi-scale relations.", "For the single-scale intrinsic correlations, each feature token in a reference frame (i.e., reference token) corresponds to a CFA map for the target frame.", "Intuitively, we have the observation that the CFA map of each reference token should be locally correlated as the feature map of the target frame is locally correlated, which is also the basis of CNNs.", "It is interesting to note that the traditional 2D convolution can be adopted to model such single-scale intrinsic correlations of CFA.", "Generally, convolution is used for processing features.", "In contrast, we use convolution to refine the affinities of features for improving the quality of affinities.", "We call this step Single-scale Affinity Refinement (SAR).", "For the multi-scale relations, we propose to exploit the relations among multi-scale CFA maps.", "The CFA maps generated from high-level features have a small scale and a coarse representation, while the CFA maps generated from low-level features have a large scale and a fine representation.", "It is natural to aggregate multi-scale CFA maps using a high-to-low decoder structure so that the resulting CFA would contain both coarse and fine affinities.", "Generally, the decoder structure is usually used for fusing multi-scale features.", "In contrast, we build a decoder to aggregate the multi-scale affinities of features.", "We call this step Multi-scale Affinity Aggregation (MAA).", "When we revisit the above MAA, one requirement arises: the reference tokens at different scales should have the same number and corresponding semantics; otherwise, it is impossible to connect a decoder.", "As discussed above, each reference token corresponds to a CFA map for the target frame.", "Only when two reference tokens have the same semantics, their CFA maps can be merged.", "For this goal, a simple solution is to downsample reference tokens at different scales into the same size.", "This also saves the computation due to the reduction of reference tokens.", "It inspires us to further reduce the computation by sampling reference tokens.", "To this end, we propose a Selective Token Masking strategy to select $S$ most important reference tokens and abandon less important ones.", "Then, the relation mining among CFA is executed based on the selected tokens.", "In summary, there are three aspects for mining relations among CFA: i) We propose Single-scale Affinity Refinement for 
refining the affinities among features, based on single-scale intrinsic correlations; 2) We further introduce Multi-scale Affinity Aggregation by using an affinity decoder for aggregating the multi-scale affinities among features; 3) To make it feasible to execute MAA and improve efficiency, we propose Selective Token Masking (STM) to generate a subset of consistent reference tokens for each scale.", "After strengthened with single-scale and multi-scale relations, the final CFA can be directly used for embedding reference features into the target frame.", "Extensive experiments show the superiority of our method over previous VSS methods.", "Besides, our exploration of affinities among features would provide a new perspective on VSS.", "Image semantic segmentation has always been a hot topic in image understanding since it plays an important role in many real applications such as autonomous driving, robotic perception, augmented reality, aerial image analysis, and medical image analysis.", "In the era of deep learning, various algorithms have been proposed to improve semantic segmentation.", "Those related works can be divided into two groups: CNN-based methods [41], [51], [8], [19], [5], [49], [40], [44], [1], [12] and transformer-based methods [53], [47].", "Among CNN-based methods, FCN [41] is a pioneer work, which adopts fully convolutional networks and pixel-to-pixel classification.", "Since then, other methods [4], [5], [52], [21], [58], [15] have been proposed to increase the receptive fields or representation ability of the network.", "Another group of works [53], [47] is based on the transformer which is first proposed in natural language processing [45] and has the ability to capture global context [13].", "Though tremendous progress has been achieved in image segmentation, researchers have paid more and more attention to VSS since video streams are a more realistic data modality." 
], [ "Video Semantic Segmentation", "Video semantic segmentation (VSS), aiming at classifying each pixel in each frame of a video into a predefined category, can be tackled by applying single image semantic segmentation algorithms [5], [52], [47], [6], [7] on each video frame.", "Though simple, this approach serves as an important baseline in VSS.", "One obvious drawback of this method is that the temporal information between consecutive frames is discarded and unexploited.", "Hence, dedicated VSS approaches [27], [42], [32], [23], [16], [36], [46], [48], [22], [31], [57], [33], [20], [34], [37], [38], [28] are proposed to make use of the temporal dimension to segment videos.", "Most of the current VSS approaches can be divided into two groups.", "The first group of approaches focuses on using temporal information to reduce computation.", "Specifically, LLVS [31], Accel [22], GSVNET [28] and EVS [38] conserve computation by propagating the features from the key frames to non-key frames.", "Similarly, DVSNet [17] divides the current frame into different regions and the regions which do not differ much from previous frames do not traverse the slow segmentation network, but a fast flow network.", "However, due to the fact that they save computation on some frames or regions, their performance is usually inferior to the single frame baseline.", "The second group of methods focuses on exploring temporal information to improve segmentation performance and prediction consistency across frames.", "Specifically, NetWarp [46] wraps the features of the reference frames for temporal aggregation.", "TDNet [20] aggregates the features of sequential frames with an attention propagation module.", "ETC [33] uses motion information to impose temporal consistency among predictions between sequential frames.", "STT [29], LMANet [37] and CFFM [43] exploit the features from reference frames to help segment the target frame by the attention mechanism.", "Despite the promising results, those methods do not consider correlation mining among cross-frame affinities.", "This paper provides a new perspective on VSS by mining the relations among affinities.", "Figure: Network overview of MRCFA.", "Our method is illustrated when the clip contains three frames (T=3T=3).", "The first two frames are reference frames while the last one is the target frame.", "All frames first go through the encoder to extract the multi-scale features (L=3L=3) from the intermediate layers.", "For each reference frame, we compute the Cross-Frame Affinities (CFA) across different scales of features.", "To save computation, Selective Token Masking is proposed.", "Then, the multi-scale affinities are input to an affinity decoder to learn a unified and informative affinity, through the Single-scale Affinity Refinement (SAR) module and Multi-scale Affinity Aggregation (MAA).", "The new representation of the target frame using the reference is obtained by exploiting the refined affinity to retrieve the corresponding reference features.", "Finally, all the new representations of the target are merged to segment the target.", "Best viewed in color." 
], [ "Methodology", "In this section, we target VSS and present a novel approach MRCFA through Mining Relations among Cross-Frame Affinities.", "The main idea of MRCFA is to mine the relations among multi-scale affinities computed from multi-scale intermediate features between the target frame and the reference frames, as illustrated in Fig.", "REF .", "We first provide the preliminaries in §REF .", "Next, we introduce Single-scale Affinity Refinement (SAR) which independently refines each single-scale affinity in §REF .", "After that, Multi-scare Affinities Aggregation (MAA) which merges affinities across various scales is presented in §REF .", "Finally, we explain the Selective Token Masking mechanism (§REF ) to reduce the computation." ], [ "Preliminaries", "Given a video clip $\\lbrace \\mathbf {I}_{t_i} \\in \\mathbb {R}^{H \\times W \\times 3} \\rbrace _{i=1}^{T}$ containing $T$ video frames and corresponding ground-truth masks $\\lbrace \\mathbf {M}_{t_i} \\in \\mathbb {R}^{H \\times W} \\rbrace _{i=1}^{T}$ , our objective is to learn a VSS model.", "Without loss of generalizability, we focus on segmenting the last frame $\\mathbf {I}_{t_{T}}$ , which is referred as the target frame.", "All the previous frames $\\lbrace \\mathbf {I}_{t_i}\\rbrace _{i=1}^{T-1}$ are referred as the reference frames.", "Each frame $\\mathbf {I}_{t_i}$ is first input into an encoder to extract intermediate features $\\lbrace \\mathbf {F}_{t_{i}}^{l} \\in \\mathbb {R}^{H_l W_l \\times C_l}\\rbrace ^{L}_{l=1}$ in various scales from $L$ intermediate layers of the deep encoder, where $H_l$ , $W_l$ , $C_l$ correspond to the height, width, number of channels of the feature map, respectively.", "For simplicity, multi-scale features $\\lbrace \\mathbf {F}_{t_{i}}^{l}\\rbrace ^{L}_{l=1}$ are in the order that shallow features are followed by deep features.", "We have $H_{l_1}\\ge H_{l_2}$ and $W_{l_1}\\ge W_{l_2}$ , if $l_1<l_2$ .", "In this paper, we aim to exploit the contextual information in the reference frames to refine the features of the target frame and thus improve the target's segmentation.", "Instead of simply modeling the affinities among frames for feature aggregation, we devote our efforts to mine relations among cross-frame affinities." 
], [ "Single-scale Affinity Refinement", "We start with introducing the process of generating multi-scale affinities between the target frame and each reference frame.", "We first map the features $\\lbrace \\mathbf {F}_{t_{T}}^{l}\\rbrace ^{L}_{l=1}$ of the target frames into the queries $\\lbrace \\mathbf {Q}^{l}\\rbrace ^{L}_{l=1}$ by a linear layer, as: $\\footnotesize \\mathbf {Q}^l=f(\\mathbf {F}_{t_{T}}^{l}; \\mathbf {W}^l_{query}),$ where $\\mathbf {W}^l_{query} \\in \\mathbb {R}^{C_l \\times C_l}$ is the weight matrix of the linear layer $f$ and $\\mathbf {Q}^l \\in \\mathbb {R}^{H_l W_l \\times C_l}$ .", "Similarly, the multi-scale features $\\lbrace \\mathbf {F}_{t_{i}}^{l}\\rbrace ^{L}_{l=1}$ of the reference frame ($i \\in [1, T-1]$ ) are also processed to generate the keys $\\lbrace \\mathbf {K}^{l}_{t_{i}}\\rbrace ^{L}_{l=1}$ , as follows: $\\footnotesize \\mathbf {K}^{l}_{t_{i}} = f(\\mathbf {F}_{t_{i}}^{l}; \\mathbf {W}^l_{key}),$ where $\\mathbf {W}^l_{key} \\in \\mathbb {R}^{C_l\\times C_l}$ is the corresponding weight matrix and $\\mathbf {K}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times C_l}$ .", "After obtaining the queries and the keys, we are ready to generate the affinities between the target frame $\\mathbf {I}_{t_{T}}$ and each reference frame $\\mathbf {I}_{t_i}$ ($i \\in [1, T-1]$ ) across all scales.", "Then, Cross-Frame Affinities (CFA) are computed as: $\\footnotesize \\mathbf {A}^{l}_{t_{i}} = \\mathbf {Q}^{l} \\times {\\mathbf {K}^{l\\top }_{t_{i}}},$ where we have $\\mathbf {A}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times H_l W_l}$ , $l \\in [1, L]$ and $i \\in [1, T-1]$ .", "It means that, at each scale, the target frame has an affinity map with each reference frame.", "Based on the affinities $\\lbrace {\\mathbf {A}}^{l}_{t_{i}}\\rbrace _{l=1}^{L}$ , our affinity decoder is designed to mine the correlations between them to learn a better affinity between the target and the reference frame.", "As shown in Fig.", "REF , it is comprised of two modules: Single-scale Affinity Refinement (SAR) and Multi-scale Affinity Aggregation (MAA).", "Please refer to § for our motivations.", "In order to reduce computation and prepare the affinities for MAA module which requires the same number and corresponding semantics (see §), our affinity decoder operates on $\\lbrace \\tilde{\\mathbf {A}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times S}\\rbrace _{l=1}^{L}$ , rather than $\\lbrace {\\mathbf {A}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times H_l W_l}\\rbrace _{l=1}^{L}$ .", "The affinities $\\tilde{\\mathbf {A}}^{l}_{t_{i}}$ is a downsampled version of ${\\mathbf {A}}^{l}_{t_{i}}$ along the second dimension, which will be explained in §REF .", "Single-scale Affinity Refinement (SAR).", "For the affinity matrix $\\tilde{\\mathbf {A}}^{l}_{t_{i}}$ , each of its elements corresponds to a similarity between a token in the query and a token in the key.", "We reshape $\\tilde{\\mathbf {A}}^{l}_{t_{i}}$ from $\\mathbb {R}^{H_l W_l \\times S}$ to $\\mathbb {R}^{H_l \\times W_l \\times S}$ .", "In order to learn the correlation within the single-scale affinity $\\tilde{\\mathbf {A}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l \\times W_l \\times S}$ , a straightforward way is to exploit 3D convolution.", "However, this approach suffers from two weaknesses.", "First, it requires a large amount of computational cost.", "Second, not all the activations within the 3D window are meaningful.", "Considering a 3D convolution with a kernel $\\mathcal {K} \\in \\mathbb {R}^{k\\times k 
\times k}$ , the normal 3D convolution at the location $x=(x_1, x_2, x_3)$ is formulated as: $\footnotesize \begin{split}(\tilde{\mathbf {A}}^{l}_{t_{i}} * \mathcal {K})_{x}=\sum _{(o_1,o_2,o_3) \in \mathcal {N}(x)} \tilde{\mathbf {A}}^{l}_{t_{i}}(o_1,o_2,o_3)\mathcal {K}(o_1-x_1,o_2-x_2,o_3-x_3),\end{split}$ where $\mathcal {N}(x)$ is the set of locations in the 3D window ($k\times k \times k$ ) centered at $x$ , and $|\mathcal {N}(x)| = k^3$ .", "As seen in Eq.", "(REF ), all the neighbors along all three dimensions are used to conduct the 3D convolution.", "However, the last dimension of $\tilde{\mathbf {A}}^{l}_{t_{i}}$ corresponds to the sparse selection of key tokens (§REF ) and thus does not carry spatial information.", "Including the neighbors along the last dimension could introduce noise and bring more complexity.", "Thus, we propose to refine the affinities across the first two dimensions.", "For the affinity $\tilde{\mathbf {A}}^{l}_{t_{i}}$ of each scale, we first permute it to $\mathbb {R}^{S\times H_l \times W_l}$ and then use 2D convolutions to learn the relations within the affinity.", "The refined affinity is denoted as $\bar{\mathbf {A}}^{l}_{t_{i}} \in \mathbb {R}^{S\times H_l \times W_l}$ .", "This process can be formulated as: $\footnotesize \begin{aligned}&\tilde{\mathbf {A}}^{l}_{t_{i}} \in \mathbb {R}^{H_l \times W_l \times S} \rightarrow \tilde{\mathbf {A}}^{l}_{t_{i}} \in \mathbb {R}^{S\times H_l \times W_l}, \\&\bar{\mathbf {A}}^{l}_{t_{i}} = G(\tilde{\mathbf {A}}^{l}_{t_{i}}),\end{aligned}$ where $G$ represents a few convolutional layers.", "Due to the use of 2D convolution and the token reduction mentioned in §REF , the refinement of affinities is fast.", "After refining the affinity for each scale, we collect the refined affinities $\lbrace \bar{\mathbf {A}}^{l}_{t_{i}}\rbrace ^{L}_{l=1}$ for all scales.", "Next, we present the Multi-scale Affinity Aggregation (MAA) module."
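For concreteness, the sketch below (PyTorch assumed) illustrates the affinity computation and the SAR refinement described above; the specific layer count, hidden width, and kernel size are illustrative choices and not the exact architecture used in the paper.

```python
import torch
import torch.nn as nn

def cross_frame_affinity(F_target, F_ref, W_query, W_key):
    """Single-scale CFA: Q = f(F^l_{t_T}), K = f(F^l_{t_i}), A = Q K^T.
    F_target: (H_l*W_l, C_l); F_ref: (S, C_l) after token selection."""
    Q = W_query(F_target)                 # (H_l*W_l, C_l)
    K = W_key(F_ref)                      # (S, C_l)
    return Q @ K.transpose(0, 1)          # affinity of shape (H_l*W_l, S)

class SAR(nn.Module):
    """Single-scale Affinity Refinement: treat the S selected key tokens as
    channels and refine only over the target's (H_l, W_l) grid with 2D convolutions."""
    def __init__(self, S, hidden=64, k=3):
        super().__init__()
        self.G = nn.Sequential(
            nn.Conv2d(S, hidden, k, padding=k // 2), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, S, k, padding=k // 2))

    def forward(self, A, H_l, W_l):
        # (H_l*W_l, S) -> (1, S, H_l, W_l): no convolution along the S dimension.
        A = A.view(H_l, W_l, -1).permute(2, 0, 1).unsqueeze(0)
        return self.G(A)                  # refined affinity, (1, S, H_l, W_l)
```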
], [ "Multi-scale Affinity Aggregation", "Multi-scale Affinity Aggregation (MAA).", "The affinity from the deep features contains more semantic but more coarse information, while the affinity from the shallow features contains more fine-grained but less semantic information.", "Thus, we propose a Multi-scale Affinity Aggregation module to aggregate the information from small-scale affinities to large-scale affinities, as: $\\footnotesize \\begin{aligned}&\\mathbf {B}^{L}_{t_{i}}=\\bar{\\mathbf {A}}^{L}_{t_{i}}, \\\\&\\mathbf {B}^{l}_{t_{i}}=G(\\Gamma (\\mathbf {B}^{l+1}_{t_{i}})+\\bar{\\mathbf {A}}^{l}_{t_{i}}), ~~~l=L-1,...,1, \\\\\\end{aligned}$ where $\\Gamma $ denotes upsampling operation to match the spatial size when necessary.", "By Eq.", "(REF ), we generate the final refined affinity $\\mathbf {B}^{1}_{t_{i}}$ between the target frame $\\mathbf {I}_{t_{T}}$ and each reference frame $\\mathbf {I}_{t_{i}}$ ($i \\in [1, L-1]$ ).", "Feature Retrieval.", "For single-frame semantic segmentation, SegFormer [47] generates the final feature $\\hat{\\mathbf {F}}_{t_{i}} \\in \\mathbb {R}^{\\hat{H} \\hat{W} \\times \\hat{C}}$ by merging multiple intermediate features.", "The final features are informative and directly used to predict the segmentation mask [47].", "Using the refined affinity $\\mathbf {B}^{1}_{t_{i}}$ and the informative features $\\hat{\\mathbf {F}}_{t_{i}}$ , we compute the new refined feature representations for the target frame.", "Specifically, the feature $\\hat{\\mathbf {F}}_{t_{i}}$ is first downsampled to the size of $\\mathbb {R}^{H_L W_L \\times \\hat{C}}$ .", "To correspond the refined affinity and the informative feature, we sample feature $\\hat{\\mathbf {F}}_{t_{i}}$ using the token selection mask $\\tilde{\\mathbf {M}}_{t_{i}}$ (§REF ) and obtain $\\tilde{\\mathbf {F}}_{t_{i}} \\in \\mathbb {R}^{S \\times \\hat{C}}$ .", "The new feature representation for the target frame using the reference is obtained as: $\\footnotesize \\begin{aligned}\\mathbf {B}^{1}_{t_{i}} \\in \\mathbb {R}^{S\\times H_1 \\times W_1} \\rightarrow \\mathbf {B}^{1}_{t_{i}} \\in \\mathbb {R}^{H_1 W_1 \\times S}, \\qquad \\quad \\mathbf {O}_{t_i} = \\mathbf {B}^{1}_{t_{i}} \\times {\\tilde{\\mathbf {F}}_{t_{i}}}.\\end{aligned}$ Intuitively, this step is to retrieve the informative features from the reference frame to the target frame using affinity.", "Computing Eq.", "(REF ) for all reference frames, we obtain the new representations of the target frame as $\\lbrace \\mathbf {O}_{t_i}\\rbrace ^{T-1}_{i=0}$ .", "The final feature used to segment the target frame is merged from $\\lbrace \\mathbf {O}_{t_i}\\rbrace ^{T-1}_{i=0}$ and $\\hat{\\mathbf {F}}_{t_{L}}$ as follows: $\\footnotesize \\begin{split}\\mathbf {O}_{t_L}=\\frac{1}{T-1}\\Gamma (\\sum _{i=1}^{T-1}\\mathbf {O_{t_i}})+\\hat{\\mathbf {F}}_{t_{L}}.\\end{split}$ Finally, a simple MLP decoder projects $\\mathbf {O}_{t_L}$ to the segmentation logits, and typical cross-entropy loss is used for training.", "In the test period, when segmenting the target frame $I_{t_{T}}$ , the encoder only needs to generate the features for the current target while the reference frames are already processed in previous steps and the corresponding features can be directly used." 
], [ "Selective Token Masking", "As discussed in §, there should be the same number of reference tokens with corresponding semantics across scales.", "Besides, computing cross-frame affinities requires a lot of computation.", "Thus, our affinity decoder does not process $\\lbrace {\\mathbf {A}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times H_l W_l}\\rbrace _{l=1}^{L}$ , but rather its downsampled version $\\lbrace \\tilde{\\mathbf {A}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times S}\\rbrace _{l=1}^{L}$ .", "Here, we explain how to generate $\\lbrace \\tilde{\\mathbf {A}}^{l}_{t_{i}}\\rbrace ^{L}_{l=1}$ , by reducing the number of tokens in the multi-scale keys $\\lbrace \\mathbf {K}^{l}_{t_{i}}\\rbrace ^{L}_{l=1}$ before computing Eq.", "(REF ).", "We exploit convolutional layers to downsample the multi-scale keys to the spatial size of $H_L \\times W_L$ .", "Specifically, for the key $\\mathbf {K}^{l}_{t_{i}}$ ($l \\in [1, L-1]$ ), we process it by a convolutional layer with both kernel and stride size of $(\\frac{H_l}{H_L}, \\frac{W_l}{W_L})$ .", "As a result, we obtain new keys $\\hat{\\mathbf {K}}^{l}_{t_{i}}$ with smaller spatial size, which is given by $\\footnotesize \\begin{aligned}\\mathbf {K}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times C_l} \\rightarrow \\mathbf {K}^{l}_{t_{i}} \\in \\mathbb {R}^{C_l \\times H_l \\times W_l}, \\\\\\hat{\\mathbf {K}}^{l}_{t_{i}} = g(\\mathbf {K}^{l}_{t_{i}}; (\\frac{H_l}{H_L},\\frac{W_l}{W_L}); (\\frac{H_l}{H_L},\\frac{W_l}{W_L})), \\\\\\hat{\\mathbf {K}}^{l}_{t_{i}} \\in \\mathbb {R}^{C_l \\times H_L \\times W_L} \\rightarrow \\hat{\\mathbf {K}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_L W_L \\times C_l}.\\end{aligned}$ where $g(\\cdot ;(k_h,k_w);(s_h,s_w))$ represents a convolutional layer with the kernel size $(k_h,k_w)$ and the stride $(s_h,s_w)$ .", "After this step, we obtain the downsampled keys $\\lbrace \\hat{\\mathbf {K}}^{l}_{t_{i}}\\rbrace ^{L-1}_{l=1}$ , where $\\hat{\\mathbf {K}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_L W_L \\times C_l}$ , $l \\in [1, L-1]$ and $i \\in [1, T-1]$ .", "To further reduce the number of tokens in $\\lbrace \\hat{\\mathbf {K}}^{l}_{t_{i}}\\rbrace ^{L-1}_{l=1}$ , we propose to select important tokens and discard less important ones.", "The idea is to first compute the affinity for the deepest query/key pair ($\\mathbf {Q}^{L}$ and $\\mathbf {K}^{L}_{t_{i}}$ ), then generate a binary mask of important token locations, and finally select tokens in keys using the mask.", "The process of Binary Mask Generation (BMG) is in the following.", "The affinity between the deepest query and key is given by $\\mathbf {A}^{L}_{t_{i}} \\in \\mathbb {R}^{H_L W_L \\times H_L W_L}$ , following Eq.", "(REF ).", "Next, we choose the top-$n$ maximum elements across each column of $\\mathbf {A}^{L}_{t_{i}}$ , given by $\\footnotesize \\begin{split}\\hat{\\mathbf {A}}^{L}_{t_{i}}[:, j] = \\operatornamewithlimits{arg\\,max}_{n}(\\mathbf {A}^{L}_{t_{i}}[:, j]), \\qquad j \\in [1,H_L W_L],\\end{split}$ where $\\operatornamewithlimits{arg\\,max}_{n}$ means to take the top-$n$ elements, and $\\hat{\\mathbf {A}}^{L}_{t_{i}} \\in \\mathbb {R}^{n \\times H_L W_L}$ .", "Then, we sum over the top-$n$ elements and generate a token importance map $\\mathbf {M}_{t_{i}}$ as $\\footnotesize \\footnotesize \\begin{split}\\mathbf {M}_{t_{i}} = \\sum _{j=1}^{n}(\\hat{\\mathbf {A}}^{L}_{t_{i}}[j, :]),\\end{split}$ in which we have $\\mathbf {M}_{t_{i}} \\in \\mathbb {R}^{H_L W_L}$ .", "We recover the spatial size of $\\mathbf {M}_{t_{i}}$ by reshaping it to 
$\\mathbb {R}^{H_L \\times W_L}$ .", "The token importance map $\\mathbf {M}_{t_{i}}$ shows the importance level of every location in the key feature map.", "Since $\\mathbf {M}_{t_{i}}$ is derived from the deepest/highest level of features, the token importance information it contains is semantic-oriented and can be shared in other shallow levels.", "We use it to sample the tokens in $\\lbrace \\hat{\\mathbf {K}}^{l}_{t_{i}}\\rbrace ^{L-1}_{l=1}$ .", "Specifically, we sample $p$ percent of the locations with the top-$p$ highest importance scores in $\\mathbf {M}_{t_{i}}$ , where $p$ is referred as the token selection ratio.", "The binary token selection mask with $p$ percent of the locations highlighted is denoted as $\\tilde{\\mathbf {M}}_{t_{i}}$ .", "The location with the value 1 in $\\tilde{\\mathbf {M}}_{t_{i}}$ means the token importance is within the top-$p$ percent and the corresponding token will be selected.", "The location with the value 0 in $\\tilde{\\mathbf {M}}_{t_{i}}$ means the token in that location is less important and will thus be discarded.", "The total number of locations with the value 1 in $\\tilde{\\mathbf {M}}_{t_{i}}$ is denoted by $S=p H_L W_L$ .", "Using mask $\\tilde{\\mathbf {M}}_{t_{i}}$ , we select $p$ percent of tokens in $\\lbrace \\hat{\\mathbf {K}}^{l}_{t_{i}}\\rbrace ^{L-1}_{l=1}$ .", "The keys after selection are denoted as $\\lbrace \\tilde{\\mathbf {K}}^{l}_{t_{i}} \\in \\mathbb {R}^{S \\times C_l}\\rbrace ^{L-1}_{l=1}$ .", "With $\\mathbf {Q}^{l}$ and $\\tilde{\\mathbf {K}}^{l}_{t_{i}}$ , we compute the affinities $\\lbrace \\tilde{\\mathbf {A}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times S}\\rbrace ^{L-1}_{l=1}$ using Eq.", "(REF ).", "For ${\\mathbf {A}}^{L}_{t_{i}}$ , we also conduct sampling using $\\tilde{\\mathbf {M}}_{t_{i}}$ and obtain $\\tilde{\\mathbf {A}}^{L}_{t_{i}} \\in \\mathbb {R}^{H_L W_L \\times S}$ .", "Merging the affinities from all $L$ scales gives final affinities of $\\lbrace \\tilde{\\mathbf {A}}^{l}_{t_{i}} \\in \\mathbb {R}^{H_l W_l \\times S}\\rbrace ^{L}_{l=1}$ .", "After computing the affinities for all reference frames, we have the downsampled affinities $\\lbrace \\lbrace \\tilde{\\mathbf {A}}^{l}_{t_{i}}\\rbrace ^{L}_{l=1}\\rbrace ^{T-1}_{i=1}$ ." 
], [ "Experimental Setup", "Datasets.", "Densely annotating video frames requires intensive manual labeling efforts.", "The widely used datasets for VSS are Cityscapes [9] and CamVid [2] datasets.", "However, these datasets only contain sparse annotations, which limits the exploration of temporal information.", "Fortunately, the Video Scene Parsing in the Wild (VSPW) dataset [34] is proposed to facilitate the progress of this field.", "It is currently the largest-scale VSS dataset with 198,244 training frames, 24,502 validation frames and 28,887 test frames.", "For each video, 15 frames per second are densely annotated for 124 categories.", "These aspects make VSPW the best benchmark for VSS up till now.", "Hence, most of our experiments are conducted on VSPW.", "To further demonstrate the effectiveness of MRCFA, we also show results on Cityscapes, for which only one out of 30 frames is annotated.", "Implementation details.", "For the encoder, we use the MiT backbones as in Segformer [47], which have been pretrained on ImageNet-1K [39].", "For VSPW dataset, three reference frames are used, which are 9, 6 and 3 frames ahead of the target, following [34].", "Three-scale features from the last three transformer blocks are used to compute the cross-frame affinities and mine their correlations.", "For the Mask-based Token Selection (MTS), we set $p$ =80% for MiT-B0 and $p$ =50% for other backbones unless otherwise specified.", "For training augmentations, we use random resizing, horizontal flipping, and photometric distortion to process the original images.", "Then, the images are randomly cropped to the size of $480 \\times 480$ to train the network.", "We set the batch size as 8 during training.", "The models are all trained with AdamW optimizer for a maximum of 160k iterations and “poly” learning rate schedule.", "The initial learning rate is 6e-5.", "For simplicity, we perform the single-scale test on the whole image, rather than the sliding window test or multi-scale test.", "The input images are resized to $480 \\times 853$ for VSPW.", "We also do not perform any post-processing such as CRF [26].", "For Cityscape, the input image is cropped to $512 \\times 1024$ during training and resized to the same resolution during inference.", "And we use two reference frames and four-scale features.", "The number of frames being processed per second (FPS) is computed in a single Quadro RTX 6000 GPU (24G memory).", "Table: The impact of the selection of reference frames.Table: The impact of token selection ratio pp.The row which best deals with the trade-off between performance and computation resources is shown in red.Evaluation metrics.", "To evaluate the segmentation results, we adopt the commonly used metrics of Mean IoU (mIoU) and Weighted IoU (WIoU), following [41].", "We also use Video Consistency (VC) [34] to evaluate the category consistency among the adjacent frames in the video, following [34].", "Formally, video consistency VC$_n$ for $n$ consecutive frames for a video clips $\\lbrace \\mathbf {I}_c\\rbrace _{c=1}^{C}$ , is computed by: $\\text{VC}_{n}=\\frac{1}{C-n+1}\\sum _{i=1}^{C-n+1}\\frac{(\\cap _{i}^{i+n-1}\\mathbf {S}_i)\\cap (\\cap _{i}^{i+n-1}\\mathbf {S}^{^{\\prime }}_i)}{\\cap _{i}^{i+n-1}\\mathbf {S}_i}$ , where $C\\ge n$ .", "$\\mathbf {S}_i$ and $\\mathbf {S}^{^{\\prime }}_{i}$ are the ground-truth mask and predicted mask for $i^{th}$ frame, respectively.", "We compute the mean of video consistency VC$_n$ for all videos in the dataset as mVC$_n$ .", "Following [34], we 
compute mVC$_8$ and mVC$_{16}$ to evaluate the visual consistency of the predicted masks.", "Please refer to [34] for more details about VC." ], [ "Ablation Studies", "We conduct ablation studies on the large-scale VSPW dataset [34] to validate the key designs of MRCFA.", "For fairness, we adopt the same settings as in §REF unless otherwise specified.", "The ablation studies are conducted on the MiT-B1 backbone.", "Influence of the reference frames.", "We study the performance of our method with respect to different choices of reference frames in Tab.", "REF .", "We have the following observations.", "First, using a single reference frame largely improves the segmentation performance (mIoU).", "For example, when using a single reference frame which is 3 frames ahead of the target one, the mIoU improvement over the baseline (SegFormer) is 1.6%, i.e., 38.1 over 36.5.", "Adding more reference frames further improves segmentation performance.", "The best mIoU of 38.9 is obtained when using reference frames of 9, 6, and 3 frames ahead of the target.", "Second, for the prediction consistency metrics (mVC$_8$ and mVC$_{16}$ ), the advantage of exploiting more reference frames is more obvious.", "For example, using one reference frame ($t_1=-6$ ) gives mVC$_8$ and mVC$_{16}$ of 85.1 and 80.3, improving the baseline by 0.4% and 0.4%, respectively.", "However, when using three reference frames ($t_1=-9$ , $t_2=-6$ , $t_3=-3$ ), the achieved mVC$_8$ and mVC$_{16}$ are far superior to the baseline, improving it by 4.1% and 4.5%.", "The results are reasonable because using more reference frames gives the model a broader view of the previously processed frames and thus generates more consistent predictions.", "Influence of token selection ratio $p$ .", "We study the influence of the token selection ratio $p$ in terms of performance and computational resources in Tab.", "REF .", "A smaller $p$ means that fewer tokens in the key features are selected and thus fewer computational resources are required.", "Hence, there is a trade-off between the segmentation performance and the required resources (GPU memory and additional latency).", "In the experiments, when reducing $p$ from $100\%$ to $50\%$ , the performance drops slightly (0.5 in mIoU) while the GPU memory usage reduces by 15.4% and the FPS increases by 21.9%.", "When further reducing $p$ to $10\%$ , the performance largely decreases in terms of mIoU, mVC$_8$ and mVC$_{16}$ .", "The reason is that too many tokens are discarded in the reference frames and the remaining tokens are not informative enough to provide the required contexts for segmenting the target frame.", "To sum up, the best trade-off is achieved when $p=50\%$ .", "Influence of the feature scales.", "For the VSPW dataset, we use three-scale features output from the last three transformer blocks.", "Here, we conduct an ablation study on the impact of the used feature scales.", "The results are shown in Tab.", "REF .", "It can be observed that using the features from the last stage ($L=1$ ) or the last two stages ($L=2$ ) gives inferior performance while consuming less computational resources and achieving faster running speed.", "When using three-scale features, the best results are achieved in terms of mIoU, mVC$_{8}$ , and mVC$_{16}$ .", "This is due to the fact that the features at different scales contain complementary information, and the proposed affinity decoder successfully mines this information through learning correlations between multi-scale affinities.", "Table: Ablation
study on the number of feature scales ($L$).", "Using more scales of features for our method progressively increases the performance.", "Table: Ablation study on the affinity decoder.", "Within our design, SAR and MAA are essential parts that contribute to the refinement of the affinity.", "Ablation study on the affinity decoder.", "We conduct ablation studies on the proposed affinity decoder.", "The results are shown in Tab.", "REF .", "Our affinity decoder processes the multi-scale affinities and generates a refined affinity matrix for each pair of the target and reference frames.", "It is reasonable to ask whether this design is better than the feature pyramid baseline.", "For this baseline (Feature Pyramid), we first compute the features for the target frame using the reference frame features at each scale and then merge those multi-scale features.", "For fair comparisons, we use a similar number of parameters for this baseline, and the other settings are the same as ours.", "The result shows that while Feature Pyramid performs favorably over the single-frame baseline, our approach clearly surpasses it.", "It validates the effectiveness of the proposed affinity decoder.", "As presented in §REF , our affinity decoder has two modules: Single-scale Affinity Refinement (SAR) and Multi-scale Affinity Aggregation (MAA).", "The ablation study of the two modules is provided in Tab.", "REF .", "Using only SAR, our method obtains an mIoU of 37.8, while using only MAA gives an mIoU of 37.4.", "Both variants are clearly better than the baseline, validating their effectiveness.", "Combining both modules, the proposed approach achieves the best mIoU, mVC$_{8}$ , and mVC$_{16}$ .", "It shows that both SAR and MAA are essential parts of the affinity decoder to learn better affinities to help segment the target frame.", "Table: State-of-the-art comparison on the VSPW validation set.", "MRCFA outperforms the compared methods on both accuracy (mIoU) and prediction consistency."
], [ "Segmentation Results", "The state-of-the-art comparisons on VSPW [34] dataset are shown in Tab.", "REF .", "Besides segmentation performance and visual consistency of the predicted masks, we also report the model complexity and FPS.", "According to the model size, the methods are divided into two groups: small models and large models.", "Table: State-of-the-art comparison on the Cityscapes  val set.Among all methods, our MRCFA achieves state-of-the-art performance and produces the most consistent segmentation masks across video frames.", "For small models, our method on MiT-B1 clearly outperforms the strong baseline SegFormer [47] by 2.4% in mIoU and 1.2% in weighted IoU.", "In terms of the visual consistency in the predicted masks, our approach is superior to other methods, surpassing the second best method with 4.1% and 4.5% in mVC$_{8}$ and mVC$_{16}$ , respectively.", "For large models, MRCFA shows similar behavior.", "The results indicate that our method is effective in mining the relations between the target and reference frames through the designed modules: SAR and MAA.", "Despite that our approach achieves impressive performance, it adds limited model complexity and latency.", "Specifically, compared to SegFormer (MiT-B2), MRCFA slightly increases the number of parameters from 24.8M to 27.3M and reduces the FPS from 39.2 to 32.1.", "The efficiency of our method benefits from the proposed STM mechanism for which we abandon unimportant tokens.", "Figure: Qualitative results.", "From top to bottom: the input frames, the predicted masks of SegFormer , the predictions of ours (T=3,t 1 =-3T=3, t_1=-3, t 2 =-6t_2=-6), the predictions of ours (T=4,t 1 =-3T=4, t_1=-3, t 2 =-6t_2=-6, t 3 =-9t_3=-9) and the ground-truth masks.", "Our model generates better results than the baseline in terms of accuracy and VC.We conduct additional experiments on the semi-supervised Cityscapes [9] dataset, for which only one frame in each video clip is pixel-wise annotated.", "Tab.", "REF shows the results.", "Similar to VSPW, MRCFA also achieves state-of-the-art results among the compared approaches under the semi-supervised setting and has a fast running speed.", "Besides the quantitative comparisons analyzed above, we also qualitatively compare the proposed method with the baseline on the sampled video clips in Fig.", "REF .", "For the two samples, our method generates more accurate segmentation masks, which are also more visually consistent." 
], [ "Conclusions", "This paper presents a novel framework MRCFA for VSS.", "Different from previous methods, we aim at mining the relations among multi-scale Cross-Frame Affinities (CFA) in two aspects: single-scale intrinsic correlations and multi-scale relations.", "Accordingly, Single-scale Affinity Refinement (SAR) is proposed to independently refine the affinity of each scale, while Multi-scale Affinity Aggregation (MAA) is designed to merge the refined affinities across various scales.", "To reduce computation and facilitate MAA, Selective Token Masking (STM) is adopted to sample important tokens in keys for the reference frames.", "Combining all the novelties, MRCFA generates better affinity relations between the target and the reference frames without largely adding computational resources.", "Extensive experiments demonstrate the effectiveness and efficiency of MRCFA, by setting new state-of-the-arts.", "The key components are validated to be essential for our method by ablation studies.", "Overall, our exploration of mining the relations among affinities could provide a new perspective on VSS." ] ]
2207.10436
[ [ "Multidimensional Spectroscopy of Time-Dependent Impurities in Ultracold\n Fermions" ], [ "Abstract We investigate the system of a heavy impurity immersed in a degenerated Fermi gas, where the impurity's internal degree of freedom (pseudospin) is manipulated by a series of radiofrequency (RF) pulses at several different times.", "Applying the functional determinant approach, we carry out an essentially exact calculation of the Ramsey-interference-type responses to the RF pulses.", "These responses are universal functions of the multiple time intervals between the pulses for all time and can be regarded as multidimensional (MD) spectroscopy of the system in the time domain.", "A Fourier transformation of the time intervals gives the MD spectroscopy in the frequency domain, providing insightful information on the many-body correlation and relaxation via the cross-peaks, e.g., the off-diagonal peaks in a two-dimensional spectrum.", "These features are inaccessible for the conventional, one-dimensional absorption spectrum.", "Our scheme provides a new method to investigate many-body nonequilibrium physics beyond the linear response regime with the accessible tools in cold atoms." ], [ "Introduction", "Spectroscopy, which records the responses of materials to external electromagnetic fields, has long been and probably will always be an essential tool to investigate the structures, behaviors, chemical reactions, and physical processes in materials.", "Conventional spectroscopy, such as ordinary nuclear magnetic resonance (NMR) and optical spectroscopy, usually shows the responses as a function of a single variable, e.g., the frequency of the electromagnetic wave, and hence is called one-dimensional (1D).", "In contrast, multidimensional (MD) spectroscopy unfolds spectral information into several dimensions, which improves resolution and overcomes spectral congestion.", "In addition, MD spectroscopy carries rich information on the correlations between resonance peaks and provides insights into physics that 1D spectroscopy cannot access.", "One of the earliest and most widely successful MD spectroscopy is the two-dimensional (2D) NMR, first proposed by Jean Jeener and later demonstrated by Richard Ernst and collaborators [1], [2].", "2D NMR can help distinguish overlapping signals in complex molecules and unveil the couplings between different resonances, which revolutionize, e.g., molecular dynamics and structural biology [3], [4].", "As an analog of its NMR counterparts, optical MD coherent spectroscopy (MDCS) [5], [6] adapts similar technology for the IR, visible, or UV regions and sheds new light on chemical kinetics and solid-state physics [7], [8], [9], [10], [11], [12], [13], [14], [15].", "In particular, optical 2DCS reveals coherent and incoherent coupling dynamics between resonances near the energy of neutral and charged excitons in atomically thin transition metal dichalcogenides (TMD) [16], [17].", "More recently, people have believed that these resonances are excitons dressed by Fermi sea electrons, i.e., quasiparticles named attractive or repulsive exciton-polarons [18], [19], [20], [21], [22], [23].", "Polaron, arguably the most celebrated quasiparticle [24], [25], has also attracted intensive interest in atomic physics experimentally [26], [27], [28], [29], [30], [31], [32], [33], [34], [35], [36] and theoretically [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49], [50], [51], [52], [53], [54], [55], [56], [57], [58], [59], [60], [61].", "A 
fundamental and quantitative understanding of these nonequilibrium many-body dynamics between quasiparticles shown in 2D spectroscopy is fascinating but challenging.", "While a commonly adopted approach, the modified optical Bloch equation with phenomenological terms to include many-body effects [62], [63], [64], gives some intuitive interpretations, first-principles calculations of 2D spectroscopy are rare.", "Despite some progress [65], [66], [67], a quantitative but perturbative study of the complete 2D spectroscopy has only been carried out recently using the nonlinear (four-wave mixing) Golden Rule [17] and more recently with the functional determinant approach [68], with parameters that can only be approximately obtained in a complex solid-state system.", "By contrast, we perform an in-principle exact calculation in a much simpler but realistic system: a heavy impurity immersed in a degenerate Fermi gas.", "In such a system, a single parameter, the scattering length, can fully describe the interaction between the impurity and the isolated, non-interacting Fermi gas at ultracold temperatures and can be accurately tuned by a Feshbach resonance [69].", "This system is closely related to the Fermi polaron problem, whose 1D spectroscopy shows singularities [70], [71] that are remnants of polaron resonances destroyed by the well-known Anderson's “orthogonality catastrophe” (OC) [72].", "Our recent studies rigorously proved that these singularities could reduce back to polaron resonances if a mechanism exists to prevent OC, such as a superfluid pairing gap [73], [74].", "However, as far as we know, the correlations and coherent dynamics between the Fermi singularities or polaron resonances in ultracold gases have never been investigated; our work here is the first numerically exact calculation of the MD spectroscopy of a polaron-like system in ultracold gases.", "Here, we apply the functional determinant approach (FDA) [75], [76], [77], [78], a non-perturbative method that rigorously includes all high-order correlations and beyond mean-field many-body effects.", "Since exact solutions of many-body systems are rare, our results can give new insight, deepen our understanding of Fermi-edge singularity and polaron physics, and be regarded as a benchmark to assess the accuracy of other approximate calculations of MD spectroscopy.", "Figure: (a) A sketch of the system setup: A localized impurity (the big black ball with an arrow indicating the pseudospin states) is immersed in a sea of host fermions (red dots) and is manipulated by a series of RF pulses with time intervals between pulses and detection (indicated by the red six-point star).", "(b) is an example of a 2D spectrum, where the absorption ($\omega _{\tau }$) and emission ($\omega _{t}$) frequencies are obtained from the Fourier transformation with respect to $\tau $ and $t$, respectively.", "The two red crosses on the diagonal (dashed line) mirror the two singularities in the 1D spectrum shown above and to the right of the 2D spectrum.", "The two orange crosses on the off-diagonal are called cross-peaks, revealing the correlations between the two singularities.", "(c) shows a pulse sequence for the 1D Ramsey scheme, (d) for the 1D spin-echo scheme, and (e) for the 2D spin-echo scheme.", "(f) and (g) are EXSY pulse schemes with different pulse phases, which we call EXSY$+$ and EXSY$-$, respectively.", "We also propose a realistic experimental scheme to measure the MD spectroscopy via a generalization of Ramsey spectroscopy.", "Ramsey spectroscopy is another technique similar to the 
NMR, which manipulates internal (e.g., pseudospin instead of spin) degrees of freedom and observes the interference determined by the surrounding many-body environment.", "Ramsey spectroscopy has found many vital applications in investigating many-body physics: characterizing quantum correlations [79], [80], [81], [82], measuring topological invariance [83], [84], accessing many-body interactions and beyond mean-field effects [85], [86], [87], [88], [89], [90], and studying impurity dynamics closely related to polaron physics [91], [70], [71], [92].", "Ramsey spectroscopy has become a well-established experimental technique in ultracold atomic gases [30], thanks to the unprecedented controllability and rich toolbox atomic physics provides [93], [69].", "However, to the best of our knowledge, all previous studies of Ramsey spectroscopies are 1D.", "Our work generalizes the Ramsey spectroscopy to multidimensional, opening the door to exploring high-order many-body correlations and beyond mean-field dynamics and providing a perfect meeting point for theoretical and experimental efforts to examine complex nonequilibrium responses that MD spectroscopy reveals.", "The rest of this paper is organized as follows.", "In the following section, we establish our general formalism and show how to apply the exact FDA approach to calculate MD spectroscopy.", "Section III is devoted to presenting our numerical results.", "Finally, we conclude our paper by discussing the physics and proposing future extensions in section IV." ], [ "Formalism", "The basic setup of our system is shown in Fig.", "REF (a).", "We place a localized fermionic or bosonic impurity (the big black ball) with two internal pseudospins (hyperfine) states $|\\uparrow \\rangle $ and $|\\downarrow \\rangle $ (illustrated by the black arrow) in the background of a single-component ultracold Fermi gas (the red dots).", "The localization of impurity can be either achieved by confinement of a deep optical lattice or treated as an approximation to an impurity atom with heavy mass.", "At ultralow temperature, the background Fermi gas is considered non-interacting.", "We also assume the fermionic background atoms do not interact with $|\\downarrow \\rangle $ , while $s$ -wave interactions dominate interaction with $|\\uparrow \\rangle $ .", "This interaction is characterized by the $s$ -wave scattering length $a$ and can be tuned via, e.g., Feshbach resonances [69].", "The general spirit of our scheme is similar to the original Ramsey interferometry, where one uses radio-frequency (RF) pulses to manipulate the superposition of pseudospin states.", "Throughout this work, we assume the RF pulses to be infinitely fast rotations of pseudospin that do not perturb the background Fermi gas.", "After some time of evolution, dynamical phases accumulate for different pseudospin states, which reflects the many-body responses to the different impurity-background interactions and can be measured by the interference.", "However, there is one crucial difference in our scheme: we use multiple pulses with different time delays in between to drive the pseudo-spin through many different quantum pathways that give nonlinear many-body responses in an MD spectroscopy.", "One example of a three-pulse scheme is shown in Fig.", "REF (a), which is similar to one of the most common 2D NMR pulse sequences, namely EXSY (EXchange SpectroscopY).", "In this scheme, we prepare the impurity in the non-interacting state $|\\downarrow \\rangle $ initially, and apply the first $\\pi 
/2$ pulse that rotates the pseudospin state to $(|\uparrow \rangle +|\downarrow \rangle )/\sqrt{2}$ .", "After some time $\tau $ , we apply the second pulse.", "Subsequently, we wait for another period of time, $T$ , before applying the third pulse and carry out a detection some time $t$ afterward.", "Following the same procedure as EXSY in NMR, we take the Fourier transformation with respect to both time variables $\tau $ and $t$ to generate a 2D spectrum as a function of an absorption frequency $\omega _{\tau }$ and an emission frequency $\omega _{t}$ , respectively, whose physical interpretation will become clear later (in the last paragraph of section II).", "The mixing time $T$ allows many-body dynamical evolution between absorption and emission.", "Figure REF (b) sketches a 2D spectrum in the box, where diagonal peaks (red crosses) on the dashed line mirror the singularities in the linear 1D spectrum (shown on the top and to the right of the box).", "Coupled resonances give rise to the off-diagonal cross-peaks (orange crosses) with the absorption frequency of one resonance and the emission frequency of the other, whereas uncorrelated resonances produce no cross-peaks.", "The cross-peaks are thus the signature of correlations between resonances, which the 1D spectrum cannot reveal.", "Figure REF (c)-(g) shows several different pulse schemes investigated in this work.", "Most of the pulses in this work are $\pi /2$ pulses, except for the second pulse in the scheme of Fig.", "REF (g), which is a $-\pi /2$ pulse.", "For convenience, we name (f) and (g) EXSY$+$ and EXSY$-$ , respectively.", "Using the unit $\hbar =1$ hereafter, we write the many-body Hamiltonian as $\hat{\mathcal {H}}=\mathcal {\hat{H}}_{\uparrow }|\uparrow \rangle \langle \uparrow |+\mathcal {\hat{H}}_{\downarrow }|\downarrow \rangle \langle \downarrow |,$ where the non-interacting ($\mathcal {H}_{\downarrow }$ ) and interacting ($\mathcal {H}_{\uparrow }$ ) Hamiltonians are given by $\mathcal {\hat{H}}_{\downarrow }=\sum _{\mathbf {k}}\epsilon _{\mathbf {k}}c_{\mathbf {k}}^{\dagger }c_{\mathbf {k}}$ and $\hat{\mathcal {H}}_{\uparrow }=\mathcal {\hat{H}}_{\downarrow }+\sum _{\mathbf {k},\mathbf {q}}\tilde{V}(\mathbf {k}-\mathbf {q})c_{\mathbf {k}}^{\dagger }c_{\mathbf {q}}+\omega _{s}$ .", "Here, $\omega _{s}$ denotes the energy difference between the two pseudospin levels.", "$c_{\mathbf {k}}^{\dagger }$ and $c_{\mathbf {k}}$ are creation and annihilation operators of the background fermions with momentum $\mathbf {k}$ , respectively.", "$\epsilon _{\mathbf {k}}=k^{2}/2m$ is the single-particle kinetic energy of the background fermions with mass $m$ .", "$\tilde{V}(\mathbf {k})$ is the Fourier transform of $V(\mathbf {r})$ , the interaction potential between $|\uparrow \rangle $ and the background fermions.", "Initially, we prepare the impurity in $|\downarrow \rangle $ and the background fermions at some temperature $T^{\circ }$ .", "The background fermions can be described by a thermal density matrix $\rho _{{\rm FS}}=\exp [-(\mathcal {\hat{H}}_{\downarrow }-\mu \hat{N})/k_{B}T^{\circ }]/Z_{{\rm FS}}$ , where $\hat{N}=\sum _{\mathbf {k}}c_{\mathbf {k}}^{\dagger }c_{\mathbf {k}}$ is the number operator, $Z_{{\rm FS}}$ is a normalization constant, and $k_{B}$ is the Boltzmann constant.", "Here, $\mu \simeq E_{F}$ is the chemical potential determined by the number density $n$ , and $E_{F}$ is the Fermi energy that also gives
a typical many-body time-scale $\\tau _{F}=E_{F}^{-1}$ and momentum scale $k_{F}=\\sqrt{2mE_{F}}$ .", "We aim to investigate the dynamics of the system under multiple RF pulses with different time delays in between.", "As a concrete example, we focus on a three-pulse EXSY$+$ scheme, as illustrated in Fig.", "REF (f).", "The RF pulses can manipulate the spin-state of the impurity within a much shorter time than the intrinsic time scales of the background fermions $\\tau _{F}$ .", "As a result, one can neglect the evolution of the Fermi sea during the pulse and describe the pulse's effect as a rotation of the impurity's spin state.", "For example, a pulse that achieves a $\\pi /2$ rotation can be defined as $R(\\pi /2)\\equiv \\left(\\begin{array}{cc}R_{\\uparrow \\uparrow }^{(\\pi /2)} & R_{\\uparrow \\downarrow }^{(\\pi /2)}\\\\R_{\\downarrow \\uparrow }^{(\\pi /2)} & R_{\\downarrow \\downarrow }^{(\\pi /2)}\\end{array}\\right)=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{cc}1 & 1\\\\-1 & 1\\end{array}\\right).$ The total time evolution in EXSY$+$ scheme is thus given by the unitary transformation $\\mathcal {U}(t,T,\\tau )=U(t)R(\\pi /2)U(T)R(\\pi /2)U(\\tau )R(\\pi /2),$ where $U(t^{\\prime })=\\left(\\begin{array}{cc}e^{-i\\hat{H}_{\\uparrow }t^{\\prime }} & 0\\\\0 & e^{-i\\hat{H}_{\\downarrow }t^{\\prime }}\\end{array}\\right)$ gives the time evolution in between pulses.", "We denote the initial state as $\\rho _{i}=\\rho _{{\\rm FS}}\\otimes |\\downarrow \\rangle \\langle \\downarrow |$ and arrive at the final density matrix as $\\rho _{f}=\\mathcal {U}\\rho _{i}\\mathcal {U}^{\\dagger }$ .", "We can define a multidimensional response function in the time domain, $S(\\tau ,T,t)$ , by measuring ${\\rm Re}[S(\\tau ,T,t)]=-\\mathrm {Tr}\\left(\\sigma _{x}\\rho _{f}\\right),\\ {\\rm Im}[S(\\tau ,T,t)]=-\\mathrm {Tr}\\left(\\sigma _{y}\\rho _{f}\\right),$ where $\\sigma _{x}$ and $\\sigma _{y}$ are the usual Pauli matrices in the spin-basis.", "A tedious but straightforward manipulation of algebra can give a close form $S(\\tau ,T,t)=\\sum _{i=1}^{16}S_{i}(\\tau ,T,t)\\equiv \\frac{1}{4}\\sum _{i=1}^{16}\\mathrm {Tr}[I_{i}(\\tau ,T,t)\\rho _{{\\rm FS}}].$ Here, $I_{i}(\\tau ,T,t)$ , which we name as pathways, are a direct product of six operators in the form of $e^{\\pm i\\mathcal {\\hat{H}}t}$ , $I_{i}(\\tau ,T,t)=c_{\\vec{\\sigma }_{i}}e^{i\\mathcal {H}_{\\sigma _{1i}^{\\prime }}\\tau }e^{i\\mathcal {H}_{\\sigma _{2i}^{\\prime }}T}e^{i\\mathcal {H}_{\\uparrow }t}e^{-i\\mathcal {H}_{\\downarrow }t}e^{-i\\mathcal {H}_{\\sigma _{2i}}T}e^{-i\\mathcal {H}_{\\sigma _{1i}}\\tau },$ where $\\vec{\\sigma }_{i}\\equiv (\\sigma _{1i},\\sigma _{2i},\\sigma _{1i}^{\\prime },\\sigma _{2i}^{\\prime })$ is a collective index that takes sixteen different combinations, and $c_{\\vec{\\sigma }_{i}}=-8R_{\\uparrow \\sigma _{2i}^{\\prime }}^{(\\pi /2)}R_{\\sigma _{2i}^{\\prime }\\sigma _{1i}^{\\prime }}^{(\\pi /2)}R_{\\sigma _{1}^{\\prime }\\downarrow }^{(\\pi /2)}R_{\\downarrow \\sigma _{2i}}^{(\\pi /2)}R_{\\sigma _{2i}\\sigma _{1i}}^{(\\pi /2)}R_{\\sigma _{1i}\\downarrow }^{(\\pi /2)}$ are coefficients that take values of $\\pm 1$ .", "Here, we have applied the relation $R(\\pi /2)^{-1}=R(-\\pi /2)=R(\\pi /2)^{T}$ for the derivation, where the superscript $T$ denotes the transpose of a matrix (and should not be confused with mixing time $T$ ).", "The sorting of the pathways can be arranged arbitrarily for convenience.", "Here, our first four pathways are chosen as $I_{1}(\\tau ,T,t)=e^{i\\hat{\\mathcal 
{H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{2}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{3}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ and $I_{4}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }.$ The expressions for other twelve pathways can be found in Appendix .", "The contribution of each pathway, $S_{i}(\\tau ,T,t)$ , can be calculated exactly via FDA.", "To proceed, we define $\\mathcal {H}_{\\downarrow }\\equiv \\Gamma (h_{\\downarrow })$ and $\\mathcal {H}_{\\uparrow }\\equiv \\Gamma (h_{\\uparrow })+\\omega _{s}$ .", "Here $\\Gamma (h)\\equiv \\sum _{\\mathbf {k},\\mathbf {q}}h_{\\mathbf {k}\\mathbf {q}}c_{\\mathbf {k}}^{\\dagger }c_{\\mathbf {q}}$ is a bilinear fermionic many-body Hamiltonian in the Fock space, and $h_{\\mathbf {k}\\mathbf {q}}$ represents the matrix elements of the corresponding operator in the single-particle Hilbert space.", "These matrix elements are explicitly given by $(h_{\\downarrow })_{\\mathbf {k}\\mathbf {q}}=\\epsilon _{\\mathbf {k}}\\delta _{\\mathbf {k}\\mathbf {q}}$ and $(h_{\\uparrow })_{\\mathbf {k}\\mathbf {q}}=\\epsilon _{\\mathbf {k}}\\delta _{\\mathbf {k}\\mathbf {q}}+\\tilde{V}(\\mathbf {k}-\\mathbf {q})$ .", "With these definitions, we can rewrite $S_{i}(\\tau ,T,t)=\\frac{1}{4}\\tilde{S}_{i}(\\tau ,T,t)e^{-i\\omega _{s}f_{i}(t,T,\\tau )},$ where $e^{-i\\omega _{s}f_{i}(t,T,\\tau )}$ gives a simple phase and $\\tilde{S}_{i}(\\tau ,T,t)$ is a product of the exponentials of the bilinear fermionic operator, both of which can be calculated exactly.", "For example, we have $S_{1}(\\tau ,T,t)=\\tilde{S}_{1}(\\tau ,T,t)e^{i\\omega _{s}t}e^{-i\\omega _{s}\\tau }/4$ , where $\\begin{aligned}\\tilde{S}_{1}(\\tau ,T,t)= & {\\rm Tr}[e^{i\\Gamma (h_{\\downarrow })\\tau }e^{i\\Gamma (h_{\\uparrow })T}e^{i\\Gamma (h_{\\uparrow })t}\\times \\\\& e^{-i\\Gamma (h_{\\downarrow })t}e^{-i\\Gamma (h_{\\uparrow })T}e^{-i\\Gamma (h_{\\uparrow })\\tau }\\rho _{{\\rm FS}}]\\end{aligned}.$ Applying Levitov’s formula gives $\\tilde{S}_{1}(\\tau ,T,t)={\\rm det}[(1-\\hat{n})+R_{1}(\\tau ,T,t)\\hat{n}],$ with $R_{1}(\\tau ,T,t)=e^{ih_{\\downarrow }\\tau }e^{ih_{\\uparrow }T}e^{ih_{\\uparrow }t}e^{-ih_{\\downarrow }t}e^{-ih_{\\uparrow }T}e^{-ih_{\\uparrow }\\tau },$ and $\\hat{n}=n_{\\mathbf {k}}\\delta _{\\mathbf {k}\\mathbf {k}^{\\prime }}$ , where $n_{\\mathbf {k}}=1/(e^{\\epsilon _{\\mathbf {k}}/k_{B}T^{\\circ }}+1)$ denotes the single-particle occupation number operator.", "Calculations of other pathway contributions are similar, which are presented in Appendix .", "Numerical calculations are carried out in a finite system confined in a sphere of radius $R$ .", "Keeping the density constant, we increase $R$ towards 
infinity until the numerical results converge.", "Typically, we choose $k_{F}R=250\pi $ in a calculation.", "We focus on the $s$ -wave interaction channel between $|\uparrow \rangle $ and the background fermions near a broad Feshbach resonance, which can be well mimicked by a spherically symmetric and short-range van-der-Waals type potential $V(r)=-C_{6}\exp (-r^{6}/r_{0}^{6})/r^{6}$ .", "Here, $C_{6}$ determines the van-der-Waals length $l_{{\rm vdW}}=(2mC_{6})^{1/4}/2$ , and we choose $l_{{\rm vdW}}k_{F}=0.01\ll 1$ , so the short-range details are unimportant.", "The low-temperature many-body physics can be determined by the energy-dependent $s$ -wave scattering length $a(E_{F})=-\tan \eta (k_{F})/k_{F}$ at the Fermi energy $E_{F}$ , with $\eta (k_{F})$ being the energy-dependent $s$ -wave scattering phase shift, tuned by adjusting $r_{0}$ .", "For simplicity of notation, we denote $a\equiv a(E_{F})$ hereafter.", "Consequently, $\tilde{S}_{i}(t,T,\tau )$ is a universal function of $k_{B}T^{\circ }/E_{F}$ , $k_{F}a$ , $t/\tau _{F}$ , $T/\tau _{F}$ , and $\tau /\tau _{F}$ in the whole time domain.", "A summation of the contributions of all pathways gives the total response $S(t,T,\tau )$ , and the spectrum in the frequency domain can be obtained via a double Fourier transformation $A(\omega _{\tau },T,\omega _{t})=\frac{1}{\pi ^{2}}\int _{0}^{\infty }\int _{0}^{\infty }dtd\tau e^{i\omega _{\tau }\tau }S(\tau ,T,t)e^{-i\omega _{t}t},$ where $\omega _{\tau }$ and $\omega _{t}$ are interpreted as an absorption and an emission frequency, respectively.", "On the other hand, the $T$ -dependence of $A(\omega _{\tau },T,\omega _{t})$ can reveal the many-body coherent and incoherent dynamics.", "We notice that $A(\omega _{\tau },T,\omega _{t})=\sum _{i=1}^{16}A_{i}(\omega _{\tau },T,\omega _{t})$ can also be expressed as a summation of sixteen pathways, where the expression of each pathway is given by Eq.", "(REF ), with $A$ and $S$ replaced by $A_{i}$ and $S_{i}$ , respectively.", "We emphasize that the MD spectroscopy contains all the information of the 1D spectroscopy.", "For example, one can examine the $T=t=0$ case, where the pulse scheme becomes the same as the original 1D Ramsey scheme shown in Fig.", "REF (c).", "In this case, $S(\tau ,T=0,t=0)$ reduces to the 1D Ramsey response function $S_{a}(\tau )=\mathrm {Tr}(e^{i\mathcal {\hat{H}}_{\downarrow }\tau }e^{-i\mathcal {\hat{H}}_{\uparrow }\tau }\rho _{{\rm FS}})$ , which is also called the time-dependent overlap function.", "Similarly, we have $S_{e}(t)\equiv S(\tau =0,T=0,t)=S_{a}^{*}(t)$ , where the superscript $^{*}$ denotes the complex conjugate.", "Correspondingly, we have $\int d\omega _{t}A(\omega _{\tau },T=0,\omega _{t})=A_{a}(\omega _{\tau })$ , where $A_{a}(\omega _{\tau })=\int d\tau S_{a}(\tau )e^{i\omega _{\tau }\tau }/\pi $ is the 1D absorption spectrum.", "Similarly, we have $A_{e}(\omega _{t})=\int d\omega _{\tau }A(\omega _{\tau },T=0,\omega _{t})=A_{a}^{*}(\omega _{t})$ .", "Since $A_{a}(\omega _{\tau })$ is the absorption spectrum, its complex conjugate $A_{e}(\omega _{t})$ can thus be interpreted as an emission spectrum.", "These interpretations are consistent with the fact that the integration of $A(\omega _{\tau },T=0,\omega _{t})$ over the emission frequency $\omega _{t}$ gives the 1D absorption spectrum $A_{a}(\omega _{\tau })$ and vice versa.", "The physical process underlying $A(\omega _{\tau },T,\omega _{t})$ can be interpreted as 
follows: the system first gets excited by absorbing a photon with frequency $\omega _{\tau }$ and, after a period of mixing time $T$ , emits a photon with frequency $\omega _{t}$ ." ], [ "Two-dimensional spin-echo response", "Let us first investigate a relatively simple situation, $T=0$ , which is equivalent to the two-pulse scheme illustrated in Fig.", "REF (e).", "The response function in the time domain is given by $S_{o}(\tau ,t)\equiv S(\tau ,T=0,t)={\rm Tr}[I_{o}(\tau ,t)\rho _{{\rm FS}}],$ where $I_{o}(\tau ,t)=e^{i\hat{\mathcal {H}}_{\downarrow }\tau }e^{i\hat{\mathcal {H}}_{\uparrow }t}e^{-i\hat{\mathcal {H}}_{\downarrow }t}e^{-i\hat{\mathcal {H}}_{\uparrow }\tau }$ .", "We notice that when $t=\tau $ , the scheme is equivalent to the 1D spin-echo scheme investigated in Refs.", "[70], [71] and illustrated in Fig.", "REF (d); hence, we name our scheme the 2D spin-echo scheme.", "One can check that $S_{o}(t,t)$ reduces to the 1D spin-echo response $S_{o}(t)=\mathrm {Tr}[e^{i\mathcal {H}_{\downarrow }t}e^{i\mathcal {H}_{\uparrow }t}e^{-i\mathcal {H}_{\downarrow }t}e^{-i\mathcal {H}_{\uparrow }t}\rho _{{\rm FS}}]$ .", "While we can also calculate $S_{o}(\tau ,t)$ by using $S(\tau ,T,t)$ in Eq.", "(REF ) with $T=0$ , a direct calculation of Eq.", "(REF ) is more convenient.", "The expression of the 2D spin-echo response can be written as $S_{o}(\tau ,t)=e^{i\omega _{s}t}\tilde{S}_{o}(\tau ,t)e^{-i\omega _{s}\tau }$ , where $\tilde{S}_{o}(\tau ,t)={\rm Tr}[e^{i\Gamma (h_{\downarrow })\tau }e^{i\Gamma (h_{\uparrow })t}e^{-i\Gamma (h_{\downarrow })t}e^{-i\Gamma (h_{\uparrow })\tau }\rho _{{\rm FS}}]$ can be calculated exactly by applying Levitov’s formula in the FDA, $\tilde{S}_{o}(\tau ,t)={\rm det}[(1-\hat{n})+R_{o}(\tau ,t)\hat{n}]$ with $R_{o}(\tau ,t)=e^{ih_{\downarrow }\tau }e^{ih_{\uparrow }t}e^{-ih_{\downarrow }t}e^{-ih_{\uparrow }\tau }.$ Examples of this universal 2D response function $|S_{o}(t,\tau )|=|\tilde{S}_{o}(t,\tau )|$ with parameters $k_{F}a=-0.5$ and $k_{F}a=0.5$ at a finite temperature $k_{B}T^{\circ }=0.03E_{F}$ are shown in Fig.", "REF (a) and (b), respectively.", "The solid and dash-dotted curves in Fig.", "REF (c) show $|S_{o}(\tau ,t=0)|$ and $|S_{o}(\tau ,t=\tau )|$ as a function of $\tau $ , i.e., the slices of (a) along the x-axis and the diagonal, respectively.", "Figure REF (d) shows the same slices for (b).", "Fig.", "REF (c) indicates that $S_{o}(\tau ,t=0)$ and $S_{o}(\tau ,t=\tau )$ reduce to the 1D Ramsey response $S_{a}(\tau )$ and the 1D spin-echo signal $S_{o}(\tau )$ , respectively."
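To illustrate how the determinant formula and the subsequent double Fourier transformation can be evaluated in practice, here is a minimal NumPy sketch in a small toy single-particle basis. The basis, the interaction matrix, the temperature, and the time grid are illustrative assumptions and do not reproduce the s-wave sphere construction with the van der Waals potential described in the text; the sketch works in shifted frequencies, i.e., the trivial $e^{i\omega_s t}e^{-i\omega_s\tau}$ phase is dropped.

```python
import numpy as np

# Toy FDA sketch: S~_o(tau, t) = det[(1 - n) + R_o(tau, t) n], then a crude
# discretization of the double Fourier transform to the 2D spectrum A_o.

M = 60                                     # number of single-particle modes (toy basis)
eps = np.linspace(0.0, 4.0, M)             # kinetic energies in units of E_F
h_down = np.diag(eps)                      # non-interacting h_down
h_up = h_down - 0.05 * np.ones((M, M))     # toy short-range attractive coupling for h_up

kT = 0.03                                  # k_B T / E_F
n_occ = np.diag(1.0 / (np.exp((eps - 1.0) / kT) + 1.0))    # Fermi-Dirac occupations, mu ~ E_F

Ed, Ud = np.linalg.eigh(h_down)
Eu, Uu = np.linalg.eigh(h_up)

def evolve(E, U, s):
    """exp(i h s) built from the eigendecomposition h = U diag(E) U^dagger."""
    return (U * np.exp(1j * E * s)) @ U.conj().T

def S_tilde_o(tau, t):
    R = (evolve(Ed, Ud, tau) @ evolve(Eu, Uu, t)
         @ evolve(Ed, Ud, -t) @ evolve(Eu, Uu, -tau))
    return np.linalg.det(np.eye(M) - n_occ + R @ n_occ)

times = np.linspace(0.0, 30.0, 96)         # tau and t grids, in units of 1/E_F
S = np.array([[S_tilde_o(tau, t) for t in times] for tau in times])

# e^{+i w_tau tau} along axis 0 and e^{-i w_t t} along axis 1, up to grid factors.
dt = times[1] - times[0]
A = np.fft.fftshift(np.fft.ifft(np.fft.fft(S, axis=1), axis=0)) * len(times) * dt**2 / np.pi**2
```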
], [ "2D spin-echo spectrum", "The 2D spin-echo spectrum in the frequency domain can be obtained by applying a double Fourier transformation, Eq.", "(REF ), with the relation $A_{o}(\\omega _{\\tau },\\omega _{t})=A(\\omega _{\\tau },T=0,\\omega _{t})$ .", "One can immediately observe that $A_{o}(\\omega _{\\tau },\\omega _{t})=\\int _{0}^{\\infty }\\int _{0}^{\\infty }dtd\\tau e^{i\\tilde{\\omega }_{\\tau }\\tau }\\tilde{S}_{o}(\\tau ,t)e^{-i\\tilde{\\omega }_{t}t}/\\pi ^{2}$ with $\\tilde{\\omega }_{\\tau }=\\omega _{\\tau }-\\omega _{s}$ and $\\tilde{\\omega }_{t}=\\omega _{t}-\\omega _{s}$ , i.e., the energy differences between two spin-states only give simple shifts of frequencies.", "Hereafter, unless specified otherwise, we denote $\\tilde{\\omega }=\\omega -\\omega _{s}$ for any frequency variable $\\omega $ .", "Figure REF shows our finite temperature ($k_{B}T^{\\circ }=0.03E_{F}$ ) results for attractive $k_{F}a=-0.5$ and repulsive $k_{F}a=0.5$ interactions in (a1)-(a4) and (b1)-(b4), respectively.", "We present the 2D contour of ${\\rm Re}[A_{o}(\\omega _{\\tau },\\omega _{t})]$ as a function of $\\tilde{\\omega }_{\\tau }$ and $\\tilde{\\omega }_{t}$ in Figs.", "REF (a2) and (b2) and the corresponding 3D landscape in Figs.", "REF (a3) and (b3).", "Here, we denote ${\\rm Re}[\\mathcal {C}]$ and ${\\rm Im}[\\mathcal {C}]$ as the real and imaginary parts of $\\mathcal {C}$ , respectively.", "For comparison, we also show the 1D spectra ${\\rm Re}[A_{a}(\\omega _{\\tau })]$ in Figs.", "REF (a1) and (b1) and ${\\rm Re}[A_{e}(\\omega _{t})]={\\rm Re}[A_{a}(\\omega _{\\tau })]$ in Figs.", "REF (a4) and (b4).", "For completeness, we also show the imaginary part and amplitude of the corresponding 2D spectroscopy in Fig.", "REF in Appendix .", "The absorption spectrum has been well studied before [70], [71].", "When the interaction is attractive $(k_{F}a<0)$ , only one power-law singularity ${\\rm Re}[A_{a}(\\omega _{\\tau })]\\sim \\theta (\\omega _{\\tau }-\\omega _{A-})|\\omega _{\\tau }-\\omega _{A-}|^{-a_{A-}}$ with exponent coefficient $a_{A-}>0$ appears near $\\omega _{A-}$ with a slight thermal broadening, as shown in Figs.", "REF (a1) and (a4).", "In contrast, Figs.", "REF (b1) and (b4) show two singularities for a repulsive interaction ($k_{F}a>0$ ).", "These singularities are understood as manifestations of the well-known Anderson's orthogonality catastrophe (OC).", "Due to the existence of multiple particle-hole excitations of the background Fermi sea induced by the infinitely massive impurity, the many-particle states with and without impurity interactions are orthogonal, which leads to a vanishing quasiparticle residue.", "Our recent studies further examined the scenario where a mechanism, such as a superfluid gap or finite impurity mass, suppresses those multiple particle-hole excitations [73], [74].", "In this case, OC can be prevented, and the singularities reduce back to the so-called attractive or repulsive Fermi polarons.", "At the same time, the “wings” attached to the singularity, indicated in Figs.", "REF (a1) and (b1), separate from the polaron signal and reduce to the so-called molecule-hole continuums.", "Because of their close relations to polaron resonances, we name these singularities as attractive and repulsive singularities and denote them by $A$ and $R$ , respectively, in Figs.", "REF (a1), (a4), (b1), and (b4).", "The 2D spectrum in Figs.", "REF (a2) and (a3) shows a double dispersion lineshape commonly found in 2D NMR around $(\\tilde{\\omega }_{\\tau 
},\\tilde{\\omega }_{t})\\approx (\\tilde{\\omega }_{A-},\\tilde{\\omega }_{A-})$ , which is called a diagonal peak denoted as $AA$ .", "For attractive interaction $k_{F}a=-0.5$ , the attractive singularity appears at $\\tilde{\\omega }_{A-}\\approx -0.28E_{F}$ in the absorption spectrum.", "We have numerically verified that the integration of 2D spectroscopy over emission frequency $\\omega _{t}$ gives the 1D absorption spectrum $A_{a}(\\omega _{\\tau })$ (not shown here).", "Interestingly, we can observe that there is no diagonal spectral weight corresponding to the wing.", "Rather, the spectral weight on the off-diagonal $A_{o}(\\omega _{\\tau },\\omega _{t}\\approx \\omega _{A-})$ and $A_{o}(\\omega _{\\tau }\\approx \\omega _{A-},\\omega _{t})$ is significant and resembles the lineshape of the wing.", "This is a non-trivial manifestation of OC in the 2D spectroscopy: the inhomogeneous and homogeneous lineshape does not have the OC characteristic.", "Here, the inhomogeneous and homogeneous lineshape refer to the lineshape near a singularity along the diagonal or the direction perpendicular to the diagonal, which is better illustrated in the amplitude of 2D spectroscopy shown in Fig.", "REF (c) in Appendix .", "As we can observe, the widths of the singularity are much sharper along these two directions, which might help experimental identification of the singularity, especially at finite temperatures.", "The homogeneous and inhomogeneous broadenings in MD spectroscopy also have their own experimental significance, similar to their NMR or optical counterpart.", "In a realistic experiment, the ensemble average of the impurity signal can give rise to a further inhomogeneous broadening induced by the disorder of the local environment (such as spatial magnetic field fluctuation).", "However, these disorders are usually non-correlated and would not introduce homogeneous broadening [6], [16], [15].", "For repulsive interaction $k_{F}a=0.5$ , there are two singularities, the attractive and repulsive singularities, in the 1D absorption spectrum.", "These singularities appear at $\\tilde{\\omega }_{A+}\\approx -0.98E_{F}$ and $\\tilde{\\omega }_{R+}\\approx 0.28E_{F}$ in Figs.", "REF (b1) and (b4).", "As shown in Fig.", "REF (b2) and (b3), there are two diagonal peaks, $AA$ and $RR$ , in the 2D spectroscopy that mirror the attractive and repulsive singularities.", "In addition, there are also two significant cross-peaks, $AR$ and $RA$ , which indicate a strong many-body quantum correlation between the attractive and repulsive singularity.", "As far as we know, this is the first prediction of many-body correlations between Fermi singularities in cold atom systems.", "If the impurity has a finite mass or the background Fermi gas is replaced by a superfluid with an excitation gap, we believe these cross-peaks would remain and represent the correlations between attractive and repulsive polarons." 
], [ "2D EXSY spectrum", "In this section, we focus on the 2D spectrum, $A(\\omega _{\\tau },T,\\omega _{t})$ , of the EXSY$+$ pulse scheme illustrated by Fig.", "REF (f), which can be exactly calculated by Eqs.", "(REF ), (REF ) and (REF ) with the FDA.", "As mentioned above, in a 2D spin-echo spectrum (and the 1D spectra), the trivial energy difference between $|\\downarrow \\rangle $ and $|\\uparrow \\rangle $ , $\\omega _{s}$ , only introduces a frequency shift of the spectra as $(\\omega _{\\tau },\\omega _{t})\\rightarrow (\\tilde{\\omega }_{\\tau },\\tilde{\\omega }_{t})$ .", "In contrast, the scenario is a bit more complicated for the EXSY$+$ spectrum, where the spectrum can be expressed as a summation of sixteen pathway contributions, and each pathway is associated with a different phase $e^{-i\\omega _{s}f_{i}(t,T,\\tau )}$ in Eq.", "(REF ) (see Appendix for details).", "Consequently, each pathway contribution $A_{i}(\\omega _{\\tau },T,\\omega _{t})$ has shifted to different centers in the frequency domain accordingly.", "The features of Fermi singularities, in general, lie within a frequency range of a few Fermi energy $E_{F}$ around $(\\tilde{\\omega }_{\\tau },\\tilde{\\omega }_{t})=(0,0)$ in the 2D spectrum $A(\\omega _{\\tau },T,\\omega _{t})$ .", "In addition, for a typical ultracold experiment, $\\omega _{s}$ is usually much larger than the Fermi energy $E_{F}$ .", "As a result, only the first four pathways associated with $e^{i\\omega _{s}t}e^{-i\\omega _{s}\\tau }$ would give a non-negligible contribution, i.e., $A(\\omega _{\\tau },T,\\omega _{t})\\approx \\tilde{A}(\\omega _{\\tau },T,\\omega _{t})=\\sum _{i=1}^{4}A_{i}(\\omega _{\\tau },T,\\omega _{t})$ , within the frequency range in interest.", "A comparison between $A(\\omega _{\\tau },T,\\omega _{t})$ and $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})$ for $\\omega _{s}/E_{F}=2\\pi $ is shown in Appendix , where perfect agreement is observed.", "Such reduction of pathways not only allows a faster calculation but also helps us to identify the important pathways and further separate them using a so-called “phase cycling” technique detailed in the next section.", "We find that the general landscape of the 2D spectrum $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})$ also shows strong off-diagonal contributions and cross-peaks (see Fig.", "REF , for example), similar to the 2D spin-echo spectrum $A_{o}(\\omega _{\\tau },\\omega _{t})$ .", "However, the dependency of $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})$ on the mixing time $T$ can give us further information on the many-body coherent and incoherent dynamics.", "There is one additional complication, though: we observe a fast oscillation with frequency $\\omega _{s}$ in the $T$ -dependency of $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})$ , which originates from the interferences between the contribution of $I_{3}$ and $I_{4}$ , that is proportional to $e^{-i\\omega _{s}T}$ and $e^{i\\omega _{s}T}$ respectively.", "We are not interested in this trivial oscillation.", "Instead, we would like to investigate the dynamic in the time scale of $\\tau _{F}$ and choose to study the signals at $T_{M}\\omega _{s}=2\\pi M$ , where $M$ is an integer.", "Notice that since $\\omega _{s}\\gg E_{F}$ , $T_{M}/\\tau _{F}$ can be considered to be almost continuous.", "As we will see later, this choice of $T_{M}$ is not necessary if we apply the phase cycling to separate the pathways.", "Figure REF (a) shows the real and imaginary part of $A(\\omega _{\\tau }\\approx \\omega 
_{A-},T,\\omega _{t}\\approx \\omega _{A-})$ for $k_{F}a=-0.5$ as cross and circle symbols, showing a damping oscillation behavior at a late time.", "This long-time behavior can be fitted perfectly with a formula $F(T)=A_{a}\\cos (\\omega _{a}T+\\varphi _{a})\\exp (-T/T_{a})+B$ for the real and imaginary parts separately, both of which give $\\omega _{a}\\approx 0.28E_{F}$ and $T_{a}\\approx 80\\tau _{F}$ .", "We also find this damping oscillation behavior with the same $\\omega _{a}$ and $T_{a}$ at other parts of the spectrum.", "One can recognize $\\omega _{a}\\approx |\\omega _{A-}|$ and the damping lifetime $T_{a}$ reflects a non-coherent many-body dynamic, which might be related to the finite temperature $k_{B}T^{\\circ }\\approx 0.03E_{F}$ .", "Figure REF (b) shows the $T$ -dependence of ${\\rm Re}[\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})]$ with $k_{F}a=0.5$ for $AR$ and $RA$ cross-peaks as cross and circle symbols, which can be fitted by a combination of two damping oscillations $F(T)=A_{a}\\cos (\\omega _{a}T+\\varphi _{a})\\exp (-T/T_{a})+A_{r}\\cos (\\omega _{r}T+\\varphi _{r})\\exp (-T/T_{r})+B$ illustrated by the solid curves.", "The numerical fitting gives $\\omega _{a}\\approx |\\omega _{A+}|$ , $\\omega _{r}\\approx |\\omega _{R+}|$ , $T_{a}\\approx 30\\tau _{F}$ and $T_{r}\\approx 80\\tau _{F}$ .", "These long-time damping oscillations indicate the non-trivial relaxation process during the mixing time $T$ , induced by multiple particle-hole excitations.", "However, at a very early time, probably only a few particle-hole pairs have been excited, and higher order correlation has not been built up, which explains the deviation between the fitting results and numerical calculation.", "It is also interesting to notice that $T_{a}$ and $T_{r}$ are different, which implies there might be an intrinsic dynamical process between the attractive and repulsive singularities." 
], [ "Phase cycling", "Since the total spectrum is a summation of multiple pathway contributions, it is sometimes important to be able to separate and measure one or some of the pathway contributions.", "For example, we notice that the third and fourth pathways, $I_{3}(\\tau ,T,t)=...e^{i\\mathcal {H}_{\\downarrow }T}...e^{-i\\mathcal {H}_{\\uparrow }T}...$ and $I_{4}(\\tau ,T,t)=...e^{i\\mathcal {H}_{\\uparrow }T}...e^{-i\\mathcal {H}_{\\downarrow }T}...$ , are the ones responsible for a fast oscillation in the mixing time $T$ with the trivial frequency $\\omega _{s}$ .", "In addition, we have $I_{3}(\\tau ,T,t)=I_{o}(\\tau +T,t)$ and $I_{4}(\\tau ,T,t)=I_{o}(\\tau ,T+t)$ , implying these two pathways give the same information as the 2D spin-echo sequence.", "Therefore, it would be interesting to be able to eliminate the contributions of $I_{3}$ and $I_{4}$ , which can be achieved by following the same spirit as phase cycling, an important technique in NMR.", "Phase cycling is a technique that uses a linear combination of signals (with possibly different weights) from different pulse schemes to select the contribution of one or few coherent pathways.", "As a concrete example, we define $\\mathcal {A}(\\omega _{\\tau },T,\\omega _{t})\\equiv \\tilde{A}(\\omega _{\\tau },T,\\omega _{t})-\\tilde{A}^{-}(\\omega _{\\tau },T,\\omega _{t})$ , where $\\tilde{A}^{-}(\\omega _{\\tau },T,\\omega _{t})$ is the 2D Ramsey response for the EXSY$-$ pulse scheme indicated in Fig.", "REF (g).", "A manipulation of algebra gives (see Appendix ) $\\mathcal {A}(\\omega _{\\tau },T,\\omega _{t})=2A_{1}(\\omega _{\\tau },T,\\omega _{t})+2A_{2}(\\omega _{\\tau },T,\\omega _{t}),$ which only includes the first two pathways.", "One can immediately notice that both pathways, $I_{1}(\\tau ,T,t)$ and $I_{2}(\\tau ,T,t)$ , are independent of $\\omega _{s}$ , where we are no longer restricted to measuring signals at $T=T_{M}$ .", "The corresponding double Fourier transformation $\\mathcal {A}(\\omega _{\\tau },T,\\omega _{t})$ is studied in Fig.", "REF .", "For the attractive ($k_{F}a=-0.5$ ) interaction case shown in Fig.", "REF (a), a numerical fitting with formula $F(T)=A\\exp (-T/T_{a})+B$ indicates a pure exponential relaxation with lifetime $T_{a}\\approx 5\\tau _{F}$ .", "On the contrary, for the repulsive ($k_{F}a=+0.5$ ) interaction case shown in Fig.", "REF (a), a damping oscillation $F(T)=A_{\\Delta }\\cos (\\omega _{\\Delta }T_{\\Delta }+\\varphi _{\\Delta })\\exp (-T/T_{\\Delta })+B_{\\Delta }$ of $\\omega _{\\Delta }\\approx |\\omega _{R+}-\\omega _{A+}|$ and damping lifetime $T_{\\Delta }\\approx 10\\tau _{F}$ .", "This damping oscillation indicates the intrinsic coherent and incoherent many-body dynamics between the attractive and repulsive Fermi singularity.", "We notice that a similar damping oscillation between exciton-polarons in TMDs has previously been observed in the non-rephasing signal of an optical 2D spectroscopy [16] and explained by the nonlinear Golden Rule [17].", "More recently, this behavior has also been observed in a FDA study [68].", "To our knowledge, our result is the first prediction of the coherent and incoherent dynamic process between the two Fermi-edge singularities in ultracold gases.", "We also believe the same procedure can be applied to study the many-body dynamical process between attractive and repulsive polarons.", "In summary, we have investigated how to extend the 1D Ramsey spectrum to multidimensional, which goes beyond the linear response regime and can reveal 
correlations between many-body singularities and resonances.", "Multidimensional spectroscopy also allows us to investigate the many-body coherent and relaxation dynamics that are not accessible in 1D spectra.", "Such a scheme is especially suitable and accessible in the clean and controllable systems of ultracold gases.", "As a concrete example, we investigate the Fermi singularity problem and present a numerical exact many-body formalism for the simulation of the multidimensional Ramsey spectrum of a heavy impurity in a Fermi gas, both in the time domain and frequency domain.", "We believe this is the first investigation of the nonlinear responses in such systems and the first prediction of many-body correlations between attractive and repulsive singularities, remnants of polaron resonances destroyed by Anderson's orthogonal catastrophe.", "For the first time, we also predict the many-body coherent dynamic and relaxation between the two Fermi singularities.", "We believe these many-body correlations and dynamics should also exist between attractive and repulsive polarons, which can be calculated exactly if the background gas is a Bardeen–Cooper–Schrieffer superfluid [73], [74].", "Another approach would be to investigate mobile impurity with a Chevy ansatz.", "Although this is an approximated approach, it might lead to intuitive understanding.", "Finally, we argue that the application of multidimensional Ramsey spectroscopy should not be limited to impurity systems, and the same spirit can be generalized to other ultracold atom systems [79], [80], [81], [82], [83], [84], [85], [86], [87], [88], [89], [90]." ], [ "Acknowledgments", "We are grateful to Hui Hu and Xia-Ji Liu for their insightful discussions and critical reading of the manuscript.", "This research was supported by the Australian Research Council's (ARC) Discovery Program, Grants No.", "DE180100592 and No.", "DP190100815." 
], [ "Pathway contributions", "In the main text, Eq.", "(REF ) indicates that $S(\\tau ,T,t)$ can be written as a summation of sixteen different pathway contributions $S(\\tau ,T,t)=\\sum _{i=1}^{16}S_{i}(\\tau ,T,t)$ where $S_{i}(\\tau ,T,t)={\\rm Tr}[I_{i}(\\tau ,T,t)\\rho _{{\\rm FS}}]/4$ .", "The sixteen pathways $I_{i}(\\tau ,T,t)$ can be written out explicitly: $I_{1}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{2}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{3}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{4}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }.$ $I_{5}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau },$ $I_{6}(\\tau ,T,t)=-e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau },$ $I_{7}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau },$ $I_{8}(\\tau ,T,t)=-e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau },$ $I_{9}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{10}(\\tau ,T,t)=-e^{i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{11}(\\tau ,T,t)=-e^{i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{12}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow 
}t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }\\tau },$ $I_{13}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau },$ $I_{14}(\\tau ,T,t)=e^{i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau },$ $I_{15}(\\tau ,T,t)=-e^{i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau },$ $I_{16}(\\tau ,T,t)=-e^{i\\hat{\\mathcal {H}}_{\\uparrow }\\tau }e^{i\\hat{\\mathcal {H}}_{\\uparrow }T}e^{i\\hat{\\mathcal {H}}_{\\uparrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }t}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }T}e^{-i\\hat{\\mathcal {H}}_{\\downarrow }\\tau }.$ Since the dimensions of the many-body operator $\\mathcal {H}_{\\downarrow }$ and $\\mathcal {H}_{\\uparrow }$ grows exponentially with respect to the number of particle $N$ , a direct calculation of $S_{i}(\\tau ,T,t)$ is not accessible.", "However, by applying Levitov’s formula in FDA, we can show that $S_{i}(\\tau ,T,t)$ reduces to a determinant in a single-particle Hilbert space that grows only linearly to $N$ , allowing an in-principle exact calculation.", "For this purpose, we rewrite $\\mathcal {H}_{\\downarrow }\\equiv \\Gamma (h_{\\downarrow })$ and $\\mathcal {H}_{\\uparrow }\\equiv \\Gamma (h_{\\uparrow })+\\omega _{s}$ , where $\\Gamma (h)\\equiv \\sum _{\\mathbf {k},\\mathbf {q}}h_{\\mathbf {k}\\mathbf {q}}c_{\\mathbf {k}}^{\\dagger }c_{\\mathbf {q}}$ is a bilinear fermionic many-body operator, and $h_{\\mathbf {k}\\mathbf {q}}$ represents the matrix elements corresponding to the single-particle operator.", "As a result, each pathway's contribution can be written as $S_{i}(\\tau ,T,t)=\\tilde{S}_{i}(\\tau ,T,t)e^{-i\\omega _{s}f_{i}(t,T,\\tau )}/4,$ where $\\tilde{S}_{i}(\\tau ,T,t)={\\rm Tr}[\\tilde{I}_{i}(\\tau ,T,t)\\rho _{FS}]$ .", "Here, $\\tilde{I}_{i}(\\tau ,T,t)$ has the same expression as $I_{i}(\\tau ,T,t)$ but with $\\mathcal {H}_{\\downarrow }$ and $\\mathcal {H}_{\\uparrow }$ replaced by $\\Gamma (h_{\\downarrow })$ and $\\Gamma (h_{\\uparrow })$ , respectively.", "Since both $\\Gamma (h_{\\downarrow })$ and $\\Gamma (h_{\\uparrow })$ are fermionic bilinear operators, applying Levitov’s formula gives $\\tilde{S}_{i}(\\tau ,T,t)={\\rm det}[(1-\\hat{n})+R_{i}(\\tau ,T,t)\\hat{n}],$ where $R_{i}(\\tau ,T,t)$ has the same expression as $I_{i}(\\tau ,T,t)$ but with $\\mathcal {H}_{\\downarrow }$ and $\\mathcal {H}_{\\uparrow }$ replaced by $h_{\\downarrow }$ and $h_{\\uparrow }$ , respectively.", "The phase factor $e^{-i\\omega _{s}f_{i}(\\tau ,T,t)}$ can also be obtained by replacing $e^{\\pm \\mathcal {H}_{\\downarrow }t^{\\prime }}$ and $e^{\\pm i\\mathcal {H}_{\\uparrow }t^{\\prime }}$ with 1 and $e^{\\pm i\\omega _{s}t^{\\prime }}$ in the expression of $I_{i}(\\tau ,T,t)$ , where $t^{\\prime }$ can be $\\tau $ , $T$ , or $t$ .", "This expression is now ready for numerical calculation as mentioned in the main text.", "Figure: (a) shows the 2D 
spectroscopy $A(\\omega _{\\tau },T,\\omega _{t})$ that includes pathway contributions from all sixteen pathways, and (b) $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})$ only includes the first four.", "Solid and dashed curves in (c) compare $A(\\omega _{\\tau }=\\omega _{R},T,\\omega _{t})$ [the slice along the dashed line in (a)] and $\\tilde{A}(\\omega _{\\tau }=\\omega _{R},T,\\omega _{t})$, respectively, and show excellent agreement, essentially overlapping.", "(d) shows the same comparison for $A(\\omega _{\\tau },T,\\omega _{t}=\\omega _{R})$ and $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t}=\\omega _{R})$ [the slice along the dashed line in (b)].", "Other parameters are $k_{F}a=0.5$, $k_{B}T^{\\circ }=0.03E_{F}$, $\\omega _{s}/E_{F}=2\\pi $, and $T=5\\tau _{F}$.", "Figure: (a) and (b) show contour plots of the imaginary part of the 2D spin-echo spectrum $A_{o}(\\omega _{\\tau },\\omega _{t})$ for attractive $k_{F}a=-0.5$ and repulsive $k_{F}a=0.5$ interactions, respectively.", "(c) and (d) show the amplitude for $k_{F}a=-0.5$ and $k_{F}a=0.5$, respectively.", "The temperature is set as $k_{B}T^{\\circ }=0.03E_{F}$.", "While $\\tilde{S}_{i}(\\tau ,T,t)$ are universal functions of $k_{B}T^{\\circ }/E_{F}$, $k_{F}a$, $t/\\tau _{F}$, and $\\tau /\\tau _{F}$, the total response $S(\\tau ,T,t)$ involves the interference of the phase factors $e^{-i\\omega _{s}f_{i}(\\tau ,T,t)}$ among the contributions.", "The resulting oscillation at frequency $\\omega _{s}$ is not very interesting.", "Nevertheless, we find that only a few pathways contribute to the singularities of $A(\\omega _{\\tau },T,\\omega _{t})$ we are interested in.", "Our numerical results show that $\\tilde{S}_{i}(\\tau ,T,t)$ oscillates at frequency $\\sim E_{F}$, much slower than $\\omega _{s}\\gg E_{F}$ in usual ultracold experiments.", "Consequently, if we focus on the Fermi singularity features that appear at $|\\omega _{\\tau }-\\omega _{s}|,|\\omega _{t}-\\omega _{s}|\\sim E_{F}$ in the 2D spectrum $A(\\omega _{\\tau },T,\\omega _{t})$, only the pathways $I_{1}$, $I_{2}$, $I_{3}$, and $I_{4}$ associated with the phase factor $e^{i\\omega _{s}t}e^{-i\\omega _{s}\\tau }$ contribute.", "The phase $e^{i\\omega _{s}t}e^{-i\\omega _{s}\\tau }$ only gives rise to a simple frequency shift, $\\omega _{t}\\rightarrow \\tilde{\\omega }_{t}$ and $\\omega _{\\tau }\\rightarrow \\tilde{\\omega }_{\\tau }$.", "Figure REF compares $A(\\omega _{\\tau },T,\\omega _{t})$ and $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})=\\sum _{i=1}^{4}A_{i}(\\omega _{\\tau },T,\\omega _{t})$, which only includes contributions from the first four pathways, for a set of chosen parameters: $k_{F}a=0.5$, $k_{B}T^{\\circ }=0.03E_{F}$, $\\omega _{s}/E_{F}=2\\pi $, and $T=5\\tau _{F}$.", "Figure REF (a) and (b) show the whole 2D spectrum for $A(\\omega _{\\tau },T,\\omega _{t})$ and $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})$, respectively.", "The solid and dashed curves in Fig.", "REF (c) show the spectra $A(\\omega _{\\tau },T,\\omega _{t})$ and $\\tilde{A}(\\omega _{\\tau },T,\\omega _{t})$ along $\\omega _{\\tau }=\\omega _{R}$, the slice indicated by the dashed line in Fig.", "REF (a).", "Fig.", "REF (d) shows the same comparison for $\\omega _{t}=\\omega _{R}$, the slice indicated by the dashed line in Fig.", "REF (b).", "All comparisons give excellent agreement,
e.g., the solid and dashed curves essentially overlap in Fig.", "REF (c) and (d).", "Among the first four pathways, there is still a phase dependence on $\\omega _{s}T$ .", "To be specific, while $A_{1}(\\omega _{\\tau },T,\\omega _{t})$ and $A_{2}(\\omega _{\\tau },T,\\omega _{t})$ are independent of $\\omega _{s}$ , $A_{3}(\\omega _{\\tau },T,\\omega _{t})$ and $A_{4}(\\omega _{\\tau },T,\\omega _{t})$ has a phase dependence of $e^{-i\\omega _{s}T}$ and $e^{i\\omega _{s}T}$ , respectively.", "To investigate the dynamics in the time scale of $\\tau _{F}$ , we can study the signals at $T_{M}\\omega _{s}=2\\pi M$ , where $M$ is an integer.", "Notice that since $\\omega _{s}\\gg E_{F}$ , $T_{M}/\\tau _{F}$ can be considered to be almost continuous.", "Such $T$ -dependence is shown in Fig.", "REF in the main text.", "Another way to observe the $T$ -dependence that is independent of $\\omega _{s}$ is by using the so-call phase cycling, which uses a linear combination of results from different pulse schemes to eliminate some of the pathway contributions.", "For example, in the EXSY$-$ pulse scheme illustrated in Fig.", "REF (g), we can carry out the same calculation as EXSY$+$ with the middle pulse replaced by a $-\\pi /2$ rotation $R(-\\pi /2)=R(\\pi /2)^{T}=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{cc}1 & -1\\\\1 & 1\\end{array}\\right).$ We also only need to focus on the first four pathways and can find that $I_{1}^{-}=-I_{1}$ , $I_{2}^{-}=-I_{2}$ , $I_{3}^{-}=I_{3}$ , and $I_{4}^{-}=I_{4}$ , where the superscript “$-$ ” indicates the quantities for EXSY$-$ scheme.", "Consequently, we have $\\tilde{A}^{-}=-A_{1}-A_{2}+A_{3}+A_{4}$ .", "As a result, in the differences between the EXSY$+$ and EXSY$-$ , $\\mathcal {A}(\\omega _{\\tau },T,\\omega _{t})\\equiv \\tilde{A}(\\omega _{\\tau },T,\\omega _{t})-\\tilde{A}^{-}(\\omega _{\\tau },T,\\omega _{t})$ , only the contributions from the first two pathways remain, i.e., $\\mathcal {A}(\\omega _{\\tau },T,\\omega _{t})=2A_{1}(\\omega _{\\tau },T,\\omega _{t})+2A_{2}(\\omega _{\\tau },T,\\omega _{t})$ which no longer depends on $\\omega _{s}$ ." ], [ "Imaginary part and Amplitude of the 2D spin-echo spectrum", "For completeness, we show in Fig.", "REF the imaginary part and amplitude of the 2D spin-echo spectrum for the same parameters in Fig.", "REF in the main text.", "In particular, we indicate the homogeneous and inhomogeneous lineshape in Fig.", "REF (c)." ] ]
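As a companion to the appendix above, the sketch below illustrates how a single pathway determinant and the phase-cycled combination $\mathcal {A}=2A_{1}+2A_{2}$ could be evaluated numerically. It reuses the toy single-particle matrices h_dn, h_up and density matrix n from the earlier sketch; the overall 1/4 prefactor follows the appendix, and the trivial phase $e^{i\omega _{s}t}e^{-i\omega _{s}\tau }$ (a pure frequency shift) is omitted. This is an illustrative reconstruction, not the authors' implementation.

import numpy as np
from functools import reduce
from scipy.linalg import expm

def E(h, t):
    # single-particle factor e^{+i h t}; call E(h, -t) for e^{-i h t}
    return expm(1j*h*t)

def levitov(R, n):
    # S~_i(tau,T,t) = det[(1 - n) + R_i n]
    I = np.eye(n.shape[0])
    return np.linalg.det(I - n + R @ n)

def phase_cycled_signal(h_dn, h_up, n, tau, T, t):
    # Pathways I_1 and I_2 in their single-particle form R_1, R_2; the
    # EXSY+ minus EXSY- combination keeps only these two contributions.
    R1 = reduce(np.matmul, [E(h_dn, tau), E(h_up, T), E(h_up, t),
                            E(h_dn, -t), E(h_up, -T), E(h_up, -tau)])
    R2 = reduce(np.matmul, [E(h_dn, tau), E(h_dn, T), E(h_up, t),
                            E(h_dn, -t), E(h_dn, -T), E(h_up, -tau)])
    return 2.0*(levitov(R1, n) + levitov(R2, n))/4.0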
2207.10501
[ [ "Neural Network Learning of Chemical Bond Representations in Spectral\n Indices and Features" ], [ "Abstract In this paper we investigate neural networks for classification in hyperspectral imaging with a focus on connecting the architecture of the network with the physics of the sensing and materials present.", "Spectroscopy is the process of measuring light reflected or emitted by a material as a function of wavelength.", "Molecular bonds present in the material have vibrational frequencies which affect the amount of light measured at each wavelength.", "Thus the measured spectrum contains information about the particular chemical constituents and types of bonds.", "For example, chlorophyll reflects more light in the near-IR range (800-900nm) than in the red (625-675nm) range, and this difference can be measured using the normalized difference vegetation index (NDVI), which is commonly used to detect vegetation presence, health, and type in imagery collected at these wavelengths.", "In this paper we show that the weights in a Neural Network trained on different vegetation classes learn to measure this difference in reflectance.", "We then show that a Neural Network trained on a more complex set of ten different polymer materials will learn spectral 'features' evident in the weights for the network, and these features can be used to reliably distinguish between the different types of polymers.", "Examination of the weights provides a human-interpretable understanding of the network." ], [ "Introduction", "Hyperspectral imaging is a digital imaging technology in which each pixel contains not just the usual three visual colors (red, green, blue) but many, often hundreds, of wavelengths (or bands) of light, enabling spectroscopic information about the materials located in the pixel [1].", "A common range of wavelengths covers the VNIRSWIR (visible, near-infrared, short-wave infrared) from about 450nm to 2500nm, where the visible red, green, and blue colors correspond approximately to 650nm, 550nm, and 450nm respectively.", "Each band in the image measures light in a range of about 10nm or less, and the bands contiguously cover the range of wavelengths for the sensor.", "At each wavelength, some percentage of light is absorbed by the material located in the pixel and some percentage is reflected back, so that the measured spectrum is a vector of percent reflectance for each band.", "The absorption or reflection depends mainly on the interaction of the light at the given wavelength with the molecular elements and bonds present, enabling some level of spectroscopy - determining information about the materials present from the measured spectrum.", "A plot of three spectra, which are the class means for imagery we use in this paper, is shown in Figure REF .", "Figure: Spectra for three classes of vegetation.", "Note the steep rise around 750nm, which is the result of chlorophyll in the vegetation, and that the senesced (green plot) vegetation has a less prominent rise, indicating lower amounts of chlorophyll.", "Spectra can be collected from individual pixels of a hyperspectral image on a satellite [2] or an aircraft [3], in which case the material is illuminated by sunlight, or from a sensor in a laboratory setting with controlled illumination [4].", "For outdoor settings with sunlight, atmospheric correction must be used to get at-surface reflectance from the at-sensor measured radiance [5].", "We will be working with both overhead-collected data and laboratory-collected data, all of which is in units of 
percent reflectance.", "Computations using the reflectance values in specific ranges of wavelengths to measure particular physical properties of materials on the ground are known as spectral indices.", "These are determined from variation in percent reflectance that is consistent for a particular material or that varies in a consistent way with a physical property.", "These consistent variations in reflectance at specific wavelengths are called spectral features.", "A standard spectral index is the normalized difference vegetation index (NDVI), which measures a known feature called the 'red edge' for vegetation identification and health/abundance assessment.", "This red edge is observable as the steep rise around 700nm in Figure REF .", "The NDVI is defined as: $\\textrm {NDVI}=\\frac{NIR-R}{NIR+R}$ where $R$ (for red) is the percent reflectance across bands around 650nm and NIR (for near-infrared) is the percent reflectance for bands around 750-900nm.", "This is a standard measurement for vegetation, particularly common with multispectral imagery, which has fewer bands than hyperspectral imagery, with each band spanning a broader range of wavelengths.", "For example, the NASA Landsat multispectral imagery satellite program [6] offers an NDVI image (which has the NDVI value for each pixel) as a standard product.", "For Landsat 8 and 9 NDVI is computed using 850-880nm as the range for the NIR band and 640-670nm as the Red band, while Landsat 4-7 use approximately 760-900nm for the NIR band and 630-690nm for the Red band.", "For a hyperspectral image, the Red and NIR reflectance can be measured using the reflectance for a single band in one of these ranges or the average reflectance over some set of bands for each range.", "In this paper we demonstrate that a neural network can effectively learn the NDVI.", "We show that the feature is evident in the network's weights, and we also show that the network separates the data into groups similar to separating by NDVI values.", "We then show a neural network learning numerous indices around features for a complex set of ten different polymer materials.", "For each dataset, the data is split 50/50 with stratification into training and testing sets.", "The network trained on the polymer training data has 99.98% accuracy on the test set, and the network trained on the vegetation training data has 99.92% accuracy on its test set."
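To make the index concrete, the following is a minimal sketch of computing NDVI from a hyperspectral reflectance cube; the (rows, cols, bands) array layout, the wavelengths vector, and the default Landsat-8/9-style band ranges are assumptions for illustration, not a specific library API.

import numpy as np

def band_mean(cube, wavelengths, lo, hi):
    # cube: (rows, cols, bands) percent reflectance; wavelengths: (bands,) in nm
    sel = (wavelengths >= lo) & (wavelengths <= hi)
    return cube[..., sel].mean(axis=-1)

def ndvi(cube, wavelengths, red=(640.0, 670.0), nir=(850.0, 880.0)):
    # NDVI = (NIR - R) / (NIR + R), averaging reflectance over each band range;
    # any hyperspectral bands inside the red/NIR windows could be used instead.
    R = band_mean(cube, wavelengths, *red)
    NIR = band_mean(cube, wavelengths, *nir)
    return (NIR - R) / (NIR + R + 1e-12)   # small epsilon guards against 0/0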
], [ "Data", "The hyperspectral image we use to collect classes of vegetation data is shown in Figure REF .", "We selected three rectangular regions from this image, one on dark green vegetation (trees), one on light green vegetation (a field), and one on a senesced (dry tan) field, and show the result of classification using the neural network.", "Figure: (Left) An RGB color image from the AVIRIS hyperspectral image.", "(Right) Classification results on the image using the NN.", "The classes are dark green trees, light green field, and a tan senesced field, colored blue, green, and red respectively in the classification images.", "The image has 181 bands and a total of 560,000 pixels.", "There are 2,580 pixels labeled for the forest class, 168 for the field1 class, and 104 for the field2_senesced class.", "Senescence is a process that occurs in vegetation during which chlorophyll content decreases and the vegetation loses its visual green color.", "This may be a seasonal variation or an indication that the vegetation is stressed, for example from a lack of water.", "The hyperspectral image we use to collect polymer spectra is shown in Figure REF , with an RGB representation on the left and the NN classification on the right.", "This image was collected using a SPECIM spectrometer in a lab, with controlled illumination and materials on a moving table similar to [4].", "The image has 452 bands and a total of around 320,000 pixels.", "The number of pixels that were selected for each class is shown in Table REF .", "Table: The number of pixels per class.", "Figure: (Left) An RGB color image from the SPECIM hyperspectral polymer image.", "(Right) Classification results on the image using the NN." ], [ "Methods", "For each dataset, the data is split 50/50 with stratification into training and testing sets.", "For each training set, we constructed a neural network with a single hidden layer of 128 ReLU neurons, followed by a 20% dropout layer, and then a softmax classification layer with the number of neurons equal to the number of classes.", "We trained the network for 50 epochs using Keras for TensorFlow.", "Each network was applied to the test data to obtain test accuracy and also to the full image to investigate how the network acts more generally.", "This is a relatively simple neural network, and we used a relatively large dropout rate, which corresponds to Bayesian model averaging over the neurons.", "This simple network architecture makes the model interpretable so that we can observe the learning of spectral features in the weights.", "The high dropout rate provides sufficient variation to get a robust measure of Bayesian uncertainty about the weights for each class."
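The architecture and training procedure described above translate almost directly into Keras. The sketch below is a plausible reconstruction, not the authors' code: the optimizer, loss, and the placeholder arrays X (pixel spectra) and y (integer class labels) are assumptions not stated in the text.

import tensorflow as tf
from sklearn.model_selection import train_test_split

def build_model(n_bands, n_classes):
    # Single hidden layer of 128 ReLU units, 20% dropout, softmax output,
    # matching the architecture described in this section.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(n_bands,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

# X: (pixels, bands) reflectance spectra; y: integer class labels per pixel
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

model = build_model(X_train.shape[1], n_classes=len(set(y)))
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test))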
], [ "Results", "The network trained on the polymer training data has 99.98% accuracy on the test set, and the network trained on the vegetation training data has 99.92% accuracy on its test set.", "This is of course excellent accuracy, and the rest of this section is devoted to interpreting the network as a means of learning spectral indices and features; first learning NDVI for vegetation and then applications for the polymers.", "Figure REF shows a scatterplot of the spectra from the AVIRIS image with reflectance in a red band (656nm) on the $x$ -axis and reflectance in an infrared (802nm) band on the $y$ -axis, with colors showing the labels from the neural network.", "For comparison, Figure REF shows a scatterplot of this data on the same bands, but with colors showing labels from an LDA classifier that was trained on the same data as the neural network.", "Figure: A scatterplot of the spectra from the AVIRIS image with colors showing the labels from the neural network.", "Figure: A scatterplot of the spectra from the AVIRIS image with colors showing the labels from Linear Discriminant Analysis.", "To see why the scatterplot in Figure REF represents separation by NDVI, observe from the formula for NDVI that NDVI is a number in $[-1,1]$ , and that for a fixed value of NDVI the relationship between the NIR and R values is $NIR=R\\frac{\\textrm {NDVI}+1}{1-\\textrm {NDVI}}.$ This is a linear function, and the slope $\\frac{\\textrm {NDVI}+1}{1-\\textrm {NDVI}}$ is zero for $\\textrm {NDVI}=-1$ and increases monotonically as NDVI increases.", "Thus, a given NDVI value corresponds to a line through the origin in the plane whose $x$ -axis is reflectance in a red band and whose $y$ -axis is reflectance in infrared, and spectra with larger NDVI values will lie above this line.", "The points are colored according to the classes from the network, and the linear separation in Figure REF shows that the network is separating pixels with the same geometry as NDVI when viewed in these bands.", "For comparison, the classification shown by LDA in Figure REF does not follow this geometry.", "The matrix of weights for the neural network trained on the vegetation data is shown in Figure REF .", "The band input into each neuron is on the vertical axis and the neurons are along the horizontal axis, sorted by the weights assigned to each neuron by the final softmax classification layer.", "These are sorted approximately based on how they are used for classification in the final layer, with the weights from the hidden layer to the final classification layer shown in Figure REF .", "The horizontal axis in both of these figures corresponds to neurons in the hidden layer, and they are presented in the same order.", "The first approximately 10 neurons are weighted highly by the softmax classification neuron for the Forest class, the middle group had higher weights for the Field1 class, and the last approximately 25 had the highest weights as inputs into the softmax classification neuron for the Field2-senesced class.", "Figure: The weights in the hidden layer of the neural network trained on the vegetation data.", "The band input into each neuron is on the y-axis and the neurons are along the x-axis, sorted by the weights assigned to each neuron by the final softmax classification layer.", "Figure: The weights from the hidden layer, colored by the class assignment by the neuron in the final layer.", "From the weights of the hidden layer shown in Figure REF , the weights on the bands for the ten neurons whose outputs have
the highest weights in the softmax classification neuron for each class were selected as weights that 'learn' a representation of the respective class.", "The mean and standard deviation for the inputs from each band were computet and are plotted for each group in Figures REF through REF .", "In each plot, observe that the weights for the neurons have learned to measure the contrast between reflectance values on either side of the red edge as primary features for distinguishing these three classes, with increasing emphasis on bands nearest the edge.", "The weights also show a learning of additional features; for example while the red edge is the strongest feature for the Forest class, the weights for the Field1 class seem to put some emphasis on visible green light (around 550nm) which is not present in the weights for the other classes.", "Figure: The mean reflectance per wavelength for the Forest class (red) along with the mean and standard deviation for weights from each wavelength going to hidden neurons associated with detecting this class.Figure: The mean reflectance per wavelength for the Field1 class (red) along with the mean and standard deviation for weights from each wavelength going to hidden neurons associated with detecting this class.Figure: The mean reflectance per wavelength for the Field2-senesced class (red) along with the mean and standard deviation for weights from each wavelength going to hidden neurons associated with detecting this class.A neural network, also with 128 ReLu neurons in a single hidden layer followed by a 20% dropout layer and a final classification layer of 10 softmax classification neurons (one for each class) for the classes described in Table REF and shown in Figure REF .", "The mean spectrum for each class is shown in Figure REF , and a scatterplot of the classification results of this network are shown in Figure REF .", "Observe that is more complexity and diversity in class means, as well as an increase in the number of classes, in comparison to the vegetation data.", "Figure: The mean reflectance spectrum for each polymer class.Figure: A scatterplot of the polymer spectra colored by labels from the neural network.Plots of the mean reflectance spectrum for each polymer class is shown in Figures REF through REF along with the mean and standard deviation for the ten neurons most strongly associated with this class (as measured by weights from the hidden layer to the classification layer).", "Above each plot is an image of the matrix of these weights aligned so the weight in the image is associated with the band in the plots directly below it.", "Figure: The mean reflectance for the red_bubble_wrap class (red) along with the associated weights.Figure: The mean reflectance for the clear_bubble_wrap class (red) along with the associated weights.Figure: The mean reflectance for the glove_loc class (red) along with the associated weights.Figure: The mean reflectance for the medicine_bottle class (red) along with the associated weights.Figure: The mean reflectance for the ping_pong_ball class (red) along with the associated weights.Figure: The mean reflectance for the pvc_pipe class (red) along with the associated weights.Figure: The mean reflectance for the red_lid class (red) along with the associated weights.Figure: The mean reflectance for the inflatable_football class (red) along with the associated weights.Figure: The mean reflectance for the pvc_extension_plug class (red) along with the associated weights.Figure: The mean reflectance for the 
pvc_extension_plug class (red) along with the associated weights." ], [ "Conclusions", "We showed that in these shallow neural networks with a single hidden layer, the weights learned by the network are strongly related to features in the spectra.", "These shallow networks had near 100% accuracy on test data from a test-train split.", "In the simple case of three vegetation classes with differing chlorophyll levels, the network learned a measurement similar to NDVI.", "Because the weights per band seem more impactful closer to the red edge, this seems to be an interesting band-weighted form of NDVI.", "For the far more complex spectra with multiple sharp features located across the spectra, the network trained on the polymer classes clearly learned which specific features were important for distinguishing between these spectra, and at the same time learned how to measure these features.", "The relationship between the weights and class mean spectra shown in Figures REF through REF is extraordinary.", "Hopefully, this paper helps make neural networks for hyperspectral imagery more interpretable, and less of a black-box method.", "The accuracy even with these shallow networks is impressive.", "It seems that there is great future potential for neural networks in hyperspectral imaging, but the challenge lies in inventing ways to use and train them.", "Simply inserting a NN in place of an already-good target detection or linear unmixing framework is probably under-utilizing them, but, for example, nonlinear unmixing and intimate mixtures are a promising new avenue [7].", "Neural networks, and deep learning in particular, are extraordinarily powerful at learning to solve complex problems.", "Given the complexity and variation of spectra in the world and the physical interactions that can alter the measurement of spectra, framing problems that can be solved by deep learning is a major part of the frontier of future use in the field.", "These new problems may be simple (for example training a NN to distinguish between multiple confuser materials as a second stage of target identification) or may be complex, with large varieties of materials, and require new ways of preparing training data and displaying results." ] ]
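The weight-interpretation procedure used in the Results section (taking the ten hidden neurons most strongly connected to each class and summarizing their per-band input weights) can be reproduced with a few lines of numpy. The sketch below assumes the two-Dense-layer Keras model from the earlier sketch; it is illustrative and not the authors' analysis code.

import numpy as np

def class_weight_profiles(model, top_k=10):
    dense = [l for l in model.layers if l.__class__.__name__ == "Dense"]
    W_in = dense[0].get_weights()[0]    # (bands, hidden): weights into the hidden layer
    W_out = dense[-1].get_weights()[0]  # (hidden, classes): hidden -> softmax weights
    profiles = {}
    for c in range(W_out.shape[1]):
        top = np.argsort(W_out[:, c])[-top_k:]   # hidden neurons most used for class c
        cols = W_in[:, top]                      # their per-band input weights
        profiles[c] = (cols.mean(axis=1), cols.std(axis=1))  # mean and std per band
    return profiles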
2207.10530
[ [ "MQRetNN: Multi-Horizon Time Series Forecasting with Retrieval\n Augmentation" ], [ "Abstract Multi-horizon probabilistic time series forecasting has wide applicability to real-world tasks such as demand forecasting.", "Recent work in neural time-series forecasting mainly focus on the use of Seq2Seq architectures.", "For example, MQTransformer - an improvement of MQCNN - has shown the state-of-the-art performance in probabilistic demand forecasting.", "In this paper, we consider incorporating cross-entity information to enhance model performance by adding a cross-entity attention mechanism along with a retrieval mechanism to select which entities to attend over.", "We demonstrate how our new neural architecture, MQRetNN, leverages the encoded contexts from a pretrained baseline model on the entire population to improve forecasting accuracy.", "Using MQCNN as the baseline model (due to computational constraints, we do not use MQTransformer), we first show on a small demand forecasting dataset that it is possible to achieve ~3% improvement in test loss by adding a cross-entity attention mechanism where each entity attends to all others in the population.", "We then evaluate the model with our proposed retrieval methods - as a means of approximating an attention over a large population - on a large-scale demand forecasting application with over 2 million products and observe ~1% performance gain over the MQCNN baseline." ], [ "Introduction", "Multi-horizon probabilistic time series forecasting has many important applications in real-world tasks [6], [30], [5], [20], [23], [10].", "For example, consider a retailer who wishes to optimize their purchasing decisions.", "In order to make optimal decisions, they require forecasts of consumer demand at multiple time steps in the future.", "In the domain of multi-horizon, probabilistic time-series forecasting, deep neural networks (DNNs), especially those of the Seq2Seq variety [27], have increasingly been studied recently [30], [11], [21], [10], [25].", "They have various advantages over traditional time series models including the ability to easily handle a complex mix of historic covariates and the potential to incorporate recent advances in Seq2Seq learning.", "Like many other machine learning tasks, the canonical formulation of a forecasting model in this case considers time series for $N$ entities – e.g.", "$N$ products in the case of demand forecasting – and forecasts are produced using features specific to that entity.", "Models are trained using shared weights for each entity.", "Seq2Seq architectures consist of an encoder, which typically summarizes time-series covariates for an entity $i$ into time specific representations, whch we denote as $h_{i,t}$ and a decoder which takes the encoded context and produces the output sequence (in this case, probabilistic forecasts).", "For an entity $i$ and time $t$ , we expect $h_{i,t}$ to be more relevant to the forecast target in inference than $h_{j,t}$ for any other $j \\ne i$ .", "That does not mean, however, that the encoded contexts of other entities contain no relevant information.", "As an example, consider forecasting demand for soda from two competing brands (brand A and brand B) – these products may be substitutes, and when the demand goes down for one, the other increases.", "In this way, information from brand A may be useful for forecasting for brand B, and vice-versa.", "Traditional Seq2Seq neural architectures such as RNN (Recurrent Neural Network), LSTM (Long Short-Term Memory 
network) and CNN (Convolution Neural Network) fail to capture such cross-entity information.", "In natural language processing (NLP), a recent advance is retrieval-based language models [13], [15], [4] that directly search and utilize information from a large corpus, such as Wikipedia, to help inform predictions.", "For example, Retrieval-Enhanced Transformer (RETRO) introduced in [4] imprtoves language model performance not by scaling up model parameters or training data size, but via learning on information retrieved from a task-related database.", "The key idea is to apply attention [2] over representations of other entities in the population.", "Because the size of the population can be quite large, the authors propose using a $k$ -nearest neighbors ($k$ -NN) search similar to [15] to lookup up the most relevant entities and attend only over those.", "Inspired by these studies, we introduce a cross-entity attention mechanism along with a retrieval mechanism to the state-of-the art MQ-Forecaster framework [30], [29], [10] for probabilistic time series forecasting.", "In particular, we build this work on the MQCNN model [30] but our methods can be naturally extended to any generic Seq2Seq time series forecaster.", "For retrieval methods, in addition to the commonly used $k$ -NN search, we also propose using an arbitrary submodular function to select a relevant set of entities to attend over and motivate the use of a submodular function to approximate an attention mechanism.", "Our work is one of the first architectures to leverage cross-entity information with retrieval-augmentation, and to the best of our knowledge, is the first to do so in the domain of time-series forecasting.", "Our main contributions are the following: MQRetNN – a retrieval-based Seq2Seq architecture for multi-horizon time series forecasting.", "The model builds on the encoder-decoder architecture of MQCNN, and utilizes an offline database of entity representations which the model attends over during training and inference.", "We also incorporate a retrieval mechanism to efficiently select which entities to attend over so that the methodology can scale to large datasets.", "We show that our model brings noticeable accuracy gains over the MQCNN baseline on both a small-scale and large-scale demand forecasting problem.", "A new retrieval method that uses a submodular scoring function to efficiently summarize all contexts from the offline database rather than searching for nearest neighbors of each example as commonly used in the literature.", "As we show in the results section, this method achieves comparable performance to the $k$ -NN search in our applications.", "The rest of the paper is organized as follows: in Section , we provide an overview of the multi-horizon time series forecasting problem and related work.", "In Section we describe our proposed methods in detail.", "In Section we present the experimental results.", "We show that on our target application – demand forecasting – it is possible to achieve a 3% improvement over the baseline when attending over all other entities in the population on a small dataset (approximately 10K products).", "We then evaluate several retrieval mechanisms that scale the model to a much larger population (around 2M products) and allow us to obtain an improvement of approximately 1% over the baseline." 
], [ "Time-Series Forecasting", "We consider the high-dimensional regression problem with a mix of inputs where, at each time $t$ and for each entity $i$ , we forecast the distribution of $y$ over the next $H$ periods: $p\\left(y_{i, t+1}, \\ldots , y_{i, t + H} | y_{i, :t}, x_{i, :t}^{(h)}, x_{i, t:}^{(f)}, x_{i}^{(s)} \\right),$ where $y_{i, \\cdot }$ denotes the target time series of entity $i$ , $x_{i, :t}^{(h)}$ are historic covariates up through time $t$ , $x_{i, t:}^{(f)}$ are covariates that are known a priori (such as calendar information), and $x_{i}^{(s)}$ are static covariates.", "Many recent works [30], [21], [10] have considered this forecasting problem.", "In this paper, our application of interest is demand forecasting for a large e-commerce retailer, and downstream applications require only specific quantiles, not the full distribution.", "Accordingly, we focus on producing quantile forecasts, similar to other recent works [30], [21], [10].", "Our model architecture builds on the MQCNN architecture introduced in [30]." ], [ "Attention Mechanisms", "Attention mechanisms [2], [12] compute an alignment between a set of queries and keys to extract a value.", "Formally, let $q_1,\\dots ,q_t$ , $k_1,\\dots ,k_t$ and $v_1,\\dots ,v_t$ be a series of queries, keys and values, respectively.", "The $s^{th}$ attended value is defined as $c_{s} = \\sum _{i=1}^t \\mathop {\\text{score}}(q_{s},k_{i})\\,v_{i},$ where $\\mathop {\\text{score}}$ is a scoring function – commonly $\\mathop {\\text{score}}(q,k) := q^{\\top }k$ .", "Often, one takes $q_{s} = k_{s} = v_{s} = h_{s}$ , where $h_{s}$ is the hidden state at time $s$ .", "The transformer architecture was first proposed in [28] and achieved state-of-the-art performance in language modeling.", "In the vanilla transformer, each encoder layer consists of a multi-headed attention block followed by a feed-forward sub-layer.", "For each head $h$ , the attention score between query $q_{s}$ and key $k_{t}$ is defined as follows: $A^h_{s,t} = q_{s}^{\\top } W_q^{h,\\top } W_k^{h} k_{t}.$", "This architecture design has been successfully adopted in many subsequent studies with various extensions such as Transformer-XL [7], Reformer [17] and, most recently, the Retrieval-Enhanced Transformer [4]." ], [ "Retrieval Mechanisms", "Information retrieval is a classic topic in language modeling, and a recent advance is retrieval-based models.", "Several recent works have demonstrated the benefit of adding an explicit retrieval step to neural networks.", "In [15], kNN-LM is proposed to enhance a language model through a nearest neighbor search in suitable text collections.", "[13] introduces REALM, which augments language model pretraining with a latent knowledge retriever.", "More recently, RETRO [4] enhances the model architecture not by increasing the number of parameters or the size of training data, but rather through the retrieval of information relevant to each sample.", "Similarly, [3] uses memorized similarity information from the training data for retrieval at inference time."
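To make the attention formulas in the subsection above concrete, the following is a minimal numpy sketch of dot-product attention and of the per-head score $A^h_{s,t}$; the softmax normalization and $1/\sqrt{d}$ scaling are standard additions not spelled out in the text, and all array shapes are assumptions.

import numpy as np

def scaled_dot_attention(Q, K, V):
    # Q, K, V: (T, d) matrices of query, key and value vectors.
    # Attended values c_s = sum_t softmax_t(q_s . k_t / sqrt(d)) v_t
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def multihead_scores(Q, K, Wq, Wk):
    # Per-head score A^h_{s,t} = q_s^T Wq^{h,T} Wk^h k_t, as in the vanilla transformer.
    # Wq, Wk: (heads, d_head, d) projection matrices.
    return np.einsum("sd,hed,hef,tf->hst", Q, Wq, Wk, K)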
], [ "Data Summarization and Submodular Functions", "Data summarization has gained a lot of interest in recent years with the application of so-called submodular functions.", "Applications range from exemplar-based clustering [9] to document summarization [8], [22].", "The goal is to select representative subsets of elements from a large-scale dataset through a pre-defined optimization process.", "The key component of the optimization formulation is a submodular function which serves as a scoring function for any particular subset.", "Definition 1 (Submodular Function) Let $\\Omega $ be a finite set.", "A function $f: 2^\\Omega \\rightarrow \\mathbb {R}$ is said to be submodular if for any $S \\subseteq T \\subseteq \\Omega $ and any $x \\in \\Omega \\setminus S$ , $f(S\\cup \\lbrace x\\rbrace ) - f(S) \\ge f(T\\cup \\lbrace x\\rbrace ) - f(T).$", "The essential property of submodular functions is known as submodularity, an intuitive diminishing-returns condition that allows the search for nearly-optimal solutions in linear time and fits the purpose of subset selection well.", "The formal definition is given as Definition REF and we direct the reader to [18] for a thorough overview of submodular functions and their optimization.", "Many recent applications of submodular optimization focus on scaling up traditional algorithms to deal with massive amounts of data or data streams.", "Proposed methods include distributed algorithms [24], [19] and streaming algorithms [1]." ], [ "Problem Formulation", "As mentioned in Section , we aim to estimate the distribution of the target variable $y_i$ as presented in Equation (REF ) over the next $H$ horizons at each time $t$ .", "We train a quantile regression model to minimize the total quantile loss, summed over all forecast creation times (FCTs) $T$ with $Q$ quantiles and $H$ horizons, $\\sum _{t}\\sum _{q}\\sum _{h} L_q\\left(y_{i, t+h}, \\hat{y}_{i, t+h}^{(q)}\\right),$ where $L_q(y,\\hat{y}) = q(y-\\hat{y})_{+} + (1-q)(\\hat{y}-y)_{+}$ , $(\\cdot )_+$ is the positive part operator, $t$ denotes an FCT, $q$ denotes a quantile, and $h$ denotes the horizon.", "In this paper, we adopt the multi-horizon forecasting setting described in [30], [21], [10] with the output of the 50th and 90th percentiles (P50 and P90) at each time step, and thus the model is trained to jointly minimize the P50 and P90 quantile loss."
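The training objective above is straightforward to implement; the sketch below is an illustrative numpy version of the summed quantile loss, with array shapes assumed for clarity (the paper's actual implementation is not shown).

import numpy as np

def quantile_loss(y_true, y_pred, quantiles=(0.5, 0.9)):
    # L_q(y, y_hat) = q (y - y_hat)_+ + (1 - q) (y_hat - y)_+,
    # summed over forecast creation times, horizons and quantiles.
    # y_true: (T, H); y_pred: (T, H, Q) aligned with `quantiles` (here P50 and P90).
    total = 0.0
    for qi, q in enumerate(quantiles):
        diff = y_true - y_pred[..., qi]
        total += np.sum(q*np.maximum(diff, 0.0) + (1.0 - q)*np.maximum(-diff, 0.0))
    return total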
], [ "Model Architecture", "We design our model to be capable of leveraging an offline database constructed using the encoded representations from a frozen, pre-trained base model.", "In this paper, we use MQCNN [30] as the base architecture rather than the state-of-the-art MQTransformer [10] as the latter requires substantially more GPU memory and, as discussed below, we are already memory bound.", "Further, we expect that our retrieval mechanism offers an improvement that is orthogonal to those in MQTransformer, and the two sets of improvements could be combined in future work.", "Generally our model adopts the Seq2Seq structure of MQCNN with an encoder that produces an encoded context at time $t$ $h_{i,t} := \mathop {\text{encoder}}(y_{i,:t},x_{i,:t}^{(h)},x^{(s)}_i),$ and a decoder that differs from MQCNN in that we include an additional “cross-entity context”, which we denote as $\tilde{h}_{i,t}$ .", "Formally, the decoder computes $\hat{\mathbf {y}}_{i,t} := \mathop {\text{decoder}}(h_{i,t},\tilde{h}_{i,t},x_{i,t:}^{(f)})$ where $\hat{\mathbf {y}}_{i,t}$ is a matrix of shape $H\times Q$ for forecast quantiles of different horizons.", "We also denote $\mathbf {H} := \lbrace h_{i,t} \mid \forall i, t \rbrace $ and $\tilde{\mathbf {H}} := \lbrace \tilde{h}_{i,t} \mid \forall i, t \rbrace $ .", "Ideally, $\tilde{\mathbf {H}}$ would be computed by attending over all other entities in the database, but for large datasets this may become infeasible.", "Thus we add a retrieval mechanism to select an informative subset of entities to attend across at each time step, for which we provide more details in the next section.", "To generate $\tilde{\mathbf {H}}$ , we add a time series cross-attention layer after the encoder to extract the cross-entity information through attention between the retrieved contexts and examples during training.", "The attention is computed across different entities at each time step only, and no cross-time (temporal) attention is currently considered.", "Proper masking is used to make sure the attended and attending entities are aligned as shown in Figure REF .", "We find in our experiments that this process increases the GPU memory consumption of the model because the retrieved contexts are loaded with each mini-batch during training.", "This in turn limits the total number of elements contained in these contexts, which makes the retrieval mechanism necessary on large datasets.", "The overall architecture of our model, MQRetNN, is depicted in Figure REF , and we adopt a mechanism for incorporating cross-entity information similar to that used for NLP tasks, as shown in Figure 2 of [4].", "Figure: An overview of the architecture; adapted from ."
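The following is a schematic sketch (ours, not the MQRetNN implementation) of the per-time-step cross-entity attention that produces the cross-entity context; learned projection weights, multiple heads and the masking described above are omitted, and all names and shapes are assumptions.

```python
import numpy as np

def cross_entity_context(H_t, H0_t):
    """Cross-entity attention at a single time step t (no temporal attention).

    H_t  : (N, d) encoder contexts h_{i,t} of the entities in the mini-batch.
    H0_t : (M, d) frozen, retrieved contexts h^0_{j,t} for this time step.
    Returns (N, d) cross-entity contexts, one per entity in the batch.
    """
    scores = H_t @ H0_t.T / np.sqrt(H_t.shape[-1])     # entity-to-entity similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over retrieved entities
    return weights @ H0_t

# The decoder would then consume (h_{i,t}, the returned context, x^{(f)}_{i,t:})
# to produce the H x Q matrix of quantile forecasts.
```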
], [ "Retrieval Mechanisms", "In this paper, retrieval mechanisms play a key role in scaling up the model to large datasets.", "In particular, we denote the offline database as $\mathbf {H}^0 := \lbrace h^0_{i,t} \mid \forall i, t \rbrace $ , which consists of the encoded contexts produced by a pre-trained MQCNN encoder.", "The retrieval calculations are based only on $\mathbf {H}^0$ and determine which entities each example attends over during training.", "Broadly, we consider two types of retrieval mechanisms: (1) entity-specific retrieval of relevant entities defined as nearest neighbors; (2) a shared set of entities from the population that are “maximally relevant” and used to produce the cross-entity context for each entity.", "See Figure REF for a visualization of the two different approaches.", "Figure: The left diagram depicts nearest neighbor retrieval, the right diagram depicts retrieval using a shared global set.", "The output of the retrieval step is concatenated to the input embedding vector." ], [ "Entity-Specific Nearest Neighbors", "For each entity $i$ , we consider searching for the $K$ nearest neighbors in our offline database.", "This can be formulated as: $\mathop {\mathrm {arg\,max}}_{S;\, |S| = K} \sum _{j\in S, j\ne i} f(h^{0}_{i,t}, h^{0}_{j,t}), \quad \forall i, t.$ Here we find a set of $K$ elements that maximize some similarity metric between example $i$ and elements in $S$ , and we search for such a set at each time step $t$ ; that is, we find a time-specific set of $K$ nearest neighbors.", "We can take $f(\cdot , \cdot )$ to be any similarity metric, and in this paper we consider the Pearson correlation – which is essentially equivalent to dot-product attention – computed as $f(h^{0}_{i,t}, h^{0}_{j,t}) = \frac{ \langle h^{0,c}_{i,t}, h^{0,c}_{j,t}\rangle }{\Vert h^{0,c}_{i,t}\Vert \Vert h^{0,c}_{j,t}\Vert }$ where $h^{0,c}_{j,t}$ denotes the centered version of $h^{0}_{j, t}$ .", "Denote by $L(\theta ;S)$ the loss in Equation (REF ) evaluated for a model with parameters $\theta $ and a set $S$ of entities to attend over, and let $\mathcal {E}$ denote the set of all entities.", "We would like to select a set $S$ of size $K$ such that $\mathop {\mathrm {arg\,min}}_{S\subseteq \mathcal {E}: |S| \le K} \min _\theta L(\theta ; S).$ Solving the outer minimization above is not tractable, so instead we consider using a submodular proxy objective to select the set $S$ .", "Specifically, we formulate this as follows for each time $t$ : $\mathop {\mathrm {arg\,max}}_{S;\, |S| < K} \sum _{i\in \mathcal {E}} \max _{j\in S, j\ne i} f(h^{0}_{i,t}, h^{0}_{j,t}).$ Here we use the same similarity metric $f$ as in Equation (REF ).", "In general, the form of Equation REF is referred to as the Facility Location problem in [18], and it is a classic example of submodular function optimization.", "Equations (REF ) and (REF ) require the retrieval calculation to be carried out at each time step, both for model training and inference.", "The advantage is that the retrieval process can then adapt to each time step (e.g., nearest neighbors can differ between time steps) and is performed on-the-fly at inference time.", "However, it can be computationally expensive.", "Alternatively, we propose using $v^{0}_i := \sum ^{T}_{t=1} h^{0}_{i,t}$ instead of $h^{0}_{i,t}$ as follows: $\mathop {\mathrm {arg\,max}}_{S;\, |S| = K} \sum _{j\in S, j\ne i} f(v^{0}_i, v^{0}_j)$ and $\mathop {\mathrm {arg\,max}}_{S;\, |S| < K} \sum _{i\in \mathcal {E}} \max _{j\in S, j\ne i} f(v^{0}_i, v^{0}_j),$ i.e., we define the retrieved set $S$ to be time-agnostic by considering all time steps in the training window rather than time-specific as done previously.", "In this case, we use the
exact same set of entities (but with different contexts for the test period) for model inference, and no further retrieval calculation is needed.", "Note that this does not lead to any information leakage as no computation is done on the test set.", "We compare the performance of these two types of retrieval mechanisms in the next section." ], [ "Results", "In this section we evaluate our approach on a large demand forecasting dataset using two different experimental setups.", "The dataset comes from a large e-commerce retailer and includes time series features such as demand, promotions, holidays and detail page views as well as static metadata features such as catalog information.", "Similar datasets with the same set of features but generated in different time windows have been used in [30], [10].", "Here we have four years (2015-2019) of data for over 2 million products.", "The task is to forecast the 50th and 90th quantiles of demand for each of the next 52 weeks at each forecast creation time $t$ .", "Each model is trained using up to 8 NVIDIA V100 Tensor Core GPUs, on three years of data (2015-2018), with one year held out for evaluation (2019).", "In the “small scale” setup, we consider only 10,000 different products (entities) so that we can directly attend over a representation of all products rather than use any retrieval method.", "In the “large scale” setup, there are too many products to attend over all of them directly.", "Instead, we demonstrate our model can scale up to the entire dataset using retrieval methods, which we ablate and compare in terms of the resulting model performance." ], [ "Small Scale", "In this experiment we choose the 10K products with the largest total units sold during the training period, and we compare four different architectures: MQCNN: baseline MQCNN model. MQCNN-L: MQCNN with increased model capacity. MQRet-Full: MQCNN with cross-entity context $\tilde{h}_{i,t}$ produced by attending over the frozen contexts of all other entities at time $t$ (i.e., from the database $\mathbf {H}^0$).", "MQRet-Random: Same as above, but where all $\tilde{h}_{i,t}$ are randomly generated.", "By comparing MQRet-Full with other models, we can better understand how much improvement is possible by augmenting the model with the cross-entity context generated from the entire population.", "We include two ablations to confirm that the improvement in performance is due to extracting useful cross-entity information.", "In particular, to test whether increasing model capacity alone can lead to a performance gain, we consider MQCNN-L, which expands MQCNN's capacity by increasing the number of filters of the CNN layer, so that MQRet-Full and MQCNN-L have the same number of parameters.", "We also consider MQRet-Random, which has the same architecture as MQRet-Full but with randomly generated (non-informative) contexts.", "We train each model for 100 epochs using a batch size of 256, and optimize using ADAM [16].", "Table REF gives the number of parameters in each trained model.", "Table: The number of parameters used in the four different architectures of the small scale experiment.", "Table: Experiment results on 10K products.", "All results are rescaled so they are relative improvements over the baseline MQCNN model, lower is better.", "Table REF shows the (rescaled) quantile loss results (P50, P90 and overall) for the four models described above.", "We calculate these results based on three different runs of each model and average the performance metrics.", "As expected, we observe no accuracy gains from
MQRet-Random, as there is no signal to extract.", "MQCNN-L yields only a very slight improvement by simply increasing the model capacity.", "By contrast, MQRet-Full brings relatively substantial improvements in overall performance, improving P50 by 3.2% and P90 by 2.2%.", "Thus, the model seems to be extracting useful signal from other entities." ], [ "Large Scale", "For this experiment, we use the whole dataset of over 2 million products.", "The training and test split is kept the same as in the first experiment.", "We evaluate the architecture in Figure REF with both retrieval mechanisms described previously, and consider both time-specific and time-agnostic variants.", "For the nearest neighbor method, we use FAISS [14], an open source library for fast nearest neighbor retrieval in high dimensional spaces, and we set $K = 10$ .", "For the submodular method, we use Apricot [26] which provides efficient submodular optimization tools.", "In this case, we choose $K = 10000$ for the size of the global set.", "We selected these values for $K$ to maximize utilization of available GPU memory.", "Overall, we consider the following MQRet model variants: MQCNN: baseline MQCNN model. MQRet-KNN: with time-agnostic, nearest neighbor retrieval.", "MQRet-Subm: with time-agnostic, submodular retrieval.", "MQRet-KNN-t: with time-specific, nearest neighbor retrieval.", "MQRet-Subm-t: with time-specific, submodular retrieval.", "Table: Experiment results on the whole dataset.", "Results are rescaled so they are relative improvements over the baseline MQCNN model, lower is better.", "We train each model for 100 epochs with a batch size of 512.", "Test results are summarized in Table REF .", "We include the model performance aggregated across all horizons (52 weeks) as well as for horizons $h \le 10$ .", "We observe that all MQRet variants improve the overall performance by around 1% but the gains are smaller than those from full cross-entity attention in Table REF .", "Larger performance improvements are observed for all models when aggregated over shorter horizons.", "The performance of time-specific models is generally similar to that of time-agnostic ones.", "The best variant – MQRet-KNN – improves by 1.3% over the baseline MQCNN model for all horizons, and by 1.5% when restricted to only shorter horizons ($h\le 10$ )." ], [ "Conclusion", "In this paper we demonstrated that incorporating cross-entity information can improve the predictive accuracy of time-series forecasting models.", "On our target application, we showed approximately a 3% improvement over the baseline model when we attended over all other entities in the population.", "The gains on the large scale dataset were smaller – approximately 1% improvement over the baseline.", "Accordingly, a future direction of interest is training a model that can attend across all entities during each forward pass, which will require model parallelism across multiple machines.", "Another interesting direction of future inquiry is using pretrained graphs between entities to select the nearest neighbors." ] ]
2207.10517
[ [ "Fast Data Driven Estimation of Cluster Number in Multiplex Images using\n Embedded Density Outliers" ], [ "Abstract The usage of chemical imaging technologies is becoming a routine accompaniment to traditional methods in pathology.", "Significant technological advances have developed these next generation techniques to provide rich, spatially resolved, multidimensional chemical images.", "The rise of digital pathology has significantly enhanced the synergy of these imaging modalities with optical microscopy and immunohistochemistry, enhancing our understanding of the biological mechanisms and progression of diseases.", "Techniques such as imaging mass cytometry provide labelled multidimensional (multiplex) images of specific components used in conjunction with digital pathology techniques.", "These powerful techniques generate a wealth of high dimensional data that create significant challenges in data analysis.", "Unsupervised methods such as clustering are an attractive way to analyse these data, however, they require the selection of parameters such as the number of clusters.", "Here we propose a methodology to estimate the number of clusters in an automatic data-driven manner using a deep sparse autoencoder to embed the data into a lower dimensional space.", "We compute the density of regions in the embedded space, the majority of which are empty, enabling the high density regions to be detected as outliers and provide an estimate for the number of clusters.", "This framework provides a fully unsupervised and data-driven method to analyse multidimensional data.", "In this work we demonstrate our method using 45 multiplex imaging mass cytometry datasets.", "Moreover, our model is trained using only one of the datasets and the learned embedding is applied to the remaining 44 images providing an efficient process for data analysis.", "Finally, we demonstrate the high computational efficiency of our method which is two orders of magnitude faster than estimating via computing the sum squared distances as a function of cluster number." 
], [ "Introduction", "Immunohistochemistry (IHC) is routinely used in the diagnostics of tissue pathology as it can visualise the expression of proteins by tagging with enzymes or fluorophores to change the colour pigment of the tissue [1].", "Multiplex IHC uses multiple types of stain in order to visualise many different components in tissues simultaneously.", "Multiplex imaging technologies provide a powerful way to image tissue sections and visualise the spatial organisation of different cell types and differences between them [2].", "This mapping capability is vitally important in biological studies such as in oncology and cancer research by providing information about tissue heterogeneity and tumour micro-environment [3].", "However, when used in parallel, these tags can exhibit spatial and spectral overlap, limiting their clinical usage [1].", "Another form of multiplex imaging is mass spectrometry based, such as imaging mass cytometry (IMC), which uses metal-conjugated antibodies to label and measure specific protein markers in tissues [4].", "These are used in a spatially resolved manner to provide multidimensional images where each pixel corresponds to the measured labels at sub-cellular resolution.", "The rich chemical information in IMC enhances the study of tissue, revealing tumour heterogeneity and cell-cell interactions, and moving towards individualised diagnosis and therapies [4].", "A significant challenge for this data is the segmentation of individual cells in the multiplex images.", "The analysis of high dimensional data can be aided by dimensionality reduction; for example, two or three dimensional projections can reveal visual patterns in the low dimensional space.", "Patterns such as manifolds or clusters in the data can provide insight into structure in images [5], similarity of text documents [6], [7], informative overviews of hyperspectral data [8], and sub-groups of genes [9] or cell expression [10].", "This provides a powerful computational tool to efficiently analyse the chemical information in mass spectrometry data, such as IMC [11].", "Methods such as t-distributed stochastic neighbour embedding (t-SNE) [5] are state-of-the-art techniques for data reduction and visualisation.", "However, the lack of a known mapping prohibits application to unseen data [12].", "Autoencoders avoid this issue by learning the encoding and decoding transformation during training of the model [13].", "Despite their ability to learn low dimensional representations of high dimensional data, both t-SNE and autoencoders do not segment the learned patterns directly and require additional methods or training phases.", "Clustering data introduces additional challenges in the estimation of the number of clusters, which must either be known a priori or optimised in expensive computations.", "Density-based clustering methods such as DBSCAN [14] do not require a number of clusters, though they require selection of a minimum number of points and a radius parameter, which may not be intuitive or easy to optimise.", "For multiplex data such as IMC, computational approaches lack the ability to fully explore the rich spatially resolved multiplexed (high dimensional) tissue measurements [15].", "Therefore, in this work we develop a method to efficiently analyse these data in a purely data-driven way.", "Our method first embeds the data into a (reduced) three dimensional space, which we interpret as pseudo red green blue (RGB) components, providing a rich single image summary of the data.", "Next, we
compute the density of points in binned regions of the embedded space.", "Finally, we use the number of dense regions as an estimate for the number of clusters in the data and use this number to perform $k$ -means clustering of the embedded space.", "The use of an autoencoder enables unseen data to be embedded as the transformations are known, and the density estimate for the number of clusters allows data-specific clustering.", "We demonstrate this method by training our model on one IMC dataset and applying this to 44 unseen multiplex images.", "The data used in this work are multiplex images of thin tissue sections of human patients with inflammatory bowel disease obtained from [16], [2].", "The imaging data contain multiple measurements for each pixel in the image, with each measurement corresponding to a different feature (more details are in Section REF ).", "This type of imaging is comparable to hyperspectral images where each pixel has multiple spectral components.", "Figure: IMC feature maps for the training data in this work.", "Spatial distributions of the 19 markers in the IMC hyperspectral data (saturated for clarity of image) and corresponding Fluorescence image." ], [ "Imaging Mass Cytometry", "The images are obtained from formalin-fixed, paraffin-embedded colonic tissue biopsies from patients collected from [16], [2] with appropriate ethics approval.", "The data are acquired using Imaging Mass Cytometry (IMC), where the sample is tagged with labelled antibodies that target particular cellular proteins and a mass spectrometer is used to detect these tags at each pixel.", "The output is an image dataset with $N$ measured components for each $(x, y)$ pixel.", "Each hyperspectral image consists of 19 labelled measurements for each pixel, see Fig. REF (for full details refer to [2])." ], [ "Fluorescent Microscopy", "Each of the IMC multiplex image datasets has a corresponding microscope image.", "These fluorescent microscopy (FM) images are captured at a much higher resolution than the IMC data, though they provide only a single optical measurement of the sample.", "The FM data provide a useful reference for the IMC data, see the bottom left image in Fig. REF ." ], [ "Big Data", "For this dataset there are 45 different IMC images, one per patient, ranging from 679,770 to 5,866,652 pixels.", "The median number of pixels is 2,488,800 with upper and lower quartiles of 3,403,338 and 1,988,820 pixels respectively.", "Each of these pixels has 19 labelled measurement tags, and each IMC image has a corresponding FM image.", "This represents a large and problematic dataset to analyse.", "Therefore, in order to analyse these data in an efficient and practical manner, we train our models on a single dataset (which contains $\approx $ 10$^6$ pixels) and apply this to the remaining 44 patients.", "The end goal is to have a pre-trained model that clinicians can use with these data to aid rapid diagnosis of patients."
], [ "Deep Sparse Autoencoder", "Autoencoders are a special class of neural networks with a symmetric structure that encode data $\mathbf {X} \in \mathbb {R}^D$ into a different dimensionality space, $\mathbf {Z}\in \mathbb {R}^d$ , before decoding this back to the original data space, $\mathbf {X}^{\prime }\in \mathbb {R}^D$ .", "Typically the network is constructed such that the original dimensionality $D$ is embedded into a lower dimensional space $d$ .", "As the output of the network is a reconstruction of the input, the network can be trained using a loss function to minimise the difference between the input and decoded data and thus is an unsupervised method.", "The difference is typically formulated as a mean squared error, $\frac{1}{N} ||\mathbf {X} - \mathbf {X}^{\prime }||^2 ~.$ The embedded space $z$ is obtained using an encoding transformation $E$ , whereas the reconstructed data $x^{\prime }$ is recovered using a decoding transformation $D$ $z = E(x,\theta )~; \quad x^{\prime } = D(z,\theta ^{\prime })~,$ where $x \in \mathbf {X}$ , $x^{\prime } \in \mathbf {X}^{\prime }$ and $z \in \mathbf {Z}$ .", "The parameters for the transformations, $\theta $ and $\theta ^{\prime }$ , are optimised without labels by solving Eq. (REF ) using scaled conjugate gradient descent [17].", "We select a sigmoid activation for both $E$ and $D$ transformations, due to its ability to capture nonlinear patterns in the data [7], [18], [13].", "This canonical form can be extended in a number of ways, such as stacking several autoencoders where the encoded data from one layer becomes the input to the next layer.", "This yields a simple and fast method to train a deep autoencoder where each layer can be examined, or retrained, if desired.", "We employ regularisation terms to penalise non-sparse solutions as these have been used in similar problems [19].", "Firstly, we employ a Tikhonov (L$_2$ ) regularisation term, $\Omega _\theta $ , on $\theta $ and $\theta ^{\prime }$ , to prevent overfitting in training and to reduce their complexity [20], [21], [22].", "This is computed for the $l^{th}$ layer as $\Omega _\theta = \frac{1}{2}\sum _i\sum _j \left( \theta _{ij}^{(l)}\right)^2~.$ Secondly, we include a sparsity penalty for neurons with a high activity, $\Omega _\gamma $ , enabling them to respond to specific features in the data.", "This is achieved by minimising the Kullback-Leibler divergence between a target level of activation, $\gamma $ , and the average output $\hat{\gamma }$ $\Omega _\gamma = KL(\gamma ||\hat{\gamma })~.$ The full cost function, $F$ , to be optimised by the deep sparse autoencoder (DSA) is given by combining Eqs. (REF )-(REF ) $F = \frac{1}{N} ||\mathbf {X} - D(E(\mathbf {X},\theta ),\theta ^{\prime })||^2 + \alpha \Omega _\theta + \beta \Omega _\gamma ~,$ where $\alpha $ and $\beta $ are coefficients for the $\Omega $ regularisation terms.", "A significant advantage of the DSA is that, once trained, the encoding and decoding transformations are deterministic.", "This has two direct benefits: 1) it enables training on a subset of the data, allowing application to datasets that exceed the available computational resources in terms of memory and speed; 2) pre-trained models can be transferred to other datasets, providing a very efficient framework to analyse data from large experiments, e.g., cohort and longitudinal studies.", "An example of the embedding space obtained from the DSA is given in Fig. REF .
], [ "Pseudo RGB Image", "Here we use a DSA to embed the multiplex IMC image data into three dimensions in the so-called bottleneck layer, also referred to as the latent space.", "This enables the visualisation of the data as a 3D scatter plot to identify patterns and differences.", "Additionally, we treat each of the embedded dimensions as a pseudo RGB (red green blue) channel, providing a powerful summary of the multidimensional image in a single figure (Fig. REF ).", "Viewing the embedded data is beneficial over multicoloured overlays as it scales to any dimensional data ([18], [19] used this method for 1,000s of channels), avoiding the need for the user to select a specific combination of images and colours.", "Moreover, the embedded space distinguishes areas in the images with unique or multiple feature presence in a data-driven way.", "That is, for components A and B, the DSA will distinguish areas unique to A and B, respectively, from those that contain combinations of both.", "Furthermore, for continuous data, it can also distinguish between different ratios of these components when both are present, which may be particularly important in biological data.", "Obtaining such information in a data-driven way is highly favourable as the dimensionality of the data grows.", "Figure: Deep embedded space from the DSA for one of the IMC datasets.", "Heatmaps of the embedded density are also given for each of the pseudo RGB planes.", "Note the region of high density here corresponds to the background (non-tissue part) of the image.", "Figure: Boxplots of bin density in the embedded space for each of the 45 IMC datasets with outliers represented as red +.", "Note each dataset has a single high density outlier corresponding to the non-tissue region." ], [ "Density Based Estimation of Cluster Number", "Despite their ability to embed data of any dimension into an arbitrary dimensional space, which can reveal patterns and clusters, DSAs do not segment the data directly.", "It is possible to cluster the embedded space using methods such as $k$ -means or DBSCAN, however this requires the selection of parameters for the number of clusters, or the minimum number of points and radius, respectively.", "With only a single parameter to tune, which may also be more intuitive for the data, we use $k$ -means to cluster the embedded space and develop a data-driven approach to determine the number of clusters based on the embedded density outliers.", "Selecting a sigmoid activation function in our autoencoder, $\sigma (x) = \left( 1+ e^{ - x } \right) ^{-1},~$ restricts the values of $z$ to the range $[0, 1]$ in each of the embedded dimensions.", "This property can be exploited as any arbitrary input data is embedded into the range $[0, 1]$ , and we can therefore divide the embedding space into $B$ bins of fixed width, applicable to all input data via the DSA.", "For computational efficiency, as the total number of bins scales with $B^d$ , where $d$ is the dimension of the embedded space, we define $B$ =10 in all dimensions ($d$ =3 for pseudo RGB) resulting in 1,000 bins across the embedded space.", "We can now count the number of points in each bin in the embedded space using the indicator function $\mathbf {1}_A$ , $\mathbf {1}_A(x) := {\left\lbrace \begin{array}{ll}1 \quad , & \text{if } x \in A\\0 \quad , & \text{if } x \notin A~,\end{array}\right.}$ summing for all $N$ points in the data and for each bin $b$ in the embedded space.", "We count the number of points in bin $b$ , with a width of
$1/B$ as $\eta _b(x) = \sum _{i=1}^N \mathbf {1}_{b}(x_i).$ We interpret the counts per bin as an estimation of point density in the embedded space and use this to determine the number of clusters in the embedded space, and therefore in the input data.", "An example of the bin density of the embedded space is shown in Fig. REF .", "We assume that the embedded space contains some dense regions and that the data are not homogeneously distributed in the embedding space, which we know from our prior knowledge of the data.", "That is, we know that biologically similar regions have similar chemical profiles, and similar profiles will be grouped together in the embedded space to form dense regions.", "This has been observed in a number of studies that use dimensionality reduction of mass spectrometry data [12], [11], [23], [24], [25].", "In the case where the data are homogeneous, clustering is not possible and a test for homogeneity can detect this automatically.", "Additionally, we make the assumption that the number of dense regions is small compared to the number of bins, $\rho _n \ll B^d$ , and for a large $B^d$ the majority of the bins should contain zero or a small number of points (e.g. noise data, outliers, etc).", "Therefore the distribution of $\eta _b(x)$ has a positive (right) skew with the dense regions as extreme points to the right (this is seen in Fig. REF ).", "This allows us to detect the dense regions automatically by computing the outliers of $\eta _b(x)$ .", "The $b$ th bin is considered an outlier if $\eta _b(x)$ is more than 1.5 times the inter-quartile range above the upper quartile.", "We then remove the bins below the 20th percentile to ensure we only select high density bins and avoid a region spanning several bins being counted as multiple clusters.", "This provides a data-driven estimator of the number of clusters, obtained automatically from the embedded space, that we can use with clustering algorithms such as $k$ -means.", "Figure: Clustering results for all 45 IMC images in this work.", "Images are obtained by $k$ -means clustering of the embedded space using our embedded density outlier estimator for $k$ .", "Note $k$ is estimated for each dataset individually and the colours are not comparable between images.", "Images are zoomed in for clarity and clearly show sub-cellular structures."
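A minimal sketch of the density-outlier estimate of $k$ described above is given below (Python/NumPy). It is our own illustration rather than the authors' code; in particular, we interpret the 20th-percentile filter as being applied to the densities of the outlier bins, which is an assumption about an ambiguous step.

```python
import numpy as np

def estimate_k(Z, B=10, low_percentile=20):
    """Estimate the number of clusters from a sigmoid-activated embedding Z in [0, 1]^d.

    Z: (N, d) embedded coordinates (d = 3 for the pseudo-RGB space).
    """
    d = Z.shape[1]
    counts, _ = np.histogramdd(Z, bins=[B] * d, range=[(0.0, 1.0)] * d)
    eta = counts.ravel()                           # bin occupancies eta_b
    q1, q3 = np.percentile(eta, [25, 75])
    outliers = eta[eta > q3 + 1.5 * (q3 - q1)]     # bins flagged as high-density outliers
    if outliers.size == 0:
        return 1                                   # homogeneous embedding: nothing to split
    # Drop the weakest outlier bins so one dense region is not counted several times.
    outliers = outliers[outliers >= np.percentile(outliers, low_percentile)]
    return int(outliers.size)
```

The returned value can be passed directly as the number of clusters for $k$-means on the embedded coordinates.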
], [ "Results and Discussion", "We train the DSA on one of the IBD datasets and apply this encoding to the remaining 44 images, none of which are used in the training stage.", "We tune the parameters of the deep autoencoder during a preliminary set of experiments, though as in [26] we find a range of parameters suitable for this task.", "Here we use a network consisting of layers with 15, 10 and 3 hidden neurons respectively, each trained for a maximum of 10,000 epochs with $\alpha $ = 10$^{-4}$ , $\beta $ =100 and $\hat{\gamma }$ =0.5 for all layers.", "The total training time is 5 minutes 16 seconds using an NVIDIA GeForce RTX 3090 graphics card.", "As the 45 IMC images are from different patients there is no way to meaningfully average the clustering results across the datasets.", "Due to biological and technical variation, different patients, as well as biopsies, may contain different tissue features, so a universal $k$ for all images may not exist.", "Moreover, as we lack expert annotations and there is some noise in the IMC data, we do not have a ground truth for comparison.", "Instead our method provides an estimate of $k$ for each dataset which we show is reasonable based on cluster correlation to IMC feature maps, the silhouette index, estimation of $k$ based on the inflection point of the total sum squared distances plots, and visual inspection of the data in this section.", "We present the results of our method for all 45 IMC datasets with summary metrics and graphs.", "The embedding of a single dataset by the DSA is given in Fig. REF , which also includes the embedded density (i.e. Eq. (REF )) as heatmaps for each plane in the low dimensional space.", "This shows clear regions of high density in the embedded data that we use to estimate the number of clusters for clustering the data via $k$ -means.", "These heatmaps clearly demonstrate that a large number of bins contain few or no points, supporting the assumption that the number of dense regions is small compared to the number of bins, $\rho _n \ll B^d$ .", "The DSA can capture highly detailed features in the data in just three dimensions, allowing for powerful visualisations of the multiplex data.", "The IMC data, despite a comparatively lower spatial resolution, contain richer information than the FM images, see Fig. REF .", "A single embedded image can provide a convenient overview of all these features that can be easily reviewed with the FM images, and may be more effective than making multiple comparisons with the individual feature maps, particularly when the dimensionality of the hyperspectral image is large.", "The perceptual similarity of some colours can make distinction of regions challenging in the embedded space, however the clustered images can overcome this challenge by clearly segmenting these regions (see Fig. REF for instances from each IMC dataset).", "Clear sub-cellular structures are visible in all 45 images, indicating this is a robust method for analysing and segmenting these data.", "Each image in Fig. REF depicts several cells with the same segmentation (within a given IMC dataset) of sub-cellular components and other features.", "The zoomed regions in Fig. REF are for clarity and visually demonstrate our method's effective estimation of $k$ for each dataset."
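For readers who want to reproduce the embedding step, the snippet below evaluates the per-layer sparse autoencoder cost described in the previous section, with the hyperparameter values reported above used as defaults. It is a simplified sketch: the greedy layer-wise stacking (15, 10 and 3 hidden neurons) and the scaled conjugate gradient training are not shown, the weight matrices are assumed inputs, and treating 0.5 as the target activation level is our interpretation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsa_layer_cost(X, W_e, b_e, W_d, b_d, alpha=1e-4, beta=100.0, gamma=0.5):
    """One sparse autoencoder layer: reconstruction MSE + alpha * L2 + beta * KL sparsity."""
    Z = sigmoid(X @ W_e + b_e)                     # encoded activations
    X_rec = sigmoid(Z @ W_d + b_d)                 # reconstruction
    mse = np.mean(np.sum((X - X_rec) ** 2, axis=1))
    l2 = 0.5 * (np.sum(W_e ** 2) + np.sum(W_d ** 2))           # Tikhonov term
    gamma_hat = np.clip(Z.mean(axis=0), 1e-6, 1 - 1e-6)        # average neuron outputs
    kl = np.sum(gamma * np.log(gamma / gamma_hat)
                + (1 - gamma) * np.log((1 - gamma) / (1 - gamma_hat)))
    return mse + alpha * l2 + beta * kl
```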
], [ "Clustered Features", "Firstly we consider the correlation of each of the cluster maps with the IMC feature maps as seen in Fig. REF ; for brevity we restrict this to just the dataset used to train the DSA.", "To compute the correlation with the binary cluster maps, we convert the IMC feature maps to binary by considering any nonzero signal as 1.", "The colour scales for the feature maps in Fig. REF have all been saturated in order to coarsely visualise the images for comparison with the cluster maps and reflect how the correlation was computed.", "This highlights the additional challenge of signal intensity and variation between feature maps when analysing IMC data, which is overcome by our method.", "Several clusters correlate reasonably well with the IMC feature maps, though it is worth noting that clusters may incorporate information from several IMC features (due to the lower dimensionality) and hence a high correlation is not necessarily expected.", "This is highlighted by the fact that cluster 5 is correlated with several IMC feature maps and is visually similar to several feature maps (Fig. REF ).", "Cluster 3 in Fig. REF , which corresponds to the non-tissue region, is unsurprisingly anti-correlated with all feature maps; this also accounts for the highest density regions in the embedded space due to the large number of pixels and low variation in signal compared to the tissue regions.", "Cluster 4 is visually similar to aSMA with a moderate correlation and is uncorrelated with all others, indicating that this feature is distinct from the rest.", "The features correlated with cluster 5 are 193Ir (DNA intercalator), H3 (chromatin/DNA marker), E-Cadherin (a tumour suppressant), Ki-67 (structural marker), LMNB1 (movement of molecules into/out of the nucleus), Pan-Keratin (an epithelial tissue marker) and TCRgd (a prototype T cell) [2], [27].", "Our method has grouped these features together in cluster 5, indicating related functions or biological processes.", "Figure: Correlation of each feature map (see Fig. )", "with the corresponding cluster maps obtained from our method.", "In generalising to several feature maps, the correlation of a given cluster to each individual map may be reduced.", "However, there is clear spatial information in the clusters that relates to the IMC feature maps.", "The observation that some clusters contain specific features while others generalise across several demonstrates the flexibility of this method.", "The ability to identify general regions of co-location of features as well as to isolate unique regions is important in biological and pharmaceutical applications where the presence and absence of specific chemicals is of vital importance."
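The correlation described above can be computed with a short routine such as the following (an illustrative sketch; the thresholding of any nonzero signal to 1 follows the text, while the function name and array layout are our own assumptions).

```python
import numpy as np

def cluster_feature_correlation(cluster_map, feature_map, cluster_id):
    """Pearson correlation between a binary cluster mask and a binarised IMC feature map.

    cluster_map : (rows, cols) integer k-means labels for one IMC image.
    feature_map : (rows, cols) raw intensities of one marker.
    Returns nan if either mask is constant over the image.
    """
    cluster_mask = (cluster_map == cluster_id).ravel().astype(float)
    feature_mask = (feature_map.ravel() > 0).astype(float)   # any nonzero signal -> 1
    return np.corrcoef(cluster_mask, feature_mask)[0, 1]
```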
], [ "Silhouette Index", "The quality of the clustering is shown via the silhouette index in Fig. REF .", "The high median values (all $>$ 0.71) indicate the number and membership of the identified clusters is good, and the majority of datasets have medians above 0.80.", "Some individual members of clusters (i.e. pixels) yield lower silhouette scores, which may be due to noise in the data, the quality of the embedding, or the clustering process.", "The mean silhouette index is lower than the median in all cases but is $>$ 0.70 in most cases.", "The mean is likely to be less robust than the median given the skewness in these distributions.", "Robust clustering of experimental data is a challenging task where differences may be due to the underlying sample, acquisition method, noise, or data processing for example.", "Further investigation is required to determine this, though we do note that the high median scores indicate the embedded density method gives a reasonable estimate of the number of clusters.", "Figure: (a) Average silhouette index for each dataset.", "Averages are the median (black circles) $\pm $ the median absolute deviation (MAD), and mean (red crosses) $\pm $ the standard error in the mean (stderr).", "(b) Estimated $k$ , number of outliers and sparsity $\%$ for each IMC dataset." ], [ "Embedding Unseen Data", "The trained DSA provides a consistent embedding space for all images, providing a very efficient way to analyse a large number of images.", "As the clustering is done individually on each IMC dataset, the number of clusters and the assignment to specific regions vary with each dataset (see Fig. REF and Fig. REF ).", "Matching cluster regions between datasets is beyond the scope of this work, but could be achieved via linkage algorithms or performing t-tests for example."
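The silhouette summaries reported above can be obtained as sketched below (our own code using scikit-learn; for multi-million-pixel images a random subsample of pixels would likely be needed, which is an assumption rather than a detail stated in the text).

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def silhouette_summary(Z, labels):
    """Median +/- MAD and mean +/- stderr of the per-pixel silhouette index.

    Z: (N, 3) embedded coordinates; labels: (N,) k-means cluster assignments.
    """
    s = silhouette_samples(Z, labels)
    median = np.median(s)
    mad = np.median(np.abs(s - median))
    mean = s.mean()
    stderr = s.std(ddof=1) / np.sqrt(s.size)
    return {"median": median, "mad": mad, "mean": mean, "stderr": stderr}
```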
], [ "Comparison of $k$ and Runtime", "Another estimator of $k$ is to identify the inflection point in the total sum squared distances (SSD) to cluster centroids for increasing $k$ [28].", "This, however, requires the data first to be clustered over a wide range of $k$ , which is computationally restrictive for large data such as the IMC data here.", "Moreover, this is expensive even for a single dataset, and in this work we analyse 45, making it an extremely expensive alternative.", "However, as the inflection point estimator provides a comparison to our method, we calculate the SSD for increasing $k$ for all 45 IMC datasets, using a subset of each dataset to reduce the computational burden.", "Scaling the SSD by the maximum allows all 45 curves to be clearly viewed together, revealing that they all show a plateau around $k\ge $ 20 (Fig. REF ).", "The average $k$ estimated across the IMC datasets is given in Table REF .", "Due to a combination of noise and sub-setting the data, the SSD plots exhibit fluctuations and determining the exact inflection point is not robust.", "Hence we also determine the inflection point within a small tolerance (0.005).", "Our method compares well to both inflection point methods and only requires a single clustering run per dataset, compared to the 30 needed to generate the SSD plot in Fig. REF .", "Figure: Total sum squared distance to cluster centroids scaled by the maximum value, as a function of cluster number $k$ for all 45 IMC datasets.", "Table: Estimated $k$ averaged over the 45 IMC datasets with mean ($\pm $ standard error in the mean) and median ($\pm $ median absolute deviation).", "The Inflection Point is computed when the second derivative equals zero or when this is within a small tolerance factor (Tolerance) set to 0.005.", "Furthermore, the estimation of $k$ prior to clustering is very efficient compared to the SSD method, estimating $k$ in $\approx $ 2 s and $\approx $ 700 s respectively per dataset (see Table REF ).", "This equates to 1.5 minutes for our method and 8.6 hours for the SSD method for all 45 IMC images.", "The training time of the autoencoder was 316 s for the training data and is only required once for the 45 IMC datasets.", "This is less than half the time to generate an SSD plot for a single IMC dataset, highlighting the significant computational savings with this method.", "Table: Runtimes for estimating $k$ .", "$^*$ The autoencoder training time is averaged over the 45 datasets."
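For comparison, the SSD curve and a simple tolerance-based inflection estimate can be computed as below. This is an illustrative sketch only: the subset size, random seed and the exact tolerance rule used by the authors are not specified in the text, so the choices here are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def ssd_curve(Z, k_max=30, sample=50000, seed=0):
    """Total sum squared distances (inertia) versus k on a random pixel subset, scaled by its maximum."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(Z), size=min(sample, len(Z)), replace=False)
    ssd = [KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Z[idx]).inertia_
           for k in range(1, k_max + 1)]
    return np.asarray(ssd) / ssd[0]              # k = 1 gives the maximum SSD

def inflection_k(ssd, tol=0.005):
    """First k whose discrete second difference of the scaled SSD lies within tol of zero."""
    second = np.diff(ssd, n=2)                   # index i corresponds to k = i + 2
    hits = np.where(np.abs(second) <= tol)[0]
    return int(hits[0]) + 2 if hits.size else int(np.argmin(np.abs(second))) + 2
```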
], [ "Conclusion", "In this work we have developed a method for estimating the number of clusters in order to analyse multiplex cell imaging data.", "By using an unsupervised method to embed the data into a fixed-range space, our method can determine the cluster number automatically via the density of regions in the space, using the assumption that there are far fewer clusters than binned regions.", "We have demonstrated the use of this in detail with a single IMC dataset of colonic tissue from a patient with inflammatory bowel disease.", "Moreover, we show how this can be readily applied to other datasets as an efficient means to analyse large sets of multiplex images, 45 IMC images in this work.", "Our method is purely data driven and is able to circumvent the need for selecting a number of clusters a priori, thus providing a powerful approach for unguided data exploration.", "Finally, our method is extremely efficient and we show a reduction in computation time by two orders of magnitude compared to estimating via the total sum squared distances as a function of cluster number.", "Our generic methodology can be applied to other types of multiplex and hyperspectral data, and can utilise any future developments for embedding high dimensional data by replacing the DSA stage.", "Further developments of the method could include the usage of other clustering methods, e.g. hierarchical clustering for consistent clustering results." ] ]
2207.10469
[ [ "Complements of discriminants of real boundary singularities" ], [ "Abstract We study the topology of the complements of discriminants of simple real boundary singularities by counting the connected components of these sets and assigning to them certain topological characteristics.", "Results of this paper serve as a generalization of those recently acquired by Vassiliev arXiv:2109.12287 for ordinary function singularities." ], [ "Introduction", "In [1] Arnol'd found a deep connection between simple function singularities and the Dynkin diagrams $A_k$ , $D_k$ and $E_k$ and their corresponding Weyl groups.", "Later in [2] he expanded this classification to the diagrams of types $B_k$ , $C_k$ and $F_4$ , which turned out to be closely connected to simple singularities of functions on manifolds with boundary.", "The present work expands upon the work of V.A.Vassiliev (see [10]) by considering simple boundary singularities of types $B_k$ , $C_k$ and $F_4$ .", "For a function singularity $f:({\\mathbb {R}}^n, 0) \\rightarrow ({\\mathbb {R}}, 0)$ , $df|_0 = 0$ and its arbitrary smooth deformation $F:({\\mathbb {R}}^n \\times {\\mathbb {R}}^k, 0) \\rightarrow ({\\mathbb {R}},0)$ , which can be viewed as a family of functions $f_\\lambda = F(-,\\lambda ):{\\mathbb {R}}^n \\rightarrow {\\mathbb {R}}$ , s.t.", "$f_0 = f$ , we can define the set $\\Sigma = \\Sigma (F) \\subset {\\mathbb {R}}^k$ , called the (real) discriminant of $F$ to be the set of all parameters $\\lambda $ , s.t.", "$f_\\lambda $ has a zero critical value.", "In cases we will consider this will be an algebraic subset of codimension one in ${\\mathbb {R}}^k$ , which divides a small neighborhood of the origin in ${\\mathbb {R}}^k$ into several parts.", "The following theorem holds for standard versal deformations of real simple function singularities.", "Theorem 1 (E.Looijenga, [5]) All connected components of the complements of the real discriminant varieties of standard versal deformations of simple real function singularities are contractible.", "This theorem implies that the topology of the set ${\\mathbb {R}}^k \\setminus \\Sigma $ is completely defined by the number of connected components of this set.", "We will prove a similar result for singularities $B_k$ and $C_k$ .", "The author believes that this should also be true in the case $F_4$ , but this is yet to be proven.", "The topology and combinatorics of such complements was described by V.D.", "Sedykh in his works [7], [8] for simple singularities with Milnor number $\\le 6$ (see [7], Theorems 2.8 and 2.9 for the numbers of local components for singularities $D_4$ , $D_5$ , $D_6$ and $E_6$ ).", "Recently (see [10]) Vassiliev fully described the topology of the complement ${\\mathbb {R}}^k \\setminus \\Sigma $ for simple real function singularities and their versal deformations by listing the number of local components in each case and assigning a certain topological characteristic to each of them.", "We will enumerate the local components of the complements for $B_k$ , $C_k$ and $F_4$ boundary singularities and assign to them similar topological invariants.", "Any simple function singularity, up to stable equivalence, can be realized as a function $f:{\\mathbb {R}}^2 \\rightarrow {\\mathbb {R}}$ in two variables and has a versal deformation of dimension $\\mu $ , where $\\mu $ is the Milnor number of $f$ .", "For any parameter $\\lambda \\in {\\mathbb {R}}^\\mu $ of a versal deformation of a simple singularity, consider the set of lower values $W(\\lambda ) = 
\\lbrace x \\in {\\mathbb {R}}^2 | f_\\lambda (x) \\le 0\\rbrace $ .", "These sets can go to infinity along several asymptotic sectors the number of which stays the same for any $\\lambda $ for a given versal deformation.", "We say that two sets of lower values $W(\\lambda _1)$ and $W(\\lambda _2)$ are topologically equivalent if there exists an orientation-preserving homeomorphism of ${\\mathbb {R}}^2$ which sends $W(\\lambda _1)$ to $W(\\lambda _2)$ but doesn't permute the asymptotic sectors.", "Naturally, if two parameters $\\lambda _1$ and $\\lambda _2$ lie in the same connected component of ${\\mathbb {R}}^\\mu \\setminus \\Sigma $ then the corresponding sets of lower values are topologically equivalent.", "The main theorem of [10] states the converse is also true: Theorem 2 (Vassiliev,[10]) If $\\lambda _1$ and $\\lambda _2$ are non-discriminant points of the parameter space ${\\mathbb {R}}^\\mu $ of a versal deformation of a simple function singularity, and the corresponding sets $W(\\lambda _1), W(\\lambda _2)$ are topologically equivalent, then $\\lambda _1$ and $\\lambda _2$ belong to the same component of ${\\mathbb {R}}^\\mu \\setminus \\Sigma $ .", "Our main goal will be to prove a similar statement for simple boundary singularities, and describe the connected components of the complement of the discriminant and their corresponding sets of lower values." ], [ "Notions and Definitions", "Assuming the reader is familiar with basic notions of singularity theory (for a classic reference see [3], [4]) we will only describe analogues of the usual constructions for the case of boundary function singularities.", "Another good reference is [6].", "To avoid ambiguity, further in the text we will use the term ordinary singularity for singularities of functions on manifolds without boundary.", "Consider the space ${\\mathbb {R}}^n$ with a fixed hyperplane $\\lbrace x = (x_1, \\ldots , x_n) \\in {\\mathbb {R}}^n|x_1 = 0\\rbrace $ , which will act as a germ of an $n$ -dimensional manifold with boundary.", "A (real) boundary function singularity is a germ of a function $f: ({\\mathbb {R}}^n,0) \\rightarrow ({\\mathbb {R}},0)$ on a manifold with boundary such that 0 is a critical point of the restriction of $f$ onto the boundary, i.e.", "$\\frac{\\partial f}{\\partial x_i} \\bigg |_{x=0} = 0, \\; i =2,\\ldots ,n$ We'll call two boundary singularities $f_i$ , $i = 1,2$ equivalent if there exists a local diffeomorphism $\\varphi : {\\mathbb {R}}^n \\rightarrow {\\mathbb {R}}^n$ preserving the boundary s.t.", "$\\varphi ^* f_2 = f_1$ .", "Hence the classification problem can be formulated in terms of describing the orbits of action of the group $Loc_{B}({\\mathbb {R}}^n)$ of local diffeomorphisms preserving the boundary on the space of function germs.", "The modality of a singularity $f$ is the minimal number $m$ such that a sufficiently small neighborhood of $f$ in the space of germs can be covered by a finite number of no more than $m$ -parametric orbits of the action of $Loc_B({\\mathbb {R}}^n)$ .", "A singularity of modality 0 will be called simple.", "As was mentioned earlier, simple boundary singularities are classified by diagrams $B_k$ , $C_k$ and $F_4$ .", "A gradient ideal of an ordinary singularity is the ideal generated by its partial derivatives: $I_f = (\\frac{\\partial f}{\\partial x_1}, \\ldots , \\frac{\\partial f}{\\partial x_n} )$ which in case of boundary singularities is defined as $I_{f|x_1} = (x_1 \\frac{\\partial f}{\\partial x_1}, \\ldots , \\frac{\\partial f}{\\partial 
x_n}).$ ", "Local algebra of a germ $f$ is defined as the factor algebra $Q_{f} = {\mathbb {R}}[[x_1,\ldots ,x_n]]/I_f$ and its boundary analog as $Q_{f|x_1} = {\mathbb {R}}[[x_1,\ldots ,x_n]]/I_{f|x_1}.$", "It's easy to see that for equivalent singularities the corresponding local algebras are isomorphic, so this construction gives a powerful invariant of singularities.", "The Milnor number of an ordinary singularity is defined as $\mu = \dim Q_f$ , and in case of a boundary singularity as $\mu = \dim Q_{f|x_1}$ .", "A classic result is that a germ $f$ has finite Milnor number iff $f$ is an isolated singularity.", "For boundary singularities we also define two additional numbers $\mu _0 = \dim {\mathbb {R}}[[x_1,\ldots ,x_n]]/(\partial f/\partial x_2 |_{x_1 = 0}, \ldots , \partial f/\partial x_n |_{x_1 = 0})$ $\mu _1 = \dim {\mathbb {R}}[[x_1,\ldots ,x_n]]/(\frac{\partial f}{\partial x_1}, \ldots , \frac{\partial f}{\partial x_n}).$ The number $\mu _0$ is the Milnor number of $f$ viewed as a function on the boundary $\lbrace x_1 = 0\rbrace $ and $\mu _1$ is the Milnor number of $f$ viewed as an ordinary germ in ${\mathbb {R}}^n$ .", "It's easy to see that $\mu = \mu _1 + \mu _0$ .", "Singularities for which $\mu _1 = 0$ will be called purely boundary.", "All the above definitions also work if we take ${\mathbb {C}}$ as the base field and consider holomorphic functions instead of smooth ones.", "The classification of boundary singularities also includes ordinary ones: from a function $f:{\mathbb {R}}^n \rightarrow {\mathbb {R}}$ we can obtain a purely boundary singularity $\tilde{f}(x_0,x) = x_0 + f(x)$ , for a manifold ${\mathbb {R}}^{n+1}$ with boundary $\lbrace (x_0,x) | x_0 = 0\rbrace $ .", "Thus the simple boundary function singularities also include ones of types $A_k$ , $D_k$ and $E_k$ .", "The remaining types, specific to the boundary case, are listed in table 1.", "Table: Normal forms of real simple boundary singularities in two variables $(x,y)$, with boundary given by $x = 0$.", "A deformation of a germ $f(x)$ is a function $F(x,\lambda ):({\mathbb {R}}^n \times {\mathbb {R}}^l, 0) \rightarrow ({\mathbb {R}},0)$ s.t. $F(x,0) = f(x)$ .", "The space ${\mathbb {R}}^l$ is called the base of deformation $F$ .", "We call a deformation $F$ versal if any other deformation can be induced from it, meaning for any $G:{\mathbb {R}}^n\times {\mathbb {R}}^k \rightarrow {\mathbb {R}}$ , $G(x,0) = f(x)$ we can find a germ of a smooth map $\psi :{\mathbb {R}}^k \rightarrow {\mathbb {R}}^l$ and a diffeomorphism $\eta (x,\xi )$ of ${\mathbb {R}}^n$ , $\xi \in {\mathbb {R}}^k$ , which depends smoothly on $\xi $ and s.t. $\eta (x,0) = x$ , so that $G(x, \xi ) = F(\eta (x,\xi ),\psi (\xi ))$ Geometrically a deformation is just a germ of a surface at the point $f$ in the space of germs, and it is versal iff this germ is transversal to the orbit of $f$ under the action of $Loc({\mathbb {R}}^n)$ .", "The deformation is called miniversal, if the dimension of the parameter space is minimal.", "For a boundary function singularity $f$ with local algebra $Q_{f|x_1}$ and (finite) Milnor number $\mu $ the (mini)versal deformation can be obtained by setting $F(x,\lambda ) = f(x) + \sum _{i = 1}^\mu \lambda _i f_i(x)$ where $f_i$ form a basis of the local algebra $Q_{f|x_1}$ .", "Of course, the same construction holds in the ordinary case.", "Hence, the miniversal deformations for simple boundary singularities can be given as follows:
$\begin{aligned} B_\mu &: \quad f_\lambda (x,y) = x^\mu \pm y^2 + \lambda _1 x^{\mu -1} + \ldots + \lambda _\mu ,\\ C_\mu &: \quad f_\lambda (x,y) = xy \pm y^\mu + \lambda _1 y^{\mu -1} + \ldots + \lambda _\mu ,\\ F_4 &: \quad f_\lambda (x,y) = \pm x^2 + y^3 + \lambda _1 x + \lambda _2 y + \lambda _3 xy + \lambda _4 .\end{aligned}$ Given a (mini)versal deformation $F(x,\lambda ) = f_\lambda (x)$ of a boundary singularity we define the discriminant variety as the subset $\Sigma \subset {\mathbb {R}}^\mu $ of the parameter space consisting of all singular parameter values.", "A parameter $\lambda \in {\mathbb {R}}^\mu $ is called non-singular if (1) 0 is a regular value of the function $f_\lambda $ and (2) the zero level set $V(\lambda ) = f^{-1}_\lambda (0)$ is transversal to the boundary.", "In the case of ordinary singularities only the first condition is required.", "For a boundary singularity the discriminant consists of two parts corresponding to these conditions.", "We will denote by $\Sigma _0$ the component corresponding to the first condition, and by $\Sigma _1$ the one corresponding to the second.", "The discriminant of an ordinary singularity is an irreducible affine variety, and it follows that in the boundary case the components $\Sigma _i$ are also irreducible.", "In her note [9] I. G. Shcherbak introduced the notion of decomposition of a boundary singularity $f$ .", "The decomposition is defined as a pair (type of $f$ as an ordinary singularity, type of the restriction $f|_{\lbrace x_1=0\rbrace })$ .", "Naturally, any boundary singularity possesses such a decomposition; moreover, there exists an involution on the set of boundary singularities that swaps these types.", "The singularities that go into each other under this involution are called dual.", "In particular, the singularity $B_\mu $ has a decomposition $(A_{\mu -1}, A_1)$ , $C_\mu $ the decomposition $(A_1, A_{\mu -1})$ and $F_4$ the decomposition $(A_2, A_2)$ .", "This means that the singularities $B_\mu $ and $C_\mu $ are dual (and $B_2 = C_2$ is self-dual), and $F_4$ is self-dual.", "For a boundary singularity $f$ that decomposes into $(f_0, f_1)$ the sets $\Sigma _0$ and $\Sigma _1$ are diffeomorphic to the discriminants $\Sigma _{f_0}$ and $\Sigma _{f_1}$ multiplied by Euclidean spaces of suitable dimensions.", "This will be useful later in the study of the discriminant set of $F_4$ .", "As mentioned earlier, the following statement holds: Proposition 1 All connected components of the complements of the real discriminant varieties of versal deformations 1-2 of singularities $B_\mu $ , $C_\mu $ are contractible.", "Recall that by $W(\lambda ) = f_\lambda ^{-1}((-\infty , 0])$ we denote the set of lower values for $\lambda $ .", "Given a deformation of a boundary singularity, we say that two sets of lower values $W(\lambda _1)$ and $W(\lambda _2)$ are topologically equivalent if there exists an orientation preserving homeomorphism of ${\mathbb {R}}^n$ sending one set to the other, which preserves the boundary and does not permute the asymptotic sectors of $f$ and its restriction to the boundary $f|_{x_1 = 0}$ .", "The number of asymptotic sectors is equal to 0 in case $B^+_{2k}$ , 1 in cases $B_{2k+1}$ and $F_4$ and 2 in cases $C_\mu $ and $B^-_{2k}$ .", "Now we are ready to formulate the main theorem: Theorem 3 The numbers of components of the complements of the real discriminant varieties of deformations 1-3 are listed in Table 1, and each of these components is uniquely defined by
the topological type of the corresponding set of lower values.", "The proofs of Theorems 3 and 4 in the cases $B_\\mu $ and $C_\\mu $ will be presented in the next section, and the proof in the case $F_4$ in sections 4 and 5." ], [ "Cases $B_\\mu $ and $C_\\mu $", "As mentioned earlier, the discriminant $\\Sigma $ consists of two components $\\Sigma _0$ and $\\Sigma _1$ , each defined by the corresponding condition.", "For $B_\\mu $ singularities the first condition means that the polynomial $x^\\mu + \\lambda _1 x^{\\mu -1} + \\ldots + \\lambda _\\mu $ can only have simple roots, and the second condition means that 0 is not a root of this polynomial.", "For $C_\\mu $ these conditions interchange.", "Denote $h_\\lambda (x) = \\pm x^\\mu + \\lambda _1 x^{\\mu -1} + \\ldots + \\lambda _\\mu $ ; then the deformation can be expressed as $f_\\lambda (x,y) = h_\\lambda (x) \\pm y^2$ , and the equation for the zero set is given by $y^2 = -h_\\lambda (x)$ .", "Hence the zero set is symmetric with respect to the line $\\lbrace y = 0\\rbrace $ and consists of some number of ovals, each intersecting the line $\\lbrace y=0\\rbrace $ at a neighboring pair of roots of $h_\\lambda $ , and of no more than two non-compact components.", "Any connected component of the complement ${\\mathbb {R}}^\\mu \\setminus \\Sigma $ is completely defined by the configuration of roots of the polynomial $h_\\lambda (x)$ .", "If $\\lambda $ is a non-discriminant parameter, then $h_\\lambda $ has only simple non-zero real roots; denoting by $p$ and $q$ the numbers of its negative and positive roots, the parity of $p+q$ coincides with that of $\\mu $ , since the non-real roots occur in complex-conjugate pairs.", "For any $\\lambda $ such that $h_\\lambda $ has $p$ negative and $q$ positive roots there is an obvious path to any other $\\lambda ^{\\prime }$ with the same numbers of negative and positive roots.", "Moreover, we can construct a homotopy contracting the whole connected component of $\\lambda $ to any of its points.", "It's also easy to see that the numbers $p, q$ uniquely define the topological type of $W(\\lambda )$ .", "In the case $C_\\mu $ the zero set is given by the equation $xy = -h_\\lambda (y),$ hence, if $\\lambda $ is non-discriminant, it consists of the graph of the function $x = -h_\\lambda (y)/y$ (notice that by changing the equation this way we don't lose the solution $y=0$ , as otherwise $\\lambda $ would be a discriminant parameter).", "This function has a vertical asymptote $y=0$ and, as before, the topological type of $W(\\lambda )$ and the connected component of $\\lambda $ are completely defined by the numbers $p$ and $q$ of negative and positive roots of $h_\\lambda $ .", "The above description gives the proofs of Proposition 1 and Theorem 3 in the cases $B_\\mu $ and $C_\\mu $ ."
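As a small computational aside (not part of the original argument), the identification of the connected component of a non-discriminant parameter by the pair $(p,q)$ is easy to automate. The sketch below uses SymPy, takes the $+$ sign in $h_\lambda$, and declares a parameter discriminant whenever $h_\lambda$ has a multiple real root or the root $x=0$; the function name and the example values are ours.

```python
# Sketch: identify the connected component of a non-discriminant parameter for a
# B_mu deformation by the numbers (p, q) of negative and positive roots of h_lambda.
import sympy as sp

x = sp.symbols('x')

def component_label(lams):
    """lams = [lam_1, ..., lam_mu] for h(x) = x**mu + lam_1*x**(mu-1) + ... + lam_mu.
    Returns (p, q) if lambda is non-discriminant, otherwise None."""
    mu = len(lams)
    h = x**mu + sum(c * x**(mu - 1 - i) for i, c in enumerate(lams))
    rroots = sp.Poly(h, x).real_roots()      # real roots, repeated with multiplicity
    if len(set(rroots)) != len(rroots) or any(r == 0 for r in rroots):
        return None                          # multiple or zero real root: discriminant parameter
    p = sum(1 for r in rroots if r < 0)
    q = sum(1 for r in rroots if r > 0)
    return p, q

# mu = 3 example: h(x) = (x + 1)(x - 1)(x - 2) = x**3 - 2*x**2 - x + 2  ->  (p, q) = (1, 2)
print(component_label([-2, -1, 2]))
```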
], [ "The case $F_4$", "In this section we will obtain 6 out of possible 8 types of sets of lower values for $F_4$ and study a certain hyperplane section of the discriminant $\\Sigma $ .", "For brevity we will denote the versal deformation of $F_4$ by $f_\\lambda (x,y) = x^2 + y^3 + a x + b y + c xy + d.$ The non-transversality condition for points of $\\Sigma _1$ means the polynomial $y^3 + b y + d$ has a non-simple root.", "A direct calculation gives the following equation: $27d^2 + 4b^3 = 0,$ meaning $\\Sigma _1$ is a direct product of a cusp in the plane $(b, d)$ and a plane ${\\mathbb {R}}^2$ spanned by coordinates $a, c$ .", "Figure: The 3-dimensional section of the discriminant variety of F 4 F_4 and corresponding zero setsAs we will see later, the equation for $\\Sigma _0$ is also computable, however it turns out to be quite complex.", "However, if we restrict our deformation to dimension 3 by setting $c = 0$ , we will get a nice section of $\\Sigma _0$ by the plane $\\lbrace c = 0\\rbrace $ , in which the equation will have the following form: $27 \\left(d + \\frac{a^2}{4}\\right)^2 + 4b^3 = 0$ meaning $\\Sigma ^{\\prime }_0 = \\Sigma _0 \\cap \\lbrace c = 0\\rbrace $ is a cuspidal edge bent along the parabola given by equations $d = -a^2/4, b = 0$ .", "As depicted in fig.", "1 edges of $\\Sigma ^{\\prime }_0$ and $\\Sigma ^{\\prime }_1$ are tangent at the origin, and these sets are tangent along the cusp lying in the plane $\\lbrace a = 0\\rbrace $ .", "It's easy to see that the local components of the set ${\\mathbb {R}}^3 \\setminus \\Sigma ^{\\prime } = {\\mathbb {R}}^3 \\setminus (\\Sigma ^{\\prime }_0 \\cup \\Sigma ^{\\prime }_1)$ (here ${\\mathbb {R}}^3$ denotes the hyperplane $\\lbrace c=0\\rbrace $ of the parameter space) are all contractible, and the number of these components is equal to 6.", "As also depicted in fig.", "1, corresponding sets of lower values are all distinct and correspond to different topological types of the real elliptic curve with respect to the boundary $x^2 + y^3 + a x + b y + d = 0.$ Concrete realizations of these sets are easy to obtain by taking values of the parameter $\\lambda = (a,b,0,d)$ to lie in the corresponding component.", "Figure: Possible remaining topological types of sets of lower values for F 4 F_4.", "Notice that for each set in the bottom row, except for №10, the one reflected through the boundary is also a possible type." ], [ "Further calculations for $F_4$", "In order to complete the proof of the main theorem we need to describe the topology of the set ${\\mathbb {R}}^4 \\setminus \\Sigma $ for $F_4$ .", "In this section we will find out which remaining topological types of sets of lower values can be realized through the versal deformation and calculate the homology of the complement." 
], [ "Remaining topological types of $W(\\lambda )$ for {{formula:caaa0cfa-9223-464f-affe-6e9d617021b4}}", "For the versal deformation $f_\\lambda (x,y)$ the corresponding zero set $f_\\lambda ^{-1}(0)$ is either a non-compact line going to infinity along the $x$ -axis, or a union of such a line with a compact oval.", "Notice also, that since the polynomial $f_\\lambda (0,y)$ in $y$ has degree 3 the number of points of intersection of the zero set with the boundary $\\lbrace x=0\\rbrace $ is equal to 1 or 3 if $\\lambda $ is a non-discriminant parameter.", "Thus the possible remaining topological types of sets of lower values look like ones listed in fig.2.", "However, only the sets 7 and 8 can be realized as ones coming from the versal deformation of $F_4$ , and as we'll see later that these are the only ones remaining.", "Concrete realizations of the types 7 and 8 can be obtained as follows.", "First, a direct computation shows that the component of the set $\\Sigma _0 \\cap \\Sigma _1$ (which will be further denoted by $\\Xi _0$ ) corresponding to functions $f_\\lambda $ that have a Morse critical point with zero critical value which lies on the boundary, can be parametrized through $c$ and $d$ the following way: $f_\\lambda (x,y) = x^2 + y^3 - c \\@root 3 \\of {\\frac{d}{2}} x - 3\\@root 3 \\of {\\frac{d}{4}} y + cxy + d.$ We first take a sufficiently small $d > 0$ and $c \\ne 0$ (the sign of $c$ dictates what type of set of lower values, 7 or 8, will be obtained), so that the root of multiplicity 2 of the polynomial $f_\\lambda (0,y)$ in $y$ would be greater than the remaining root.", "This produces a zero set shown in the leftmost picture of fig.3.", "Notice that a substitution of the form $x \\mapsto x - \\varepsilon $ can be realized through our deformation by an appropriate change of parameters $d$ and $a$ , hence we can move the boundary away from the crossing to obtain the middle picture of fig.3.", "Finally, by subtracting a small constant $\\delta > 0$ from our function we can remove the crossing so that the curve splits into two separate components, hence we obtain the sets 7 and 8.", "Figure: Construction of the remaining sets of lower values" ], [ "Homological calculations for the complement", "One way to calculate the number of connected components of the set ${\\mathbb {R}}^4 \\setminus \\Sigma $ is to study its reduced cohomology groups $\\tilde{H}^i({\\mathbb {R}}^4 \\setminus \\Sigma )$ .", "By Alexander duality ($\\tilde{H}^*$ denotes the reduced cohomology group, $\\bar{H}_*$ - the Borel-Moore homology) we have $\\tilde{H}^i({\\mathbb {R}}^4 \\setminus \\Sigma ) \\simeq \\bar{H}_{3-i}(\\Sigma )$ We will prove the following theorem, which will imply the completeness of the lists of topological types given in this and previous section.", "Theorem 4 The (reduced) homology group $\\bar{H}_i(\\Sigma ;{\\mathbb {Z}}_2)$ is isomorphic to $({\\mathbb {Z}}_2)^{7}$ if $i=3$ and is trivial otherwise.", "Remark 1 Such homology groups can be studied by standard methods developed by Vassiliev (see [11], [12]), which were used in [10] in order to compute cohomology of the complements of the discriminant varieties of $D_\\mu $ singularities.", "We will use a different approach.", "The conditions on the parameters of $\\Sigma _0$ yield the following system of polynomial equations: ${\\left\\lbrace \\begin{array}{ll}f_\\lambda (x,y) = x^2 + y^3 + a x + b y + c xy + d = 0\\\\\\frac{\\partial f_\\lambda }{\\partial x} = 2x + a + cy = 0 \\\\\\frac{\\partial f_\\lambda }{\\partial y} = 
3y^2 + cx + b = 0\\end{array}\\right.}$ To get an equation on $\\lambda $ , we can use the second equation to substitute for $x$ a linear function of $y$ .", "We obtain a system of two polynomial equations, for which we can then write down the resultant in $y$ to eliminate $y$ .", "Notice that taking the resultant does not add any imaginary solutions to this system, as this would imply that the deformation $f_\\lambda $ , as a function of a complex variable, has two distinct conjugate critical points with critical value 0, which is impossible for the singularity $A_2$ , the type of $F_4$ as an ordinary singularity.", "As mentioned earlier, the equation for $\\Sigma _1$ has the form $27d^2 + 4b^3 = 0,$ so we obtain a system of two polynomial equations for the intersection $\\Sigma _0 \\cap \\Sigma _1$ .", "The solution of this system consists of two 2-dimensional irreducible components $\\Xi _0$ and $\\Xi _1$ , each homeomorphic to ${\\mathbb {R}}^2$ .", "The component $\\Xi _0$ consists of the parameters $\\lambda $ corresponding to deformations $f_\\lambda $ that have an ordinary Morse critical point with critical value 0 lying on the boundary, together with the closure of the set of such points; the component $\\Xi _1$ corresponds to deformations for which the zero set is non-transversal to the boundary and which have an ordinary Morse critical point with critical value 0 outside the boundary (again together with the closure).", "Once again, we can calculate the intersection $\\Xi _0 \\cap \\Xi _1$ .", "It turns out that $\\Xi _0$ intersects $\\Xi _1$ along two curves $\\Psi _0$ and $\\Psi _1$ , which intersect transversely at the origin and are both homeomorphic to ${\\mathbb {R}}^1$ .", "The curve $\\Psi _0$ corresponds to deformations that have a Morse critical point at the origin such that one of the branches of the curve $f^{-1}_\\lambda (0)$ is tangent to the boundary.", "The curve $\\Psi _1$ consists of the parameters $\\lambda $ for which the deformation has a critical point of type $A_2$ lying on the boundary.", "Recall that the two components of the discriminant $\\Sigma $ of a boundary singularity can be obtained from the discriminants of the ordinary singularities into which it decomposes.", "As $F_4$ has the decomposition $(A_2,A_2)$ , the components $\\Sigma _0$ and $\\Sigma _1$ are both diffeomorphic to a cusp multiplied by ${\\mathbb {R}}^2$ , meaning that both $\\Sigma _i$ are homeomorphic to ${\\mathbb {R}}^3$ .", "Now we can calculate the Borel-Moore homology of $\\Sigma $ by first applying the Mayer-Vietoris long exact sequence to the decomposition $\\Sigma _0 \\cap \\Sigma _1 = \\Xi _0 \\cup \\Xi _1$ and after that to $\\Sigma = \\Sigma _0 \\cup \\Sigma _1$ (the ${\\mathbb {Z}}_2$ coefficients are omitted): $\\ldots \\rightarrow \\bar{H}_j(\\Xi _0 \\cap \\Xi _1) \\rightarrow \\bar{H}_j(\\Xi _0) \\oplus \\bar{H}_j(\\Xi _1) \\rightarrow \\bar{H}_j(\\Xi _0 \\cup \\Xi _1) \\rightarrow \\ldots $", "Since the one-point compactification of $\\Psi _0 \\cup \\Psi _1 = \\Xi _0 \\cap \\Xi _1$ is homotopy equivalent to a bouquet of 3 circles, we get that $\\bar{H}_1(\\Psi _0 \\cup \\Psi _1) \\simeq ({\\mathbb {Z}}_2)^3$ and 0 in other dimensions.", "Hence, from the long exact sequence we get $\\bar{H}_j(\\Xi _0 \\cup \\Xi _1) = {\\left\\lbrace \\begin{array}{ll}({\\mathbb {Z}}_2)^5 & \\text{for} \\; j = 2, \\\\0 & \\text{otherwise.}\\end{array}\\right.}$ Now applying this calculation to the exact sequence $\\ldots \\rightarrow \\bar{H}_j(\\Sigma _0 \\cap \\Sigma _1) \\rightarrow \\bar{H}_j(\\Sigma _0) \\oplus \\bar{H}_j(\\Sigma _1) \\rightarrow \\bar{H}_j(\\Sigma ) \\rightarrow \\ldots $ we get that $\\bar{H}_j(\\Sigma ) = {\\left\\lbrace \\begin{array}{ll}({\\mathbb {Z}}_2)^7 & \\text{for} \\; j = 3, \\\\0 & \\text{otherwise,}\\end{array}\\right.}$ which completes the proof." ] ]
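The elimination described at the beginning of this computation is straightforward to reproduce with a computer algebra system. The following SymPy sketch (an illustration, not taken from the paper) solves $\partial f_\lambda/\partial x = 0$ for $x$, substitutes, and eliminates $y$ with a resultant; the output is a rather long polynomial in $(a,b,c,d)$ whose zero set contains $\Sigma_0$, and it is not reproduced here.

```python
# Sketch of the elimination used for Sigma_0: substitute x = -(a + c*y)/2 from
# df/dx = 0 into f = 0 and df/dy = 0, then eliminate y with a resultant.
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d')
f = x**2 + y**3 + a*x + b*y + c*x*y + d

xsol = sp.solve(sp.diff(f, x), x)[0]              # x = -(a + c*y)/2
g1 = sp.expand(4 * f.subs(x, xsol))               # f = 0 on the critical locus (scaled)
g2 = sp.expand(2 * sp.diff(f, y).subs(x, xsol))   # df/dy = 0 on the critical locus (scaled)
sigma0 = sp.expand(sp.resultant(g1, g2, y))       # polynomial vanishing on Sigma_0
print(len(sigma0.as_ordered_terms()), sp.degree(sigma0, d))
```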
2207.10518
[ [ "A micropolar shell formulation for hard-magnetic soft materials" ], [ "Abstract Hard-magnetic soft materials (HMSMs) are particulate composites that consist of a soft matrix embedded with particles of high remnant magnetic induction.", "Since the application of an external magnetic flux induces a body couple in HMSMs, the Cauchy stress tensor in these materials is asymmetric, in general.", "Therefore, the micropolar continuum theory can be employed to capture the deformation of these materials.", "On the other hand, the geometries and structures made of HMSMs often possess small thickness compared to the overall dimensions of the body.", "Accordingly, in the present contribution, a 10-parameter micropolar shell formulation to model the finite elastic deformation of thin structures made of HMSMs and subject to magnetic stimuli is developed.", "The present shell formulation allows for using three-dimensional constitutive laws without any need for modification to apply the plane stress assumption in thin structures.", "Due to the highly nonlinear nature of the governing equations, a nonlinear finite element formulation for numerical simulations is also developed.", "To circumvent locking at large distortions, an enhanced assumed strain formulation is adopted.", "The performance of the developed formulation is examined in several numerical examples.", "It is shown that the proposed formulation is an effective tool for simulating the deformation of thin bodies made of HMSMs." ], [ "Introduction", " Magneto-active soft materials consist of magnetizable particles embedded into a soft elastomeric matrix and exhibit large mechanical deformations under magnetic stimuli.", "These materials have found potential applications in, e.g., sensors, actuators, vibration absorbers, isolators, soft and flexible electronics, and soft robots (see, e.g., [1], [2], [4], [3], [5], [6] and references therein).", "For the optimum and effective design of various devices made of these materials, it is essential to develop reliable theoretical formulations for predicting their response under various geometries and loading conditions.", "Based on the type of the embedded particles, magneto-active soft materials are divided into two sub-classes, namely soft-magnetic soft materials (SMSMs) and hard-magnetic soft materials (HMSMs).", "The former contains particles with low coercivity, such as iron or iron oxides, and their magnetization vector changes by applying external magnetic stimuli.", "This sub-class has been the subject of a huge amount of research work in the past two decades (e.g., [7], [8], [9], [10], [11], [12], [13]).", "The latter sub-class is composed of particles, such as CoFe$_2$ O$_4$ or NdFeB, that have a high coercivity, and their magnetization vector, or equivalently, their remnant magnetic flux, remains unchanged for a wide range of the applied external magnetic flux (e.g., [14], [15]).", "One of the main characteristics of HMSMs is that they quickly undergo large deformations under relatively small values of external magnetic induction (e.g., [16], [17]).", "Moreover, using the 3D printing technologies, it is possible to program the local orientation of the magnetized particles, which leads to the desired complex deformations [18], [19], [20], [21], [22].", "Theoretical modeling of HMSMs has been the subject of a plethora of research articles in recent years (e.g., [23], [24], [25], [26], [27], [28], [29], [30], [31], [32]).", "In particular, Zhao et al.", "[24] developed a continuum formulation 
with an asymmetric Cauchy stress tensor, which is very similar to the classical continuum theory in the sense that it neither needs non-classical material parameters nor additional degrees of freedom.", "Their theory has been the foundation for the analysis of hard-magnetic soft beams (HMSBs) in Wang et al.", "[33], Chen et al.", "[34], [35], Rajan and Arockiarajan [36], and Yan et al.", "[37] among others.", "The same formulation has been employed to model the deformation of magneto-active shells by Yan et al. [38].", "Dadgar-Rad and Hossain [39] added the viscoelastic effects to the theory of Zhao et al.", "[24] to analyze the time-dependent dissipative response of HMSBs.", "Micromechanical and lattice models for the deformation analysis of HMSMs have been also formulated by Zhang et al.", "[28], Garcia-Gonzalez and Hossain [29], [30], and Ye et at. [31].", "From a different point of view, Dadgar-Rad and Hossain [32] focused on the well-known phenomenon that the interaction between remnant and external magnetic fluxes induces a body couple on the continuum body (e.g., [40]).", "Therefore, the Cauchy stress in HMSMs is asymmetric, as had been previously pointed out by Zhao et al. [24].", "However, instead of following the methodology advocated in [24], the authors developed a formulation based on micropolar continuum theory to model the finite deformation of three-dimensional bodies made of HMSMs.", "Two significant differences between the results of the formulation of Zhao et al.", "[24] and those based on the micropolar-enhanced formulation have been expressed by Dadgar-Rad and Hossain [32].", "Eringen and his coworkers established the theoretical foundations of micropolar theory (e.g., [41], [42], [43]).", "In this theory, each material particle is associated with a micro-structure that can undergo rigid rotations independently from its surrounding medium.", "Formulations of the micropolar theory to model localized elastic-plastic deformations (e.g., [44], [45], [46], [47], [48], [49], [50]) and size-dependent elastic deformations (e.g., [53], [51], [52], [54], [55], [56]) have been developed.", "Some formulations to model micropolar shells have been also proposed (e.g., [59], [57], [58]).", "Moreover, the theory has been used in the modeling of lattice structures, crystal plasticity, phononic crystals, chiral auxetic lattices, phase-field fracture mechanics, and vertebral trabecular bone (e.g., [60], [61], [62], [63], [64], [65]).", "The current research is essentially the continuation of the previous work of the authors, namely Dadgar-Rad and Hossain [32], which had been developed for three-dimensional bodies.", "However, most bodies made of HMSMs are thin structures, and using three-dimensional elements is computationally expensive.", "Therefore, the main objective of this work is to develop a shell formulation based on the micropolar continuum theory to predict the deformation of thin HMSMs.", "To do so, the 7-parameter shell formulation of Sansour (e.g., [66], [67]) has been extended to a 10-parameter one that takes into account the micro-rotation of the micropolar theory.", "On the other hand, one of the successful methods for eliminating locking effects in shell structures is the enhanced assumed strain method (EAS), e.g., [68], [69], [70], [71].", "Accordingly, this method is adopted here to circumvent locking effects in the present micropolar shell formulation.", "The rest of this paper is organized as follows: In Section , the basic kinematic and kinetic relations of the 
micropolar continuum theory are presented.", "In Section , the main characteristics of hard-magnetic soft materials are introduced.", "The kinematic equations describing a 10-parameter micropolar shell model are provided in Section .", "The variational formulation of the problem is then formulated in Section .", "A nonlinear finite element formulation for the numerical simulation of the related problems is developed in Section .", "Several numerical examples are solved in Section .", "Finally, a summary of the work is provided in Section .", "Notation: Throughout this work, all lower-case and upper-case Latin indices range over $\\lbrace 1, 2, 3\\rbrace $ , and Greek indices range over $\\lbrace 1, 2\\rbrace $ .", "Upper-case Latin indices with calligraphic font, e.g., $\\mathcal {I}$ and $\\mathcal {J}$ , do not obey a general rule and take the specified values defined in the corresponding equations.", "The summation convention holds over all repeated Greek and Latin indices.", "For the two second-order tensors ${P}$ and ${Q}$ , the tensorial products defined based on the symbols $\\otimes $ , $\\odot $ , and $\\boxtimes $ generate fourth-order tensors, so that the corresponding components are given by $(\\mathcal {C})_{ijkl}=({P}\\otimes {Q})_{ijkl}=P_{ij}Q_{kl}$ , $(\\mathcal {B})_{ijkl}=({P}\\odot {Q})_{ijkl} =P_{ik}Q_{jl}$ , and $(\\mathcal {C})_{ijkl}=({P}\\boxtimes {Q})_{ijkl} =P_{il}Q_{kj}$ , respectively.", "The notations $\\hbox{tr}\\hspace{1.111pt}{P}$ , ${P}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}$ , $\\det {P}$ , ${P}^{-1}$ , and ${P}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}$ are the trace, transpose, determinant, inverse, and inverse transpose of the second-order tensor ${P}$ .", "For numerical simulations, the notation $\\mathbb {P}=\\lbrace P_{11}, P_{22}, P_{33}, P_{12}, P_{21} , P_{13} , P_{31} , P_{23}, P_{32} \\rbrace ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}$ will be used as the $9 \\times 1 $ vectorial representation of the arbitrary second-order tensor ${P}$ ." 
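As a small aid to the notation above (an illustrative sketch, not part of the paper; the helper names are ours), the three tensor products and the $9\times 1$ vectorial representation can be implemented directly, e.g. with NumPy:

```python
# Helpers for the tensor products and the 9x1 vector ordering defined in the Notation paragraph.
import numpy as np

def otimes(P, Q):    # (P (x) Q)_ijkl = P_ij Q_kl
    return np.einsum('ij,kl->ijkl', P, Q)

def odot(P, Q):      # (P (.) Q)_ijkl = P_ik Q_jl
    return np.einsum('ik,jl->ijkl', P, Q)

def boxtimes(P, Q):  # (P [x] Q)_ijkl = P_il Q_kj
    return np.einsum('il,kj->ijkl', P, Q)

# 9x1 vectorial representation {P11, P22, P33, P12, P21, P13, P31, P23, P32}
_ORDER = [(0, 0), (1, 1), (2, 2), (0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]

def vec9(P):
    return np.array([P[i, j] for i, j in _ORDER])

P = np.arange(9.0).reshape(3, 3)
Q = np.eye(3)
assert np.allclose(odot(P, Q)[0, 2, 1, 2], P[0, 1] * Q[2, 2])  # spot check of the (.) product
print(vec9(P))
```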
], [ "A brief review of the micropolar continuum theory", " Some fundamental relations of the micropolar continuum theory are presented in this section.", "The interested reader is referred to the pioneering works developed in Refs.", "[42], [43], [46] for more details and discussions.", "In this section, to describe, respectively, the material and spatial quantities, two coincident Cartesian coordinate systems $\\lbrace X_1,X_2,X_3\\rbrace $ and $\\lbrace x_1,x_2,x_3\\rbrace $ are consdiered.", "The corresponding orthonormal basis vectors are denoted by $\\lbrace \\mathbb {e}_1,\\mathbb {e}_2,\\mathbb {e}_3\\rbrace $ and $\\lbrace \\mathbb {E}_1,\\mathbb {E}_2,\\mathbb {E}_3\\rbrace $ , respectively.", "The right gradient and right divergence operators of the form $\\hbox{Grad}\\hspace{1.111pt}\\lbrace \\bullet \\rbrace = \\frac{ \\partial \\lbrace \\bullet \\rbrace }{\\partial X_I}\\otimes \\mathbb {E}_I $ , $\\hbox{Div}\\hspace{1.111pt}\\lbrace \\bullet \\rbrace = \\frac{ \\partial \\lbrace \\bullet \\rbrace }{\\partial X_I}\\cdot \\mathbb {E}_I$ , $\\hbox{grad}\\hspace{1.111pt}\\lbrace \\bullet \\rbrace = \\frac{ \\partial \\lbrace \\bullet \\rbrace }{\\partial x_i}\\otimes \\mathbb {e}_i$ , and $\\hbox{div}\\hspace{1.111pt}\\lbrace \\bullet \\rbrace = \\frac{ \\partial \\lbrace \\bullet \\rbrace }{\\partial x_i}\\cdot \\mathbb {e}_i $ are used in this work.", "Let $\\mathcal {B}_0$ and $\\mathcal {B}$ be the reference and current configurations of the continuum body at the times $t=0$ and $t>0$ , respectively.", "At each material point in $\\mathcal {B}_0$ a macro-element is considered, the center of which is denoted by $\\mathbb {X}$ .", "After deformation by the macro-deformation $\\psi $ , the center of the macro-element in $\\mathcal {B}$ is denoted by $\\mathbb {x}$ , so that $\\mathbb {x}=\\psi (\\mathbb {X},t)$ .", "As usual, the local deformation of the macro-element is described by the deformation gradient tensor ${F}$ , given by ${F}=\\hbox{Grad}\\hspace{1.111pt}\\psi =\\frac{\\partial x_i}{\\partial X_I} \\mathbb {e}_i \\otimes \\mathbb {E}_I,\\quad J=\\det {F}>0.$ From the polar decomposition theorem, the deformation gradient is uniquely decomposed as ${F}={R}{U}= {V}{R}$ .", "Here, ${R}$ is the macro-rotation tensor, and ${U}$ and ${V}$ are the symmetric positive definite right and the left stretch tensors, respectively.", "For later use, the variation of the deformation gradient is written as follows: $\\begin{split}\\delta {F}= \\hbox{Grad}\\hspace{1.111pt}\\delta \\hat{\\mathbb {u}} = \\delta {Y}{F}\\quad \\text{with} \\quad (\\delta {Y})_{ij} \\stackrel{\\text{def}}{=}(\\hbox{grad}\\hspace{1.111pt}\\delta \\hat{\\mathbb {u}})_{ij}=\\frac{ \\partial \\delta \\hat{u}_i}{\\partial x_j}.\\end{split}$ Moreover, $\\hat{\\mathbb {u}} = \\mathbb {x}-\\mathbb {X}$ is the actual displacement field, and $\\delta \\hat{\\mathbb {u}}= \\delta \\mathbb {x}$ is the virtual displacement.", "In the micropolar theory, it is assumed that there exists a micro-structure inside each macro-element so that it experiences rigid micro-rotations independent of the macro-motion $\\mathbb {x}$ .", "Let $\\theta =\\theta _i \\mathbb {e}_i$ denote the micro-rotation pseudo-vector, and $\\theta =(\\theta _i \\theta _i)^{1/2}$ be its magnitude.", "The micro-rotation tensor $\\tilde{{R}}$ corresponding to $\\theta $ can be expressed via the Euler–Rodriguez formula, namely (e.g., [46], [52]) $\\tilde{{R}} (\\theta ) = \\exp \\hat{\\theta }={I}+ \\frac{ \\sin \\theta }{\\theta } \\hat{\\theta 
}+\\frac{1- \\cos \\theta }{\\theta ^2} \\hat{\\theta }^2,$ where $\\hat{\\theta }= - \\mathcal {E}\\theta $ , or $\\hat{\\theta }_{ij}= -\\epsilon _{ijk} \\theta _k$ , is the skew-symmetric tensor corresponding to $\\theta $ .", "Moreover, $\\epsilon _{ijk}$ are the components of the alternating symbol $\\mathcal {E}$ .", "By defining $\\delta \\theta $ as the virtual micro-rotation pseudo-vector, the variation of $\\tilde{{R}}$ may be expressed via the following relations [52], [32] $\\left.\\begin{split}\\delta \\tilde{{R}}= \\delta \\hat{{\\omega }} \\tilde{{R}}\\quad \\text{with} \\quad \\delta \\hat{{\\omega }}=- \\mathcal {E}\\delta {\\omega },\\quad \\delta {\\omega }=\\Lambda \\delta \\theta \\\\\\text{and} \\quad \\Lambda = \\frac{ \\sin \\theta }{\\theta } {I}+\\frac{1-\\cos \\theta }{\\theta ^2}\\hat{\\theta }+\\frac{\\theta - \\sin \\theta }{\\theta ^3} \\theta \\otimes \\theta \\end{split}\\right\\rbrace .$ The deformation gradient, in the micropolar theory, is decomposed as ${F}=\\tilde{{R}} \\tilde{{U}} = \\tilde{{V}} \\tilde{{R}}$ , which is apparently similar to the classical polar decomposition.", "The deformation tensors $\\tilde{{U}}$ and $\\tilde{{V}}$ are then defined by (e.g., [46]): $\\begin{split}\\tilde{{U}}=\\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} {F}, \\quad \\tilde{U}_{IJ}=\\tilde{R}_{nI} F_{nJ}, \\quad \\tilde{{V}}={F}\\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}, \\quad \\tilde{V}_{ij}=F_{iN} \\tilde{R}_{jN}.\\end{split}$ It is noted that in contrast to ${U}$ and ${V}$ in the classical theory, the micropolar deformation tensors $\\tilde{{U}}$ and $\\tilde{{V}}$ are not symmetric, in general.", "To take the gradient of the micro-rotation into account, the material wryness tensor $\\Gamma $ and the spacial one $\\gamma $ , are defined by [42], [43], [46], [53] $\\Gamma = -\\frac{1}{2} \\mathcal {E}\\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}( \\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\hbox{Grad}\\hspace{1.111pt}\\tilde{{R}} ),\\quad \\Gamma _{IJ}=-\\frac{1}{2} \\epsilon _{IKL} \\tilde{R}_{iK} \\tilde{R}_{iL,J},\\quad \\gamma = \\tilde{{R}} \\Gamma \\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\quad \\gamma _{ij}=\\tilde{R}_{iI} \\Gamma _{IJ} \\tilde{R}_{jJ}.$ The deformation measures $\\tilde{{U}}$ and $\\Gamma $ are the main kinematic tensors to develop a formulation in material framework (see also, Refs.", "[42], [46]).", "Combinations of Eqs.", "(REF ), (REF ), (REF )$_1$ , and (REF )$_1$ , leads to the following expressions for the virtual kinematic tensor $\\delta \\tilde{{U}}$ and $\\delta \\Gamma $ [32]: $\\delta \\tilde{{U}}= \\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} (\\delta {Y}- \\delta \\hat{{\\omega }}) {F}, \\quad \\delta \\Gamma = \\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\hbox{Grad}\\hspace{1.111pt}\\delta {\\omega }= \\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\hbox{grad}\\hspace{1.111pt}\\delta {\\omega }{F}.$ Next, let $\\text{d} \\mathcal {A}$ and $\\mathbb {n}$ be an infinitesimal area element and its outward unit normal vector in the current configuration, respectively.", "In the micropolar theory, besides the classical traction vector $\\mathbb {t}^{(\\mathbb {n})}$ , the couple vector $\\mathbb {s}^{(\\mathbb {n})}$ also acts on $\\text{d} \\mathcal {A}$ .", 
"Accordingly, there exist the asymmetric Cauchy stress $\\sigma $ and the couple stress tensor ${m}$ so that $\\begin{split}\\mathbb {t}^{(\\mathbb {n})} =\\sigma \\mathbb {n}, \\quad t^{(\\mathbb {n})}_i=\\sigma _{ij} n_j, \\quad \\mathbb {s}^{(\\mathbb {n})} ={m}\\mathbb {n}, \\quad s^{(\\mathbb {n})}_i=m_{ij} n_j.\\end{split}$ For later use, the first Piola–Kirchoff type stress and couple stress $\\lbrace {P},{M}\\rbrace $ , and the material stress and couple stress $\\lbrace \\tilde{{P}}, \\tilde{{M}} \\rbrace $ are defined by $\\begin{split}\\lbrace {P},{M}\\rbrace =J \\lbrace \\sigma ,{m}\\rbrace {F}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}, \\quad \\lbrace \\tilde{{P}}, \\tilde{{M}} \\rbrace =\\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\lbrace {P}, {M}\\rbrace = J \\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\lbrace \\sigma ,{m}\\rbrace {F}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}.\\end{split}$ Following the standard procedures, the spatial and material descriptions of the balance of linear and angular momentum will be as follows (e.g., Refs.", "[32], [42], [46], [43]): $\\left.\\begin{split}\\hbox{div}\\hspace{1.111pt}\\sigma + \\mathbb {f}={\\bf 0}, \\quad \\hbox{Div}\\hspace{1.111pt}{P}+ \\mathbb {f}^{\\ast } ={\\bf 0}\\\\\\hbox{div}\\hspace{1.111pt}{m}- \\mathcal {E}\\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\sigma + \\mathbb {p}={\\bf 0}, \\quad \\hbox{Div}\\hspace{1.111pt}{M}- \\mathcal {E}\\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}{P}+ \\mathbb {p}^{\\ast } ={\\bf 0}\\\\\\end{split}\\right\\rbrace ,$ where inertia effects have been neglected.", "Moreover, $(\\mathbb {f},\\mathbb {f}^{\\ast }=J \\mathbb {f})$ and $(\\mathbb {p}, \\mathbb {p}^{\\ast }=J \\mathbb {p})$ are the body force and body couple per unit current and reference volume, respectively." 
], [ "Basic relations of hard-magnetic soft materials", " The main property of HMSMs is the existence of a remnant magnetic flux density, that remains almost unchanged under a wide range of the applied external magnetic flux $\\mathbb {B}^{\\text{ext}}$ (e.g., [16], [17], [24]).", "Let $\\tilde{\\mathbb {B}}^{\\text{rem}}$ and ${\\mathbb {B}}^{\\text{rem}}$ be the remnant magnetic flux in the reference and current configurations, respectively.", "The relation between $\\tilde{\\mathbb {B}}^{\\text{rem}}$ and ${\\mathbb {B}}^{\\text{rem}}$ is as follows [24]: ${\\mathbb {B}}^{\\text{rem}} = J^{-1} {F}\\tilde{\\mathbb {B}}^{\\text{rem}},\\quad B^{\\text{rem}}_i=J^{-1} F_{iJ} \\tilde{B}^{\\text{rem}}_J.$ The action of $\\mathbb {B}^{\\text{ext}}$ on ${\\mathbb {B}}^{\\text{rem}}$ leads to a body couple that acts on the material points.", "By using the same notations $\\mathbb {p}$ and $\\mathbb {p}^{\\ast }$ as introduced in the previous section, the magnetic body couple per unit reference volume is given by (e.g., [24], [40]) $\\mathbb {p}^{\\ast }= J \\mathbb {p}= \\frac{J}{\\mu _0} {\\mathbb {B}}^{\\text{rem}} \\times \\mathbb {B}^{\\text{ext}}=\\frac{1}{\\mu _0} \\big ({F}\\tilde{\\mathbb {B}}^{\\text{rem}} \\big ) \\times \\mathbb {B}^{\\text{ext}},$ where the constant $\\mu _0= 4 \\pi \\times 10^{-7} \\frac{N}{A^2}$ is the free space magnetic permeability.", "For HMSMs, it is often assumed that the external magnetic flux density is uniform in space (e.g., Refs.", "[24], [33], [34], [35], [36]).", "Accordingly, Zhao et al.", "[24] showed that the Maxwell equations of the following form are satisfied in HMSMs (e.g., [40]): $\\text{Curl}\\mathbb {H}=\\epsilon _{IJK} H_{J,K} \\mathbb {E}_I= {\\bf 0}, \\quad \\text{Div}\\mathbb {B}=B_{I,I}=0,$ where $\\mathbb {H}$ is the referential magnetic field, $\\mathbb {B}$ is the referential magnetic flux density, and \"$\\text{Curl}$ \" is the referential curl operator." 
], [ "Kinematics of a 10-parameter micropolar shell model", " The geometry of a part of a shell in the reference configuration $\\mathcal {B}_0$ and the current configuration $\\mathcal {B}$ is displayed in Fig.", "REF .", "Mid-surface of the shell in the reference configurations is denoted by $\\mathcal {S}_0$ , which deforms into the surface $\\mathcal {S}$ in the current configuration.", "As shown in Fig.", "REF , in addition to the two common-frame Cartesian coordinates $\\lbrace X_1 X_2 X_3\\rbrace $ and $\\lbrace x_1 x_2 x_3\\rbrace $ , described in the previous section, the convective coordinate system $\\lbrace \\zeta ^1\\zeta ^2\\zeta ^3\\rbrace $ at each material particle $q$ of the reference mid-surface $\\mathcal {S}_0$ is also constructed.", "The coordinate lines $\\zeta ^i$ deform during the motion of the shell in space so that the coordinated lines $\\zeta ^1$ and $\\zeta ^2$ are tangent to both $\\mathcal {S}_0$ and $\\mathcal {S}$ .", "Moreover, the coordinated line $\\zeta ^3 \\in [-{\\textstyle {\\frac{1}{2}}}h, {\\textstyle {\\frac{1}{2}}}h]$ , with $h$ as the initial thickness of the shell, is considered to be perpendicular to $\\mathcal {S}_0$ in the reference configuration.", "However, it does not remain perpendicular to $\\mathcal {S}$ in the current configuration, in general.", "In the sequel, for the sake of simplicity, the coordinate $\\zeta ^3$ may be replaced by $z$ .", "Figure: Schematic view of the deformation of a shellThe position of the material particle $q$ on the mid-surface $\\mathcal {S}_0$ may be described by the vector $\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {X}\\hspace{-0.83328pt}}\\hspace{0.83328pt}(\\zeta ^1,\\zeta ^2)$ .", "Let $\\lbrace \\mathbb {A}_{\\alpha },\\mathbb {A}^{\\alpha }, A_{\\alpha \\beta },A^{\\alpha \\beta }, \\mathbb {D},{B}\\rbrace $ be, respectively, the covariant and contravariant basis vectors, covariant and contravariant components of the metric tensor, outward unit normal vector, and the curvature tensor on the undeformed mid-surface $\\mathcal {S}_0$ .", "Then the following relations hold (e.g., [72]): $\\left.\\begin{split}\\mathbb {A}_\\alpha =\\frac{ \\partial \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {X}\\hspace{-0.83328pt}}\\hspace{0.83328pt}}{\\partial \\zeta ^{\\alpha }}, \\quad A_{\\alpha \\beta }=\\mathbb {A}_{\\alpha } \\cdot \\mathbb {A}_{\\beta }, \\quad A^{\\alpha \\eta } A_{\\eta \\beta }= \\delta ^{\\alpha }_{\\beta }, \\quad \\mathbb {A}^{\\alpha }=A^{\\alpha \\beta } \\mathbb {A}_{\\beta }, \\quad \\mathbb {A}_{\\alpha } \\cdot \\mathbb {A}^{\\beta } = \\delta _{\\alpha }^{\\beta }\\\\A=\\det [A_{\\alpha \\beta }], \\quad \\mathbb {D}=\\mathbb {A}_3=\\mathbb {A}^3=\\frac{ \\mathbb {A}_1 \\times \\mathbb {A}_2}{| \\mathbb {A}_1 \\times \\mathbb {A}_2 |}= \\frac{ \\mathbb {A}_1 \\times \\mathbb {A}_2}{\\sqrt{A}}, \\quad {B}= - \\mathbb {D}_{,\\alpha } \\otimes \\mathbb {A}^{\\alpha }\\end{split}\\right\\rbrace ,$ where $\\delta _{\\alpha }^{\\beta }$ is the two-dimensional Kronecker delta.", "For later use, the surface contravariant basis vectors may be written as $\\mathbb {A}^{\\alpha }=A^{\\ast \\alpha J} \\mathbb {E}_J$ , where $A^{\\ast \\alpha J}$ are the Cartesian components of $\\mathbb {A}^{\\alpha }$ .", "The position of the material particle $p$ located at the elevation $z$ with respect to $\\mathcal {S}_0$ is described by $\\mathbb {X}(\\zeta ^1,\\zeta ^2,z)=\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb 
{X}\\hspace{-0.83328pt}}\\hspace{0.83328pt}(\\zeta ^1,\\zeta ^2)+z \\mathbb {D}(\\zeta ^1,\\zeta ^2),$ from which the covariant basis vectors $\\mathbb {G}_i$ are obtained to be $\\mathbb {G}_{\\alpha }=\\frac{\\partial \\mathbb {X}}{\\partial \\zeta ^{\\alpha }}=\\mathbb {A}_{\\alpha }+z \\mathbb {D}_{,\\alpha }, \\quad \\mathbb {G}_{3}=\\frac{\\partial \\mathbb {X}}{\\partial z}=\\mathbb {D}.$ Motivated by Eqs.", "(REF )$_8$ and (REF ), the symmetric shifter tensor ${Q}={Q}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}=\\mathbb {G}_i \\otimes \\mathbb {A}^i= {I}- z {B}$ , with ${{I}}$ as the identity tensor, is defined.", "Accordingly, it is possible to map the covariant and contravariant basis vectors from $z \\ne 0$ to the midsurface with $z=0$ , and vice versa.", "More precisely, the following relations hold: $\\mathbb {G}_i={Q}\\mathbb {A}_i, \\quad \\mathbb {A}_i = {Q}^{-1} \\mathbb {G}_i, \\quad \\mathbb {G}^i={Q}^{-1} \\mathbb {A}^i, \\quad \\mathbb {A}^i = {Q}\\mathbb {G}^i, \\quad $ where $\\mathbb {G}^i$ are the contravariant basis vectors at $\\mathbb {X}$ , and use has been made of the symmetry property of ${Q}$ .", "For later use, the three-dimensional material gradient operator $\\text{Grad}_{\\zeta }$ , with respect to the convective coordinate system $\\lbrace \\zeta ^1\\zeta ^2\\zeta ^3\\rbrace $ in the reference configuration, and the material surface gradient operator $\\text{Grad}_{\\mathcal {S}_0}$ with respect to $\\lbrace \\zeta ^1\\zeta ^2\\rbrace $ are defined as follows: $\\hbox{Grad}\\hspace{1.111pt}_{\\zeta } \\lbrace \\bullet \\rbrace = \\frac{\\partial \\lbrace \\bullet \\rbrace }{\\partial \\zeta ^i} \\otimes \\mathbb {G}^i, \\quad \\hbox{Grad}\\hspace{1.111pt}_{\\mathcal {S}_0} \\lbrace \\bullet \\rbrace = \\frac{\\partial \\lbrace \\bullet \\rbrace }{\\partial \\zeta ^{\\alpha }} \\otimes \\mathbb {A}^{\\alpha }.$ By assuming that a straight material fiber perpendicular to $\\mathcal {S}_0$ remains straight during deformation, the following macro deformation field is considered (e.g., [66], [67]): $\\mathbb {x}=\\psi (\\zeta ^1,\\zeta ^2,z,t)=\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {x}\\hspace{-0.83328pt}}\\hspace{0.83328pt}(\\zeta ^1,\\zeta ^2,t)+z[1+z \\phi (\\zeta ^1,\\zeta ^2,t)] \\mathbb {d}(\\zeta ^1,\\zeta ^2,t),$ where $\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {x}\\hspace{-0.83328pt}}\\hspace{0.83328pt}$ is the image of $\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {X}\\hspace{-0.83328pt}}\\hspace{0.83328pt}$ on $\\mathcal {S}$ , and $\\mathbb {d}$ is a director vector along the deformed $z$ -axis.", "Moreover, the scalar field $\\phi $ describes through the thickness stretching of the shell.", "Similar to the quantities defined on $\\mathcal {S}_0$ in Eq.", "(REF ), let $\\lbrace \\mathbb {a}_{\\alpha },\\mathbb {a}^{\\alpha }, a_{\\alpha \\beta },a^{\\alpha \\beta }, \\mathbb {n},{b}\\rbrace $ be the surface quantities defined on $\\mathcal {S}$ .", "It then follows that $\\left.\\begin{split}\\mathbb {a}_\\alpha =\\frac{ \\partial \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {x}\\hspace{-0.83328pt}}\\hspace{0.83328pt}}{\\partial \\zeta ^{\\alpha }}, \\quad a_{\\alpha \\beta }=\\mathbb {a}_{\\alpha } \\cdot \\mathbb {a}_{\\beta }, \\quad a^{\\alpha \\eta } a_{\\eta \\beta }= \\delta ^{\\alpha }_{\\beta }, \\quad \\mathbb {a}^{\\alpha }=a^{\\alpha \\beta } \\mathbb {a}_{\\beta }, \\quad \\mathbb {a}_{\\alpha } \\cdot \\mathbb {a}^{\\beta } = \\delta _{\\alpha 
}^{\\beta }\\\\a=\\det [a_{\\alpha \\beta }], \\quad \\mathbb {n}=\\frac{ \\mathbb {a}_1 \\times \\mathbb {a}_2}{| \\mathbb {a}_1 \\times \\mathbb {a}_2 |} = \\frac{ \\mathbb {a}_1 \\times \\mathbb {a}_2}{\\sqrt{a}}, \\quad {b}= - \\mathbb {n}_{,\\alpha } \\otimes \\mathbb {a}^{\\alpha }\\end{split}\\right\\rbrace .$ Moreover, based on Eq.", "(REF ), the covariant basis vectors $\\mathbb {g}_i$ at $\\mathbb {x}$ are as follows: $\\mathbb {g}_{\\alpha }=\\frac{\\partial \\mathbb {x}}{\\partial \\zeta ^{\\alpha }}=\\mathbb {a}_{\\alpha }+z^2 \\phi _{,\\alpha } \\mathbb {d}+ z(1+z \\phi ) \\mathbb {d}_{,\\alpha }, \\quad \\mathbb {g}_{3}=\\frac{\\partial \\mathbb {x}}{\\partial z}=(1+2z \\phi ) \\mathbb {d}.$ It is observed from Eq.", "(REF )$_2$ that the director vector $\\mathbb {d}$ and the basis vector $\\mathbb {g}_{3}$ are in the same direction.", "However, the normal vector $\\mathbb {n}$ and $\\mathbb {g}_{3}$ are not in the same direction, in general.", "Next, the vector quantities $\\mathbb {u}$ and $\\mathbb {w}$ as, respectively, the displacement field of the mid-surface and the director displacement are defined by $\\mathbb {u}=\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {x}\\hspace{-0.83328pt}}\\hspace{0.83328pt}-\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\mathbb {X}\\hspace{-0.83328pt}}\\hspace{0.83328pt}=u_i \\mathbb {e}_i, \\quad \\mathbb {w}=\\mathbb {d}-\\mathbb {D}=w_i \\mathbb {e}_i,$ where $u_i$ and $w_i$ are the Cartesian components of $\\mathbb {u}$ and $\\mathbb {w}$ .", "The deformation gradient tensor ${F}$ , described in the convective coordinate system $\\lbrace \\zeta ^1\\zeta ^2\\zeta ^3\\rbrace $ , takes the following form: ${F}=\\hbox{Grad}\\hspace{1.111pt}_{\\zeta }{\\mathbb {x}}=\\frac{\\partial \\mathbb {x}}{\\partial \\zeta ^i} \\otimes \\mathbb {G}^i= \\mathbb {g}_i \\otimes \\mathbb {G}^i= (\\mathbb {g}_i \\otimes \\mathbb {A}^i) {Q}^{-1}.$ In the present shell model, using Eqs.", "(REF ), (REF )$_3$ , (REF ), and (REF ), and neglecting the higher-order terms involving $z^2$ , the deformation gradient is approximated as follows: ${F}\\approx \\tilde{{F}} {Q}^{-1}\\quad \\text{with} \\quad \\tilde{{F}} = {F}^{(0)} + z {F}^{(1)}.$ Here, the second-order tensors ${F}^{(0)}$ and ${F}^{(1)}$ are given by $\\left.\\begin{split}{F}^{(0)}=\\mathbb {a}_i \\otimes \\mathbb {A}^i=\\mathbb {a}_{\\alpha } \\otimes \\mathbb {A}^{\\alpha } + \\mathbb {d}\\otimes \\mathbb {D}={I}+ \\hbox{Grad}\\hspace{1.111pt}_{\\mathcal {S}_0} \\mathbb {u}+\\mathbb {w}\\otimes \\mathbb {D}\\\\{F}^{(1)}=\\mathbb {d}_{,\\alpha } \\otimes \\mathbb {A}^{\\alpha } + 2 \\phi \\ \\mathbb {d}\\otimes \\mathbb {D}= \\hbox{Grad}\\hspace{1.111pt}_{\\mathcal {S}_0} \\mathbb {w}+2 \\phi (\\mathbb {D}+\\mathbb {w})\\otimes \\mathbb {D}-{B}\\end{split}\\right\\rbrace ,$ where use has been made of Eqs.", "(REF )$_8$ and (REF ).", "To circumvent numerical difficulties in finite element solution, the in-plane deformation gradient term ${F}^{(0)}$ is enhanced by the second-order tensor $\\bar{{F}}$ , to be introduced in Section .", "Accordingly, the term ${F}^{(0)}$ in Eq.", "(REF )$_2$ is replaced by ${F}^{(0)}+\\bar{{F}}$ .", "Moreover, following Ramezani and Naghadabadi [73] in the context of the micropolar Timoshenko beam model, it is assumed that the micro-rotation pseudo-vector is constant along the shell thickness, namely $\\theta = \\tilde{\\theta }(\\zeta ^1,\\zeta ^2)$ .", "Accordingly, the micro-rotation tensor $\\tilde{{R}}(\\theta )$ is independent of the $z$ 
coordinate.", "Keeping this in mind and using Eqs.", "(REF )$_1$ , (REF )$_1$ , (REF )$_3$ , (REF )$_2$ , and (REF ), the micropolar deformation measures $\\tilde{{U}}$ and $\\Gamma $ , in the present shell formulation, may be written as $\\begin{split}\\tilde{{U}}=\\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} {F}^{\\ast } {Q}^{-1}=\\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} ({F}^{(0)} + \\bar{{F}} + z {F}^{(1)}) {Q}^{-1}, \\quad \\Gamma = -\\frac{1}{2} \\mathcal {E}\\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\big [ \\tilde{{R}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} (\\hbox{Grad}\\hspace{1.111pt}_{\\mathcal {S}_0}\\tilde{{R}}) {Q}^{-1} \\big ].\\end{split}$ where ${F}^{\\ast } = \\tilde{{F}} + \\bar{{F}}$ is the enhanced form of $\\tilde{{F}}$ .", "The present formulation with $\\lbrace \\mathbb {u}, \\mathbb {w}, \\phi , \\theta \\rbrace $ as the unknown field variables may be regarded as a 10-parameter micropolar shell model.", "In other words, the present formulation is the extension of the classical 7-parameter shell model, with $\\lbrace \\mathbb {u}, \\mathbb {w}, \\phi \\rbrace $ as its unknowns, introduced by Sansour (e.g., [66], [67])." ], [ "Variational formulation", " In this section, the virtual work statement of the problem is presented.", "The principle of virtual work is based on the requirement that $\\delta \\mathcal {U}-\\delta \\mathcal {W}=0$ , where $\\delta \\mathcal {U}$ and $\\delta \\mathcal {W}$ are the virtual internal energy and the virtual work of external loads, respectively [74].", "Let $\\delta \\Psi $ and $\\delta \\hat{\\mathcal {W}}$ denote, respectively, $\\delta \\mathcal {U}$ and $\\delta \\mathcal {W}$ per unit reference volume.", "In what follows, the expressions for $\\delta \\Psi $ and $\\delta \\hat{\\mathcal {W}}$ in the present formulation are derived.", "Moreover, for the linearization purpose to be used in the next section, the increments of $\\delta \\Psi $ and $\\delta \\hat{\\mathcal {W}}$ are also calculated.", "In the sequel, it is assumed that the material is hyperelastic, and thermal effects are also neglected.", "Accordingly, the internal energy density per unit reference volume is of the form $\\Psi =\\bar{\\Psi }(\\tilde{{U}}, \\Gamma )=\\tilde{\\Psi }(\\tilde{{V}}, \\gamma )$ [42], [46].", "In particular, from the dependency of $\\Psi $ to the material tensors $\\tilde{{U}}$ and $\\Gamma $ it follows that $\\begin{split}\\delta \\Psi &= \\frac{\\partial \\Psi }{\\partial \\tilde{{U}}} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\delta \\tilde{{U}}+ \\frac{\\partial \\Psi }{\\partial \\Gamma } \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\delta \\Gamma =(\\tilde{{R}} \\frac{\\partial \\Psi }{\\partial \\tilde{{U}}} {F}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}) \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}(\\delta {Y}- \\delta \\hat{{\\omega }})+(\\tilde{{R}} \\frac{\\partial \\Psi }{\\partial \\Gamma }{F}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}) \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\hbox{grad}\\hspace{1.111pt}\\delta {\\omega },\\end{split}$ where use has been made of Eq.", "(REF ).", "Moreover, the constitutive relations for the various stress and couple stress measures are as follows [32]: $\\begin{split}\\lbrace \\tilde{{P}} , \\tilde{{M}} \\rbrace =\\bigg \\lbrace \\frac{\\partial \\Psi }{\\partial \\tilde{{U}}} 
,\\frac{\\partial \\Psi }{\\partial \\Gamma } \\bigg \\rbrace , \\quad \\lbrace {P}, {M}\\rbrace = \\tilde{{R}} \\bigg \\lbrace \\frac{\\partial \\Psi }{\\partial \\tilde{{U}}} ,\\frac{\\partial \\Psi }{\\partial \\Gamma } \\bigg \\rbrace , \\quad \\lbrace \\sigma , {m}\\rbrace =\\frac{1}{J} \\tilde{{R}} \\bigg \\lbrace \\frac{\\partial \\Psi }{\\partial \\tilde{{U}}} ,\\frac{\\partial \\Psi }{\\partial \\Gamma } \\bigg \\rbrace {F}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}.\\end{split}$ It is noted that Eqs.", "(REF ) and (REF ) hold for all three-dimensional micropolar hyperelastic solids.", "For the present shell model, first the quantities denoted by $\\delta \\Upsilon ^{(1)}$ and $\\delta \\Upsilon ^{(2)}$ are defined by $\\left.\\begin{split}\\delta \\Upsilon ^{(1)} =\\tilde{{R}} \\delta \\tilde{{U}} {Q}=\\delta {F}^{(0)}+\\delta \\bar{{F}}+z\\delta {F}^{(1)}- \\delta \\hat{{\\omega }} {F}^{\\ast }\\\\\\delta \\Upsilon ^{(2)} =\\tilde{{R}} \\delta \\Gamma {Q}=(\\hbox{Grad}\\hspace{1.111pt}\\delta {\\omega }) {Q}=\\hbox{Grad}\\hspace{1.111pt}_{\\mathcal {S}_0} \\delta {\\omega }\\end{split}\\right\\rbrace .$ Next, after replacing $\\tilde{{F}}$ by the enhanced form ${F}^{\\ast }$ , combination of Eqs.", "(REF ), (REF )$_1$ , (REF )$_{1,2}$ , and (REF ) leads to the following expression for $\\delta \\Psi $ : $\\begin{split}\\delta \\Psi &= {P}^{(0)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\delta \\Upsilon ^{(1)}+{M}^{(0)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\delta \\Upsilon ^{(2)}\\quad \\text{with} \\quad \\lbrace {P}^{(0)}, {M}^{(0)} \\rbrace =\\lbrace {P}{Q}^{-1} , {M}{Q}^{-1} \\rbrace .\\end{split}$ For linearization purpose, the increment of $\\delta \\Psi $ under the increment of the field variables $\\Delta \\mathbb {u}$ , $\\Delta \\mathbb {w}$ , $\\Delta \\phi $ , and $\\Delta \\theta $ is needed.", "Accordingly, from Eqs.", "(REF )$_1$ and (REF ) it follows that $\\begin{split}\\Delta \\delta \\Psi =& \\delta \\Upsilon ^{(1)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\mathcal {C}^{(1)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\Upsilon ^{(1)}+\\delta \\Upsilon ^{(2)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\mathcal {C}^{(2)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\Upsilon ^{(2)}+\\delta \\Upsilon ^{(1)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\mathcal {C}^{(3)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\Upsilon ^{(2)}\\\\&+\\delta \\Upsilon ^{(2)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\mathcal {C}^{(4)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\Upsilon ^{(1)}+{P}^{(0)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\delta {H}^{(1)}+{M}^{(0)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\delta {H}^{(2)},\\end{split}$ where $\\Delta \\delta {H}^{(1)}$ and $\\Delta \\delta {H}^{(2)}$ are as follows: $\\left.\\begin{split}\\Delta \\delta {H}^{(1)} = \\frac{1}{2}(\\Delta \\hat{{\\omega }} \\delta \\hat{{\\omega }}+\\delta \\hat{{\\omega }} \\Delta \\hat{{\\omega }}) {F}^{\\ast }-(\\delta \\hat{{\\omega }} \\Delta {F}^{\\ast }+\\Delta \\hat{{\\omega }} \\delta {F}^{\\ast }) \\\\\\Delta \\delta {H}^{(2)} = -\\frac{1}{2}(\\Delta \\hat{{\\omega }} \\hbox{Grad}\\hspace{1.111pt}_{\\mathcal {S}_0} \\delta {\\omega }+\\delta \\hat{{\\omega }} \\hbox{Grad}\\hspace{1.111pt}_{\\mathcal {S}_0} \\Delta {\\omega 
})\\end{split}\\right\\rbrace .$ Moreover, the fourth-order tensors $\\mathcal {C}^{(\\mathcal {I})}$ ($\\mathcal {I}=1,2,3,4$ ) have the following components: $\\begin{split}\\mathcal {C}^{(\\mathcal {I})}_{iJkL} = \\tilde{R}_{iP} \\tilde{R}_{kQ} Q^{-1}_{JR} Q^{-1}_{LS}\\tilde{\\mathcal {C}}^{(\\mathcal {I})}_{PRQS}\\quad \\text{with} \\quad Q^{-1}_{JR}= ({Q}^{-1})_{JR},\\end{split}$ and $\\tilde{\\mathcal {C}}^{(\\mathcal {I})}_{PJQL}$ are the components of the following fourth-order tensors: $\\begin{split}\\big \\lbrace \\tilde{\\mathcal {C}}^{(1)}, \\tilde{\\mathcal {C}}^{(2)}, \\tilde{\\mathcal {C}}^{(3)}, \\tilde{\\mathcal {C}}^{(4)} \\big \\rbrace =\\bigg \\lbrace \\frac{\\partial ^2 \\Psi }{\\partial \\tilde{{U}} \\partial \\tilde{{U}} },\\frac{\\partial ^2 \\Psi }{\\partial \\Gamma \\partial \\Gamma },\\frac{\\partial ^2 \\Psi }{\\partial \\tilde{{U}} \\partial \\Gamma },\\frac{\\partial ^2 \\Psi }{\\partial \\Gamma \\partial \\tilde{{U}} } \\bigg \\rbrace .\\end{split}$ In this work, a micropolar neo-Hookean constitutive model of the form proposed in Ref.", "[32] is employed, according to which the free energy per unit reference volume is given by $\\Psi =(\\eta +{\\textstyle {\\frac{1}{2}}}\\mu ) \\hbox{tr}\\hspace{1.111pt}(\\tilde{{U}} \\tilde{{U}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}})-\\eta \\hbox{tr}\\hspace{1.111pt}(\\tilde{{U}} ^2)+\\frac{1}{2}\\lambda (\\ln J)^2-\\mu \\ln J+ \\frac{1}{2} \\mu l^2 \\hbox{tr}\\hspace{1.111pt}( \\Gamma \\Gamma ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} ),$ where $\\eta $ is a material constant, and $l$ is a length-scale parameter.", "From Eqs.", "(REF )$_1$ and (REF ), the material stress $\\tilde{{P}}$ and the couple stress $\\tilde{{M}}$ are then calculated to be $\\tilde{{P}} = (\\mu +\\eta )\\tilde{{U}} -\\eta \\tilde{{U}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}+(\\lambda \\ln J -\\mu ) \\tilde{{U}}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}, \\quad \\tilde{{M}}= \\mu l^2 \\Gamma .$ Moreover, Eqs.", "(REF ) and (REF ) lead to the following fourth-order tensors $\\tilde{\\mathcal {C}}^{(\\mathcal {I})}$ ($\\mathcal {I}=1,2,3,4$ ): $\\left.\\begin{split}\\tilde{\\mathcal {C}}^{(1)}=(\\mu +\\eta ) {I}\\odot {I}-\\eta {I}\\boxtimes {I}+\\lambda \\tilde{{U}}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\otimes \\tilde{{U}}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}+(\\mu -\\lambda \\ln J) \\tilde{{U}}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\boxtimes \\tilde{{U}}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}\\\\\\tilde{\\mathcal {C}}^{(2)}=\\mu l^2 {I}\\odot {I}, \\quad \\tilde{\\mathcal {C}}^{(3)} = \\tilde{\\mathcal {C}}^{(4)} = {\\bf 0}\\end{split}\\right\\rbrace .$ Next, it is recalled that $\\mathbb {B}^{\\text{ext}}$ exerts the body couple per unit reference volume $\\mathbb {p}^{\\ast }$ on an HMSM, as given by Eq.", "(REF )$_2$ .", "Noting that $\\mathbb {p}^{\\ast }$ is work-conjugate to the micro-rotation $\\theta $ , the virtual work per unit reference volume $\\delta \\hat{\\mathcal {W}}$ expended by $\\mathbb {B}^{\\text{ext}}$ on an HMSM is given by $\\delta \\hat{\\mathcal {W}} = \\mathbb {p}^{\\ast } \\cdot \\delta \\theta =\\frac{1}{\\mu _0} [({F}\\tilde{\\mathbb {B}}^{\\text{rem}}) \\times \\mathbb {B}^{\\text{ext}}] \\cdot \\delta \\theta .$ For the linearization purpose, the increment of $\\delta \\hat{\\mathcal {W}}$ 
takes the following form: $\\Delta \\delta \\hat{\\mathcal {W}} = \\frac{1}{\\mu _0} [(\\Delta {F}^{\\ast } {Q}^{-1} \\tilde{\\mathbb {B}}^{\\text{rem}}) \\times \\mathbb {B}^{\\text{ext}}] \\cdot \\delta \\theta ,$ which leads to the expression for the load stiffness matrix in the next section." ], [ "Finite element formulation", " In this section, a nonlinear finite element formulation in the material framework is developed.", "Let $\\mathcal {S}_0^{\\mathfrak {e}}$ be a typical element in the referential midsurface $\\mathcal {S}_0^{\\mathfrak {e}}$ .", "To perform numerical integration, the typical element is mapped to the two-dimensional parent square element $\\square =[-1,1] \\times [-1,1]$ in the $\\lbrace \\xi , \\eta \\rbrace $ space, with $\\xi , \\eta \\in [-1,1]$ .", "The field variables $\\lbrace u_i, w_i, \\theta _i, \\phi \\rbrace $ , over the parent element $\\mathcal {S}_0^{\\mathfrak {e}}$ , are interpolated as follows: $\\begin{split}u_i= \\mathbb {N}_{u} \\mathbb {U}_i, \\quad w_i=\\mathbb {N}_{w} \\mathbb {W}_i, \\quad \\theta _i=\\mathbb {N}_{w} \\Theta _{i}, \\quad \\phi =\\mathbb {N}_{\\phi } \\Phi ,\\end{split}$ where $\\mathbb {N}_{u}=\\lbrace N_u^1,N_u^2, ..., N_u^{n_u} \\rbrace $ is a row vector containing the shape functions that interpolate the midsurface displacement $u_i$ over the element.", "Here, $n_u$ is the number of nodes of the element that possess the $u_i$ -DOF.", "Let $U_i^{\\mathcal {I}}$ be the displacement component $u_i$ at the $\\mathcal {I}$ 'th node ($\\mathcal {I}=1,2,...,n_u$ ) of the element.", "Accordingly, $\\mathbb {U}_i=\\lbrace U_i^1, U_i^2,..., U_i^{n_u}\\rbrace ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}$ is a column vector that involves all $U_i^{\\mathcal {I}}$ 's over the element.", "Similar definitions hold for the other quantities in Eq.", "(REF ).", "Moreover, similar relations hold for the increment $\\lbrace \\Delta u_i,\\Delta w_i,\\Delta \\theta _i,\\Delta \\phi \\rbrace $ and variation $\\lbrace \\delta u_i,\\delta w_i,\\delta \\theta _i,\\delta \\phi \\rbrace $ of the field variables.", "The generalized displacement vector ${\\mathbb {v}}^{\\mathfrak {e}}$ involving all nodal DOFs of the typical element may be written as ${\\mathbb {v}}^{\\mathfrak {e}}_{n^{\\mathfrak {e}} \\times 1}=\\lbrace \\mathbb {U}_1^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\mathbb {U}_2^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}, \\mathbb {U}_3^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\mathbb {W}_1^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\mathbb {W}_2^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\mathbb {W}_3^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\Theta _1^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\Theta _2^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\Theta _3^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},\\Phi ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\rbrace ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}},$ where $n^{\\mathfrak {e}}=3(n^u+n^w+n^{\\theta })+n^{\\phi }$ is the number of nodal DOFs.", "Based on Esq.", "(REF ), (REF ), (REF ), and (REF ), the following relations hold: $\\left.\\begin{split}\\delta F^{(0)}_{iJ}= A^{\\ast \\alpha J} \\mathbb {N}_{u,\\alpha } \\delta \\mathbb {U}_{i}+ \\mathbb {N}_w D_J \\delta \\mathbb {W}_{i}=\\mathbb 
{b}^{(0)}_{iJ} \\delta {\\mathbb {v}}^{\\mathfrak {e}}\\\\\\delta F^{(1)}_{iJ}=(A^{\\ast \\alpha J} \\mathbb {N}_{w,\\alpha }+2 \\phi D_J \\mathbb {N}_w ) \\delta \\mathbb {W}_{i}+2 d_i D_j \\mathbb {N}_{\\phi } \\delta \\Phi = \\mathbb {b}^{(1)}_{iJ} \\delta {\\mathbb {v}}^{\\mathfrak {e}}\\\\(\\delta \\hat{{\\omega }} {F}^{\\ast })_{iJ}=\\epsilon _{ijk} \\Lambda _{kp} F^{\\ast }_{jJ}\\mathbb {N}_{\\theta } \\delta {\\Theta }_p=\\mathbb {b}^{(\\omega F)}_{iJ} \\delta {\\mathbb {v}}^{\\mathfrak {e}}\\\\\\delta Y^{(2)}_{iJ}=A^{\\ast \\alpha J} (\\Lambda _{ip} \\mathbb {N}_{\\theta } )_{,\\alpha }\\delta \\Theta _{p}= \\mathbb {b}^{(2)}_{iJ} \\delta {\\mathbb {v}}^{\\mathfrak {e}}\\\\\\end{split}\\right\\rbrace .$ Here, the last equality in each relation indicates that all components can be expressed in terms of the generalized virtual displacement vector $\\delta {\\mathbb {v}}^{\\mathfrak {e}}$ .", "Next, the enhanced deformation gradient tensor $\\bar{{F}}$ is considered.", "Let $\\alpha = \\lbrace \\alpha _1, \\alpha _2, ..., \\alpha _{\\mathcal {P}^{\\ast } } \\rbrace ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}$ be the column vector of enhanced parameters $\\alpha _{\\mathcal {P}}$ 's ($\\mathcal {P}=1,2,...,\\mathcal {P}^{\\ast }$ ), where $\\mathcal {P}^{\\ast }$ is the total number of enhanced parameters.", "The components of $\\bar{{F}}$ and its variation/increment depend linearly on $\\alpha _{\\mathcal {P}}$ 's (see, e.g., Refs.", "[68], [69], [70], [71]).", "Here, following the notation used in Eq.", "(REF ), one may write $\\begin{split}\\lbrace \\bar{F}_{iJ}, \\delta \\bar{F}_{iJ}, \\Delta \\bar{F}_{iJ} \\rbrace = \\bar{\\mathbb {b}}_{iJ} \\lbrace \\alpha , \\delta \\alpha , \\Delta \\alpha \\rbrace \\quad \\text{or} \\quad \\lbrace \\bar{\\mathbb {F}}, \\delta \\bar{\\mathbb {F}}, \\Delta \\bar{\\mathbb {F}} \\rbrace =\\bar{\\mathbb {B}} \\lbrace \\alpha , \\delta \\alpha , \\Delta \\alpha \\rbrace ,\\end{split}$ where $\\bar{\\mathbb {b}}_{iJ}$ are the $\\mathcal {P}^{\\ast } \\times 1$ row vectors, $\\bar{\\mathbb {F}}$ is the $9 \\times 1$ vectorial representation of $\\bar{{F}}$ , and $\\bar{\\mathbb {B}}$ is a $9 \\times \\mathcal {P}^{\\ast }$ matrix the rows of which are $\\bar{\\mathbb {b}}_{iJ}$ .", "Now, let $\\mathbb {B}^{(\\mathcal {N})}$ ($\\mathcal {N}=0,1,2$ ) and $\\mathbb {B}^{(\\omega F)}$ be the $9 \\times n^{\\mathfrak {e}}$ matrices whose rows are $\\mathbb {b}^{(\\mathcal {N})}_{iJ}$ and $\\mathbb {b}^{(\\omega F)}_{iJ}$ , respectively.", "From Eqs.", "(REF ), (REF ), and (REF ) it then follows that $\\delta \\mathbb {Y}^{(1)}=\\tilde{\\mathbb {B}} \\delta {\\mathbb {v}}+ \\bar{\\mathbb {B}} \\delta \\alpha \\quad \\text{and} \\quad \\delta \\mathbb {Y}^{(2)}=\\mathbb {B}^{(2)} \\delta {\\mathbb {v}}\\quad \\text{with} \\quad \\tilde{\\mathbb {B}}= \\mathbb {B}^{(0)}+\\mathbb {B}^{(\\omega F)}+z \\mathbb {B}^{(1)}.$ The differential volume element $\\text{d}\\mathcal {V}^{\\mathfrak {e}}_0$ located at the elevation $z$ with respect to the typical element $\\mathcal {S}^{\\mathfrak {e}}_0$ is given by (e.g., [66]) $\\text{d}\\mathcal {V}^{\\mathfrak {e}}_0=Q \\text{d}\\mathcal {S}^{\\mathfrak {e}}_0 \\text{d}z\\quad \\text{with} \\quad Q=\\det {Q}\\quad \\text{and} \\quad \\text{d}\\mathcal {S}^{\\mathfrak {e}}_0 =\\sqrt{A} \\text{d} \\zeta ^1 \\text{d} \\zeta ^2.$ By integrating Eqs.", "(REF ) and (REF ) over the reference volume, the virtual internal energy of the element, $\\delta \\mathcal {U}^{\\mathfrak {e}} = \\int 
_{\\mathcal {V}_0^{\\mathfrak {e}}} \\Psi \\text{d} \\mathcal {V}_0^{\\mathfrak {e}}$ , and the virtual work of external magnetic loading on the element, $\\delta \\mathcal {W}^{\\mathfrak {e}} = \\int _{\\mathcal {V}_0^{\\mathfrak {e}}} \\hat{\\mathcal {W}} \\text{d} \\mathcal {V}_0^{\\mathfrak {e}}$ , are obtained.", "Using Eqs.", "(REF ) and (REF ), the expressions for $\\delta \\mathcal {U}^{\\mathfrak {e}}$ and $\\delta \\mathcal {W}^{\\mathfrak {e}}$ may be written as $\\delta \\mathcal {U}^{\\mathfrak {e}} = \\delta {\\mathbb {v}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {F}_{\\text{int}}^{v }+\\delta \\alpha ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {F}_{\\text{int}}^{\\alpha }, \\quad \\delta \\mathcal {W}^{\\mathfrak {e}}=\\delta \\Theta _i^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {F}^{\\theta }_{ \\text{ext} i}=\\delta {\\mathbb {v}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {F}_{\\text{ext}}^{v } ,$ where the internal force vectors $\\mathbb {F}^{\\text{int} v }$ and $\\mathbb {F}^{\\text{int} \\alpha }$ are as follows: $\\begin{split}\\mathbb {F}_{\\text{int}}^{v } = \\int _{\\mathcal {V}_0^{\\mathfrak {e}}}\\big ( \\tilde{\\mathbb {B}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {P}^{(0)}+{\\mathbb {B}}^{(2)\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {M}^{(0)} \\big ) \\text{d} \\mathcal {V}_0^{\\mathfrak {e}}, \\quad \\mathbb {F}_{\\text{int}}^{\\alpha } = \\int _{\\mathcal {V}_0^{\\mathfrak {e}}}\\bar{\\mathbb {B}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {P}^{(0)} \\text{d} \\mathcal {V}_0^{\\mathfrak {e}}.\\end{split}$ Moreover, the external force vector $\\mathbb {F}^{\\theta }_{ \\text{ext} i}$ , work conjugate to $\\Theta _i$ , is given by $\\mathbb {F}^{\\theta }_{ \\text{ext} i} =\\frac{1}{\\mu _0} \\int _{V_0^{\\mathfrak {e}}}\\epsilon _{imj} F_{mJ} \\tilde{B}^{\\text{rem}}_J B^{\\text{ext}}_j \\mathbb {N}_{\\theta }^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}\\text{d} \\mathcal {V}_0^{\\mathfrak {e}}.$ Next, the linearized equations resulting from Eqs.", "(REF ), (REF ), and (REF ) may be written as $\\Delta \\delta \\mathcal {U}^{\\mathfrak {e}} - \\Delta \\delta \\mathcal {W}^{\\mathfrak {e}}= -(\\delta \\mathcal {U}^{\\mathfrak {e}} - \\delta \\mathcal {W}^{\\mathfrak {e}}),$ from which the following system of algebraic equations is extracted: $\\begin{split}\\begin{bmatrix}\\mathbb {K}^{v v}_{\\text{mat}}+\\mathbb {K}^{v v}_{\\text{geo}}-\\mathbb {K}^{v v}_{\\text{load}} &\\mathbb {K}^{v \\alpha }_{\\text{mat}}+\\mathbb {K}^{v \\alpha }_{\\text{geo}}-\\mathbb {K}^{v \\alpha }_{\\text{load}}\\\\\\mathbb {K}^{v \\alpha \\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}_{\\text{mat}}+\\mathbb {K}^{\\alpha v}_{\\text{geo}} &\\mathbb {K}^{\\alpha \\alpha }_{\\text{mat}}\\\\\\end{bmatrix}\\begin{Bmatrix}\\Delta {\\mathbb {v}} \\\\\\Delta \\alpha \\end{Bmatrix}=-\\begin{Bmatrix}\\mathbb {F}_{\\text{int}}^{v } - \\mathbb {F}_{\\text{ext}}^{v } \\\\\\mathbb {F}_{\\text{int}}^{\\alpha }\\end{Bmatrix},\\end{split}$ where the subscripts \"mat\", \"geo\", and \"load\", represent the material, geometric, and load part of the element stiffness matrix.", "In particular, the material sub-matrices $\\mathbb {K}^{v v}_{\\text{mat}}$ , $\\mathbb {K}^{v \\alpha }_{\\text{mat}}$ , and $\\mathbb {K}^{\\alpha 
\\alpha }_{\\text{mat}}$ in Eq.", "(REF ) are as follows: $\\left.\\begin{split}\\mathbb {K}^{v v}_{\\text{mat}} = \\int _{V_0^{\\mathfrak {e}}}[\\tilde{\\mathbb {B}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}(\\mathbb {A}^{(1)}\\tilde{\\mathbb {B}}+\\mathbb {A}^{(3)}\\mathbb {B}^{(2)})+\\mathbb {B}^{(2) \\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} (\\mathbb {A}^{(2)}\\mathbb {B}^{(2)}+\\mathbb {A}^{(4)} \\tilde{\\mathbb {B}})]\\text{d} \\mathcal {V}_0^{\\mathfrak {e}}\\\\\\mathbb {K}^{v \\alpha }_{\\text{mat}}= \\int _{V_0^{\\mathfrak {e}}}(\\tilde{\\mathbb {B}}^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {A}^{(1)}+ \\mathbb {B}^{(2) \\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {A}^{(4)}) \\bar{\\mathbb {B}}\\text{d} \\mathcal {V}_0^{\\mathfrak {e}}, \\quad \\mathbb {K}^{\\alpha \\alpha }_{\\text{mat}}= \\int _{V_0^{\\mathfrak {e}}}\\bar{\\mathbb {B}} ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\mathbb {A}^{(1)} \\bar{\\mathbb {B}}\\text{d} \\mathcal {V}_0^{\\mathfrak {e}}\\end{split}\\right\\rbrace .$ Moreover, the load sub-matrices $\\mathbb {K}^{v v}_{\\text{load}}$ , $\\mathbb {K}^{v \\alpha }_{\\text{load}}$ are given by $\\left.\\begin{split}\\mathbb {K}^{v v}_{\\text{load}} = \\int _{V_0^{\\mathfrak {e}}}\\epsilon _{ijk} Q^{-1}_{JN} \\tilde{B}^{\\text{rem}}_N B^{\\text{ext}}_j\\mathbb {y}_k (\\mathbb {b}^{(0)}_{iJ}+z \\mathbb {b}^{(1)}_{iJ})\\text{d} \\mathcal {V}_0^{\\mathfrak {e}}\\\\\\mathbb {K}^{v \\alpha }_{\\text{load}} = \\int _{V_0^{\\mathfrak {e}}}\\epsilon _{ijk} Q^{-1}_{JN} \\tilde{B}^{\\text{rem}}_N B^{\\text{ext}}_j\\mathbb {y}_k \\bar{\\mathbb {b}}_{iJ} \\text{d} \\mathcal {V}_0^{\\mathfrak {e}}\\end{split}\\right\\rbrace ,$ where $\\mathbb {y}_k=\\lbrace {\\bf 0}_{1 \\times 3(n^u+n^w)},{\\bf 0}_{1 \\times (k-1)n^{\\theta }},\\mathbb {N}_{\\theta },{\\bf 0}_{1 \\times (3-k)n^{\\theta }},{\\bf 0}_{1 \\times n^{\\phi }} \\rbrace ^{\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}}$ , with $k \\in \\lbrace 1,2,3\\rbrace $ , is a column vector whose nonzero entry is $\\mathbb {N}_{\\theta }$ .", "The expressions for the geometric sub-matrices, resulting from the term ${P}^{(0)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\delta {H}^{(1)}+{M}^{(0)} \\hspace{0.97214pt}\\raisebox {0.25ex}:\\hspace{1.111pt}\\Delta \\delta {H}^{(2)}$ in Eq.", "(REF ), are too lengthy and are not presented here.", "Finally, the assembled system of equations may be written of the form $\\tilde{\\mathbb {K}} \\Delta \\tilde{\\mathbb {V}}=-\\tilde{\\mathbb {R}}$ .", "In this relation, $\\Delta \\tilde{\\mathbb {V}}$ is the incremental generalized displacement vector that contains all nodal DOFs, $\\tilde{\\mathbb {K}}$ is the assembled stiffness matrix, and $\\tilde{\\mathbb {R}}$ is the assembled residual vector.", "After finding $\\Delta \\tilde{\\mathbb {V}}$ , the non-rotational quantities are update via the relations $\\mathbb {u}+\\Delta \\mathbb {u}\\rightarrow \\mathbb {u}$ , $\\mathbb {w}+\\Delta \\mathbb {w}\\rightarrow \\mathbb {w}$ , and $\\phi +\\Delta \\phi \\rightarrow \\phi $ .", "However, the update procedure for the rotation pseudo-vector is completely different.", "Let $\\Delta \\theta $ be the increment of the rotation pseudo-vector.", "The updated rotation pseudo-vector resulting from the two subsequent rotations $\\theta $ and $\\Delta \\theta $ is then calculated via the following relations [75]: ${\\theta 
}^{\\ast }_{\\text{updated}} = \\frac{ {\\theta }^{\\ast } + \\Delta {\\theta }^{\\ast } + ({\\Delta \\hat{\\theta }}^{\\ast }) {\\theta }^{\\ast } }{1-{\\theta }^{\\ast } \\cdot {\\Delta \\theta }^{\\ast } }\\quad \\text{with} \\quad {\\theta }^{\\ast }= \\frac{\\theta }{\\theta } \\tan \\frac{\\theta }{2}.$ It is noted that ${\\theta }^{\\ast }$ is the normalized rotation pseudo-vector.", "Moreover, ${\\Delta \\hat{\\theta }}^{\\ast }$ is the skew-symmetric tensor corresponding to ${\\Delta {\\theta }}^{\\ast }$ .", "The proof of Eq.", "(REF ) is lengthy and is available in, e.g., Argyris [75]." ], [ "Numerical examples", " To examine the applicability and performance of the developed formulation, several numerical examples are provided in this section.", "To do so, a home-written FE code based on the formulation presented in the previous sections has been prepared.", "The 10-parameter micropolar shell element designed for the present numerical simulations is an eight-node quadrilateral.", "All eight nodes contain the three displacement components $u_i$ .", "However, only the corner nodes contain the $w_i$ , $\\phi $ , and $\\theta _i$ DOFs.", "In other words, the DOF parameters defined after Eqs.", "(REF ) and (REF ) are $n^u=8$ and $n^w=n^{\\theta }=n^{\\phi }=4$ .", "Following Korelc and Wriggers [70], the enhancing deformation gradient $\\bar{{F}}$ is considered to be of the following form: $\\bar{{F}}= {J}^{-\\scriptscriptstyle \\hspace{-0.55542pt}\\top \\hspace{-1.111pt}} \\bar{{F}}^{\\text{ref}} {J}^{-1},$ where ${J}$ is the Jacobi matrix between the physical and parent elements.", "Moreover, $\\bar{{F}}^{\\text{ref}}$ is the enhancing deformation gradient defined in the parent $\\lbrace \\xi , \\eta \\rbrace $ space.", "In this work, the nonzero components of $\\bar{{F}}^{\\text{ref}}$ are considered as follows: $\\bar{F}^{\\text{ref}}_{13}=\\alpha _1 \\xi + \\alpha _2 \\eta , \\quad \\bar{F}^{\\text{ref}}_{23}=\\alpha _3 \\xi + \\alpha _4 \\eta , \\quad \\bar{F}^{\\text{ref}}_{33}=\\alpha _5 \\xi + \\alpha _6 \\eta ,$ which are linear functions in terms of the parent coordinates $\\xi $ and $\\eta $ .", "This indicates that $\\bar{{F}}$ contains six enhanced parameters, namely $\\mathcal {P}^{\\ast }=6$It is also possible to include the nonlinear terms involving $\\lbrace \\xi ^2, \\eta ^2, \\xi \\eta ^2,\\xi ^2 \\eta \\rbrace $ or $\\lbrace 1-3\\xi ^2, 1-3\\eta ^2, \\xi (1-3\\eta ^2),\\eta (1-3\\xi ^2) \\rbrace $ in the components of $\\bar{{F}}^{\\text{ref}}$ .", "In this case, the number of the enhanced parameters increases from 6 to 39.", "However, our numerical simulations reveal that the change in the results is negligible..", "The standard $2 \\times 2$ Gauss–Legendre integration rule has been employed to evaluate all integrals over the element surface.", "Moreover, the two-point rule has been used for integration along the shell thickness." ], [ "VERIFICATION EXAMPLE: bending of beam-like strips", " To examine the validity of the results of the proposed formulation, the flexural deformation of four beam-like strips, made of HMSMs and subject to an external magnetic flux is studied in this example.", "Extensive experiments on these structures have been previously conducted by Zhao et al. 
[24].", "The mechanical properties of the material are $\\mu =303$ and $\\lambda =7300$ (kPa).", "The referential residual magnetic flux density is along the undeformed centreline and its magnitude is $|\\tilde{\\mathbb {B}}^{\\text{rem}}|= 143 $ (mT).", "The width of all strips is 5 mm, and their lengths and heights are given by the sets $L \\in \\lbrace 11, 19.2, 17.2, 17.2 \\rbrace $ (mm) and $h \\in \\lbrace 1.1, 1.1, 0.84, 0.42 \\rbrace $ (mm), respectively.", "The aspect ratio parameter is defined by \"$AR=L/h$ \".", "For the given data, the aspect ratios of the strips are 10, $17.5$ , $20.5$ , and 41, respectively.", "The strips are clamped at $X_1=0$ , and are subjected to the maximum external magnetic flux $\\mathbb {B}^{\\text{ext}}_{\\text{max}}= 50 \\mathbb {e}_3$ (mT).", "Convergence analysis reveals that the minimum required number of elements along the length of the strips is 10, 15, 30, and 40, respectively.", "Additionally, two elements in the width direction are necessary for the four strips.", "Furthermore, for $\\eta = 0.1 \\mu $ and $l=0.1h$ , the results of the present formulation will be very close to the available data reported in [24].", "These relations will be also used for all simulations presented in this work.", "The nondimensional tip deflection $u_3^{\\text{T}}/L$ versus the load parameter $\\frac{10^3}{\\mu \\mu _0}|\\mathbb {B}^{\\text{ext}}||\\tilde{\\mathbb {B}}^{\\text{rem}}|$ is depicted in Fig.", "REF (a).", "It is observed that the present results are in good agreement with the experimental and numerical data obtained by Zhao et al. [24].", "The deformed shapes of the strips for four values of the external magnetic flux are displayed in Figs.", "REF (b,c,d,e).", "To have a comparison between the deformation of the strips for a specific value of $|\\mathbb {B}^\\text{ext}|$ , the four strips are plotted in the same figure.", "In particular, from Fig.", "REF (b) it is observed that the slender strip with $AR=41$ exhibits very large deformations even for small values of $|\\mathbb {B}^\\text{ext}|$ .", "Figure: Beam-like strips under magnetic loading,(a): load-deflection curves,(b,c,d,e): deformed shapes with the contour plots of u 3 u_3 (in mm) for |𝔹 ext |∈{2,5,15,50}|\\mathbb {B}^\\text{ext}| \\in \\lbrace 2,5, 15, 50 \\rbrace (mT)" ], [ "Deformation of a hollow cross", "In this example, the large deformation of a hollow cross under magnetic loading is investigated.", "This example has been previously studied by Kim et al.", "[18] and Zhao et al. 
[24].", "As shown in Fig.", "REF (a), the geometry is composed of 24 trapezoidal blocks.", "The block dimensions in the $X_1X_2$ plane are displayed in the figure, and its thickness is $0.41$ mm.", "The mechanical properties are the same as those in the previous example, namely $\\mu =303$ and $\\lambda =7300$ (kPa).", "The magnitude of the referential remnant magnetic flux density is $|\\tilde{\\mathbb {B}}^{\\text{r}}|=102$ (mT).", "As shown on Fig.", "REF (a), the direction of $\\tilde{\\mathbb {B}}^{\\text{r}}$ is constant in each block, but varies in different blocks.", "The maximum external magnetic flux density $\\mathbb {B}^{\\text{ext}}_{\\text{max}}= -200 \\mathbb {e}_3$ (mT) is applied to the body.", "Due to symmetry in the $X_1X_2$ plane, it is sufficient to discretize merely one-quarter of the geometry.", "Numerical simulations reveal that a mesh of $6 \\times 6$ elements in each trapezoidal block leads to convergent results.", "Variations of the displacement component $u_3$ for several material points versus the nondimensional loading parameter $\\frac{10^3}{\\mu \\mu _0}|\\mathbb {B}^{\\text{ext}}||\\tilde{\\mathbb {B}}^{\\text{rem}}|$ are depicted in Fig.", "REF (a).", "It is noted that the lateral deflection at the material points $A$ and $G$ is considered to be zero.", "At the final stage of deformation, the lateral displacement at the points $E$ and $C$ is very close to each other.", "More precisely, the maximum lateral displacement of about $10.39$ mm at the material point $C$ is observed.", "The final deformed shape of the body observed in the experiments of Kim et al.", "[18] is illustrated in Fig.", "REF (b).", "Moreover, the deformed shapes of the hollow cross under four different values of the external magnetic flux are displayed in Figs.", "REF (c,d,e,f).", "By comparing figures REF (b) and REF (e), it is deduced that the final deformed shape obtained by the present formulation is qualitatively similar to that reported in the experimental studies of Kim et al. [18].", "Figure: A hollow cross under magnetic loading,(a): load-displacement curves,(b): experiment ,(c,d,e,f): deformed shapes with the contour plots of u 3 u_3 (in mm) for |𝔹 ext |∈{10,50,100,200}|\\mathbb {B}^\\text{ext}| \\in \\lbrace 10,50, 100, 200 \\rbrace (mT)" ], [ "Deformation of a cross-shaped geometry", " In this example, the finite elastic deformation of a cross-shaped thin body made of HMSMs is studied.", "As shown in Fig.", "REF (a), the geometry is composed of nine equal blocks.", "The block dimensions in the $X_1X_2$ plane are $6 \\times 6$ (mm), and its thickness is $0.9$ (mm).", "The blocks are welded together by a specific procedure advocated in Kuang et al. 
[20].", "The mechanical properties are considered to be $\\mu =135$ and $\\lambda =3250$ (kPa).", "The magnitude of the referential remnant magnetic flux density at each block is $|\\tilde{\\mathbb {B}}^{\\text{r}}|=94$ (mT).", "However, as can be seen from the figure, the direction of $\\tilde{\\mathbb {B}}^{\\text{r}}$ is not the same in all blocks.", "To deform the body by magnetic loading, the maximum external magnetic flux density $|\\mathbb {B}^{\\text{ext}}_{\\text{max}}|= 40$ (mT) is applied along the $X_3$ -axis.", "Due to symmetry in the $X_1X_2$ plane, only one-quarter of the geometry is discretized.", "Numerical simulations reveal that a mesh containing 15 elements along $AC$ and 3 elements along $AA^{\\prime }$ is sufficient to obtain convergent results.", "The displacement component $u_3$ at some material points versus the nondimensional loading parameter $\\frac{10^3}{\\mu \\mu _0}|\\mathbb {B}^{\\text{ext}}||\\tilde{\\mathbb {B}}^{\\text{rem}}|$ is plotted in Fig.", "REF (a).", "It is noted that the displacement $u_3$ of the point $D$ has been considered to be zero.", "The maximum lateral displacement occurs at the point $A$ and is about $22.78$ mm.", "The final deformed shape of the body observed in the experimental studies of Ref.", "[20] is displayed in Fig.", "REF (b).", "Moreover, the deformed shapes of the cross under four different values of the external magnetic flux are illustrated in Figs.", "REF (c,d,e,f).", "Obviously, the final deformed shape in Fig.", "REF (e), predicted by the present formulation, is qualitatively similar to that observed in the experiments of Kuang et al.", "[20] in Fig.", "REF (b).", "Figure: A cross-shaped geometry under magnetic loading,(a): load-displacement curves,(b): experiment ,(c,d,e,f): deformed shapes with the contour plots of u 3 u_3 (in mm) for |𝔹 ext |∈{2,5,10,40}|\\mathbb {B}^\\text{ext}| \\in \\lbrace 2,5, 10, 40 \\rbrace (mT)" ], [ "Deformation of an H-shaped geometry", " In this example, the large deformation of an H-shaped geometry under magnetic loading is investigated.", "As shown in Fig.", "REF (a), the geometry is composed of 15 blocks, of which dimensions, material, and magnetic properties are the same as those given in the previous example.", "The maximum external magnetic flux density $\\mathbb {B}^{\\text{ext}}_{\\text{max}}= -50 \\mathbb {e}_3$ (mT) is applied to the body.", "Due to symmetry, only one-quarter of the geometry is discretized.", "Numerical simulations indicate that a mesh of $4 \\times 4$ elements in each block is sufficient to obtain convergent results.", "In other words, the number of elements along $AA^{\\prime }$ , $AC$ and $CD$ is 2, 14, and 10, respectively.", "The displacement component $u_3$ at some material points versus the nondimensional loading parameter $\\frac{10^3}{\\mu \\mu _0}|\\mathbb {B}^{\\text{ext}}||\\tilde{\\mathbb {B}}^{\\text{rem}}|$ is displayed in Fig.", "REF (a).", "It is noted that the lateral deflection at the point $D$ is zero.", "The maximum lateral displacement of about $24.65$ mm at the material point $A$ is observed.", "The final deformed shape of the body from the experimental observations of Kuang et al.", "[20] is illustrated in Fig.", "REF (b).", "Moreover, the deformed shapes of the body under four different values of the external magnetic flux are displayed in Figs.", "REF (c,d,e,f).", "A comparison of figures REF (b) and REF (e) shows that the final deformed shape obtained by the present formulation is qualitatively similar to that observed in the experiments 
of Kuang et al. [20].", "Figure: An H-shaped geometry under magnetic loading,(a): load-displacement curves,(b): experiment ,(c,d,e,f): deformed shapes with the contour plots of u 3 u_3 (in mm) for |𝔹 ext |∈{2,5,15,50}|\\mathbb {B}^\\text{ext}| \\in \\lbrace 2,5, 15, 50 \\rbrace (mT)" ], [ "Deformation of a cylinder (magnetic pump)", " In this example, the elastic deformation of a cylindrical shell made of HMSMs and subject to magnetic loading is investigated.", "As will be shown below, the deformation pattern in the cylinder is so that it may be used as a macro- or micro-fluidic magnetic pump in practical applications.", "In a relatively similar context, an electro-active polymer-based micro-fluidic pump can be seen in Yan et al.", "[76].", "In the present case, it is assumed that the cylinder has been made of the same blocks as described in the example REF .", "To construct the geometry, 24 blocks in the circumferential direction and 20 blocks along the axis of the cylinder are used.", "Therefore, the mean radius and length of the cylinder are $R=22.9$ and $L=120$ (mm), respectively.", "It is assumed that the remnant magnetic flux $\\tilde{\\mathbb {B}}^{\\text{rem}}$ is tangent to the cylinder surface, perpendicular to the $X_2$ axis, and has a positive component along the $X_3$ axis.", "The maximum external magnetic flux density $\\mathbb {B}^{\\text{ext}}_{\\text{max}}= 150 \\mathbb {e}_3$ (mT) is applied to the body.", "Moreover, both ends of the cylinder are assumed to be clamped.", "Due to symmetry, only one-quarter of the geometry is discretized by the shell elements.", "Numerical simulations show that a mesh of $24 \\times 20$ elements is sufficient to obtain convergent results.", "Variations of the displacement components $u_1$ and $u_3$ at some material points against the nondimensional loading parameter $\\frac{10^3}{\\mu \\mu _0}|\\mathbb {B}^{\\text{ext}}||\\tilde{\\mathbb {B}}^{\\text{rem}}|$ are plotted in Fig.", "REF (a).", "The coordinates of the material points $A$ , $B$ and $C$ , lying in the $XZ$ -plane, are ($0,R$ ), ($R,0$ ), and $\\frac{1}{\\sqrt{2}}(R,R)$ , respectively.", "The maximum (horizontal) displacement occurs at the point $B$ and is about $16.75$ mm.", "The deformed shapes of the cylinder under four different values of the external magnetic flux are demonstrated in Figs.", "REF (b,c,d,e).", "It is observed that under the applied magnetic flux, the cylinder contracts at its middle section.", "This is the reason why it can be used as a magnetic pump in real applications.", "Figure: Deformation of a cylinder under magnetic loading,(a): load-displacement curves,(b,c,d,e): deformed shapes with the contour plots of |u 1 ||u_1| (in mm) for |𝔹 ext |∈{20,50,100,150}|\\mathbb {B}^\\text{ext}| \\in \\lbrace 20,50, 100, 150 \\rbrace (mT)" ], [ "A magnetic gripper", " The elastic deformation of a spherical gripper made of HMSMs and subject to magnetic loading is studied in this example.", "Soft grippers made of magneto-active materials have the potential as actuating components in soft robotics.", "For instance, Ju et al.", "[77] and Carpenter et al.", "[78] demonstrated additively manufactured magneto-active grippers while Kadapa and Hossain [79] simulated the viscoelastic influences of underlying polymeric materials.", "In our case, the gripper is composed of 12 equal arms.", "In the undeformed configuration, the arms cover the surface of an incomplete sphere of radius $R$ .", "It is assumed that the mechanical and magnetic properties, and the thickness of the HMSM 
are the same as those given in the example REF .", "The geometry of a single arm is shown in Fig.", "REF (a).", "The arc $DE$ lies in the $X_1X_2$ plane, its length is 12 mm, and covers $30^{\\circ }$ of a full circle.", "Therefore, the mean radius of the arm is $R=\\frac{12}{\\pi / 6} =22.92$ mm.", "The arc $AC$ lies in the $X_1X_3$ plane and its length is 60 mm.", "The angle between the radius $OA$ and the $X_3$ -axis is $15^{\\circ }$ , and the geometry is symmetric w.r.t.", "the $X_1X_2$ plane.", "Moreover, the topmost arc of the arm is assumed to be clamped.", "As shown in the figure, let $\\mathbb {e}_{\\varphi }$ be the standard meridian unit tangent vector to the sphere.", "It is assumed that the remnant magnetic flux $\\tilde{\\mathbb {B}}^{\\text{rem}}$ is along $\\mathbb {e}_{\\varphi }$ for $X_3>0$ , and along $-\\mathbb {e}_{\\varphi }$ for $X_3<0$ .", "The maximum external magnetic flux density $\\mathbb {B}^{\\text{ext}}_{\\text{max}}= 10 \\mathbb {e}_3$ (mT) is applied to the body.", "Numerical simulations show that a mesh of $6 \\times 30$ elements in the arm leads to convergent results.", "Variations of the displacement components $u_1$ and $u_3$ at some material points versus the nondimensional loading parameter $\\frac{10^3}{\\mu \\mu _0}|\\mathbb {B}^{\\text{ext}}||\\tilde{\\mathbb {B}}^{\\text{rem}}|$ are plotted in Fig.", "REF (a).", "For a single arm under the maximum external magnetic flux of 10 mT, the maximum value of the displacement component $u_3$ is obtained to be about $43.9$ mm.", "The deformed shapes of the gripper under four different values of the external magnetic flux are illustrated in Figs.", "REF (b,c,d,e).", "It is noted that the maximum value of the external magnetic flux to avoid intersection between the arms is $6.8$ mT.", "In this case, the maximum $u_3$ component of displacement is about $41.6$ mm.", "Figure: Deformation of a spherical gripper with 12 arms,(a): load-displacement curves,(b,c,d,e): deformed shapes with the contour plots of u 3 u_3 (in mm) for |𝔹 ext |∈{1,2,4,6.8}|\\mathbb {B}^\\text{ext}| \\in \\lbrace 1,2,4,6.8\\rbrace (mT)" ], [ "Summary", " In this research, a 10-parameter micropolar shell model for large elastic deformation analysis of thin structures made of hard-magnetic soft materials was developed.", "The idea of employing the micropolar theory comes from the fact that an external magnetic flux induces a body couple on the material particles of HMSMs, which in turn leads to asymmetric Cauchy stress tensor.", "Since the governing equations at finite strains, including magnetic effects, cannot be solved analytically, a nonlinear finite element formulation for the numerical solution of the governing equations under arbitrary geometry, boundary conditions, and loading cases was developed.", "Six different numerical examples were solved to investigate the applicability of the developed formulation.", "It was observed that the present formulation can capture the numerical and experimental results reported in the literature.", "The generalization of the present research to include thermal and viscoelastic effects will be made in the subsequent contributions." ], [ "Declaration of competing interest", " The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.", "M. Hossain acknowledges the funding through an Engineering and Physical Sciences Research Council (EPSRC) Impact Acceleration Award (EP/R511614/1).", "M. 
Hossain also acknowledges the support by EPSRC through the Supergen ORE Hub (EP/S000747/1), which has awarded funding for the Flexible Fund project Submerged bi-axial fatigue analysis for flexible membrane Wave Energy Converters (FF2021-1036)." ] ]
2207.10480
[ [ "Magic ELF: Image Deraining Meets Association Learning and Transformer" ], [ "Abstract Convolutional neural network (CNN) and Transformer have achieved great success in multimedia applications.", "However, little effort has been made to effectively and efficiently harmonize these two architectures to satisfy image deraining.", "This paper aims to unify these two architectures to take advantage of their learning merits for image deraining.", "In particular, the local connectivity and translation equivariance of CNN and the global aggregation ability of self-attention (SA) in Transformer are fully exploited for specific local context and global structure representations.", "Based on the observation that rain distribution reveals the degradation location and degree, we introduce degradation prior to help background recovery and accordingly present the association refinement deraining scheme.", "A novel multi-input attention module (MAM) is proposed to associate rain perturbation removal and background recovery.", "Moreover, we equip our model with effective depth-wise separable convolutions to learn the specific feature representations and trade off computational complexity.", "Extensive experiments show that our proposed method (dubbed as ELF) outperforms the state-of-the-art approach (MPRNet) by 0.25 dB on average, but only accounts for 11.7\\% and 42.1\\% of its computational cost and parameters.", "The source code is available at https://github.com/kuijiang94/Magic-ELF." ], [ "Introduction", "Rain perturbation causes detrimental effects on image quality and significantly degrades the performance of multimedia applications like image understanding [33], [47], object detection [67] and identification [52].", "Image deraining [5], [44], [20] tends to produce the rain-free result from the rainy input, and has drawn widespread attention in the last decade.", "Prior to deep neural networks, the early model-based deraining methods [12] rely more on statistical analyses of image contents, and enforce handcrafted priors (e.g., sparsity and non-local means filtering) on both rain and background.", "Still, they are not robust to varying and complex rain conditions [2], [6], [66].", "Because of the powerful ability to learn generalizable priors from large-scale data, CNNs have emerged as a preferable choice compared to conventional model-based methods.", "To further promote the deraining performance, various sophisticated architectures and training practices are designed to boost the efficiency and generalization [17], [55], [57], [55].", "However, due to intrinsic characteristics of local connectivity and translation equivariance, CNNs have at least two shortcomings: 1) limited receptive field; 2) static weight of sliding window at inference, unable to cope with the content diversity.", "The former thus prevents the network from capturing the long-range pixel dependencies while the latter sacrifices the adaptability to the input contents.", "As a result, it is far from meeting the requirement in modeling the global rain distribution, and generates results with obvious rain residue (PreNet [40] and DRDNet [18]) or detail loss (MPRNet [60] and SWAL [14]).", "Please refer to the deraining results in Figure REF .", "Self-attention (SA) calculates response at a given pixel by a weighted sum of all other positions, and thus has been explored in deep networks for various natural language and computer vision tasks [43], [49], [61].", "Benefiting from the advantage of global processing, SA achieves 
significant performance boost over CNNs in eliminating the degradation perturbation [32], [4], [51].", "However, due to the global calculation of SA, its computation complexity grows quadratically with the spatial resolution, making it infeasible to apply to high-resolution images [59].", "More recently, Restormer [59] proposes a multi-Dconv head “transposed” attention (MDTA) block to model global connectivity, and achieves impressive deraining performance.", "Although MDTA applies SA across the feature dimension rather than the spatial dimension and thus has linear complexity, Restormer still quickly overtaxes the computing resources.", "As illustrated in Figure REF , the high-accuracy model Restormer [59] requires far more computational resources for better restoration performance.", "It has 563.96 GFlops and 26.10 Million parameters, and consumes 0.568s to derain an image with $512\times 512$ pixels on one TITAN X GPU, which is too computationally and memory intensive for many real-world applications with resource-constrained devices.", "Figure: Comparison of mainstream deraining methods in terms of efficiency (inference time (ms) and computational cost (Gflops)) vs. performance (PSNR) on the TEST1200 dataset with image size of $512\times 512$.", "Compared with the top-performing method Restormer, our ELF achieves comparable deraining performance (33.38dB vs. 33.19dB) while saving 88.0% inference time (ms) (125 vs. 568) and 88.4% computational cost (Gflops) (66.39 vs. 568).", "Our light-weight model ELF-LW is still competitive, surpassing the real-time deraining method PCNet by 0.52dB with less computational cost (Gflops) (21.53 vs. 28.21).", "Besides its low efficiency, Restormer [59] has at least two other shortcomings.", "1) Regarding image deraining as a simple rain-streak removal problem based on the additive model is debatable, since the rain streak layer and the background layer are highly interwoven: rain streaks destroy image contents, including details, color, and contrast.", "2) Constructing a pure Transformer-based framework is suboptimal, since SA is good at aggregating global feature maps but immature in learning the local contextual relations at which CNNs are skilled.", "This in turn naturally raises two questions: (1) How to associate rain perturbation removal and background recovery?", "(2) How to unify SA and CNNs efficiently for image deraining?", "To answer the first question, we take inspiration from the observation that the rain distribution reflects the degradation location and degree; hence, beyond being removed, the predicted rain distribution can also serve as a degradation prior.", "We therefore propose to refine background textures with the predicted degradation prior in an association learning manner.", "As a result, we accomplish image deraining by associating rain streak removal and background recovery, where an image deraining network (IDN) and a background recovery network (BRN) are specifically designed for these two subtasks.", "The key part of association learning is a novel multi-input attention module (MAM).", "It generates the degradation prior and produces the degradation mask according to the predicted rain distribution.", "Benefiting from the global correlation calculation of SA, MAM can extract informative complementary components from the rainy input (query) with the degradation mask (key), and thus help accurate texture restoration.", "An intuitive idea to deal with the second issue is to construct a unified model with the advantages of these two architectures.", "It
has been demonstrated that the SA and standard convolution exhibit opposite behaviors but complementary [38].", "Specifically, SA tends to aggregate feature maps with self-attention importance, but convolution diversifies them to focus on the local textures.", "Unlike Restormer equipped with pure Transformer blocks, we promote the design paradigm in a parallel manner of SA and CNNs, and propose a hybrid fusion network.", "It involves one residual Transformer branch (RTB) and one encoder-decoder branch (EDB) (The detailed pipeline is provided in Supplementary.).", "The former takes a few learnable tokens (feature channels) as input and stacks multi-head attention and feed-forward networks to encode global features of the image.", "The latter, conversely, leverages the multi-scale encoder-decoder to represent contexture knowledge.", "We propose a light-weight hybrid fusion block (HFB) to aggregate the outcomes of RTB and EDB to yield a final solution to the subtask.", "In this way, we construct our final model as a two-stage Transformer-based method, namely ELF, for single image deraining, which outperforms the CNN-based SOTA (MPRNet [60]) by 0.25dB on average, but saves 88.3% and 57.9% computational cost and parameters.", "The main contributions of this paper are summarized as follows.", "To the best of our knowledge, we are the first to consider the high efficiency and compatibility of Transformer and CNNs for the image deraining task, and unify the advantages of SA and CNNs into an association learning-based network for rain perturbation removal and background recovery.", "This showcases an efficient and effective implementation of part-whole hierarchy.", "We design a novel multi-input attention module (MAM) to associate rain streaks removal and background recovery tasks elaborately.", "It significantly alleviates the learning burden while promoting the texture restoration.", "Comprehensive experiments on image deraining and detection tasks have verified the effectiveness and efficiency of our proposed ELF method.", "ELF surpasses MPRNet [60] by 0.25dB on average, while the latter suffers from 8.5$\\times $ computational cost and 2.4$\\times $ parameters.", "Figure: The architecture of our proposed ELF deraining method.", "It consists of an image deraining network (IDN), a multi-input attention module (MAM), and a background reconstruction network (BRN).", "IDN learns the corresponding rain distribution I R,S * I^*_{R,S} from the sub-sample I Rain,S I_{Rain,S}, and produces the corresponding deraining result I B,S * I^*_{B,S} by subtracting I R,S * I^*_{R,S}.", "Then, MAM takes I R,S * I^*_{R,S}, I Rain I_{Rain} and I B,S * I^*_{B,S} as inputs, where the predicted rain distribution provides the prior (local and degree) to exploitcomplementary background components f BT f_{BT} from I Rain I_{Rain} to promote the background recovery." ], [ "Related Work", "Image deraining has achieved significant progress in innovative architectures and training methods in the last few years.", "Next, we briefly describe the typical models for image deraining and visual Transformer relative to our studies." 
], [ "Single Image Deraining", "Traditional deraining methods [23], [23], [36] adopt image processing techniques and hand-crafted priors to address the rain removal problem.", "However, these methods produce unsatisfied results when the predefined model do not hold.", "Recently, deep-learning based approaches [27], [62], [19] have emerged for rain streak removal and demonstrated impressive restoration performance.", "Early deep learning-based deraining approaches [10], [9] apply convolution neural networks (CNNs) to directly reduce the mapping range from input to output and produce rain-free results.", "To better represent the rain distribution, researchers take rain characteristics such as rain density [63], size and the veiling effect [27], [29] into account, and use recurrent neural networks to remove rain streaks via multiple stages [31] or the non-local network [45] to exploit long-range spatial dependencies for better rain streak removal [26].", "Further, self-attention (SA) is recently introduced to eliminate the rain degradation with its powerful global correlation learning, and achieves impressive performance.", "Although the token compressed representation and global non-overlapping window-based SA [51], [16] are adopted to promote the global SA to alleviate the computational burden, these models still quickly overtaxes the computing resource.", "Apart from the low efficiency, these methods [16], [59] regard the deraining task as the rain perturbation removal only, ignoring the additional degradation effects of missing details and contrast bias." ], [ "Vision Transformers", "Transformer-based models are first developed for sequence processing in natural language tasks [43].", "Due to the distinguishing feature of the strong capability to learn long-range dependencies, ViT [8] introduces Transformer into computer vision field, and then a plenty of Transformer-based methods have been applied to vision tasks such as image recognition [8], [15], segmentation [48], object detection [3], [35].", "Vision Transformers [8], [42] decompose an image into a sequence of patches (local windows) and learn their mutual relationships, which is adaptable to the given input content [24].", "Especially for low-level vision tasks, since the global feature representation promotes accurate texture inference, Transformer models have been employed to solve the low-level vision problems [32], [51].", "For example, TTSR [53] proposes a self-attention module to transfer the texture information in the reference image to the high-resolution image reconstruction, which can deliver accurate texture features.", "Chen et al.", "[4] propose a pre-trained image processing transformer on the ImageNet datasets and uses the multi-head architecture to process different tasks separately.", "However, the direct application of SA fails to exploit the full potential of Transformer, resulting from heavy self-attention computation load and inefficient communications across different depth (scales) of layers.", "Moreover, little effort has been made to consider the intrinsic complementary characteristics between Transformer and CNNs to construct a compact and practical model.", "Naturally, this design choice restricts the context aggregation within local neighborhoods, defying the primary motivation of using self-attention over convolutions, thus not ideally suited for image-restoration tasks.", "In contrast, we propose to explore the bridge, and construct a hybrid model of Transformer and CNN for image deraining task." 
], [ "Proposed Method", "Our main goal is to construct a high-efficiency and high-accuracy deraining model by taking advantage of the CNN and Transformer.", "Theoretically, the self-attention (SA) averages feature map values with the positive importance-weights to learn the global representation while CNNs tend to aggregate the local correlated information.", "Intuitively, it is reasonable to combine them to fully exploit the local and global textures.", "A few studies try to combine these two structures to form a hybrid framework for low-level image restoration but have failed to give full play to it.", "Taking the image deraining as an example, unlike the existing Transformer-based methods that directly apply Transformer blocks to replace convolutions, we consider the high efficiency and compatibility of these two structures, and construct a hybrid framework, dubbed ELF to harmonize their advantages for image deraining.", "Compared to the existing deraining methods, our proposed ELF departs from them in at least two key aspects.", "Differences in design concepts: unlike the additive composite model that predicts the optimal approximation $I^*_{B}$ of background image $I_{B}$ from the rainy image $I_{Rain}$ , or learns the rain residue $I^*_{R}$ and subtracting it to generate $I^*_{B}$ , ELF casts the image deraining task into the composition of rain streak removal and background recovery, and introduces the Transformer to associate these two parts with a newly designed multi-input attention module (MAM).", "Differences in composition: since the low-frequency signals and high-frequency signals are informative to SA and convolutions [38], a dual-path framework is naturally constructed for the specific feature representation and fusion.", "Specifically, the backbone of ELF contains a dual-path hybrid fusion network, involving one residual Transformer branch (RTB) and one encoder-decoder branch (EDB) to characterize global structure (low-frequency components) and local textures (high-frequency components), respectively.", "Figure REF outlines the framework of our proposed ELF, which contains an image deraining network (IDN), a multi-input attention module (MAM), and a background recovery network (BRN).", "For efficiency, IDN and BRN share the same dual-path hybrid fusion network, which are elaborated in Section REF ." 
], [ "Pipeline and Model Optimization", "Given a rainy image $I_{Rain}\\in \\mathbb {R}^{H\\times W\\times 3}$ and its clean version $I_{B}\\in \\mathbb {B}^{H\\times W\\times 3}$ , where $H$ and $W$ denote the spatial height and weight, we observe that the reconstructed rainy image $I_{Rain, SR}\\in \\mathbb {R}^{H\\times W\\times 3}$ via bilinear interpolation from the sampled rainy image $I_{Rain,S}\\in \\mathbb {R}$ has the similar statistical distribution to the original one, shown in Figure REF .", "This inspires us to predict the rain streak distribution at sampling space to alleviate the learning and computational burden.", "Figure: Fitting results of “Y\" channel histogram for Real and Synthetic samples.“Rain\" and “Rain-LR\" denote the original and corresponding low-dimension space distribution of rainy image.", "“Rain-LR-SR\" is the distribution via Bilinear interpolation from “Rain-LR\".", "The fitting results show that the reconstructed sampling space (“Rain-LR-SR\") from “Rain-LR\" can get the similar statistical distribution with that of the original space.In this way, we first sample $I_{Rain}$ and $I_{B}$ with Bilinear operation to generate the corresponding sub-samples ($I_{Rain,S}\\in \\mathbb {R}$ and $I_{B,S}\\in \\mathbb {B}$ ).", "As illustrated before, our ELF contains two subnetworks (IDN and BRN) to complete the image deraining via association learning.", "Thus, $I_{Rain,S}$ is then input to IDN to generate the corresponding rain distribution $I^*_{R,S}$ and deraining result $I^*_{B,S}$ , expressed as $I^*_{R,S} = \\mathcal {G}_{IDN}(F_{BS}(I_{Rain})), $ where $F_{BS}(\\cdot )$ denotes the Bilinear downsampling to generate the sampled rainy image $I_{Rain,S}$ .", "$\\mathcal {G}_{IDN}(\\cdot )$ refers to the rain estimation function of IDN.", "Rain distribution reveals the degradation location and degree, which is naturally reasonable to be translated into the degradation prior to help accurate background recovery.", "Before passing $I^*_{B,S}$ into BRN for background reconstruction, a multi-input attention module (MAM), shown in Figure REF , is designed to fully exploit the complementary background information from the rainy image $I_{Rain}$ via the Transformer layer, and merge them to the embedding representation of $I^*_{B,S}$ .", "These procedures of MAM are expressed as $\\begin{split}f_{BT} &= F_{SA}(I^*_{R,S}, I_{Rain}),\\\\f_{MAM} &= F_{HFB}(f_{BT}, F_{B}(I^*_{B,S})).\\end{split}$ In Equation (REF ), $F_{SA}(\\cdot )$ denotes self-attention functions, involving the ebedding function and dot-product interaction.", "$F_{B}(\\cdot )$ is the embedding function to generate the initial representation of $I^*_{B,S}$ .", "$F_{HFB}(\\cdot )$ refers to the fusion function in HFB.", "Following that, BRN takes $f_{MAM}$ as input for background reconstruction as $I^*_{B} = \\mathcal {G}_{BRN}(f_{MAM}) + F_{UP}(I^*_{B,S}),$ where $\\mathcal {G}_{BRN}(\\cdot )$ denotes the super-resolving function of BRN, and $F_{UP}(\\cdot )$ is the Bilinear upsampling.", "Unlike the individual training of rain streak removal and background recovery, we introduce the joint constraint to enhance the compatibility of the deraining model with background recovery, automatically learned from the training data.", "Then the image loss (Charbonnier penalty loss [25], [22], [13]) and structural similarity (SSIM) [50] loss are employed to supervise networks to achieve the image and structural fidelity restoration simultaneously.", "The loss functions are given by $\\begin{split}\\mathcal 
{L}_{IDN} &= \\sqrt{(I^*_{B,S}- I_{B,S})^2+\\varepsilon ^2} + \\alpha \\times SSIM(I^*_{B,S}, I_{B,S}),\\\\\\mathcal {L}_{BRN} &= \\sqrt{(I^*_{B}- I_{B})^2+\\varepsilon ^2} + \\alpha \\times SSIM(I^*_{B}, I_{B}),\\\\\\mathcal {L} &= \\mathcal {L}_{IDN} + \\lambda \\times \\mathcal {L}_{BRN},\\end{split}$ where $\\alpha $ and $\\lambda $ are used to balance the loss components, and experimentally set as $-0.15$ and 1, respectively.", "The penalty coefficient $\\varepsilon $ is set to $10^ {-3}$ ." ], [ "Hybrid Fusion Network", "It is known that the self-attention mechanism is the core part of Transformer, which is good at learning long-range semantic dependencies and capturing global structure representation in the image.", "Conversely, CNNs are skilled at modeling the local relations due to the intrinsic local connectivity.", "To this end, we construct the backbone of IDN and BRN into a deep dual-path hybrid fusion network by unifying the advantages of Transformer and CNNs.", "As shown in Figure REF , the backbone involves a residual Transformer branch (RTB) and an encoder-decoder branch (EDB).", "RTB takes a few learnable tokens (feature channels) as input and stacks multi-head attention and feed-forward networks to encode the global structure.", "However, capturing long-range pixel interactions is the culprit for the enormous amount of Transformer computational, making it infeasible to apply to high-resolution images, especially for the image restoration task.", "Besides processing the feature representation on the sampled space, inspired by [1], instead of learning the global spatial similarity, we apply SA to compute cross-covariance across channels to generate the attention map encoding the global context implicitly.", "It has linear complexity rather than quadratically complexity.", "EDB is designed to infer locally-enriched textures.", "Inspired by U-Net [41], we also construct EDB with the U-shaped framework.", "The first three stages form the encoder, and the remaining three stages represent the decoder.", "Each stage takes a similar architecture, consisting of sampling layers, residual channel attention blocks (RCABs) [65] and hybrid fusion block.", "Instead of using the strided or transposed convolution for rescaling spatial resolution of features, we use Bilinear sampling followed by a $1\\times 1$ convolution layer to reduce checkerboard artifacts and model parameters.", "To facilitate residual feature fusion at different stages or scales, we design HFB to aggregate multiple inputs among stages in terms of the spatial and channel dimensions.", "HFB enables more diverse features to be fully used during the restoration process.", "Moreover, to further reduce the number of parameters, RTB and EDB are equipped with depth-wise separable convolutions (DSC).", "For RTB, we integrate DSC into multi-head attention to emphasize on the local context before computing feature covariance to produce the global attention map.", "Moreover, we construct EDB into an asymmetric U-shaped structure, in which the encoder has the portable design with DSC, but the standard convolutions for the decoder.", "This scheme can save about 8% parameters of the whole network.", "We have experimentally verified that utilizing DSC in the encoder is better than that in the decoder.", "Figure: Visualization of MAM, including the embedding representation f B,S f_{B,S} of background image and the extracted background texture information (f BT f_{BT}).", "Using the complementary texture f BT f_{BT} from the rainy 
image, the network can achieve more accurate background restoration.For a better visual effect, we respectively select three of the channels (48) from f B,S f_{B,S} and f BT f_{BT}, and then rescale their pixel values into [0, 255] to generate the corresponding grayscale image." ], [ "Multi-input Attention Module", "To associate rain streaks removal and background recovery, as shown in Figure REF , we construct a multi-input attention module (MAM) with Transformer to fully exploit the complementary background information for enhancement.", "Unlike the standard Transformer receiving a sequence of image patches as input, MAM takes the predicted rain distribution $I^*_{R,S}$ , sub-space deraining image ($I^*_{B,S}$ ) and rainy image $I_{Rain}$ as inputs, and first learns the embedding representation ($f^*_{B,S}$ , $f^*_{R,S}$ , $f_{Rain}$ ), enriched with local contexts.", "$f^*_{R,S}$ and $f_{Rain}$ serve as query (Q), key (K) and value (V) projections.", "Instead of learning the spatial attention map of size $\\mathbb {R}^{HW\\times HW}$ , we then reshape query and key projections, and generate cross-covariance transposed-attention map $M \\in \\mathbb {R}^{C\\times C}$ via the dot-product interaction between $f^*_{R,S}$ and $f_{Rain}$ .", "As shown in Figure REF , the attention map guides the network to excavate background texture information ($f_{BT}$ ) from the embedding representation ($f_{Rain}$ ) of $I_{Rain}$ .", "The procedures in SA are expressed as $F_{SA} =(Softmax(F_{K}(I^*_{R,S}) \\circ F_{Q}(I_{Rain})))\\circ F_{V}(I_{Rain}),$ where $F_{K}(\\cdot )$ , $F_{Q}(\\cdot )$ and $F_{V}(\\cdot )$ are the embedding functions to produce the projections.", "$\\circ $ and $F_{S}(\\cdot )$ denote the dot-product interaction and softmax function.", "Followed by a hybrid fusion block, the extracted complementary information is merged with the embedding presentation of $I^*_{B,S}$ to enrich background representation." 
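As a rough, self-contained illustration of this cross-covariance attention, the sketch below builds the $C\times C$ channel attention map from the rain-distribution key and the rainy-image query, applies it to the rainy-image value to obtain the complementary textures $f_{BT}$, and fuses them with the embedding of the coarse deraining result. The $3\times 3$ embedding layers, the bilinear size alignment and the single $1\times 1$ fusion convolution standing in for HFB are simplifying assumptions rather than the exact ELF design.

```python
# Sketch of the transposed (channel-wise) cross-attention in MAM: key from the
# predicted rain distribution, query/value from the rainy image, C x C attention
# map. The bilinear size alignment and the 1x1 "fusion" convolution (a stand-in
# for the hybrid fusion block) are simplifying assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MAMSketch(nn.Module):
    def __init__(self, channels: int = 48):
        super().__init__()
        self.embed_k = nn.Conv2d(3, channels, 3, padding=1)  # rain distribution -> key
        self.embed_q = nn.Conv2d(3, channels, 3, padding=1)  # rainy image -> query
        self.embed_v = nn.Conv2d(3, channels, 3, padding=1)  # rainy image -> value
        self.embed_b = nn.Conv2d(3, channels, 3, padding=1)  # coarse derained image
        self.fuse = nn.Conv2d(2 * channels, channels, 1)     # stand-in for HFB

    def forward(self, rain_s, rainy, derained_s):
        h, w = rainy.shape[-2:]
        # bring the sub-sampled inputs to the rainy-image resolution (assumption)
        rain_s = F.interpolate(rain_s, size=(h, w), mode="bilinear", align_corners=False)
        derained_s = F.interpolate(derained_s, size=(h, w), mode="bilinear",
                                   align_corners=False)

        k = self.embed_k(rain_s).flatten(2)                   # (B, C, HW)
        q = self.embed_q(rainy).flatten(2)                    # (B, C, HW)
        v = self.embed_v(rainy).flatten(2)                    # (B, C, HW)

        attn = torch.softmax(k @ q.transpose(1, 2), dim=-1)   # (B, C, C) channel map
        f_bt = (attn @ v).view(v.shape[0], v.shape[1], h, w)  # complementary textures

        f_bs = self.embed_b(derained_s)                       # embedding of I*_{B,S}
        return self.fuse(torch.cat([f_bt, f_bs], dim=1))      # f_MAM
```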
], [ "Hybrid Fusion Block", "Considering the feature redundancy and knowledge discrepancy among residual blocks and encoding stages, we introduce a novel hybrid fusion block (HFB) where the low-level contextualized features of earlier stages help consolidate high-level features of the later stages (or scales).", "Specifically, we incorporate depth-wise separable convolutions and the channel attention layer into HFB to discriminatively aggregate multi-scale features in spatial and channel dimensions.", "Compared to skip pixel-wise superimposition or convolution fusion, our HFB is more flexible and effective.", "Table: Ablation study on the depth-wise separable convolutions (DSC), multi-input attention module (MAM), hybrid fusion block (HFB), SSIM loss, super-resolution (SR), residual Transformer branch (RTB) and encoder-decoder branch (EDB) on Test1200 dataset.", "We obtain the model parameters (Million (M)), average inference time (Second (S)), and calculation complexity (GFlops (G)) of deraining on images with the size of 512×512512\\times 512.To validate our proposed ELF, we conduct extensive experiments on synthetic and real-world rainy datasets, and compare ELF with several mainstream image deraining methods.", "These methods include MPRNet [60], SWAL [14], RCDNet [46], DRDNet [7], MSPFN [18], IADN [17], PreNet [40], UMRL [56], DIDMDN [63], RESCAN [31] and DDC [30].", "Five commonly used evaluation metrics, such as Peak Signal to Noise Ratio (PSNR), Structural Similarity (SSIM), Feature Similarity (FSIM), Naturalness Image Quality Evaluator (NIQE) [37] and Spatial-Spectral Entropy-based Quality (SSEQ) [34], are employed for comparison.", "Table: Comparison results of average PSNR, SSIM, and FSIM on Test100/Test1200/R100H/R100L datasets.", "When averaged across all four datasets, our ELF advances state-of-the-art (MPRNet) by 0.25 dB, but accounts for only its 11.7% and 42.1% computational cost and parameters.", "We obtain the model parameters (Million) and average inference time (Second) of deraining on images with the size of 512×\\times 512.", "☆ ^\\star denotes the recursive network using the parameter sharing strategy.", "ELF-LW denotes the light-weight version of our ELF with the number of Transformer blocks to 5 and channels to 32." ], [ "Implementation Details", "Data Collection.", "Since there exits the discrepancy in training samples for all comparison methods, following [18], [17], we use $13,700$ clean/rain image pairs from [64], [10] for training all comparison methods with their publicly released codes by tuning the optimal settings for a fair comparison.", "For testing, four synthetic benchmarks (Test100 [64], Test1200 [63], R100H and R100L [54]) and three real-world datasets (Rain in Driving (RID), Rain in Surveillance (RIS) [28] and Real127 [63]) are considered for evaluation.", "Experimental Setup.", "In our baseline, the number of Transformer blocks in RTB is set to 10 while RCAB is empirically set to 1 for each stage in EDB with filter numbers of 48.", "The training images are coarsely cropped into small patches with a fixed size of $256\\times 256$ pixels to obtain the training samples.", "We use Adam optimizer with the learning rate ($2\\times 10^{-4}$ with the decay rate of 0.8 at every 65 epochs till 600 epochs) and batch size (12) to train our ELF on a single Titan Xp GPU." 
], [ "Ablation Study", "Validation on Basic Components.", "We conduct ablation studies to validate the contributions of individual components, including the self-attention (SA), depth-wise separable convolutions (DSC), super-resolution reconstruction (SR), hybrid fusion block (HFB) and multi-input attention module (MAM) to the final deraining performance.", "For simplicity, we denote our final model as ELF and devise the baseline model w/o all by removing all these components above.", "Quantitative results in terms of deraining performance and inference efficiency on the Test1200 dataset are presented in Table REF , revealing that the complete deraining model ELF achieves significant improvements over its incomplete versions.", "Compared to w/o MAM model (removing MAM from ELF), ELF achieves 1.92dB performance gain since the association learning in MAM can help the network to fully exploit the background information from the rainy input with the predicted rain distribution prior.", "In addition, disentangling the image deraining task into rain streaks removal and texture reconstruction at the low-dimension space exhibits considerable superiority in terms of efficiency (19.8% and 67.6% more efficient in inference time and computational cost, respectively) and restoration quality (referring to the results of ELF and ELF$^*$ models (accomplishing the deraining and texture recovery at the original resolution space [7])).", "Moreover, using the depth-wise separable convolution allows increasing the channel depth with approximately the same parameters, thus enhancing the representation capacity (referring to the results of ELF and w/o DSC models).", "Compared to the w/o SA that replaces the Transformer blocks in RTB with the standard RCABs, ELF gains 0.45dB improvement with acceptable computational cost.", "We also conduct ablation studies to validate the dual-path hybrid fusion framework, involving an residual Transformer branch (RTB) and a U-shaped encoder-decoder branch (EDB).", "Based on ELF, we devise two comparison models (w/o RTB and w/o EDB) by removing these two branches in turn.", "Quantitative results are presented in Table REF .", "Removing RTB may greatly weaken the representation capability on the spatial structure, leading to the obvious performance decline (2.09dB in PSNR) (referring to the results of ELF and w/o RTB models).", "Moreover, EDB allows the network to aggregate multi-scale textural features, which is crucial to enrich the representation of local textures.", "Figure: Visual comparison of derained images obtained by seven methods on R100H/R100L (1 st -2 th 1^{st}-2^{th} rows) and Test100/Test1200 (3 st -4 th 3^{st}-4^{th} rows) datasets.", "Please refer to the region highlighted in the boxes for a close up comparison." 
], [ "Comparison with State-of-the-arts", "Synthesized Data.", "Quantitative results on Test1200, Test100, 100H and R100L datasets are provided in Table REF .", "Meanwhile, the inference time, model parameters and computational cost are also compared.", "It is observed that most of the deraining models obtain impressive performance on light rain cases with high consistency.", "However, only our ELF and MPRNet still perform favorably on heavy rain conditions, exhibiting great superiority over other competing methods in terms of PSNR.", "As expected, our ELF model achieves the best scores on all metrics, surpassing the CNN-based SOTA (MPRNet) by 0.25 dB on average, but only accounts for its 11.7% and 42.1% computational cost and parameters.", "Meanwhile, our light-weight deraining model ELF-LW is still competitive, which gains the third-best average PSNR score on four datasets.", "In particular, ELF-LW averagely surpasses the real-time image deraining method PCNet [21] by 1.08dB, while with less parameters (saving 13.6%) and computational cost (saving 23.7%).", "Table: Comparison of average NIQE/SSEQ scores with ten deraining methods on three real-world datasets.For more convincing evidence, we also provide visual comparisons in Figure REF .", "High-accuracy methods, such as PreNet, MSPFN and RCDNet, can effectively eliminate the rain layer and thus bring an improvement in visibility.", "But they fail to generate visual appealing results by introducing considerable artifacts and unnatural color appearance, the heavy rain condition in particular.", "Likewise, DRDNet focuses on the detail recovery, but shows undesired deraining performance.", "MPRNet tends to produce over-smoothing results.", "Besides recovering cleaner and more credible image textures, our ELF produces results with better contrast and less color distortion.", "Please refer to the “tiger\" and “horse\" scenarios.", "Moreover, we provide the comparison and analyses in terms of the color histogram fitting curve of “Y\" channel in Supplementary, which verifies the consistency between the predicted deraining result to the ground truth in terms of the statistic distribution.", "We speculate that these visible improvements on restoration quality may benefit from our proposed hybrid representation framework of Transformer and CNN as well as the association learning scheme for rain streak removal and background recovery.", "These strategies are integrated into a unified framework, allowing the network to fully exploit the respective learning merits for image deraining while guaranteeing the inference efficiency.", "Figure: Visual comparison of derained images obtained by eight methods on five real-world scenarios, coverring rain veiling effect (1st), heavy rain (2st) and light rain (3st-4st).", "Please refer to the region highlighted in the boxes for a close up comparison.Figure: Visual comparison of joint image deraining and object detection on BDD350 dataset.Real-world Data.", "We further conduct experiments on three real-world datasets: Real127 [63], Rain in Driving (RID), and Rain in Surveillance (RIS) [28].", "Quantitative results of NIQE [37] and SSEQ [34] are listed in Table REF , where smaller NIQE and SSEQ scores indicate better perceptual quality and clearer contents.", "Again, our proposed ELF is highly competitive, achieving the lowest average values on the RID dataset and the best best average scores of NIQE and SSEQ on the Real127 and RIS datasets, respectively.", "We visualize the deraining results in Figure REF , 
showing that ELF produces rain-free images with cleaner and more credible contents, whereas the competing methods fail to remove the rain streaks completely.", "This evidence indicates that our ELF model performs well in eliminating rain perturbation while preserving textural details and image naturalness.", "Table: Comparison results of joint image deraining and object detection on the COCO350/BDD350 datasets." ], [ "Impact on Downstream Vision Tasks", "Eliminating the degrading effects of rain streaks while preserving credible textural details is crucial for object detection.", "This motivates us to investigate the effect of deraining performance on object detection accuracy based on popular object detection algorithms (e.g., YOLOv3 [39]).", "The restoration procedures of our ELF and several representative deraining methods are directly applied to the rainy images to generate the corresponding rain-free outputs.", "We then apply the publicly available pre-trained models of YOLOv3 for the detection task.", "Table REF shows that ELF achieves the highest PSNR scores on the COCO350 and BDD350 datasets [18].", "Meanwhile, the rain-free results generated by ELF facilitate better object detection performance than the other deraining methods.", "Visual comparisons on two instances in Figure REF indicate that the derained images produced by ELF exhibit a notable superiority in terms of image quality and detection accuracy.", "We attribute the considerable performance improvements of both the deraining and the downstream detection tasks to our association learning between the rain streak removal and detail recovery tasks." ], [ "Robust Analyses on Adversarial Attacks", "We conduct a brief study on the robustness of mainstream rain removal methods against adversarial attacks, including the LMSE and LPIPS attacks [58].", "Our study shows that deraining models are vulnerable to adversarial perturbations, while our method shows better robustness than the other competitors.", "More details and analyses are included in the Supplementary.", "We rethink image deraining as a composite task of rain streak removal, texture recovery and their association learning, and propose a dynamic associated network (ELF) for image deraining.", "Accordingly, a two-stage architecture and an associated learning module (ALM) are adopted in ELF to account for the twin goals of rain streak removal and texture reconstruction while facilitating the learning capability.", "Meanwhile, the joint optimization promotes compatibility while maintaining the model compactness.", "Extensive results on image deraining and the joint detection task demonstrate the superiority of our ELF model over the state of the art.", "This work is supported by the National Natural Science Foundation of China (U1903214, 62071339, 62072347, 62171325), the Natural Science Foundation of Hubei Province (2021CFB464), the Guangdong-Macao Joint Innovation Project (2021A0505080008), and the Open Research Fund from the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) (GML-KF-22-16)." ] ]
2207.10455
[ [ "Multi-gigawatt peak power post-compression in a bulk multi-pass cell at\n high repetition rate" ], [ "Abstract The output of a 200 kHz, 34 W, 300 fs Yb amplifier is compressed to 31 fs with > 88 % efficiency to reach a peak power of 2.5 GW, which to date is a record for a single-stage bulk multi-pass cell.", "Despite operation 80 times above the critical power for self-focusing in bulk material, the setup demonstrates excellent preservation of the input beam quality.", "Extensive beam and pulse characterizations are performed to show that the compressed pulses are promising drivers for high harmonic generation and nonlinear optics in gases or solids." ], [ "ol per-mode = symbol Multi-gigawatt peak power post-compression in a bulk multi-pass cell at high repetition rate [1]Ann-Kathrin Raab [2]Marcus Seidel [1]Chen Guo [1]Ivan Sytcevich [3]Gunnar Arisholm [1]Anne L'Huillier [1]Cord L. Arnold [1,2,*]Anne-Lise Viotti [1]Department of Physics, Lund University, P.O.", "Box 118, SE-22100 Lund, Sweden [2]Deutsches Elektronen-Synchrotron DESY, Notkestraße 85, 22607 Hamburg, Germany [3]FFI (Norwegian Defence Research Establishment), P.O.", "Box 25, NO-2027, Kjeller, Norway [*]Corresponding author: anne-lise.viotti@fysik.lth.se The output of a ${200}{}$ , ${34}{}$ , ${300}{}$ Yb amplifier is compressed to ${31}{}$ with $>{88}{}$ efficiency to reach a peak power of ${2.5}{}$ , which to date is a record for a single-stage bulk multi-pass cell.", "Despite operation 80 times above the critical power for self-focusing in bulk material, the setup demonstrates excellent preservation of the input beam quality.", "Extensive beam and pulse characterizations are performed to show that the compressed pulses are promising drivers for high harmonic generation and nonlinear optics in gases or solids.", "High peak power ultrafast sources are key enabling tools for time-resolved studies at the femto- and atto-second time scales.", "In addition to increased repetition rates, long-term stability and excellent beam quality are highly desirable, especially for time-resolved pump-probe experiments or nanoscale imaging [1].", "Few-fs pulses with multi-${}{}$ level energies are needed to drive high harmonic generation (HHG) for applications demanding high repetition rate, e.g.", "coincidence spectroscopy or photoelectron microscopy [2], [3].", "To generate such optical pulses, Ytterbium (Yb) based sources are gaining popularity as they show excellent power and repetition rate scalability [4].", "They also benefit from simpler thermal management due to the small quantum defect and efficient, cheap diode pumping schemes thanks to the long upper-state lifetime.", "However, their gain bandwidth does not allow pulse durations smaller than few hundreds of fs.", "Shorter pulse durations can be accessed via optical parametric chirped pulse amplification [5], though at the cost of conversion efficiency and complexity.", "A viable, much more efficient alternative is direct pulse compression by spectral broadening via self-phase modulation (SPM) in a Kerr medium followed by chirp removal [6].", "One recent approach, based on multi-pass cell (MPC) geometry, displays exceptional average power handling capabilities for post-compression [7], [8].", "MPCs have been routinely operated with ${100}{}$ to ${}{}$ power levels [9].", "Moreover, MPCs have been used with pulse energies ranging from a few ${}{}$ to more than ${100}{}$ , resulting in peak powers approaching the ${}{}$ regime [10].", "For ${}{}$ input peak powers, gas-filled 
MPCs have been mainly utilized: they readily enable nonlinearity tuning and typically exhibit > 90 % power transmission.", "However, they require bulky, costly vacuum equipment and cannot be operated above the critical power of the nonlinear gas medium, which is why using a bulk material in an MPC is a much simpler alternative.", "Bulk MPCs have been operated with MW-level input peak powers and, at best, post-compression up to $\sim $ 1 GW peak power was reached [11], [12].", "Interestingly, these input peak powers surpass the critical power of the bulk nonlinear material multiple times.", "For example, a previous bulk MPC study at 1.03 µm used 150 MW input peak power and reported a compression ratio of 11 [11], while Vicentini et al.", "reached a compression factor of 3 in a second broadening stage with 240 MW [13].", "These correspond to input peak powers exceeding the critical power of fused silica ($\approx $ 4.3 MW [14]) by a factor of 30 and 50, respectively.", "A larger input (output) peak power, 280 MW (440 MW), was obtained in a factor 2.8 self-compression experiment [15] performed at 1.55 µm.", "This setting exceeded the critical power 30 times and translates to 200 MW output peak power at 1.03 µm, according to the wavelength-squared scaling [14].", "Operating in such a supercritical self-focusing regime bears the risk that the spatial nonlinearities cause pulse quality degradation, resulting in limited compressibility and the emergence of strong pulse pedestals.", "Moreover, for peak powers greater than 50 times the critical power, small-scale self-focusing and multi-filament breakup can occur [16].", "It has recently been shown that using multiple, thin Kerr media instead of one single, thicker medium allows cleaner compression and scaling of compression ratios by distributing the Kerr nonlinearity along the MPC [17].", "When using > 100 MW input peak power, it is, however, questionable if multiple thin plates suffice to suppress spatio-temporal couplings.", "In this work, we present an extensive characterization of a bulk MPC with, to the best of our knowledge, the highest input and output peak powers, operating > 80 times above the critical power of fused silica.", "SPM in thin plates allows us to reach 31 fs pulses with 2.5 GW at 200 kHz and a compression factor of $\approx $ 10.", "This setup, solely built from readily available components, stands out due to its simplicity, overall cost, power efficiency and small footprint.", "The influence of the input pulse dispersion and temporal structure on the spectral broadening in the MPC is investigated.", "Spectral, power and carrier-envelope phase (CEP) stability measurements are carried out.", "In spite of the high input peak power, the measured spatial, spectral and temporal quality of the post-compressed pulses is excellent.", "This is essential when employing these pulses as drivers for secondary sources, e.g.", "HHG or other frequency conversion processes.", "Figure: Schematic of the laser architecture and the experimental setup with: motorized grating compressor, mode-matching, MPC and compression. Figure: (a) Reconstructed temporal profile of the MPC input pulse via d-scan with the relative energy content in the main pulse (red).", "The grey horizontal line represents the input compression position used in the study.", "(b) Measured and (c) simulated broadened MPC spectrum as a function of the relative grating compressor position.", "(b) and (c) are normalized. The experimental
setup is depicted in Fig.", "REF .", "The frontend is a CEP-stabilized Titanium:Sapphire (Ti:Sa) oscillator.", "A narrow part of its spectrum is temporally stretched with a chirped fiber Bragg grating and amplified at ${200}{}$ in an Yb rod-type amplifier.", "The output is compressed by a transmission grating compressor to ${300}{}$ full-width-half-maximum (FWHM) with ${170}{}$ .", "In the compressor, the second grating and the retro-reflector are mounted on a motorized stage, which allows fine-tuning of the spectral phase by varying the group delay and third order dispersion at a rate of -${29000}{}$ and ${160000}{}$ , respectively.", "In the following, the grating compressor output is referred to as the input to the MPC.", "A lens telescope matches the beam to the eigenmode of an Herriott-type MPC.", "The cell is designed for 15 roundtrips using standard ${1030}{}$ quarter-wave stack mirrors with a radius of curvature (ROC) of ${300}{}$ .", "The cell length is ${500}{}$ and the Kerr media are two ${1}{}$ thin anti-reflection (AR) coated fused silica plates spaced by ${15}{}$ and symmetrically placed in the MPC (see Fig.", "REF ).", "In- and out-coupling is done via a scraper mirror.", "The SPM-induced chirp is removed via chirped mirrors, which compensate for a total dispersion of ${2800}{}$ .", "Adjusting the positions of the plates, and therefore the peak intensities in the bulk media, allows us to tune the broadening in the cell.", "This gives flexibility to operate the MPC at a targeted spectral broadening for a range of input peak powers.", "The MPC input pulses are characterized by the dispersion scan (d-scan) technique [18].", "The dispersion is varied by moving the motorized grating, as indicated by the double-headed arrow in Fig.", "REF .", "The retrieved d-scan trace (see Supplement 1) allows the reconstruction of the input pulse at different compressor positions, as shown in Fig.", "REF (a).", "In Fig.", "REF , the relative position from the center (zero) of the scanning range of the compressor stage is the common y-axis.", "Positive (negative) compressor positions correspond to a positive (negative) net input pulse dispersion, respectively.", "The relative main pulse energy contained in twice the FWHM, and integrated over the full measurement window of ±${2.5}{}$ , is compared to the total pulse energy (red curve in Fig.", "REF (a)).", "Larger values of the energy content correspond to cleaner input pulses with minimized pedestals, while shorter input pulses with slightly higher peak powers exhibit strong double-pulse structures and lower energy content.", "While changing the dispersion, the spectrum of the compressed MPC output is measured (see Fig.", "REF (b)).", "This study is interesting as the input pulse parameters for optimum spectral broadening and clean compression are not usually obvious and do not necessarily correspond to the shortest input pulse.", "Three cases can then be identified: Around +${0.9}{}$ , the input pulse has reduced pedestals, a large energy content in the main pulse (${82}{}$ ), while having a rather short duration (${300}{}$ ) and a high peak power (${370}{}$ ).", "At this position, highlighted by the horizontal line across Fig.", "REF , the pulse is slightly positively chirped and the resulting SPM spectrum is broad.", "At -${0.7}{}$ , the spectral broadening is the largest as the input pulse is the shortest, with a higher peak power.", "However, the side pulse becomes strong enough to broaden as well, thus leading to spectral fringes observed 
in Fig.", "REF (b).", "Finally, substantial broadening is also noticed around $-2$ mm.", "There, the input pulse is a negatively chirped double pulse for which both pulses spectrally broaden independently, leading to strong modulations in the spectrum of Fig.", "REF (b).", "Additionally, the experimental data is supported by simulations, using the SISYFOS (SImulation SYstem For Optical Science) code [19] for nonlinear beam propagation in the MPC pictured in Fig.", "REF .", "The reflectivity and dispersion of the MPC mirrors and the transmission of the AR coating of the thin plates are included in the simulations, as well as the Kerr nonlinearity of air, owing to the large peak intensities in the center of the cell.", "While the input spatial beam is a fundamental Gaussian with $M^2=1$ , the retrieved spectrum and phase from the d-scan measurements are used as input to the simulations.", "Figure REF (c) shows the simulated spectral broadening scan.", "The main spectral features from the experiment (Fig.", "REF (b)) are well reproduced in the simulations and the positions of the optimum compression and the largest broadening points match well.", "For negatively chirped input pulses (from $-0.7$ mm to around $-2.5$ mm), the spectrum is strongly modulated, leading to complex post-compressed simulated pulse profiles (see Supplement 1), unsuitable for the envisioned ultrafast applications.", "The combination of the motorized grating compressor together with a spectrometer recording the output of the MPC constitutes a helpful tool to determine the optimum input pulse dispersion regime, in our case slightly positive, at $+0.9$ mm.", "Figure: (a) Measured and (b) retrieved FROG traces in logarithmic scale.", "(c) Temporal profile of the compressed MPC output (red, 31 fs), compared to the FTL (black, 30 fs) and the MPC input (grey, 300 fs).", "(d) Retrieved and measured spectra.", "The fast spectral modulations and the significant peak power difference with respect to the transform-limited pulse originate from a pre-pulse 1 ps away (see Supplement 1). The MPC input, as determined in Fig.", "REF , has an average power of 34 W and a peak power of 370 MW with a duration of 300 fs (FWHM).", "The post-compressed output power is 30 W, which translates to an overall transmission efficiency above 88 %.", "The compressed MPC output pulses are characterized by a second harmonic frequency-resolved optical gating (FROG) setup, for which the measured and retrieved traces are shown in Fig.", "REF (a) and (b).", "The retrieved FWHM of 31 fs, normalized via the measured output pulse energy of 150 µJ, yields a peak power of 2.5 GW (see Fig.", "REF (c)), so far the highest reported value for a bulk MPC.", "Measured and retrieved broadened spectra match well, as shown in Fig.", "REF (d), and correspond to a Fourier transform limit (FTL) of 30 fs FWHM, which is the bandwidth limit supported by the current MPC mirrors.", "The compression factor is $9.7$ .", "The corresponding simulated broadened spectrum, extracted from Fig.", "REF (c) at $+0.9$ mm, matches the results of the FROG measurement very well (see Supplement 1).", "Similar to Fig.", "REF (a), the efficiency of compression is assessed by estimating the relative energy content in the main peak of the post-compressed output pulse, selecting twice the FWHM FTL over an integration window of ±2.5 ps.", "77 % of the energy remains in the main pulse, similar to recent results in gas-filled MPCs [20].", "During
time-resolved pump-probe experiments, the long-term stability of the compressed MPC output matters.", "In terms of average power, a ${1}{}$ measurement with a ${10}{}$ sampling rate shows that the MPC does not increase the fluctuations significantly, from ${0.27}{}$ input root-mean-square error (RMSE) to ${0.32}{}$ output.", "A parallel measurement of the broadened spectrum over ${1}{}$ , see Supplement 1, reveals a standard deviation of the FTL of ${0.2}{}$ .", "For phase-sensitive experiments such as HHG, CEP stability has a strong influence.", "While not essential for the current pulse duration of ${31}{}$ , it becomes important for further compression.", "In our case, only the oscillator is CEP-stabilized.", "To measure the CEP stability, an f-2f interferometer is used at the output of the MPC, where the second harmonic of a white light generated in a sapphire crystal interferes with the blue edge of the same white light.", "The resulting spectral fringes are recorded in a single-shot measurement with a line camera [2], where the read-out speed allows us to capture each pulse at ${200}{}$ .", "While the phase is rather noisy, the time-average spectra over ${1}{}$ , shown in Fig.", "REF (a) and (b), are clearly different when the oscillator is free-running and when it is actively stabilized.", "Further numerical investigations with Allan variance analysis validate this distinction (see Supplement 1).", "Observing fringes in the stabilized case means that the oscillator's phase is preserved through the entire amplifier chain, which includes a pre-amplifier and a rod-type amplifier with a large stretching/compression ratio (see Fig.", "REF ), as well as the MPC.", "A correction loop can eventually be implemented to stabilize the phase drift [21].", "Figure: 1{1}{} time-averaged spectral fringes (linear scale) from an f-2f interferometer, comparing the oscillator in (a) a free-running state and (b) actively CEP-stabilized.", "M 2 M^2 measurements (c) before the mode-matching unit and (d) for the compressed MPC output, with insets of the normalized beam profiles in the focus.", "(e) Reconstructed spectral and temporal distributions in xx and yy directions.Finally, a beam quality assessment is performed, starting with an $M^2$ measurement to judge the focusability, which is highly relevant for, e.g., HHG.", "Figures REF (c) and (d) compare the $M^2$ at the input and output of the MPC with no significant change: a value below $1.2$ is obtained for both spatial directions.", "Since high peak intensities are reached in the cell, concerns arise regarding eventual spatio-temporal couplings.", "In fact, the MPC input peak power of ${370}{}$ exceeds the critical power of fused silica >80 times, largely above any previously reported result in bulk MPCs [11], [13].", "Full spatio-spectral/temporal 3D characterization was previously performed for gas-filled MPCs operating close to the critical power and yielded Strehl ratios around 0.9 [22].", "In this work, a full 3D characterization employing spatially-resolved Fourier transform spectrometry [23] is conducted.", "This method allows us to spectrally resolve the wavefront and the beam profile of the output of the MPC.", "Together with the reference pulse measurement obtained by FROG, the pulse is numerically focused and reconstructed in both spectral and temporal domains, as shown in Fig.", "REF (e).", "In the x-direction, the spectrum is homogeneous but a slight spatial chirped is observed in the y-direction.", "This translates into a pulse 
front-tilt in the time domain for the same direction.", "An identical measurement performed at the MPC input gives similar results and also displays spatial chirp (see Supplement 1), indicating that the MPC does not introduce significant spatio-temporal couplings.", "The origin of such chirp is most likely a slight misalignment of the retro-reflector in the grating compressor.", "From the results of this study, the 3D time profiles can be compared to an ideal wavefront-compensated pulse leading to a ${20}{}$ difference.", "This can be partly explained by the pulse front-tilt in the $y$ direction.", "Accounting for the FROG retrieval, the total 3D Strehl ratio is: $S_{3D}=0.69$ .", "Disregarding the temporal domain allows us to extract the commonly used 2D Strehl ratio, which is $S_{2D}=0.89$ .", "In conclusion, we present, to the best of our knowledge, a bulk MPC at ${200}{}$ with the highest peak powers so far achieved: ${370}{}$ input with ${300}{}$ compressed down to ${31}{}$ with ${2.5}{}$ output.", "This single-stage compression setup is power-, space- and cost-efficient, being solely realized with off-the-shelf optics.", "The operating pulse parameter regime demonstrated here directly competes with standard gas-filled hollow-core fibers used for spectral broadening [24].", "Despite a working point above 80 times the critical self-focusing power for fused silica, no spatio-temporal couplings are introduced, as the full 3D characterization shows.", "This MPC is also a suitable first stage towards the few-cycle regime and can be inserted into a cascaded spectral broadening scheme with, e.g., a second cell [25] or a capillary [26].", "Additionally, increasing pulse energies can enable further peak power scaling.", "As mentioned previously, the position of the Kerr medium can be tuned and the setup size can be geometrically scaled up, although only until a certain practical limit [8].", "Moreover, for high input peak powers, another limit will be the critical power in air, making it difficult to operate without a chamber.", "Methods to circumvent size and peak power restrictions have been recently demonstrated in gas-filled MPCs, such as utilization of higher-order spatial modes [10], multiplexing [27] or bow-tie type cavities [28].", "Together with preserved beam quality, power, spectral and phase stability, this setup constitutes a promising route for driving applications demanding large peak powers and high repetition rates.", "The source has been recently used to generate high-order harmonics in argon and neon up to cut-off energies equal to ${80}{}$ and ${135}{}$ , respectively.", "Funding We acknowledge support from the European Research Council (Advanced grant QPAP, 884900); the Swedish Research Council (2019-06275, 2016-04907, 2013-8185); the Wallenberg Center for Quantum Technology funded by the Knut and Alice Wallenberg Foundation; the Crafoord Foundation (20200584) and Lund Laser Centre.", "Disclosures The authors declare no conflicts of interest.", "Data availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.", "Supplemental document See Supplement 1 for supporting content." ] ]
2207.10431
[ [ "Bayesian Recurrent Units and the Forward-Backward Algorithm" ], [ "Abstract Using Bayes's theorem, we derive a unit-wise recurrence as well as a backward recursion similar to the forward-backward algorithm.", "The resulting Bayesian recurrent units can be integrated as recurrent neural networks within deep learning frameworks, while retaining a probabilistic interpretation from the direct correspondence with hidden Markov models.", "Whilst the contribution is mainly theoretical, experiments on speech recognition indicate that adding the derived units at the end of state-of-the-art recurrent architectures can improve the performance at a very low cost in terms of trainable parameters." ], [ "Introduction", "Recurrent models have been widely employed in signal processing and statistical pattern recognition, notably in the form of Kalman's state space filter [1], [2] and hidden Markov models (HMMs) [3], [4], [5], [6].", "Both approaches use a forward-backward training procedure to make a statistical estimation of the model parameters.", "With the current success of machine learning techniques, speech recognition architectures can be trained in an end-to-end fashion, by exploiting auto-differentiation inside deep learning frameworks like PyTorch [7].", "Here, recurrence is also an important concept and recurrent neural networks (RNNs) are commonly trained via the error Back-Propagation Through Time (BPTT) algorithm [8], [9], which is a generalization of gradient descent to process sequential data.", "During the forward pass, a batch of input examples is passed through the network, and a loss function is applied to the final outputs.", "During the subsequent backward computation of derivatives, the network trainable parameters are updated to minimize the loss.", "Similarities between HMMs and RNNs have long been observed.", "Bourlard and Wellenkens [10] have shown that the outputs of RNNs approximate the maximum a posteriori (MAP) output probabilities of HMMs trained via the Viterbi algorithm [11].", "Bridle [12] also showed that the alpha part of the forward-backward algorithm can be simulated by a recurrent network, and that the beta part bears similarities with the backward computation of derivatives in the training of neural networks.", "Bidirectional RNNs were subsequently defined by Schuster and Paliwal [13] to explicitly allow networks to take into account future observations.", "This approach was later applied to gated RNNs [14] and is now the standard approach for LSTMs [15], GRUs [16] and Li-GRUs [17].", "In a bidirectional recurrent layer, the size of all feedforward weight matrices (i.e., matrices that are applied to the layer inputs) is doubled compared to the unidirectional case, which represents a significant increase in the amount of trainable parameters.", "Recently, Garner and Tong [18] have used a Bayesian interpretation of gated RNNs to derive a backward recursion through the input sequences.", "Their probabilistic approach, analogous to a Kalman smoother, allows the consideration of future observations without requiring any additional trainable parameters.", "This Bayesian approach treats the input as a sequence of observations, and interprets the unit outputs as the probabilities of hidden features being present at each timestep.", "Recurrence emerges naturally from Bayes's theorem which updates a prior probability into a posterior given new observational data.", "In our previous work on the light Bayesian recurrent unit [19], hidden features were assumed to be 
interdependent, which led to a layer-wise recurrence for the computation of prior probabilities.", "In this paper, in a mainly theoretical contribution, we come back to the simpler case of a RNN with unit-wise recurrence and no gate.", "Here, by assuming a latent space of independent features, a Bayesian analysis demonstrates that the trainable parameters of the network directly correspond to standard parameters of a first-order 2-state hidden Markov model (HMM).", "In a second step, similarly to the Kalman smoother [1] and to the forward-backward algorithm [20], we derive two different backward recursions that allow the consideration of future observations without relying on any additional parameters.", "We also prove by induction that the two are equivalent.", "In contrast with the work of Garner and Tong [18], the unit-wise recurrence is here derived using transition probabilities instead of a context relevance gate.", "The derived unit-wise Bayesian recurrent units (UBRUs) can be trained like standard RNNs inside a modern deep neural network (DNN).", "Even though UBRUs have much less representational power than state-of-the-art RNNs, they are appropriate when the features are decorrelated.", "We confirm this by showing that, when placed on the phoneme (rather than acoustic) region of a DNN for automatic speech recognition (ASR), they are able to replace larger standard bidirectional gated RNNs without any loss of performance.", "More generally, our approach aims at developing the growing toolkit of Bayesian techniques applicable to deep learning." ], [ "A hidden Markov model approach", "Consider an input sequence $X_T=[x_1,\\hdots ,x_T]\\in \\mathbb {R}^{F\\times T}$ of length $T$ , where each observation $x_t$ is a vector with $F$ input dimensions.", "We assume that there are $H$ hidden features $\\lbrace \\phi _i\\,| i=1,\\hdots ,H\\rbrace $ that we wish to detect along the sequence.", "At each timestep $t$ , a feature has two possible states: present or absent, that we write as $\\phi _{t,i}$ and $\\lnot \\phi _{t,i}$ respectively.", "Each hidden feature can be represented as a first-order 2-state Markov process, such that its probability of occurring at timestep $t$ only depends on its state at the previous timestep $t-1$ .", "For a single hidden feature $\\phi $ , an initial state probability $a\\in [0,1]^{2\\times 1}$ , $a=\\Big [P(\\phi _{0}), P(\\lnot \\phi _{0})\\Big ] \\, ,$ and a transition matrix $A\\in [0,1]^{2\\times 2}$ , $A=\\begin{bmatrix}P(\\phi _{t}|\\phi _{t-1}) &P(\\lnot \\phi _{t}|\\phi _{t-1}) \\\\[5pt]P(\\phi _{t}|\\lnot \\phi _{t-1}) & P(\\lnot \\phi _{t}|\\lnot \\phi _{t-1})\\end{bmatrix} \\, ,$ can be defined to describe the evolution of the state through discrete time.", "Then, for any binary sequence of hidden states, the probability of the sequence being generated by the Markov chain is fully defined in terms of $a$ and $A$ as a product of initial and transition probabilities.", "Since the hidden sequence is not directly observable, let us additionally define a set of distributions, $B(x_t)=\\Big [b_1(x_t),b_2(x_t) \\Big ]=\\Big [p(x_t|\\phi _{t}),\\,p(x_t|\\lnot \\phi _{t})\\Big ] \\, ,$ representing the likelihood of seeing observation $x_t$ at timestep $t$ given the two possible feature states.", "As explained by Juang and Rabiner [21], the stochastic process represented by $X_T$ can then be fully characterized by the HMM parameters $a$ , $A$ and $B(x_t)$ , without requiring the knowledge of the sequence of hidden states." 
], [ "Neural network formulation", "Let us start by using a more machine learning oriented formulation of the HMM parameters $a$ and $A$ .", "We define trainable scalars $\\rho _{0,i}$ , $\\tau _{11,i}$ and $\\tau _{01,i}$ $\\in [0,1]$ that describe the initial and transition probabilities of the $i$ -th hidden feature, $a_i=\\Big [\\rho _{0,i}, 1-\\rho _{0,i}\\Big ] \\quad \\text{and}\\quad A_i=\\begin{bmatrix}\\tau _{11,i} & 1-\\tau _{11,i} \\\\\\tau _{01,i} & 1-\\tau _{01,i}\\end{bmatrix} \\, ,$ where we used the notation $\\tau _{kl}=P(\\phi _t=l|\\phi _{t-1}=k)$ , $k,l\\in \\lbrace 0,1\\rbrace $ .", "These can then be vectorized for the whole layer as $\\rho _0$ , $\\tau _{11}$ and $\\tau _{01}$ $\\in [0,1]^H$ .", "In order to express the remaining HMM parameters related to the set of distributions $B(x_t)$ , we can assume that the likelihood of observing $x_t$ given the current state of the hidden features $\\phi _t$ , can be described using a distribution from the exponential family.", "As we will see in Section REF , only the ratio of these distributions will be necessary to compute.", "As demonstrated by Garner and Tong [18] drawing from Bridle [22], this ratio of likelihood $r_t$ can then be expressed as $r_t:=\\frac{p(x_t|\\,\\lnot \\phi _t\\,)}{p(x_t|\\,\\phi _t\\,)}=\\exp \\Big [-W^T\\,x_t - b \\Big ] \\, .$ Similarly to [19], for the case of multivariate normal distributions that share the same covariance matrix $\\Sigma $ , i.e., $p(x_t|\\phi _t)\\sim \\mathcal {N}(\\mu ,\\Sigma )$ and $p(x_t|\\,\\lnot \\phi _t\\,)\\sim \\mathcal {N}(\\nu ,\\Sigma )$ , the parameters $W\\in \\mathbb {R}^{F\\times H}$ and $b\\in \\mathbb {R}^{H}$ can be expressed as, W=(T-T) -1 b  = -12(T -1 +T -1 )   .", "Overall, we have shown that the Markov processes corresponding to a layer of $H$ independent hidden features can be fully described by a set of trainable tensors (or parameters) $\\rho _0$ , $\\tau _{11}$ , $\\tau _{01}$ $\\in [0,1]^H$ , $W\\in \\mathbb {R}^{F\\times H}$ and $b\\in \\mathbb {R}^{H}$ .", "In the next section, we will derive a forward-backward formulation that is similar to that of recurrent neural networks (RNNs).", "This will allow them to be trained inside a machine learning framework, while retaining a probabilistic interpretation as they correspond to standard HMM parameters." ], [ "Forward-backward procedure", "In order to make inference about the state of the hidden features throughout the sequence, we use a Bayesian approach and design a layer of recurrent units that will evaluate the stacked conditional probabilities $\\gamma _t:=P(\\phi _t|X_T)\\in [0,1]^H$ of the different features being present at each timestep $t=1,\\hdots ,T$ , given the information of the complete input sequence $X_T$ .", "In the first alpha or forward part of the procedure, the probabilities $\\alpha _t:=P(\\phi _t|X_t)\\in [0,1]^H$ are computed.", "In the subsequent beta or backward part, these probabilities are smoothed by taking into account future observations and produce the desired outputs $\\gamma \\in [0,1]^{T\\times H}$ , that are fed into the next layer." 
], [ "Derivation of the forward pass", "The quantity $\\alpha _t$ is defined as $\\alpha _t:=P(\\phi _t|X_t)\\in [0,1]^H$ .", "Using Bayes's formula, we can write it as, $\\alpha _t=\\frac{p(x_t|\\phi _t)\\,P(\\phi _t|X_{t-1})}{\\sum _{\\phi _t^{^{\\prime }}}p(x_t|\\phi _t^{^{\\prime }})\\,P(\\phi _t^{^{\\prime }}|X_{t-1})} \\, .$ Dividing both numerator and denominator by $p(x_t|\\phi _t)$ gives $\\alpha _t=\\frac{p_t}{p_t+r_t\\,(1-p_t)} \\, ,$ where $r_t$ , $p_t$ and $\\alpha _t$ correspond to the ratio of likelihood, prior and posterior probabilities of the Bayesian update respectively.", "One can also reformulate Equation (REF ) by dividing the numerator and denominator by the prior.", "This gives rise to the well known sigmoid activation function $\\sigma (x)=1/(1+e^{-x})$ , $\\alpha _t=\\sigma \\Big [W^T x_t + b + \\text{logit}(p_t) \\Big ] \\, ,$ where the logit function, $\\text{logit}(x)=\\log \\big [x/(1-x)\\big ]$ , is the inverse of the sigmoid.", "The prior $p_t:=P(\\phi _t|X_{t-1})$ represents the probability of having the features present at time $t$ before seeing the current observation $x_t$ .", "For a time independent prior $p_t=\\text{const.", "}$ , the quantity $\\text{logit}(p_t)$ is also constant and can be integrated into the trainable bias $b$ , so that the forward pass corresponds to a hidden layer of a standard feed-forward neural network.", "With this probabilistic interpretation, it is therefore the time dependence of the Bayesian prior that leads to recurrence in neural networks.", "By assuming independent hidden features, the prior can be expanded as a function of the transition probabilities, $\\begin{split}p_t:&=P(\\phi _t|X_{t-1}) \\\\[+3pt]&=P(\\phi _t|\\phi _{t-1})P(\\phi _{t-1}|X_{t-1})\\\\&\\quad \\quad +P(\\phi _t|\\lnot \\phi _{t-1})P(\\lnot \\phi _{t-1}|X_{t-1}) \\\\[+3pt]&= \\tau _{11}\\,\\alpha _{t-1}+\\tau _{01}\\,(1-\\alpha _{t-1}) \\, .\\end{split}$ Using Bayes's theorem, we have thus derived a forward pass from $t=1$ to $t=T$ through the sequence, that allows the computation of $\\alpha _t=P(\\phi _t|X_t)$ via a unit-wise first-order recurrence on $\\alpha _{t-1}$ .", "So far, the inference on the state of the hidden features at timestep $t$ only takes the previous observations $X_t=[x_1,\\hdots ,x_t]$ into account.", "In order to include the future observations $X_{>t}=[x_{t+1},\\hdots ,x_T]$ , we define a backward recursion that will smooth out the probabilities." 
], [ "Derivation of HMM backward recursion", "Using the relationship between joint and conditional probabilities as well as the independence of observations, we can express the desired quantity $\\gamma _t$ as, $\\begin{split}P(\\phi _t|X_T)&=\\frac{P(\\phi _t,X_T)}{P(X_T)} =\\frac{P(X_t,\\phi _t)\\,P(X_{> t}|X_t,\\phi _t)}{P(X_{> t}|X_t)\\,P(X_t)} \\\\&= P(\\phi _t|X_t)\\, \\frac{P(X_{> t}|\\phi _t)}{P(X_{> t}|X_t)} =\\alpha _t\\,\\,\\beta _t \\, ,\\end{split}$ where we use the notation $X_{> t}:=x_{t+1},\\hdots ,x_T$ and define $\\beta _t$ and its counterpart $\\overline{\\beta _t}$ as, $\\beta _t:=\\frac{P(X_{> t}|\\phi _t)}{P(X_{> t}|X_t)} \\quad \\text{and}\\quad \\overline{\\beta _t}:=\\frac{P(X_{> t}|\\lnot \\phi _t)}{P(X_{> t}|X_t)} \\, .$ Let us start by expanding the numerator of $\\beta _t$ and use the independence of observations, $\\begin{split}&P(X_{>t}|\\phi _{t+1})\\,P(\\phi _{t+1}|\\phi _t)\\,+ \\\\&\\,\\,\\,\\,\\,P(X_{>t}|\\lnot \\phi _{t+1})\\,P(\\lnot \\phi _{t+1}|\\phi _t) = \\\\[+5pt]&P(x_{t+1}|\\phi _{t+1})\\,P(X_{> t+1}|\\phi _{t+1})\\,P(\\phi _{t+1}|\\phi _t)\\,+ \\\\&\\,\\,\\,\\,\\,P(x_{t+1}|\\lnot \\phi _{t+1})\\,P(X_{> t+1}|\\lnot \\phi _{t+1})\\,P(\\lnot \\phi _{t+1}|\\phi _t) \\, .\\end{split}$ The denominator of equation REF can similarly be decomposed as, $P(X_{>t}|X_t)=P(x_{t+1}|X_t)\\,P(X_{>t+1}|X_{t+1}) \\, ,$ so that combining equations REF and REF gives, $\\beta _t=\\frac{b_1(x_{t+1})\\,\\beta _{t+1}\\,\\tau _{11}+b_2(x_{t+1})\\,\\overline{\\beta _{t+1}}\\,(1-\\tau _{11})}{P(x_{t+1}|X_t)} \\, .$ We finally need to deal with the remaining denominator of equation (REF ), $\\begin{split}P(x_{t+1}|X_t)&=P(x_{t+1}|\\phi _{t+1})\\,P(\\phi _{t+1}|X_t)\\\\&\\quad \\quad +P(x_{t+1}|\\lnot \\phi _{t+1})\\,P(\\lnot \\phi _{t+1}|X_t) \\\\[+3pt]&=b_1(x_{t+1})p_{t+1} \\\\&\\quad \\quad +b_2(x_{t+1})(1-p_{t+1}) \\, .\\end{split}$ By dividing the numerator and denominator by $b_1(x_{t+1})$ , we then get the following final expression for $\\beta _t$ , $\\beta _t=\\frac{\\tau _{11}\\,\\beta _{t+1}+r_{t+1}(1-\\tau _{11})\\,\\overline{\\beta _{t+1}}}{p_{t+1}+r_{t+1}\\,(1-p_{t+1})} \\, .$ Similarly, one can derive that, $\\overline{\\beta _t}=\\frac{\\tau _{01}\\,\\beta _{t+1}+r_{t+1}\\,(1-\\tau _{01})\\,\\overline{\\beta _{t+1}}}{p_{t+1}+r_{t+1}(1-p_{t+1})} \\, .$" ], [ "Derivation of Kalman backward recursion", "Following the approach of a Kalman smoother, a simpler backward pass can be derived by expanding $\\gamma _t$ on possible future states, $\\begin{split}P(\\phi _{t}|X_T)&=P(\\phi _{t}|\\phi _{t+1}) P(\\phi _{t+1}|X_T) \\\\&\\quad \\quad + P(\\phi _{t}|\\lnot \\phi _{t+1}) P(\\lnot \\phi _{t+1}|X_T) \\, .\\end{split}$ The transition probabilities need to be flipped using Bayes theorem, which gives $\\begin{split}P(\\phi _{t}|\\phi _{t+1})&=\\frac{P(\\phi _{t+1}|\\phi _{t}) P(\\phi _{t}|X_t)}{\\sum _{\\phi ^{^{\\prime }}_t}P(\\phi _{t+1}|\\phi ^{^{\\prime }}_t)P(\\phi ^{^{\\prime }}_t|X_t)} \\\\[+5pt]&=\\frac{\\tau _{11}\\, \\alpha _{t}}{\\tau _{11}\\,\\alpha _{t} + \\tau _{01}\\,(1-\\alpha _{t})} =\\tau _{11}\\frac{\\alpha _t}{p_{t+1}}\\,\\end{split}$ for the first one, using the definition of the prior given in Equation (REF ).", "Applying the same treatment to the second one then gives the following backward recursion, $\\begin{split}\\gamma _t&=\\alpha _t\\Bigg (\\tau _{11}\\frac{\\gamma _{t+1}}{p_{t+1}} + (1-\\tau _{11})\\frac{1-\\gamma _{t+1}}{1-p_{t+1}}\\Bigg ) \\, .\\end{split}$" ], [ "Equivalence of the two backward recursions", "Let us start with the HMM 
formulation of $\\gamma _t=\\alpha _t\\,\\beta _t$ .", "We can use Equation (REF ) of the forward pass to rewrite $\\beta _t$ as, $\\begin{split}\\beta _t&=\\frac{\\alpha _{t+1}}{p_{t+1}}\\Big (\\tau _{11}\\,\\beta _{t+1} + (1-\\tau _{11})\\,r_{t+1}\\,\\overline{\\beta _{t+1}}\\Big ) \\\\&=\\tau _{11}\\frac{\\gamma _{t+1}}{p_{t+1}} + (1-\\tau _{11})\\frac{(1-\\alpha _{t+1})\\overline{\\beta _{t+1}}}{1-p_{t+1}} \\, .\\end{split}$ By comparing with Equation (REF ), we see that in order to prove that the HMM and Kalman recursions are equivalent, the following equality, $1-\\gamma _{t+1}=(1-\\alpha _{t+1})\\overline{\\beta _{t+1}} \\, ,$ must be satisfied $\\forall t \\in \\lbrace T-1,T-2,\\hdots ,0\\rbrace $ .", "This can be demonstrated by induction as follows.", "We start by considering the base case $t=T-1$ .", "Here $1-\\gamma _T=(1-\\alpha _T)\\overline{\\beta _T}$ follows trivially from $\\overline{\\beta _T}=1$ and $\\gamma _T=\\alpha _T$ .", "By assuming that Equation (REF ) is correct for the case $n=t+1$ , we must now prove that it holds for the next case $n=t$ .", "Let us start with the left hand side $1-\\gamma _t$ and use the assumption for $n=t+1$ to express it as, $1-\\gamma _t=1-\\alpha _t\\Bigg (\\tau _{11}\\frac{\\gamma _{t+1}}{p_{t+1}} + (1-\\tau _{11})\\frac{1-\\gamma _{t+1}}{1-p_{t+1}}\\Bigg ) \\, .$ The transition probabilities $\\tau _{11}$ can be expressed as a function of $\\tau _{01}$ using Equation (REF ), $\\tau _{11}=\\frac{p_{t+1}-\\tau _{01}(1-\\alpha _t)}{\\alpha _t} \\, .$ By plugging Equation (REF ) into (REF ), we get that $\\begin{split}1-\\gamma _t&=(1-\\alpha _t)\\Bigg (\\tau _{01}\\frac{\\gamma _{t+1}}{p_{t+1}} + (1-\\tau _{01})\\frac{1-\\gamma _{t+1}}{1-p_{t+1}}\\Bigg ) \\\\&= (1-\\alpha _t)\\,\\overline{\\beta _t} \\, .\\end{split}$" ], [ "Implementation of the method", "The ratio of the distributions $b_2(x_t)$ and $b_1(x_t)$ , can be computed in advance for all timesteps and hidden features using Equations (REF ).", "At $t=0$ , $\\alpha _0$ is initialized with the trainable unconditional prior probability $\\rho _0=P(\\phi _0)$ .", "A forward pass from $t=1$ to $t=T$ is then performed to compute and store the Bayesian prior $p_t=P(\\phi _t|X_{t-1})$ and posterior $\\alpha _t=P(\\phi _t|X_t)$ using Equation (REF ) and (REF ) respectively.", "Since the two backward procedures are equivalent, we use the Kalman recursion, as it is computationally simpler.", "At $t=T$ , $\\gamma _T$ is initialized with $\\alpha _T$ , and from $t=T-1$ to $t=1$ , $\\gamma _t=P(\\phi _t|X_T)$ is computed using Equation (REF )Code at https://github.com/idiap/bayesian-recurrence." 
], [ "Experiments", "Speech recognition experiments are performed on the TIMIT corpus [23], using the speechbrain [24] framework.", "Mel filterbank features are extracted from the waveforms and fed into two convolutional layers, followed by recurrent layers of H=512 hidden units.", "After two additional linear layers and a final log-softmax activation, the network outputs log-probabilities of phoneme classes.", "The training is done using the connectionist temporal classification (CTC) loss [25] and the Adadelta optimizer [26] for 50 epochs.", "Batch-normalization [27] is also used on feed-forward connections, as suggested in [17].", "Speech features entering the architecture are highly correlated.", "This suggests that the layer-wise recurrence of standard RNNs, which assumes interdependent hidden features, is best suited for processing them.", "Nevertheless, once the speech information has been processed and decorrelated, the classification of phoneme or subword representations does not require to assume the same level of correlation.", "As one expects a phoneme to stay in a state before transitioning to the next one, HMMs have been widely employed in ASR frameworks to process this form of information.", "Whilst the general aim of the experiments is to implement the derived unit-wise Bayesian recurrent units (UBRUs) with the backward recursion, and demonstrate that the mathematical predictions can be reflected practically, we also make the following hypotheses, As they assume a latent space of independent hidden features, UBRUs should be best placed after layers of standard gated RNNs that can first decorrelate the highly interdependent speech features.", "Since future observations can already be taken into account with the analytically derived backward recursion, we expect that this method can compete with the standard bidirectional approach.", "We start by evaluating UBRUs on their own.", "We consider unidirectional and bidirectional units, with or without the backward recursion, which leads to four different models.", "The results are presented in Table REF .", "As expected, the error-rates are relatively high due to the low representational capacities of the units.", "Nevertheless, we observe that the derived backward recursion improves the error-rate without requiring more trainable parameters, whereas making the units bidirectional does not.", "Table: PER on TIMIT with only two layers of UBRUs.We then test UBRUs by placing them after layers of state-of-the-art bidirectional Li-GRUs.", "We again find that unidirectional UBRUs with the backward recursion perform the best, as shown in Table REF , which corroborates our second hypothesis and highlights the importance of our probabilistic derivation.", "Table: TIMIT PER with four Li-GRU and one UBRU layers.By comparing with the Li-GRU baseline in Table REF , we find that adding a single unidirectional UBRU layer with the backward recursion brings the same improvement as adding another Li-GRU layer, even though the latter contains seven times more trainable parameters.", "In contrast, placing the UBRUs before the Li-GRUs in initial ad-hoc experiments suggested that the units were not effective at the acoustic level.", "This adheres to our first hypothesis that if features at that level represent phonemes and not acoustics, then the HMM-like derived UBRUs are appropriate for the classification task.", "Table: PER on TIMIT with layers of Li-GRUs.For reference, we made the same experiments with cross-entropy loss inside the pytorch-kaldi 
[28], [29] framework.", "Here again, a layer of unidirectional UBRUs with the backward recursion is able to compete with a fifth Li-GRU layer, both reaching a PER of 14.4$\%$ compared to 14.8$\%$ for four Li-GRU layers.", "In summary, due to their correspondence with HMMs, the analytically derived unidirectional unit-wise recurrent units with a backward recursion are capable of replacing considerably larger, state-of-the-art, bidirectional, layer-wise units on the phoneme end of an ASR architecture, at an extremely low cost in terms of trainable parameters." ], [ "Conclusion", "Using a probabilistic formulation of neural network components, we have analytically derived a new type of recurrent unit with unit-wise feedback and a backward recursion.", "The similarity with Kalman smoothers and the forward-backward algorithm of HMMs is made explicit, and the equivalence of both approaches is proven by induction.", "Evaluation on a standard speech recognition task shows that the derived backward recursion gives better results than the conventional bidirectional approach.", "Moreover, adding the derived unit-wise Bayesian recurrent units after layers of larger gated RNNs is capable of considerably improving upon their performance, while only relying on a limited number of trainable parameters, showing the importance of a probabilistic derivation." ], [ "Acknowledgements", "This project received funding under NAST: Neural Architectures for Speech Technology, Swiss National Science Foundation grant 200021_185010." ] ]
2207.10486
[ [ "New physics effects on $\\Lambda_b\\to \\Lambda^*_c\\tau\\bar\\nu_\\tau$ decays" ], [ "Abstract We benefit from a recent lattice determination of the full set of vector, axial and tensor form factors for the $\\Lambda_b\\to \\Lambda^*_c(2595)\\tau\\bar\\nu_\\tau$ and $\\Lambda^*_c(2625)\\tau\\bar\\nu_\\tau$ semileptonic decays to study the possible role of these two reactions in lepton flavor universality violation studies.", "Using an effective theory approach, we analyze different observables that can be accessed through the visible kinematics of the charged particles produced in the tau decay, for which we consider the $\\pi^-\\nu_\\tau,\\rho^-\\nu_\\tau$ and $\\mu^-\\bar\\nu_\\mu\\nu_\\tau$ channels.", "We compare the results obtained in the Standard Model and other schemes containing new physics (NP) interactions, with either left-handed or right-handed neutrino operators.", "We find a discriminating power between models similar to the one of the $\\Lambda_b\\to \\Lambda_c$ decay, although somewhat hindered in this case by the larger errors of the $\\Lambda_b\\to\\Lambda^*_c$ lattice form factors.", "Notwithstanding this, the analysis of these reactions is already able to discriminate between some of the NP scenarios and its potentiality will certainly improve when more precise form factors are available." ], [ "Introduction", "The experimental observation of the Higgs boson by the ATLAS [1] and CMS [2] collaborations announced the completion of the electroweak sector of the Standard Model (SM).", "Despite its enormous success in describing many different experimental data, there are however theoretical indications (see for instance chapter 10 of Ref.", "[3]) as well as experimental measurements that hint at the possibility of the SM being just a low energy effective limit of a more fundamental underlying theory.", "One of the predictions of the SM is lepton flavour universality (LFU), which implies that the couplings to the $W$ and $Z$ gauge bosons is the same for all three lepton families.", "However, this prediction is being challenged by different semileptonic decays mediated by charged currents (CC) involving the third lepton and quark generation, i.e.", "by $b\\rightarrow c\\tau ^-\\bar{\\nu }_\\tau $ transitions.", "The strongest evidence in the direction of LFU violation comes from the ratios ${\\cal R}_{D^{(*)}}=\\frac{ \\Gamma (\\bar{B}\\rightarrow D^{(*)}\\tau ^-\\bar{\\nu }_\\tau )}{\\Gamma (\\bar{B}\\rightarrow D^{(*)}\\mu ^-\\bar{\\nu }_\\mu )}$ measured by the BaBar [4], [5], Belle [6], [7], [8], [9] and LHCb [10], [11], [12] collaborations.", "Their combined analysis by the HFLAV collaboration indicates a $3.1\\,\\sigma $ tension with SM predictions [13].", "LHCb has also measured the ratio ${\\cal R}_{J/\\psi }=\\Gamma (\\bar{B}_c\\rightarrow J/\\psi \\tau \\bar{\\nu }_\\tau )/\\Gamma (\\bar{B}_c\\rightarrow J/\\psi \\mu \\bar{\\nu }_\\mu )$ , which deviates from the SM predictions [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26] at the $1.8\\,\\sigma $ level.", "If these differences were finally confirmed they would be a clear indication for the necessity of new physics (NP) beyond the SM.", "A model-independent way to approach this problem is to take a phenomenological point of view and to carry out an effective field theory analysis, which includes the most general $b\\rightarrow c \\tau ^- \\bar{\\nu }_\\tau $ dimension-six operators (for one of the pioneering works on this type of approaches, see Ref. 
[27]).", "These operators are assumed to be generated by physics beyond the SM.", "Their strengths are encoded into unknown Wilson coefficients that can be determined by fitting to experimental data.", "In order to constrain and/or determine the most plausible extension of the SM, observables beyond the above mentioned LFU ratios need to be considered.", "Those observables typically include the averaged tau-polarization asymmetry and the longitudinal $D^*$ polarization, that have also been measured by Belle [8], [28], the $\\tau $ forward-backward asymmetry and the upper bound of the $\\bar{B}_c\\rightarrow \\tau \\bar{\\nu }_\\tau $ leptonic decay rate [29].", "A large number of studies along these lines have been conducted, not only for the $\\bar{B} \\rightarrow D^{(*)}$  [30], [31], [27], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48], [49] and $\\bar{B}_c\\rightarrow J/\\psi ,\\eta _c$  [50], [22], [24], [51], [49] semileptonic decays, but also for the $\\Lambda _b \\rightarrow \\Lambda _c$ transition [52], [53], [54], [55], [56], [38], [57], [58], [59], [60], [41], [61], [62], [63], [64], [48], where a similar behavior is to be expected.", "A better discriminating power for different models could be achieved if four body reactions, involving for instance $D^*\\rightarrow D\\pi , D\\gamma $  [32], [33], [34], [35], [40], [47], [44] or $\\Lambda _c\\rightarrow \\Lambda \\pi $  [60], [62] decays of the final hadron, are analyzed.", "Very recently, the LHCb collaboration has reported a measurement of the ${\\cal R}_{\\Lambda _c}=\\frac{\\Gamma (\\Lambda _b\\rightarrow \\Lambda _c\\tau ^-\\bar{\\nu }_\\tau )}{\\Gamma (\\Lambda _b\\rightarrow \\Lambda _c\\mu ^-\\bar{\\nu }_\\mu )}$ ratio [65] and the experimental value ${\\cal R}_{\\Lambda _c}=0.242 \\pm 0.026 \\pm 0.040 \\pm 0.059$ turns out to be in agreement, within errors, with the SM prediction ${\\cal R}_{\\Lambda _c}^{\\rm SM}=0.332 \\pm 0.007 \\pm 0.007$  [66].", "The $\\tau ^-$ lepton was reconstructed using the hadronic $\\tau ^- \\rightarrow \\pi ^-\\pi ^+\\pi ^-(\\pi ^0)\\,\\nu _\\tau $ decay, with the same technique used by the LHCb experiment to obtain ${\\cal R}_{ D^*}=0.291 \\pm 0.019 \\pm 0.026 \\pm 0.013$  [12], also in agreement with the SM prediction.", "A higher value ${\\cal R}_{ D^*}=0.336 \\pm 0.027 \\pm 0.030$ , however, was obtained by the same experiment when the $\\tau $ lepton was reconstructed using its leptonic decay into a muon [10].", "It is then of great interest to see if the above result for the $\\Lambda _b\\rightarrow \\Lambda _c$ decay is confirmed or not when the muonic reconstruction channel is used.", "Such an analysis is underway [67].", "As already discussed in Ref.", "[68], the different deviation of the present ${\\cal R}_{\\Lambda _c}$ and $R_{D^{(*)}}$ ratios with respect to their SM values, suppression for ${\\cal R}_{\\Lambda _c}$ versus enhancement for $R_{D^{(*)}}$ , puts a very stringent test on NP extensions of the SM, since scenarios leading to different deviations from SM expectations seem to be required.", "In this respect, in the very recent work of Ref.", "[69], it is argued that a more consistent comparison with the SM prediction for ${\\cal R}_{\\Lambda _c}$ is achieved if the recent $\\Gamma (\\Lambda _b\\rightarrow \\Lambda _c\\tau ^-\\bar{\\nu }_\\tau )$ LHCb measurement is normalized against the SM value for $\\Gamma (\\Lambda _b\\rightarrow \\Lambda _c\\mu ^-\\bar{\\nu }_\\mu )$ instead of the old LEP data used by the 
LHCb collaboration.", "This analysis gives rise to a new ${\cal R}_{\Lambda _c}= |0.04/V_{cb}|^2 (0.285 \pm 0.073)$ value [69], also in agreement with the SM but with a less suppressed central value.", "In Refs.", "[70], [68] the $\Lambda _b\rightarrow \Lambda _c\tau ^-\bar{\nu }_\tau $ and the $\bar{B}\rightarrow D^{(*)}\tau ^-\bar{\nu }_\tau $ decays were analyzed by employing the $\tau ^-\rightarrow \pi ^-\nu _\tau ,\rho ^-\nu _\tau $ and $\tau ^-\rightarrow \mu ^-\bar{\nu }_\mu \nu _\tau $ reconstruction channels.", "There, special attention is paid to different quantities that can be measured by looking just at the visible kinematics of the charged particle produced in the $\tau $ decay [70].", "Given a good-statistics measurement of these visible distributions, one has access to the values of the unpolarized differential decay width $d\Gamma _{{\rm SL}}(\omega )/d\omega $ and the spin $\langle P^{\rm CM}_L\rangle (\omega ),\langle P^{\rm CM}_T\rangle (\omega )$ , angular $A_{FB}(\omega ),A_Q(\omega )$ , and angular-spin $Z_L(\omega ), Z_\perp (\omega ),Z_Q(\omega )$ asymmetries.", "Here $\omega $ is the product of the two hadron four-velocities.", "As shown in Ref.", "[70], in the absence of CP-violation, the above quantities provide the maximal information that can be extracted from the analysis of the semileptonic $H_b\rightarrow H_c\tau ^-\bar{\nu }_\tau $ decay for a polarized final $\tau $ .", "The general expression that links the visible-kinematics differential distributions to the above given asymmetries was first given in Ref.", "[71] for the $\tau \rightarrow \pi ^-\nu _\tau ,\rho ^-\nu _\tau $ hadronic decay modes.", "Actually, these hadronic channels are more convenient for determining all the above asymmetries, and the role of the latter in distinguishing among different extensions of the SM was analyzed in detail in Refs.", "[48], [70].", "Since the full visible-kinematics differential decay width may suffer from low statistics, possible statistically-enhanced distributions, which can be obtained by integrating in one or more of the related visible-kinematics variables, are analyzed in Ref.", "[68] in the search for NP.", "In the present work, we will extend this kind of study to the $\Lambda _b\rightarrow \Lambda ^*_c(2595)$ and $ \Lambda _b\rightarrow \Lambda ^*_c(2625)$ semileptonic decays, with the help of the recent lattice Quantum Chromodynamics (LQCD) determination of the full set of vector, axial and tensor form factors for these two transitions [72], [73].", "These two isoscalar odd-parity resonances, with $J^P=\frac{1}{2}^-$ and $\frac{3}{2}^-$ respectively, are promising candidates for the lightest charmed baryon heavy-quark-spin doublet [74], [75]Some doubts in this respect have recently been put forward [76], [77], and experimental distributions for the semileptonic decay of the ground-state bottom baryon $\Lambda _b$ into both excited states would definitely contribute to shedding light on this issue [75]..", "The LFU analysis of the transitions involving these excited baryons could provide valuable/complementary information on the possible existence of NP beyond the SM and on its preferred extensions.", "The LQCD form factors in Refs.", "[72], [73] are defined based on a helicity decomposition of the amplitudes.", "After extrapolation to the physical point (both the continuum and the physical pion mass limits), each form factor was parameterized in terms of $\omega $ as $f(\omega 
)=F^f+A^f(\omega -1)$ , corresponding to a first-order Taylor expansion around the zero recoil point ($\omega =1$ ).", "That was appropriate since lattice data were only available at two kinematic points near zero recoil.", "Thus, one expects this parameterization to be reliable only for small values of $(\omega -1)$ and, accordingly, we shall restrict our evaluation of the different observables to a certain kinematical region near zero recoil.", "This work is organized as follows: in Sec.", "we will introduce the most general effective Hamiltonian containing all possible dimension-six operators for the semileptonic $b\rightarrow c$ transitions.", "We give general analytical results valid for the production of any lepton in the final state, although it is generally assumed that the Wilson coefficients are nonzero only for the third quark and lepton generation.", "We also provide the general expression for the transition amplitude squared for the production of a charged lepton in a given polarization state.", "In Sec.", "we present the general formula for the visible-kinematics differential decay width for the sequential $H_b\rightarrow H_c\tau ^-(\pi ^-\nu _\tau ,\rho ^-\nu _\tau ,\mu ^-\bar{\nu }_\mu \nu _\tau )\bar{\nu }_\tau $ decays and the expressions after integration in one or more of the related variables.", "The results and the discussion are presented in Sec. .", "In Appendices  and we collect the matrix elements (form factor decomposition) and the $\widetilde{W}_\chi $ structure functions needed to construct the hadron tensors for the $1/2^+\rightarrow 1/2^-$ and $1/2^+\rightarrow 3/2^-$ transitions, respectively." ], [ "$H_b\rightarrow H_c\ell ^-\bar{\nu }_\ell $ Effective Hamiltonian and decay amplitude", "Following Ref.", "[44], we use an effective low-energy Hamiltonian that includes all dimension-six semileptonic $b\rightarrow c$ operators with both left-handed (L) and right-handed (R) neutrino fields, $H_{\rm eff}&=&\frac{4G_F V_{cb}}{\sqrt{2}}\left[(1+C^V_{LL}){\cal O}^V_{LL}+C^V_{RL}{\cal O}^V_{RL}+C^S_{LL}{\cal O}^S_{LL}+C^S_{RL}{\cal O}^S_{RL}+C^T_{LL}{\cal O}^T_{LL}\right.\nonumber \\&&+\left.", "C^V_{LR}{\cal O}^V_{LR}+C^V_{RR}{\cal O}^V_{RR}+C^S_{LR}{\cal O}^S_{LR}+C^S_{RR}{\cal O}^S_{RR}+C^T_{RR}{\cal O}^T_{RR} \right]+h.c.", ",$ with (note that tensor operators with different lepton and quark chiralities vanish identically;", "this directly follows from $\sigma ^{\mu \nu }(1+h_\chi \gamma _5)\otimes \sigma _{\mu \nu }(1+h_{\chi ^{\prime }} \gamma _5) =(1+h_\chi h_{\chi ^{\prime }})\sigma ^{\mu \nu }\otimes \sigma _{\mu \nu }- (h_\chi +h_{\chi ^{\prime }})\frac{i}{2}\epsilon ^{\mu \nu }_{\ \ \,\alpha \beta }\sigma ^{\alpha \beta }\otimes \sigma _{\mu \nu },$ where we use the convention $\epsilon _{0123}=+1$ .)", "${\cal O}^V_{(L,R)L} = (\bar{c} \gamma ^\mu b_{L,R})(\bar{\ell }\gamma _\mu \nu _{\ell L}), \, {\cal O}^S_{(L,R)L} =(\bar{c}\, b_{L,R}) (\bar{\ell }\, \nu _{\ell L}), \, {\cal O}^T_{LL} =(\bar{c}\, \sigma ^{\mu \nu } b_{L}) (\bar{\ell }\sigma _{\mu \nu } \nu _{\ell L}),$ ${\cal O}^V_{(L,R)R} = (\bar{c} \gamma ^\mu b_{L,R})(\bar{\ell }\gamma _\mu \nu _{\ell R}), \, {\cal O}^S_{(L,R)R} =(\bar{c}\, b_{L,R}) (\bar{\ell }\, \nu _{\ell R}), \, {\cal O}^T_{RR} =(\bar{c}\, \sigma ^{\mu \nu } b_{R}) (\bar{\ell }\sigma _{\mu \nu } \nu _{\ell R}),$ and where $\psi _{R,L}= (1 \pm \gamma _5)\psi /2$ , 
$G_F=1.166\times 10^{-5}$  GeV$^{-2}$ and $V_{cb}$ is the corresponding Cabibbo-Kobayashi-Maskawa matrix element.", "The ten Wilson coefficients $C^X_{AB}$ ($X= S, V,T$ and $A,B=L,R$ ), which are in general complex, parameterize the deviations from the SM.", "They could be lepton- and flavor-dependent, although they are generally assumed to be nonzero only for the third quark and lepton generation.", "The transition amplitude for a $H_b\rightarrow H_c\ell ^-\bar{\nu }_\ell $ decay can be written, in a short-hand notation, as ${\cal M} = \left(J_{H}^\alpha J^{L}_\alpha + J_{H} J^{L}+J_{H}^{\alpha \beta } J^{L}_{\alpha \beta }\right)_{\nu _{_{\ell L}}}+\left(J_{H}^\alpha J^{L}_\alpha + J_{H} J^{L}+J_{H}^{\alpha \beta } J^{L}_{\alpha \beta }\right)_{\nu _{_{\ell R}}}, \\$ where the two contributions correspond to the two different neutrino chiralities.", "In the $m_{\nu _\ell }=0$ limit there is no interference between these two terms and $|{\cal M}|^2$ is given by an incoherent sum of $\nu _{_{\ell L}}$ and $\nu _{_{\ell R}}$ contributions.", "The lepton currents for a fully polarized charged lepton are given by $J^{L(\alpha \beta )}_{\chi , h S} &=& \frac{1}{\sqrt{2}}\bar{u}_\ell ^S(k^{\prime };h) \Gamma ^{(\alpha \beta )} P_5^{h_\chi } v_{\bar{\nu }_\ell }(k),\nonumber \\ \Gamma ^{(\alpha \beta )}&=& 1,\gamma ^\alpha ,\sigma ^{\alpha \beta }, \quad P_5^{h_\chi } = \frac{1+h_\chi \gamma _5}{2},$ with $u^S_\ell (k^{\prime }; h)$ the spinor of the final charged lepton corresponding to a state with $h=\pm 1$ polarization (covariant spin) along a certain four-vector $S^\alpha $ .", "This $u^S_\ell (k^{\prime }; h)$ spinor is defined by the condition $\gamma _5\slashed{S}\,u^S_\ell (k^{\prime }; h)=h\,u^S_\ell (k^{\prime }; h),$ where the four-vector $S^\alpha $ satisfies the constraints $S^{\,2}=-1$ and $S\cdot k^{\prime }=0$ .", "A helicity state corresponds to $S^\alpha = (|\vec{k}^{\prime }|, k^{\prime 0}\hat{k}^{\prime })/m_\ell $ , with $\hat{k}^{\prime }=\vec{k}^{\prime }/|\vec{k}^{\prime }|$ and $m_\ell $ the charged lepton mass.", "$h_\chi =\pm 1$ accounts for the two possible neutrino chiralities ($h_\chi =-1$ and $+1$ for $\chi =L$ and $\chi =R$ , respectively) considered in the effective Hamiltonian.", "From the lepton currents one can readily obtain the corresponding lepton tensors needed to evaluate $|{\cal M}|^2$ .", "They are constructed as $L^{(\alpha \beta )(\rho \lambda )}_{\chi ,hS}=J^{L(\alpha \beta )}_{\chi , hS}(J^{L(\rho \lambda )}_{\chi , h S})^*=\frac{1}{2}{\rm Tr}\,[(\slashed{k}^{\prime }+m_\ell )\Gamma ^{(\alpha \beta )}P_5^{h_\chi }\slashed{k}\gamma ^0\Gamma ^{(\rho \lambda )\dagger }\gamma ^0P_{S}^h],$ where we have taken $m_{\nu _\ell }=0$ and $P_{S}^h$ stands for the projector $P_{S}^h=\frac{1+h\gamma _5\slashed{S}}{2}.$ The final expressions for the lepton tensors have been collected in Appendix B of Ref. 
[70].", "The dimensionless hadron currents are given by $J_{H rr^{\\prime }\\,\\chi (=L,R)}^{(\\alpha \\beta )}(p,p^{\\prime }) = \\langle H_c; p^{\\prime },r^{\\prime }| \\bar{c}(0)O_{H\\chi }^{(\\alpha \\beta )}b(0) | H_b; p, r\\rangle ,$ with the hadron states normalized as $\\langle \\vec{p}\\,^{\\prime }, r^{\\prime }| \\vec{p},r\\rangle = (2\\pi )^3(E/M)\\delta ^3(\\vec{p}-\\vec{p}\\,^{\\prime })\\delta _{rr^{\\prime }}$ and where $r,r^{\\prime }$ represent the spin index.", "The different $O_{H\\chi }^{(\\alpha \\beta )}$ operator structures are $O_{H\\chi }^{(\\alpha \\beta )}= (C^S_{\\chi }+h_\\chi C^P_{\\chi } \\gamma _5),\\ (C^V_{\\chi }\\gamma ^\\alpha + h_\\chi C^A_{\\chi } \\gamma ^\\alpha \\gamma _5),\\ C^T_{\\chi }\\sigma ^{\\alpha \\beta } (1+h_\\chi \\gamma _5).", "$ The Wilson coefficients above are obtained as linear combinations of those introduced in the effective Hamiltonian in Eq.", "(REF ) and their expressions can be found in Appendix A of Ref. [70].", "The hadron tensors that enter the evaluation of $|{\\cal M}|^2$ are defined as $W_\\chi ^{(\\alpha \\beta )(\\rho \\lambda )}=\\overline{\\sum _{r,r^{\\prime }}}\\langle H_c; p^{\\prime },r^{\\prime }| \\bar{c}(0)O_{H\\chi }^{(\\alpha \\beta )}b(0) | H_b; p, r\\rangle \\langle H_b; p, r|\\bar{b}(0)\\gamma ^0O_{H\\chi }^{(\\rho \\lambda )\\dagger }\\gamma ^0c(0)| H_c; p^{\\prime },r^{\\prime }\\rangle ,$ where we sum (average) over the spin of the final (initial) hadron.", "As discussed in detail in Ref.", "[64], the use of Lorentz, parity and time-reversal transformations of the hadron currents and states [78] allows one to write general expressions for the hadron tensors valid for any $H_b\\rightarrow H_c$ transition.", "They are linear combinations of independent tensor and pseudotensor structures, constructed out of the vectors $p^\\mu $ , $q^\\mu $ , the metric tensor $g^{\\mu \\nu }$ and the Levi-Civita pseudotensor $\\epsilon ^{\\mu \\nu \\delta \\eta }$ .", "The coefficients of the independent structures are scalar functions of the four-momentum transferred squared $q^2$ , denoted by $\\widetilde{W}_\\chi $ as introduced in Refs. [70].", "The different $\\widetilde{W}_\\chi $ scalar structure functions (SFs) depend on the Wilson coefficients $C_\\chi ^{V,A,S,P,T}$ and on the genuine hadronic responses, the matrix elements of the involved hadron operators which can be derived from the form factors parameterizing each particular transition.", "It is shown in Refs.", "[70], [64] that there is a total of 16 independent $\\widetilde{W}_{\\chi }$ SFs for each neutrino-chirality, with the $\\widetilde{W}_{R}$ SFs directly obtained from the $\\widetilde{W}_{L}$ ones by the replacements $C^{V,A,S,P,T}_{\\chi =L}\\rightarrow C^{V,A,S,P,T}_{\\chi =R}$ .", "The different $W_\\chi ^{(\\alpha \\beta )(\\rho \\lambda )}$ hadron tensors, together with the definition of the $\\widetilde{W}_{\\chi }$ SFs are compiled in Appendix C of Ref. 
[70].", "As shown here in Appendix , the $\\widetilde{W}_\\chi $ SFs for the $\\Lambda _b\\rightarrow \\Lambda ^*_c[J^P=\\frac{1}{2}^-\\,]$ transition can be easily obtained from those in Appendix C of Ref.", "[70] by replacing $C_\\chi ^V\\longleftrightarrow C_\\chi ^A$ and $C_\\chi ^S\\longleftrightarrow C_\\chi ^P$ .", "In addition, the genuine hadron $W_{i=1,2,4,5}^{VV,AA}$ , $W_{i=3}^{VA}$ , $W_{1,2,3,4,5}^T$ , $W_{S}$ , $W_{P}$ , $W_{I1,I2}^{VS,AP}$ , $W_{I3}^{ST,PpT}$ and $W_{I4,I5,I6,I7}^{VT,ApT}$ SFs, which are independent of the Wilson coefficients, can be read out from Eqs.", "(E3)-(E5) of Ref.", "[64] for the $\\Lambda _b\\rightarrow \\Lambda _c$ transitionNote that the names of the form-factors in the parametrizations of Eqs.", "(REF )-() are chosen in order to directly use the results of Eqs.", "(E3)-(E5) of Ref. [64].", ".", "On the other hand, the $\\widetilde{W}_\\chi $ SFs for the $\\Lambda _b\\rightarrow \\Lambda _c^* [J^P=\\frac{3}{2}^-\\,]$ decay are explicitly calculated in this work and they are given in Eqs.", "(REF )-(REF ) of Appendix .", "Going back to the amplitude squared, it was shown in Refs.", "[64], [48] that for the production of a charged lepton with polarization $h$ along the four vector $S^\\alpha $ , and for massless neutrinos, one has that $&&\\hspace{-28.45274pt}\\frac{2\\,\\overline{\\sum }\\, |{\\cal M}|^2 }{M^2}=\\frac{2\\,\\overline{\\sum }\\, \\left(|{\\cal M}|_{\\nu _{\\ell L}}^2 + |{\\cal M}|_{\\nu _{\\ell R}}^2\\right) }{M^2}={\\cal N}(\\omega , p\\cdot k) + h\\bigg \\lbrace \\frac{(p\\cdot S)}{M}\\,{\\cal N_{H_{\\rm 1}}}(\\omega , p\\cdot k) \\nonumber \\\\&&\\hspace{170.71652pt}+\\frac{(q\\cdot S)}{M}\\,{\\cal N_{H_{\\rm 2}}}(\\omega , p\\cdot k)+\\frac{\\epsilon ^{ S k^{\\prime } qp}}{M^3}\\,{\\cal N_{H_{\\rm 3}}}(\\omega , p\\cdot k)\\ \\bigg \\rbrace ,$ where we have summed (averaged) over the polarization state of the final (initial) hadron.", "As already mentioned, $\\omega $ is the product of the two hadron four-velocities and it is related to $q^2$ via $q^2=M^2+M^{\\prime 2}-2MM^{\\prime }\\omega $ , with $M$ ($M^{\\prime }$ ) the mass of the initial (final) hadron.", "Besides, we have made use of the notation $\\epsilon ^{ S k^{\\prime } qp}=\\epsilon ^{\\alpha \\beta \\rho \\lambda }S_\\alpha k^{\\prime }_\\beta q_\\rho p_\\lambda $ .", "As for the ${\\cal N}$ and $\\cal N_{H_{\\rm 123}}$ scalar functions, they are given by ${\\cal N}(\\omega , k\\cdot p)&=&\\frac{1}{2}\\Big [{\\cal A}(\\omega )+{\\cal B}(\\omega ) \\frac{(k\\cdot p)}{M^2}+ {\\cal C}(\\omega )\\frac{(k\\cdot p)^2}{M^4}\\Big ],\\nonumber \\\\{\\cal N_{H_{\\rm 1}}}(\\omega , k\\cdot p)&=&{\\cal A_H}(\\omega )+ {\\cal C_H}(\\omega )\\frac{(k\\cdot p)}{M^2},\\nonumber \\\\{\\cal N_{H_{\\rm 2}}}(\\omega , k\\cdot p)&=& {\\cal B_H}(\\omega )+ {\\cal D_H}(\\omega ) \\frac{(k\\cdot p)}{M^2}+ {\\cal E_H}(\\omega )\\frac{(k\\cdot p)^2}{M^4},\\nonumber \\\\{\\cal N_{H_{\\rm 3}}}(\\omega , k\\cdot p)&=&{\\cal F_H}(\\omega )+{\\cal G_H}(\\omega )\\frac{(k\\cdot p)}{M^2},$ The term ${\\cal N_{H_{\\rm 3}}}$ is proportional to the imaginary part of SFs, which requires the existence of relative phases between some of the complex Wilson coefficients, thus incorporating violation of the CP symmetry in the NP effective Hamiltonian.", "The expressions of the ${\\cal A,\\,B,\\,C,\\, A_H,\\,B_H,\\,C_H,\\,D_H,\\,E_H,\\,F_H}$ and ${\\cal G_H}$ in terms of the $\\widetilde{W}_\\chi $ SFs are collected in Appendix D of Ref. 
[70].", "As inferred from Eq.", "(REF ), ${\\cal A,\\,B,}$ and ${\\cal C}$ describe the production of an unpolarized final charged lepton, while ${\\cal A_H,\\,B_H,\\,C_H,\\,D_H,\\,E_H,\\,F_H}$ and ${\\cal G_H}$ are also needed for the description of decays with a defined polarization ($h=\\pm 1$ ) of the outgoing charged lepton along the four vector $S^\\alpha $ ." ], [ "Sequential $H_b\\rightarrow H_c\\tau ^-(\\pi ^-\\nu _\\tau ,\\rho ^-\\nu _\\tau ,\\mu ^-\\bar{\\nu }_\\mu \\nu _\\tau )\\bar{\\nu }_\\tau $ decays and visible\nkinematics", "Due to its short mean life, the $\\tau $ produced in a $H_b\\rightarrow H_c\\tau ^-\\bar{\\nu }_\\tau $ process can not be directly measured and all the accessible information on the decay is encoded in the visible kinematics of the $\\tau $ -decay products.", "The three dominant decay modes $\\tau \\rightarrow \\pi \\nu _\\tau ,\\, \\rho \\nu _\\tau $ and $\\ell \\bar{\\nu }_\\ell \\nu _\\tau $ ($\\ell =e,\\mu $ ) account for more than 70% of the total $\\tau $ width.", "The (visible) differential distributions of the charged particle produced in the tau decay have been studied extensively for $\\bar{B}\\rightarrow D^{(*)}$ decays in Refs.", "[79], [30], [80], [81], [71].", "The general expression for the differential decay width for the $H_b\\rightarrow H_c\\tau ^-(d\\nu _\\tau )\\bar{\\nu }_\\tau $ decay, with $d=\\pi ^-,\\rho ^-,\\ell ^-\\bar{\\nu }_\\ell $ , reads [80], [81], [71], [70] $\\frac{d^3\\Gamma _d}{d\\omega d\\xi _d d\\cos \\theta _d} & = & {\\cal B}_{d}\\frac{d\\Gamma _{\\rm SL}}{d\\omega } \\Big \\lbrace F^d_0(\\omega ,\\xi _d)+ F^d_1(\\omega ,\\xi _d)\\cos \\theta _d + F^d_2(\\omega ,\\xi _d)P_2(\\cos \\theta _d)\\Big \\rbrace ,$ which is given in terms of $\\omega $ , $\\xi _d=\\frac{E_d}{\\gamma m_\\tau }$ (here $\\gamma =\\frac{q^2+m_\\tau ^2}{2m_\\tau \\sqrt{q^2}}$ ), which is the ratio of the energies of the tau-decay charged particle and the tau lepton measured in the $\\tau ^-\\bar{\\nu }_\\tau $ center of mass frame (CM), and $\\theta _d$ , the angle made by the three-momenta of the final hadron and the tau-decay charged particle measured in the same CM system (see Fig.", "1 of Ref. 
[68]).", "The azimuthal angular ($\\phi _d$ ) distribution of the tau decay charged product is sensitive to possible CP odd effects (${\\cal N_{H_{\\rm 3}}}$ term in Eq.", "(REF )).", "However, the measurement of $\\phi _d$ would require the full reconstruction of the tau three momentum, and this azimuthal angle has been integrated out to obtain the differential decay width of Eq.", "(REF ).", "That is the reason why the latter visible distribution does not depend on ${\\cal N_{H_{\\rm 3}}}$ , and thus it does not contain any information on possible CP violation contributions to the effective NP Hamiltonian of Eq.", "(REF ).", "In addition, ${\\cal B}_{d}$ in Eq.", "(REF ) is the branching ratio for the $\\tau ^-\\rightarrow d^-\\nu _\\tau $ decay mode and $P_2(\\cos \\theta _d)$ stands for the Legendre polynomial of order two.", "As for $d\\Gamma _{\\rm SL}/d\\omega $ , it represents the differential decay width for the unpolarized semileptonic $H_b\\rightarrow H_c\\tau ^-\\bar{\\nu }_\\tau $ decay.", "It reads $\\frac{d\\Gamma _{\\rm SL}}{d\\omega }=\\frac{G_F^2|V_{cb}|^2M^{\\prime 3}M^2}{24\\pi ^3} \\sqrt{\\omega ^2-1}\\Big (1-\\frac{m_\\tau ^2}{q^2}\\Big )^2n_0(\\omega ),$ where $n_0(\\omega )$ contains all the dynamical information, including any possible NP contribution.", "It is given by $n_0(\\omega ) =3a_0(\\omega )+a_2(\\omega )$ , where $a_{0,2}(\\omega )$ are linear combinations of ${\\cal A}(\\omega ),\\,{\\cal B}(\\omega )$ and ${\\cal C}(\\omega )$ , with explicit expressions given in Eq.", "(18) of Ref. [64].", "The $F^d_{0,1,2}(\\omega ,\\xi _d)$ functions in Eq.", "(REF ) can be written as [71], [68] $F^d_0(\\omega ,\\xi _d) &=& C_n^d(\\omega ,\\xi _d)+C_{P_L}^d(\\omega ,\\xi _d)\\,\\langle P^{\\rm CM}_L\\rangle (\\omega ), \\nonumber \\\\F^d_1(\\omega ,\\xi _d) &=& C_{A_{FB}}^d(\\omega ,\\xi _d)A_{FB}(\\omega )+C_{Z_L}^d(\\omega ,\\xi _d)Z_L(\\omega )+ C_{P_T}^d(\\omega ,\\xi _d)\\,\\langle P^{\\rm CM}_T\\rangle (\\omega ), \\nonumber \\\\F^d_2(\\omega ,\\xi _d) &=& C_{A_Q}^d(\\omega ,\\xi _d)A_{Q}(\\omega )+C_{Z_Q}^d(\\omega ,\\xi _d)Z_Q(\\omega )+ C_{Z_\\perp }^d(\\omega ,\\xi _d)Z_\\perp (\\omega ).$ where the decay-mode dependent coefficients $C^d_a(\\omega ,\\xi _d)$ are purely kinematical.", "Their analytical expressions for the $\\pi ^-\\nu _\\tau ,\\rho ^-\\nu _\\tau $ and $\\ell ^-\\bar{\\nu }_\\ell \\nu _\\tau $ decay channels can be found in Appendix G of Ref. 
[70].", "The rest of the observables in Eq.", "(REF ) are the tau-spin ($\\langle P^{\\rm CM}_{L,T}\\rangle (\\omega )$ ), tau-angular ($A_{FB,Q}(\\omega )$ ) and tau-angular-spin ($Z_{L,Q,\\perp }(\\omega )$ ) asymmetries of the $H_b\\rightarrow H_c\\tau \\bar{\\nu }_\\tau $ parent decay.", "They can be written [70], [64] in terms of the ${\\cal A},\\,{\\cal B},\\,{\\cal C},\\,{\\cal A_H},\\,{\\cal B_H},\\,{\\cal C_H},\\,{\\cal D_H}$ and ${\\cal E_H}$ functions introduced in Eq.", "(REF ).", "A numerical analysis of the role of each of the observables $d\\Gamma _{\\rm SL}/d\\omega $ , $\\langle P^{\\rm CM}_{L,T}\\rangle (\\omega )$ , $A_{FB,Q}(\\omega )$ and $Z_{L,Q,\\perp }(\\omega )$ in the context of LFU violation was conducted for the $\\Lambda _b\\rightarrow \\Lambda _c\\tau ^-\\bar{\\nu }_\\tau $ transition in Refs.", "[48], [70].", "Here we perform an analog analysis for the $\\Lambda _b\\rightarrow \\Lambda _c(2595)$ and $\\Lambda _b\\rightarrow \\Lambda _c(2625)$ semileptonic decays, for which the only differences are fully encoded in the form-factors input contained in the $\\widetilde{W}_\\chi $ SFs.", "This is because the expressions of ${\\cal A},\\,{\\cal B},\\,{\\cal C},\\,{\\cal A_H} ,\\,{\\cal B_H},\\,{\\cal C_H},\\,{\\cal D_H}$ and ${\\cal E_H}$ (or equivalently the differential decay width for unpolarized tau, the tau-spin, tau-angular and tau-angular-spin asymmetries) in terms of the latter is independent of the $b\\rightarrow c$ transition, and they are given by Eqs.", "(D1) and (D2) of Ref. [70].", "Measuring the triple differential decay width in Eq.", "(REF ) could also be difficult due to low statistics.", "An increased statistics is achieved by integrating in one or more of the variables $\\cos \\theta _d$ , $\\xi _d$ and $\\omega $ , at the price that the resulting distributions might not depend on some of the observables in Eq.", "(REF ).", "For instance, accumulating in the polar angle leads to the distribution [82] $\\frac{d^2\\Gamma _d}{d\\omega d\\xi _d} & = & 2{\\cal B}_{d}\\frac{d\\Gamma _{\\rm SL}}{d\\omega }\\Big \\lbrace C_n^d(\\omega ,\\xi _d)+C_{P_L}^d(\\omega ,\\xi _d)\\,\\langle P^{\\rm CM}_L\\rangle (\\omega )\\Big \\rbrace ,$ from where one can only extract, looking at the dependence on $\\xi _d$ , $d\\Gamma _{\\rm SL}/d\\omega $ and the CM $\\tau $ longitudinal polarization [$\\langle P^{\\rm CM}_L\\rangle (\\omega )]$ .", "From the latter, it immediately follows the averaged CM tau longitudinal polarization asymmetry, $P_\\tau = -\\frac{1}{\\Gamma _{\\rm SL}}\\int d\\omega \\frac{d\\Gamma _{\\rm SL}}{d\\omega }\\langle P^{\\rm CM}_L\\rangle (\\omega ),$ that has been measured for the $\\bar{B} \\rightarrow D^* \\tau \\bar{\\nu }_\\tau $ decay by the Belle collaboration [8].", "Integrating Eq.", "(REF ) in the $\\xi _d$ variable one obtains the double differential decay width [68] $\\frac{d^2\\Gamma _d}{d\\omega d\\cos \\theta _d} = {\\cal B}_{d}\\frac{d\\Gamma _{\\rm SL}}{d\\omega } \\Big [\\widetilde{F}^d_0(\\omega )+ \\widetilde{F}^d_1(\\omega )\\cos \\theta _d +\\widetilde{F}^d_2(\\omega )P_2(\\cos \\theta _d)\\Big ].$ While $\\widetilde{F}^d_0(\\omega )=1/2$ , losing in this way all information on $\\langle P^{\\rm CM}_L\\rangle (\\omega )$ , one has that $\\widetilde{F}^d_1(\\omega )&=&C^d_{A_{FB}}(\\omega )\\,A_{FB}(\\omega )+C^d_{Z_L}(\\omega )\\,Z_L(\\omega )+C^d_{P_T}(\\omega )\\,\\langle P_T^{\\rm CM}\\rangle (\\omega ), \\\\\\widetilde{F}^d_2(\\omega )&=&C^d_{A_Q}(\\omega )\\,A_Q(\\omega )+C^d_{Z_Q}(\\omega )\\,Z_Q(\\omega 
)+C^d_{Z_\perp }(\omega )\,Z_\perp (\omega ),$ which retain all the information on the other six asymmetries, since the kinematical coefficients $C^d_{i}(\omega )$ are known.", "A further integration in $\omega $ additionally enhances the statistics.", "The obtained angular distribution [68] $\frac{d\Gamma _d}{d\cos \theta _d}={\cal B}_d\Gamma _{\rm SL}\Big [\frac{1}{2}+\widehat{F}_1^d\cos \theta _d+\widehat{F}_2^d\,P_2(\cos \theta _d)\Big ], \quad \widehat{F}_{1,2}^d=\frac{1}{\Gamma _{\rm SL}}\int _1^{\omega _{\rm max}}\frac{d\Gamma _{\rm SL}}{d\omega }\widetilde{F}_{1,2}^d(\omega )\,d\omega ,$ could still be a useful observable in the search for NP beyond the SM.", "Finally, from the differential decay width $d^2\Gamma _d/(d\omega d\xi _d)$ given in Eq.", "(REF ) one can get [68] $\frac{d\Gamma _d}{dE_d} & = & 2{\cal B}_{d} \int _{\omega _{\rm inf}(E_d)}^{\omega _{\rm sup}(E_d)} d\omega \frac{1}{\gamma m_\tau }\frac{d\Gamma _{\rm SL}}{d\omega }\Big \lbrace C_n^d(\omega ,\xi _d)+C_{P_L}^d(\omega ,\xi _d)\,\langle P^{\rm CM}_L\rangle (\omega )\Big \rbrace ,$ where the appropriate limits in $\omega $ for each of the sequential decays considered are given in Ref. [68].", "From the latter distribution one can define the dimensionless observable $\widehat{F}^d_0(E_d)=\frac{m_\tau }{2{\cal B}_d\Gamma _{\rm SL}}\frac{d\Gamma _d}{dE_d}.$ Although it is normalized for all channels as $\frac{1}{m_\tau }\int _{E_d^{\rm min}}^{E_d^{\rm max}} dE_d \widehat{F}^d_0(E_d)= \frac{1}{2},$ its energy dependence is still affected by the CM $\tau $ longitudinal polarization $\langle P^{\rm CM}_L\rangle (\omega )$ .", "Predictions for the $d^2\Gamma /(d\omega \, d\cos \theta _d)$ , $d\Gamma /d\cos \theta _d$ and the $\widehat{F}^d_0(E_d)$ distributions, and their role in distinguishing among different NP models, were presented and discussed in Ref.", "[68] for the $\Lambda _b\rightarrow \Lambda _c\tau ^-(\pi ^-\nu _\tau ,\rho ^-\nu _\tau ,\mu ^-\bar{\nu }_\mu \nu _\tau )\bar{\nu }_\tau $ and $\bar{B}\rightarrow D^{(*)}\tau ^-(\pi ^-\nu _\tau ,\rho ^-\nu _\tau ,\mu ^-\bar{\nu }_\mu \nu _\tau )\bar{\nu }_\tau $ sequential decays.", "Here, we will also extend the study to reactions initiated by the $\Lambda _b\rightarrow \Lambda _c(2595)$ and $\Lambda _b\rightarrow \Lambda _c(2625)$ semileptonic parent decays."
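As a practical illustration of how the coefficients multiplying $\cos \theta _d$ and $P_2(\cos \theta _d)$ could be recovered from data, the short sketch below projects them out of a $\cos \theta _d$ sample with Legendre moments, using that for the normalized shape $1/2+F_1\cos \theta _d+F_2P_2(\cos \theta _d)$ one has $F_1=\frac{3}{2}\langle \cos \theta _d\rangle $ and $F_2=\frac{5}{2}\langle P_2(\cos \theta _d)\rangle $ . This is only a toy example with invented coefficient values, not part of the analyses of Refs. [68], [70].

```python
import numpy as np

def legendre_moments(cos_theta, weights=None):
    """Project out F1 and F2 from a sample of cos(theta_d) values.

    For the normalized distribution 1/2 + F1*cos + F2*P2(cos) one has
    F1 = 1.5*<cos> and F2 = 2.5*<P2(cos)>.
    """
    c = np.asarray(cos_theta, dtype=float)
    p2 = 0.5 * (3.0 * c**2 - 1.0)            # Legendre polynomial P2
    f1 = 1.5 * np.average(c, weights=weights)
    f2 = 2.5 * np.average(p2, weights=weights)
    return f1, f2

# Toy check: sample events from an assumed (hypothetical) angular shape
rng = np.random.default_rng(0)
f1_true, f2_true = 0.10, -0.05               # made-up coefficients
c_grid = np.linspace(-1.0, 1.0, 2001)
pdf = 0.5 + f1_true * c_grid + f2_true * 0.5 * (3 * c_grid**2 - 1)
sample = rng.choice(c_grid, size=200_000, p=pdf / pdf.sum())
print(legendre_moments(sample))              # ~ (0.10, -0.05)
```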
], [ "Results and discussion", "In this section we present $\\Lambda _b\\rightarrow \\Lambda _c(2595),\\Lambda _c(2625)$ results for the observables mentioned in Sec.", "above.", "We will consider SM and different NP scenarios involving left- and right-handed neutrino fields.", "Since the LQCD form factors from Refs.", "[73], [72] that we use are not reliably obtained at high $\\omega $ values, we will restrict ourselves to the $1\\le \\omega \\le 1.1$ region.", "For this latter reason we will not show results for $d\\Gamma _d/d\\cos \\theta _d$ or $\\widehat{F}^d_0(E_d)$ since they involve an integration in the $\\omega $ variable over the full available phase-space, including regions for which the LQCD form-factors are not reliable.", "For each observable, we give central values plus an error band that we construct by adding in quadrature the form-factors and Wilson-coefficients uncertainties.", "For the errors related to the Wilson coefficients we shall use statistical samples of Wilson coefficients selected such that the $\\chi ^2$ -merit function computed in Refs.", "[41] and [44], for left- and right-handed neutrino NP fits, respectively, changes at most by one unit from its value at the fit minimum (for further details see Sec.", "III.B of Ref. [64]).", "For the uncertainty associated to the form factors, we consider two different sources [72]: statistical and systematic.", "We obtain the statistical error using the appropriate covariance matrix to Monte-Carlo generate a great number of form factor samples from which we evaluate the corresponding quantity and its standard deviation.", "The systematic error is evaluated as explained in Sec.", "VI of Ref. [72].", "This latter determination makes use of the form factors obtained with higher-order fits.", "Statistical and systematic errors are then added in quadrature to get the total error associated to the form factors.", "In Figs.", "REF and REF we show, for the $\\Lambda _b\\rightarrow \\Lambda ^*_c(2595)\\tau ^-\\bar{\\nu }_\\tau $ and $\\Lambda _b\\rightarrow \\Lambda ^*_c(2625)\\tau ^-\\bar{\\nu }_\\tau $ decays respectively, the results for $n_0(\\omega )$ and the full set of asymmetries introduced in Eq.", "(REF ).", "They have been obtained within the SM and the two NP models corresponding to Fits 6 and 7 of Ref.", "[41], which include only left-handed (L) neutrino operators.", "Even though these two NP scenarios have been adjusted to reproduce the measured $R_{D^{(*)}}$ ratios, they show a different behavior for other quantities.", "As seen from the figures, L Fit 6 and SM results agree within errors for most of the observables, while the predictions from L Fit 7 are quite different for the $ A_{FB},\\,Z_L,\\,P_L$ and $P_\\perp $ asymmetries.", "The latter are thus helpful in distinguishing between these two NP models that otherwise give very similar results for the $R_{D^{(*)}}$ ratios.", "In Fig.", "REF we compare the results obtained within the SM and fit R S7a of Ref. 
[44].", "The latter includes only NP operators constructed with right-handed (R) neutrino fields, and the corresponding Wilson coefficients have also been adjusted to reproduce the measured $R_{D^{(*)}}$ ratios.", "Among the different R fits conducted in Ref.", "[44], this is one of the more promising in terms of the pull from the SM hypothesis.", "However, due to the wide error bands, we find no significant difference between the R S7a model and SM results.", "The exceptions are the $Z_L$ and $\\langle P^{\\rm CM}_T\\rangle $ asymmetries for the $\\Lambda _b\\rightarrow \\Lambda ^*_c(2595)\\tau ^-\\bar{\\nu }_\\tau $ decay.", "The R S7a and the L Fit 6 models give also similar predictions, agreeing within errors.", "As for the differences with L Fit 7, the best observables to distinguish between the R S7a and the L Fit 7 models are the $A_{FB}$ and $\\langle P^{\\rm CM}_L\\rangle $ asymmetries for the $\\Lambda _b\\rightarrow \\Lambda ^*_c(2595)\\tau ^-\\bar{\\nu }_\\tau $ decay, whereas for $\\Lambda _b\\rightarrow \\Lambda ^*_c(2625)\\tau ^-\\bar{\\nu }_\\tau $ one finds that not only $A_{FB}$ and $\\langle P^{\\rm CM}_L\\rangle $ , but also $Z_L$ and $\\langle P^{\\rm CM}_T\\rangle $ are adequate observables.", "Figure: Same as Figs.", "and , but comparing in thiscase SM to the model corresponding to fit R S7a ofRef.", ", which NP contributions are constructed using right-handed neutrino fields.", "Two left columns:Λ b →Λ c * (2595)τ - ν ¯ τ \\Lambda _b\\rightarrow \\Lambda ^*_c(2595)\\tau ^-\\bar{\\nu }_\\tau decay.", "Two right columns: Λ b →Λ c * (2625)τ - ν ¯ τ \\Lambda _b\\rightarrow \\Lambda ^*_c(2625)\\tau ^-\\bar{\\nu }_\\tau decay.We show now the results for the $\\widetilde{F}^{d}_{1,2}(\\omega )$ coefficient-functions that expand the statistically-enhanced $d^2\\Gamma /(d\\omega d\\cos \\theta _d)$ differential decay width of Eq.", "(REF )Note that $\\widetilde{F}^{d}_{0}(\\omega )=1/2$ in all cases..", "In fact, we show the products $n_0(\\omega )\\widetilde{F}^{d}_{1,2}(\\omega )$ since, as mentioned, $n_0(\\omega )$ contains all the dynamical effects included in $d\\Gamma _{\\rm SL}/d\\omega $ which appears as an overall factor of the $d^2\\Gamma /(d\\omega d\\cos \\theta _d)$ distribution.", "Predictions obtained within the SM, and the L Fit 7 and the R S7a NP models of Refs.", "[41] and [44], respectively, are presented in Fig.", "REF .", "As it was to be expected from the previous results, SM and R Fit 7a results agree within errors in all cases.", "Similar results (not shown) are found for L Fit 6.", "However, for most of the observables plotted in the figure, the L Fit 7 predictions are distinguishable from those obtained using the SM or R S7a models, either in the near zero-recoil region or in the upper part of the shown $\\omega $ interval.", "Figure: Two left columns: n 0 (ω)F ˜ 1,2 d (ω)n_0(\\omega )\\widetilde{F}^{d}_{1,2}(\\omega ) for the threeΛ b →Λ c * (2595)τ - (μ - ν ¯ μ ν τ ,π - ν τ ,ρ - ν τ )ν ¯ τ \\Lambda _b\\rightarrow \\Lambda ^*_c(2595)\\tau ^-(\\mu ^-\\bar{\\nu }_\\mu \\nu _\\tau ,\\pi ^-\\nu _\\tau ,\\rho ^-\\nu _\\tau )\\bar{\\nu }_\\tau sequential decays evaluated within the SM, the L Fit 7 model ofRef.", "and the R S7a model ofRef. 
.", "Two right columns: the same but for theΛ b →Λ c * (2625)τ - (μ - ν ¯ μ ν τ ,π - ν τ ,ρ - ν τ )ν ¯ τ \\Lambda _b\\rightarrow \\Lambda ^*_c(2625)\\tau ^-(\\mu ^-\\bar{\\nu }_\\mu \\nu _\\tau ,\\pi ^-\\nu _\\tau ,\\rho ^-\\nu _\\tau )\\bar{\\nu }_\\tau sequential decaysWhen compared to the $\\Lambda _b\\rightarrow \\Lambda _c$ decay considered in Refs.", "[70], [68], we find here a worse discriminating power between different models due to the large errors in the form factors.", "Nevertheless, with the present values of the latter, these $\\Lambda _b\\rightarrow \\Lambda _c^*$ reactions are already able to distinguish between the L fit 7 model of Ref.", "[41] and the L Fit 6 and R S7a models of Refs.", "[41] and [44], or between L Fit 7 and the SM.", "A more precise determination of the form factors, with less error and an extended $\\omega $ region of validity, would certainly increase the value of the $\\Lambda _b\\rightarrow \\Lambda ^*_c(2595),\\Lambda ^*_c(2625)$ decays in the search for NP in LFU violation studies.", "Focusing on the SM $n_0(\\omega )$ distributions in Figs.", "REF and REF , we conclude that $\\Gamma _{\\rm SL}$ (or at least the partially integrated width up to $\\omega \\le 1.1$ ) for the $\\Lambda _c(2625)$ mode is smaller than for the $\\Lambda _c(2595)$ final state, contradicting the expectations from heavy-quark spin symmetry [75].", "Moreover, comparing with the results displayed in the left-upper plot of Fig.", "2 of Ref.", "[70], both widths are probably around a factor of ten lower than that of the $\\Lambda _b$ decay into the ground state charmed baryon, $\\Lambda _b\\rightarrow \\Lambda _c[J^P=1/2^+]$ .", "This reduction does not affect the tau-spin, tau-angular and tau-angular-spin asymmetries also shown in these figures, since these observables should not depend on the overall size of the semileptonic width $\\Gamma _{\\rm SL}$ .", "Actually, the asymmetries provide distinctive $\\omega -$ patterns for the $\\Lambda _b$ decay into each of the charmed final state baryons, which have different spin-parity quantum numbers.", "This makes the comparison of theoretical model predictions, considering jointly all three [$\\Lambda _c,\\Lambda _c(2595), \\Lambda _c(2625)$ ] modes, more exhaustive and demanding.", "Finally, we would like to stress that the expressions of the $\\widetilde{W}_\\chi $ SFs in terms of the $1/2^+\\rightarrow 1/2^-$ and $1/2^+\\rightarrow 3/2^-$ form-factors derived in this work are general, and they do not apply only to the $\\Lambda _b\\rightarrow \\Lambda _c(2595),\\Lambda _c(2625)$ transitions studied here.", "Actually, using the appropriate numerical values for the form-factors, these $\\widetilde{W}_\\chi $ SFs can be used for any $1/2^+\\rightarrow 1/2^-$ or $1/2^+\\rightarrow 3/2^-$ CC semileptonic decay, driven by a $q \\rightarrow q^{\\prime } \\ell ^- \\bar{\\nu }_\\ell $ transition at the quark level.", "This will allow to systematically analyse NP effects in the charged-lepton unpolarized and polarized differential distributions in all these kind of reactions." 
], [ "Acknowledgements", "N.P.", "thanks Physics and Astronomy at the University of Southampton for hospitality while this work was completed and a Generalitat Valenciana grant CIBEFP/2021/32.", "This research has been supported by the Spanish Ministerio de Ciencia e Innovación (MICINN) and the European Regional Development Fund (ERDF) under contracts PID2020-112777GB-I00 and PID2019-105439GB-C22, the EU STRONG-2020 project under the program H2020-INFRAIA-2018-1, grant agreement no.", "824093 and by Generalitat Valenciana under contract PROMETEO/2020/023." ], [ "Form-factors", "We parameterize the matrix elements of the different $b\\rightarrow c$ transition operators for the $\\Lambda _b\\rightarrow \\Lambda ^*_c(2595)$ decay in such a way that we can make use of the expressions obtained in Refs.", "[64], [70] for $1/2^+\\rightarrow 1/2^+$ transitions with a minimum of changes.", "To that end, we use the form factor decompositionsThe form factors defined in this work are related to those in Ref.", "[75] by identifying $G_i=d_{V_i}$ , $F_i=d_{A_i}$ , $F_P=d_S$ , $F_S=d_P$ , and $T_i=d_{T_i}$ .", "$\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\gamma ^\\alpha b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&\\bar{u}_{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\left(G_1\\gamma ^{\\alpha }+G_2\\frac{p^{\\alpha }}{M}+G_3\\frac{p^{\\prime \\alpha }}{M^{\\prime }}\\right)\\gamma _5u_{\\Lambda _b,r}(\\vec{p}\\,),\\\\\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\gamma ^\\alpha \\gamma _5 b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&\\bar{u}_{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\left(F_1\\gamma ^{\\alpha }+F_2\\frac{p^{\\alpha }}{M}+F_3\\frac{p^{\\prime \\alpha }}{M^{\\prime }}\\right)u_{\\Lambda _b,r}(\\vec{p}\\,),\\\\\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0) b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&F_P\\,\\bar{u}_{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\gamma _5u_{\\Lambda _b,r}(\\vec{p}\\,),\\\\\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\gamma _5 b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&F_S\\,\\bar{u}_{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })u_{\\Lambda _b,r}(\\vec{p}\\,),\\\\\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\sigma ^{\\alpha \\beta }\\gamma _5 b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&\\bar{u}_{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\Big [i \\frac{ T_1}{M^2}(p^\\alpha p^{\\prime \\beta }-p^\\beta p^{\\prime \\alpha })+i \\frac{ T_2}{M}(\\gamma ^\\alpha p^\\beta -\\gamma ^\\beta p^\\alpha )\\nonumber \\\\&&\\hspace{49.79231pt}+i \\frac{ T_3}{M}(\\gamma ^\\alpha p^{\\prime \\beta }-\\gamma ^\\beta p^{\\prime \\alpha })+ T_4 \\sigma ^{\\alpha \\beta }\\Big ]u_{\\Lambda _b,r}(\\vec{p}\\,) \\\\\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\sigma ^{\\alpha \\beta } b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&\\bar{u}_{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\epsilon ^{\\alpha \\beta }_{\\ \\ \\ \\rho \\lambda }\\Big [\\frac{ T_1}{M^2}p^\\rho p^{\\prime \\lambda }+\\frac{ T_2}{M}\\gamma ^\\rho p^\\lambda \\nonumber \\\\&&\\hspace{71.13188pt}+ \\frac{ T_3}{M}\\gamma ^\\rho p^{\\prime \\lambda }+ \\frac{1}{2} T_4 \\gamma ^{\\rho }\\gamma ^{\\lambda }\\Big ]u_{\\Lambda _b,r}(\\vec{p}\\,)$ where $p$ and $p^{\\prime }$ ($M$ and $M^{\\prime }$ ) are the four-momenta (masses) of the $\\Lambda _b$ and $\\Lambda ^*_c$ baryons, respectively, $u_{\\Lambda _b,\\Lambda ^*_c}$ are Dirac spinors, and we 
have made use of $\\sigma ^{\\alpha \\beta }\\gamma _5=-\\frac{i}{2}\\epsilon ^{\\alpha \\beta }_{\\ \\ \\ \\rho \\lambda }\\sigma ^{\\rho \\lambda }$ .", "The form-factors are Lorentz scalar functions of $q^2$ or equivalently of $\\omega $ , the product of the four-velocities of the initial and final hadrons.", "The form-factors used in this work are related to the helicity ones evaluated in the LQCD simulation of Refs.", "[72], [73] by $&&G_1=-f^{(\\frac{1}{2}^-)}_\\perp \\nonumber \\\\&&G_2=M\\Big (f^{(\\frac{1}{2}^-)}_0\\frac{M+M^{\\prime }}{q^2}+f^{(\\frac{1}{2}^-)}_+\\frac{M-M^{\\prime }}{s_-}(1-\\frac{M^2-M^{\\prime 2}}{q^2})+f^{(\\frac{1}{2}^-)}_\\perp \\frac{2M^{\\prime }}{s_-}\\Big )\\nonumber \\\\&&G_3=M^{\\prime }\\Big (-f^{(\\frac{1}{2}^-)}_0\\frac{M+M^{\\prime }}{q^2}+f^{(\\frac{1}{2}^-)}_+\\frac{M-M^{\\prime }}{s_-}(1+\\frac{M^2-M^{\\prime 2}}{q^2})-f^{(\\frac{1}{2}^-)}_\\perp \\frac{2M}{s_-}\\Big )\\\\&&F_1=-g^{(\\frac{1}{2}^-)}_\\perp \\nonumber \\\\&&F_2=M\\Big (-g^{(\\frac{1}{2}^-)}_0\\frac{M-M^{\\prime }}{q^2}-g^{(\\frac{1}{2}^-)}_+\\frac{M+M^{\\prime }}{s_+}(1-\\frac{M^2-M^{\\prime 2}}{q^2})+g^{(\\frac{1}{2}^-)}_\\perp \\frac{2M^{\\prime }}{s_+}\\Big )\\nonumber \\\\&&F_3=M^{\\prime }\\Big (g^{(\\frac{1}{2}^-)}_0\\frac{M-M^{\\prime }}{q^2}-g^{(\\frac{1}{2}^-)}_+\\frac{M+M^{\\prime }}{s_+}(1+\\frac{M^2-M^{\\prime 2}}{q^2})+g^{(\\frac{1}{2}^-)}_\\perp \\frac{2M}{s_+}\\Big )\\\\&&T_1=\\frac{2M^2}{s_+}[h^{(\\frac{1}{2}^-)}_+-\\tilde{h}^{(\\frac{1}{2}^-)}_+-\\frac{s_+(M-M^{\\prime })^2}{q^2s_-}(h^{(\\frac{1}{2}^-)}_\\perp -h^{(\\frac{1}{2}^-)}_+)+\\frac{(M+M^{\\prime })^2}{q^2}(\\tilde{h}^{(\\frac{1}{2}^-)}_\\perp -h^{(\\frac{1}{2}^-)}_+)]\\nonumber \\\\&&T_2=-\\frac{2Mq\\cdot p^{\\prime }}{q^2s_-}(h^{(\\frac{1}{2}^-)}_\\perp -h^{(\\frac{1}{2}^-)}_+)(M-M^{\\prime })+\\frac{M}{q^2}(\\tilde{h}^{(\\frac{1}{2}^-)}_\\perp -h^{(\\frac{1}{2}^-)}_+)(M+M^{\\prime })\\nonumber \\\\&&T_3=\\frac{2Mq\\cdot p}{q^2s_-}(h^{(\\frac{1}{2}^-)}_\\perp -h^{(\\frac{1}{2}^-)}_+)(M-M^{\\prime })-\\frac{M}{q^2}(\\tilde{h}^{(\\frac{1}{2}^-)}_\\perp -h^{(\\frac{1}{2}^-)}_+)(M+M^{\\prime })\\nonumber \\\\&&T_4=h^{(\\frac{1}{2}^-)}_+$ where $s_\\pm =(M\\pm M^{\\prime })^2-q^2=2p\\cdot p^{\\prime }\\pm 2MM^{\\prime }=2MM^{\\prime }(\\omega \\pm 1)$ .", "Finally, thanks to the equations of motion of the heavy-quarks, one can relate $F_P$ and $F_S$ to the vector and axial form factors as $F_P&=&\\frac{1}{m_b-m_c}[-(M+M^{\\prime })G_1+(M-M^{\\prime }\\omega )G_2+(M\\omega -M^{\\prime })G_3],\\nonumber \\\\F_S&=&-\\frac{1}{m_b+m_c}[(M-M^{\\prime })F_1+(M-M^{\\prime }\\omega )F_2+(M\\omega -M^{\\prime })F_3],$ with $m_b$ and $m_c$ the masses of the $b$ and $c$ quarks respectively." 
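As a simple numerical cross-check of the equations-of-motion relations just quoted, the following sketch evaluates $F_P$ and $F_S$ from given values of $G_{1,2,3}$ and $F_{1,2,3}$ at a chosen $\omega $ . The baryon masses used are the physical $\Lambda _b$ and $\Lambda ^*_c(2595)$ masses, while the quark masses and form-factor values shown are illustrative placeholders rather than the inputs actually used in this work.

```python
def fp_fs_from_eom(omega, M, Mp, mb, mc, G1, G2, G3, F1, F2, F3):
    """Equations-of-motion relations for the 1/2+ -> 1/2- transition:
    F_P = [-(M+M')G1 + (M - M'*w)G2 + (M*w - M')G3] / (m_b - m_c)
    F_S = -[(M-M')F1 + (M - M'*w)F2 + (M*w - M')F3] / (m_b + m_c)
    """
    fp = (-(M + Mp) * G1 + (M - Mp * omega) * G2 + (M * omega - Mp) * G3) / (mb - mc)
    fs = -((M - Mp) * F1 + (M - Mp * omega) * F2 + (M * omega - Mp) * F3) / (mb + mc)
    return fp, fs

# Placeholder inputs (masses in GeV; form-factor values invented for illustration)
print(fp_fs_from_eom(omega=1.05, M=5.6196, Mp=2.5923,
                     mb=4.18, mc=1.27,
                     G1=0.7, G2=-0.3, G3=0.1, F1=0.6, F2=-0.2, F3=0.05))
```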
], [ "Hadron tensors and $\\widetilde{W}_\\chi $ SFs", "In Eqs.", "(REF )-(), we have interchanged the form factor decomposition of the $\\bar{c}(0){ O}^{(\\alpha \\beta )}b(0)$ and $\\bar{c}(0){ O}^{(\\alpha \\beta )}\\gamma _5b(0)$ matrix elements, with ${O}^{(\\alpha \\beta )}=I,\\gamma ^\\alpha ,\\sigma ^{\\alpha \\beta }$ , with respect to the ones used in Refs.", "[64], [70] for the $1/2^+\\rightarrow 1/2^+$ case due to the opposite parity here of the final charmed baryon.", "In this way, when comparing the vector ($J_{HVrr^{\\prime }}^\\alpha $ ), axial ($J_{HArr^{\\prime }}^\\alpha $ ), scalar ($J_{HSrr^{\\prime }}$ ), pseudoscalar ($J_{HPrr^{\\prime }}$ ), tensor ($J_{HTrr^{\\prime }}^{\\alpha \\beta }$ ) and pseudotensor ($J_{HpTrr^{\\prime }}^{\\alpha \\beta }$ ) hadronic matrix elements here with those for the $1/2^+\\rightarrow 1/2^+$ transition, and apart from the obvious differences in the actual values of the form factors, we only have to implement the following changes $J^\\alpha _{Hrr^{\\prime }\\chi }=C^V_\\chi J_{HVrr^{\\prime }}^\\alpha +h_\\chi C^A_\\chi J_{HArr^{\\prime }}^\\alpha &\\rightarrow &C^V_\\chi J_{HArr^{\\prime }}^\\alpha +h_\\chi C^A_\\chi J_{HVrr^{\\prime }}^\\alpha =h_\\chi [C^A_\\chi J_{HVrr^{\\prime }}^\\alpha +h_\\chi C^V_\\chi J_{HArr^{\\prime }}^\\alpha ],\\nonumber \\\\J_{Hrr^{\\prime }\\chi }=\\, C^S_\\chi J_{HSrr^{\\prime }}+h_\\chi C^P_\\chi J_{HPrr^{\\prime }}&\\rightarrow & C^S_\\chi J_{HPrr^{\\prime }}+h_\\chi C^P_\\chi J_{HSrr^{\\prime }}\\,=h_\\chi [C^P_\\chi J_{HSrr^{\\prime }}+h_\\chi C^S_\\chi J_{HPrr^{\\prime }}],\\nonumber \\\\J^{\\alpha \\beta }_{Hrr^{\\prime }\\chi }=\\ C^T_\\chi (J_{HTrr^{\\prime }}^{\\alpha \\beta }+h_\\chi J_{HpTrr^{\\prime }}^{\\alpha \\beta })&\\rightarrow &C^T_\\chi (J_{HpTrr^{\\prime }}^{\\alpha \\beta }+h_\\chi J_{HTrr^{\\prime }}^{\\alpha \\beta })\\ =h_\\chi [ C^T_\\chi (J_{HTrr^{\\prime }}^{\\alpha \\beta }+h_\\chi J_{HpTrr^{\\prime }}^{\\alpha \\beta })].\\nonumber \\\\$ Since there is no left-right interference for massless neutrinos and all hadronic tensors are quadratic in the Wilson coefficients, the global factor $h_\\chi $ is irrelevant and, to get the $\\widetilde{W}_\\chi $ SFs for the $1/2^+\\rightarrow 1/2^-$ decay, it suffices to do the changes $C^V_\\chi \\longleftrightarrow C^A_\\chi \\ \\ ,\\ \\ C^S_\\chi \\longleftrightarrow C^P_\\chi $ in our original expressions of Appendix C of Ref. [70].", "In addition, the genuine hadron $W_{i=1,2,4,5}^{VV,AA}$ , $W_{i=3}^{VA}$ , $W_{1,2,3,4,5}^T$ , $W_{S}$ , $W_{P}$ , $W_{I1,I2}^{VS,AP}$ , $W_{I3}^{ST,PpT}$ and $W_{I4,I5,I6,I7}^{VT,ApT}$ SFs, which are independent of the Wilson coefficients, can be read out from Eqs.", "(E3)-(E5) of Ref.", "[64] obtained for the $\\Lambda _b\\rightarrow \\Lambda _c$ ($1/2^+\\rightarrow 1/2^+$ ) transition." 
], [ "Form-factors", "For the $\\Lambda _b\\rightarrow \\Lambda ^*_c(2625)$ decay, we use the following form factor decompositionsThe form-factors below are related to those in Ref.", "[75] as $F_i^{V,A,T}=l_{V,A,T_i}$ and $F_{S,P}^{(3/2)}=l_{S,P}$ .", "$\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\gamma ^\\alpha b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&\\bar{u}^\\mu _{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\Big [\\frac{F_1^V}{M}p_\\mu \\gamma ^\\alpha + \\frac{F^V_2}{M^2} p_\\mu p^{\\alpha }+\\frac{F_3^V}{MM^{\\prime }} p_\\mu p^{\\prime \\alpha }\\nonumber \\\\&&\\hspace{49.79231pt}+ F_4^V g_\\mu ^{\\ \\alpha }\\Big ]u_{\\Lambda _b,r}(\\vec{p}\\,),$ $\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\gamma ^\\alpha b(0)|\\Lambda _b;\\vec{p},r\\rangle &=&\\bar{u}^\\mu _{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\Big [ \\frac{F_1^A}{M}p_\\mu \\gamma ^\\alpha + \\frac{F_2^A}{M^2} p_\\mu p^{\\alpha }+\\frac{F_3^A}{MM^{\\prime }} p_\\mu p^{\\prime \\alpha }\\nonumber \\\\&&\\hspace{49.79231pt}+ F_4^A g_\\mu ^{\\ \\alpha }\\Big ]\\gamma _5u_{\\Lambda _b,r}(\\vec{p}\\,) ,$ $\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0) b(0)|\\Lambda _b;\\vec{p},r\\rangle =\\bar{u}^\\mu _{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })p_\\mu \\frac{F_S^{(3/2)}}{M}u_{\\Lambda _b,r}(\\vec{p}\\,) , $ $\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\gamma _5 b(0)|\\Lambda _b;\\vec{p},r\\rangle =\\bar{u}^\\mu _{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })p_\\mu \\frac{F_P^{(3/2)}}{M}\\gamma _5u_{\\Lambda _b,r}(\\vec{p}\\,) ,$ $&&\\hspace{-28.45274pt}\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\sigma ^{\\alpha \\beta } b(0)|\\Lambda _b;\\vec{p},r\\rangle =\\bar{u}^\\mu _{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\Big [i\\frac{F^T_1}{M^3} p_{\\mu }(p^\\alpha p^{\\prime \\beta }-p^\\beta p^{\\prime \\alpha })+i\\frac{F^T_2}{M^2} p_{\\mu }(\\gamma ^\\alpha p^{\\beta }-\\gamma ^\\beta p^{\\alpha })\\nonumber \\\\&&\\hspace{177.82971pt}+ i\\frac{F^T_3}{M^2} p_{\\mu }(\\gamma ^\\mu p^{\\prime \\nu }-\\gamma ^\\beta p^{\\prime \\alpha })+\\frac{F^T_4}{M} p_{\\mu }\\sigma ^{\\alpha \\beta }\\nonumber \\\\&&\\hspace{177.82971pt}+iF^T_5(g_\\mu ^{\\ \\alpha }\\gamma ^\\beta -g_\\mu ^{\\ \\beta }\\gamma ^\\alpha )+i\\frac{F^T_6}{M}(g_\\mu ^{\\ \\alpha }p^\\beta -g_\\mu ^{\\ \\beta }p^\\alpha )\\nonumber \\\\&&\\hspace{177.82971pt}+i\\frac{F^T_7}{M}(g_\\mu ^{\\ \\alpha }p^{\\prime \\beta }-g_\\mu ^{\\ \\beta }p^{\\prime \\alpha })\\Big ]u_{\\Lambda _b,r}(\\vec{p}\\,)\\\\&&\\hspace{-28.45274pt}\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\sigma ^{\\alpha \\beta }\\gamma _5 b(0)|\\Lambda _b;\\vec{p},r\\rangle =\\bar{u}^\\mu _{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\Big [\\frac{F^T_1}{M^3} p_{\\mu }\\epsilon ^{\\alpha \\beta }_{\\ \\ \\rho \\lambda }p^\\rho p^{\\prime \\lambda }+\\frac{F^T_2}{M^2} p_{\\mu }\\epsilon ^{\\alpha \\beta }_{\\ \\ \\rho \\lambda }\\gamma ^\\rho p^{\\lambda }\\nonumber \\\\&&\\hspace{192.05609pt}+\\frac{F^T_3}{M^2} p_{\\mu }\\epsilon ^{\\alpha \\beta }_{\\ \\ \\rho \\lambda } \\gamma ^\\rho p^{\\prime \\lambda }-i\\frac{F^T_4}{M} p_{\\mu }\\frac{1}{2}\\epsilon ^{\\alpha \\beta }_{\\ \\ \\rho \\lambda }\\sigma ^{\\rho \\lambda }\\nonumber \\\\&&\\hspace{192.05609pt}+ F^T_5\\epsilon ^{\\alpha \\beta }_{\\ \\ \\ \\mu \\lambda }\\gamma ^\\lambda +\\frac{F^T_6}{M}\\epsilon ^{\\alpha \\beta }_{\\ \\ \\ \\mu 
\\lambda }\\,p^\\lambda \\nonumber \\\\&&\\hspace{192.05609pt}+\\frac{F^T_7}{M}\\epsilon ^{\\alpha \\beta }_{\\ \\ \\ \\mu \\lambda }\\,p^{\\prime \\lambda }\\Big ]u_{\\Lambda _b,r}(\\vec{p}\\,).$ Here, $u^\\mu (\\vec{p}\\,^{\\prime })$ is the Rarita-Schwinger spinor satisfying ${p}^{\\prime }u^\\mu (p^{\\prime })=M^{\\prime }u^\\mu (p^{\\prime })$ and the orthogonality conditions $\\gamma _\\mu u^\\mu (p^{\\prime })=p^{\\prime }_{\\mu } u^\\mu (p^{\\prime })=0$ .", "Using these relations, together with ${p}u(\\vec{p}\\,)=Mu(\\vec{p}\\,)$ and the identity $\\epsilon ^{\\alpha \\mu \\nu \\lambda }=-\\gamma _5(-i\\sigma ^{\\alpha \\mu }\\sigma ^{\\nu \\lambda }-ig^{\\alpha \\lambda }g^{\\mu \\nu }+ig^{\\alpha \\nu }g^{\\mu \\lambda }+g^{\\alpha \\nu }\\sigma ^{\\mu \\lambda }+g^{\\mu \\lambda }\\sigma ^{\\alpha \\nu }-g^{\\alpha \\lambda }\\sigma ^{\\mu \\nu }-g^{\\mu \\nu }\\sigma ^{\\alpha \\lambda })$ one can rewrite Eq.", "() as $&&\\hspace{-28.45274pt}\\langle \\Lambda ^*_c;\\vec{p}\\,^{\\prime },r^{\\prime }|\\bar{c}(0)\\sigma ^{\\alpha \\beta }\\gamma _5 b(0)|\\Lambda _b;\\vec{p},r\\rangle \\nonumber \\\\&&=\\bar{u}^\\mu _{\\Lambda ^*_c,r^{\\prime }}(\\vec{p}\\,^{\\prime })\\gamma _5\\Big [- ip_{\\mu }(p^\\alpha p^{\\prime \\beta }-p^\\beta p^{\\prime \\alpha })\\frac{F^T_1}{M^3}+ip_{\\mu }(\\gamma ^\\alpha p^{\\beta }-\\gamma ^\\beta p^{\\alpha })\\frac{M^{\\prime }F^T_1-MF^T_2}{M^3}\\nonumber \\\\&&\\hspace{71.13188pt}+ip_{\\mu }(\\gamma ^\\alpha p^{\\prime \\beta }-\\gamma ^\\beta p^{\\prime \\alpha })\\frac{F^T_1+F^T_3}{M^2}\\nonumber \\\\&&\\hspace{71.13188pt}+ p_{\\mu }\\sigma ^{\\alpha \\beta }[\\frac{F^T_6}{M}-\\frac{F^T_1}{M^3}(p\\cdot p^{\\prime }+MM^{\\prime })+\\frac{F^T_2}{M}-\\frac{M^{\\prime }}{M^2}F^T_3+\\frac{F^T_4}{M}]\\nonumber \\\\&&\\hspace{71.13188pt}+i(g_\\mu ^{\\ \\alpha }\\gamma ^\\beta -g_\\mu ^{\\ \\beta }\\gamma ^\\alpha )(F^T_5+F^T_6+\\frac{M^{\\prime }}{M}F^T_7)-i(g_\\mu ^{\\ \\alpha }p^\\beta -g_\\mu ^{\\ \\beta }p^\\alpha )\\frac{F^T_6}{M}\\nonumber \\\\&&\\hspace{71.13188pt}+i(g_\\mu ^{\\ \\alpha }p^{\\prime \\beta }-g_\\mu ^{\\ \\beta }p^{\\prime \\alpha })\\frac{F^T_7}{M}\\Big ]u_{\\Lambda _b,r}(\\vec{p}\\,)$ which will be the form used in the rest of the appendix.", "The vector and axial form factors used in this work are related to the helicity ones computed in Refs.", "[72], [73] via $&&F_1^V=(f_\\perp ^{(\\frac{3}{2}^-)}+f_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)})\\frac{MM^{\\prime }}{s_-},\\nonumber \\\\&&F_2^V=M^2\\Big [f_0^{(\\frac{3}{2}^-)}\\frac{M^{\\prime }}{s_+}\\frac{(M-M^{\\prime })}{q^2}+f_+^{(\\frac{3}{2}^-)}\\frac{M^{\\prime }}{s_-}\\frac{(M+M^{\\prime })[q^2-(M^2-M^{\\prime 2})]}{q^2s_+}\\nonumber \\\\&&\\hspace{56.9055pt}-(f_\\perp ^{(\\frac{3}{2}^-)}-f_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)})\\frac{2M^{\\prime 2}}{s_-s_+}\\Big ]\\nonumber \\\\&&F_3^V=M^{\\prime 2}\\Big [-f_0^{(\\frac{3}{2}^-)}\\frac{M}{s_+}\\frac{(M-M^{\\prime })}{q^2}+f_+^{(\\frac{3}{2}^-)}\\frac{M}{s_-}\\frac{(M+M^{\\prime })[q^2+(M^2-M^{\\prime 2})]}{q^2s_+}\\nonumber \\\\&&\\hspace{56.9055pt}-[f_\\perp ^{(\\frac{3}{2}^-)}-f_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)}(1-\\frac{s_+}{MM^{\\prime }})]\\frac{2M^{2}}{s_-s_+}\\Big ],\\nonumber \\\\&&F_4^V=f_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)}.$ $&&F_1^A=(g_\\perp ^{(\\frac{3}{2}^-)}+g_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)})\\frac{MM^{\\prime }}{s_+},\\nonumber \\\\&&F_2^A=M^2\\Big [-g_0^{(\\frac{3}{2}^-)}\\frac{M^{\\prime }}{s_-}\\frac{(M+M^{\\prime })}{q^2}-g_+^{(\\frac{3}{2}^-)}\\frac{M^{\\prime 
}}{s_+}\\frac{(M-M^{\\prime })[q^2-(M^2-M^{\\prime 2})]}{q^2s_-}\\nonumber \\\\&&\\hspace{56.9055pt}-(g_\\perp ^{(\\frac{3}{2}^-)}-g_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)})\\frac{2M^{\\prime 2}}{s_-s_+}\\Big ]\\nonumber \\\\&&F_3^A=M^{\\prime 2}\\Big [g_0^{(\\frac{3}{2}^-)}\\frac{M}{s_-}\\frac{(M+M^{\\prime })}{q^2}-g_+^{(\\frac{3}{2}^-)}\\frac{M}{s_+}\\frac{(M-M^{\\prime })[q^2+(M^2-M^{\\prime 2})]}{q^2s_-}\\nonumber \\\\&&\\hspace{56.9055pt}+[g_\\perp ^{(\\frac{3}{2}^-)}-g_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)}(1+\\frac{s_-}{MM^{\\prime }})]\\frac{2M^{2}}{s_-s_+}\\Big ],\\nonumber \\\\&&F_4^A=g_{\\perp ^{\\prime }}^{(\\frac{3}{2}^-)}.$ Moreover, using the equations of motion, one can relate $F_S^{(3/2)}$ and $F_P^{(3/2)}$ to the vector and axial form factors through $F_S^{(3/2)}&=&\\frac{1}{m_b-m_c}[(M-M^{\\prime })F_1^V+(M-M^{\\prime }\\omega )F_2^V+(M\\omega -M^{\\prime })F_3^V+MF_4^V],\\nonumber \\\\F_P^{(3/2)}&=&-\\frac{1}{m_b+m_c}[-(M+M^{\\prime })F_1^A+(M-M^{\\prime }\\omega )F_2^A+(M\\omega -M^{\\prime })F_3^A+MF_4^A].$ In the case of the matrix elements of the tensor operators, although there are seven different structures of the tensor (or pseudotensor) form, one of them can be removed without any loss of generality.", "As shown in Ref.", "[83], there is a combination of these structures that does not enter the physical amplitude.", "The argument goes as follows.", "Let us consider the contraction of the matrix element $J^{\\alpha \\beta }_{HTrr^{\\prime }}(p,q)$ of the tensor operator $\\bar{c}(0)\\sigma ^{\\alpha \\beta }b(0)$ with a general tensor $ F_{\\alpha \\beta }$ .", "One would then have $J^{\\alpha \\beta }_{HTrr^{\\prime }} (p,q) F_{\\alpha \\beta }&=&J^{\\alpha \\beta }_{HTrr^{\\prime }} (p,q)\\,g^{\\ \\alpha ^{\\prime }}_{\\alpha }g^{\\ \\beta ^{\\prime }}_{\\beta } F_{\\alpha ^{\\prime }\\beta ^{\\prime }}=J^{\\alpha \\beta }_{HTrr^{\\prime }} (p,q)\\, g_{rr}\\epsilon ^{\\alpha ^{\\prime }*}_r\\epsilon _{r\\alpha }\\,g_{ss}\\epsilon ^{\\beta ^{\\prime }*}_s \\epsilon _{s\\beta }F_{\\alpha ^{\\prime }\\beta ^{\\prime }},$ with $\\epsilon _{r=0,\\pm 1}$ the usual polarization vectors of a vector particle with four-momentum $q$ and invariant mass $\\sqrt{q^2}$ , $\\epsilon _{r=t}=\\frac{q}{\\sqrt{q^2}}$ and $-g_{tt}=g_{00}=g_{\\pm 1\\pm 1}=-1$ .", "Since $J^{\\alpha \\beta }_{HTrr^{\\prime }} (p,q)$ is antisymmetric in the $\\alpha ,\\,\\beta $ indexes, one has $J^{\\alpha \\beta }_{HTrr^{\\prime }}(p,q)\\epsilon _{r\\alpha }\\epsilon _{s\\beta }=J^{\\alpha \\beta }_{HTrr^{\\prime }}(p,q)\\frac{1}{2}(\\epsilon _{r\\alpha }\\epsilon _{s\\beta }-\\epsilon _{r\\beta }\\epsilon _{s\\alpha })$ and then only six different products that correspond to the values $(r,s)=\\lbrace (t,0),\\,(t,-1),\\,(t,+1)$ , $(0,-1),\\,(0,+1),\\,(-1,+1)\\rbrace $ could appear.", "One can find $\\lambda _{1-7}(q)$ scalar functions such that the linear combination $&&\\hspace{-14.22636pt}{\\Lambda }^{\\alpha \\beta }_{HTrr^{\\prime }} (p,q,\\vec{\\lambda })=\\bar{u}_{r^{\\prime }\\mu }(\\vec{p}\\,^{\\prime })\\Big [\\frac{\\lambda _1}{M^3} p^{\\mu }(p^\\alpha p^{\\prime \\beta }-p^\\beta p^{\\prime \\alpha })+\\frac{\\lambda _2}{M^2} p^{\\mu }(\\gamma ^\\alpha p^{\\beta }-\\gamma ^\\beta p^{\\alpha })+ \\frac{\\lambda _3}{M^2} p^{\\mu }(\\gamma ^\\alpha p^{\\prime \\beta }-\\gamma ^\\beta p^{\\prime \\alpha })\\nonumber \\\\&&\\hspace{106.69783pt}-i\\frac{\\lambda _4}{M}p^{\\mu }\\sigma ^{\\alpha \\beta }+\\lambda _5(g^{\\mu \\alpha }\\gamma ^\\beta -g^{\\mu \\beta }\\gamma ^\\alpha 
)+\\frac{\\lambda _6}{M}(g^{\\mu \\alpha }p^\\beta -g^{\\mu \\beta }p^\\alpha )\\nonumber \\\\&&\\hspace{106.69783pt}+\\frac{\\lambda _7}{M}(g^{\\mu \\alpha }p^{\\prime \\beta }-g^{\\mu \\beta }p^{\\prime \\alpha })\\Big ]u_r(\\vec{p}\\,)$ is orthogonal to the six $\\frac{1}{2}(\\epsilon _{r\\alpha }\\epsilon _{s\\beta }-\\epsilon _{r\\beta }\\epsilon _{s\\alpha })$ anti-symmetric tensorsUsing that $\\epsilon ^{\\mu \\nu \\alpha \\beta }\\epsilon _{0\\alpha }\\epsilon _{t\\beta }&=&i(\\epsilon ^\\mu _{+1}\\epsilon ^\\nu _{-1}-\\epsilon ^\\nu _{+1}\\epsilon ^\\mu _{-1}),\\\\\\epsilon ^{\\mu \\nu \\alpha \\beta }\\epsilon _{\\pm 1\\alpha }\\epsilon _{t\\beta }&=&\\pm i(\\epsilon ^\\mu _{\\pm 1}\\epsilon ^\\nu _{0}-\\epsilon ^\\nu _{\\pm 1}\\epsilon ^\\mu _{0}),$ it is enough to ask for the orthogonality of both ${\\Lambda }^{\\alpha \\beta }_{HTrr^{\\prime }}$ and ${\\Lambda }^{\\alpha \\beta }_{HpTrr^{\\prime }}=-\\frac{i}{2}\\epsilon ^{\\alpha \\beta }_{\\ \\ \\ \\rho \\lambda }{\\Lambda }^{\\rho \\lambda }_{HTrr^{\\prime }}$ to the combinations $\\epsilon _{0}\\epsilon _t$ and $\\epsilon _{\\pm 1}\\epsilon _t$ .", ".", "A choice of such functions is given in Ref.", "[83] as $\\vec{\\lambda }=\\Lambda \\left(0,0,\\frac{M}{M^{\\prime }},1,(\\omega +1),-1,-\\frac{M}{M^{\\prime }}\\right)$ where $\\Lambda $ is an arbitrary scalar function of $q^2$ .", "Thus, no physical observable changes if one modifies $F^T_3&\\rightarrow & F^{T^{\\prime }}_3=F^T_3+\\Lambda \\frac{M}{M^{\\prime }},\\nonumber \\\\F^T_4&\\rightarrow & F^{T^{\\prime }}_4=F^T_3+\\Lambda ,\\nonumber \\\\F^T_5&\\rightarrow & F^{T^{\\prime }}_5=F^T_5+\\Lambda (\\omega +1),\\nonumber \\\\F^T_6&\\rightarrow & F^{T^{\\prime }}_6=F^T_6-\\Lambda ,\\nonumber \\\\F^T_7&\\rightarrow & F^{T^{\\prime }}_7=F^T_7-\\Lambda \\frac{M}{M^{\\prime }},$ Thus, $\\Lambda $ can be chosen so as to cancel one of the above form factors.", "For simplicity we omit the prime in what follows and take $F^T_7=0$ .", "Then one has the following relations between the tensor form factors here and the ones defined and evaluated in the LQCD simulation of Refs.", "[72], [73] $F^T_1&=&-\\frac{2M^3M^{\\prime }}{s_+s_-} (h^{(\\frac{3}{2}^-)}_{+}-\\tilde{h}^{(\\frac{3}{2}^-)}_{+})-\\frac{2M^3M^{\\prime }(M-M^{\\prime })^2}{s_+s_-q^2}\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp }+\\frac{2M^3(M-M^{\\prime })(M^2-MM^{\\prime }-q^2)}{s_+s_-q^2}\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }}\\nonumber \\\\&&+\\frac{2M^3M^{\\prime }(M+M^{\\prime })^2}{s_+s_-q^2} h^{(\\frac{3}{2}^-)}_{\\perp }+\\frac{2M^3(M+M^{\\prime })(M^2+MM^{\\prime }-q^2)}{s_+s_-q^2} h^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }},\\nonumber \\\\F^T_2&=&\\frac{2M^2M^{\\prime 2}}{s_+s_-}\\tilde{h}^{(\\frac{3}{2}^-)}_{+}-\\frac{M^2M^{\\prime }(M-M^{\\prime })(M^2-M^{\\prime 2}-q^2)}{s_+s_-q^2}(\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp }-\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }})\\nonumber \\\\&&+\\frac{M^2M^{\\prime }(M+M^{\\prime })}{s_-q^2}(h^{(\\frac{3}{2}^-)}_{\\perp }+ h^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }}),\\nonumber \\\\F^T_3&=&-\\frac{2M^3M^{\\prime }}{s_+s_-}\\tilde{h}^{(\\frac{3}{2}^-)}_{+}+\\frac{M^2M^{\\prime }(M-M^{\\prime })(M^2-M^{\\prime 2}+q^2)}{s_+s_-q^2}\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp }\\nonumber \\\\&&-\\frac{M(M-M^{\\prime })(M^2+M^{\\prime 2}-MM^{\\prime }-q^2)(M^2-M^{\\prime 2}+q^2)}{s_+s_-q^2}\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }}-\\frac{M^2M^{\\prime }(M+M^{\\prime })}{s_-q^2}h^{(\\frac{3}{2}^-)}_{\\perp }\\nonumber \\\\&&-\\frac{M(M^3+M^{\\prime 3}-q^2(M+M^{\\prime 
}))}{s_-q^2}h^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }},\\nonumber \\\\F^T_4&=&\\frac{MM^{\\prime }}{s_+}\\tilde{h}^{(\\frac{3}{2}^-)}_{+}-\\frac{M^{\\prime }(M+M^{\\prime })}{q^2}h^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }}-\\frac{M^{\\prime }(M-M^{\\prime })(M^2-M^{\\prime 2}+q^2)}{q^2s_+}\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }},\\nonumber \\\\F^T_5&=&-\\frac{1}{2M q^2}\\Big [h^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }} (M+M^{\\prime }) s_++\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }}(M-M^{\\prime })(M^2-M^{\\prime 2}+q^2) \\Big ],\\nonumber \\\\F^T_6&=&\\frac{1}{q^2}\\Big [h^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }}(M+M^{\\prime })^2+\\tilde{h}^{(\\frac{3}{2}^-)}_{\\perp ^{\\prime }}(M-M^{\\prime })^2\\Big ].$" ], [ "Hadron tensors and $\\widetilde{W}_\\chi $ SFs", "As already mentioned, in Refs.", "[70], [64] we derived general expressions for the hadronic tensors that are valid for any CC transition, the differences being encoded in the actual values of the $\\widetilde{W}_\\chi $ SFs.", "In this case, writing $J^{\\mu \\,(\\alpha \\beta )}_{H\\chi rr^{\\prime }}= \\bar{u}_{r^{\\prime }\\mu }(\\vec{p}\\,^{\\prime })\\Gamma ^{\\mu \\,(\\alpha \\beta )}_{H\\chi }u_{r}(\\vec{p}\\,^{\\prime }),$ we have that the hadronic tensors are given by the traces $W_\\chi ^{(\\alpha \\beta )(\\rho \\lambda ))}=\\frac{1}{2}{\\rm Tr}\\Big [\\Big (\\sum _{r^{\\prime }}u_{r^{\\prime }\\nu }(\\vec{p}\\,^{\\prime })\\bar{u}_{r^{\\prime }\\mu }(\\vec{p}\\,^{\\prime })\\Big )\\Gamma ^{\\mu \\,(\\alpha \\beta )}_{H\\chi } \\Big (\\sum _ru_r(\\vec{p}\\,)\\bar{u}_r(\\vec{p}\\,)\\Big )\\gamma ^0\\Gamma ^{\\nu \\,(\\rho \\lambda )\\dagger }_{H\\chi }\\gamma ^0\\Big ],$ with $\\sum _ru_r(\\vec{p}\\,)\\bar{u}_r(\\vec{p}\\,)&=&({p}+M),\\\\\\sum _{r^{\\prime }}u_{r^{\\prime }\\nu }(\\vec{p}\\,^{\\prime })\\bar{u}_{r^{\\prime }\\mu }(\\vec{p}\\,^{\\prime })&=&-({p}^{\\prime }+M^{\\prime })\\Big [g_{\\nu \\mu }-\\frac{1}{3}\\gamma _\\nu \\gamma _\\mu -\\frac{2}{3}\\frac{p^{\\prime }_\\nu p^{\\prime }_\\mu }{M^{\\prime 2}}+\\frac{1}{3}\\frac{p^{\\prime }_\\nu \\gamma _\\mu -p^{\\prime }_\\mu \\gamma _\\nu }{M^{\\prime }}\\Big ]$ and $\\Gamma ^{\\mu \\,(\\alpha \\beta )}_{H\\chi }= C^V_\\chi \\Gamma ^{\\mu \\,\\alpha }_{HV}+h_\\chi C^A_\\chi \\Gamma ^{\\mu \\alpha }_{HA},\\ C^S_\\chi \\Gamma ^\\mu _{HS}+h_\\chi C^P_\\chi \\Gamma ^\\mu _{HP},\\ C^T_\\chi (\\Gamma ^{\\mu \\,\\alpha \\beta }_{HT}+h_\\chi \\Gamma ^{\\mu \\,\\alpha \\beta }_{HpT}),$ where the $\\Gamma $ 's can be easily read out from Eqs.", "(REF )-().", "From a direct comparison of the results for those traces with the general form of the different $W^{(\\alpha \\beta )(\\rho \\lambda )}_\\chi $ tensors in Refs.", "[70], [64] we extract the corresponding $1/2^+\\rightarrow 3/2^-$ $\\widetilde{W}_\\chi $ SFs.", "They have been obtained with the use of the FeynCalc package [84], [85], [86] on Mathematica [87] and they are given in terms of the Wilson coefficients and form factors by the following expressionsHere we have kept the explicit dependence on $F^T_7$ .. 
$\\widetilde{W}_{1\\chi }&=&\\frac{1}{3}\\Big \\lbrace \\hspace{7.11317pt} |C^V_\\chi |^2(\\omega +1)\\Big [(F^V_4)^2+(F^V_1)^2(\\omega -1)^2-F^V_1F^V_4(\\omega -1)\\Big ]\\nonumber \\\\&&\\hspace{14.22636pt}+|C^A_\\chi |^2(\\omega -1)\\Big [(G^V_4)^2+(G^V_1)^2(\\omega +1)^2-G^V_1G^V_4(\\omega +1)\\Big ]\\Big \\rbrace , \\\\\\widetilde{W}_{2\\chi }&=&\\frac{1}{3M^{\\prime 2}}\\Big \\lbrace \\ |C^V_\\chi |^2\\Big [2(F^V_1)^2M M^{\\prime }(\\omega ^2-1)+F^V_1[F^V_4(M^2(1+2\\omega )-2MM^{\\prime }-M^{\\prime 2})\\nonumber \\\\&&\\hspace{71.13188pt}+2(M+M^{\\prime })(\\omega ^2-1)(F^V_3 M+F^V_2 M^{\\prime })]+(\\omega +1)[(F^V_4)^2M^2\\nonumber \\\\&&\\hspace{71.13188pt}-2(F^V_3M+F^V_2M^{\\prime })F^V_4(M^{\\prime }-M\\omega )+(\\omega ^2-1)(F^V_3M+F^V_2M^{\\prime })^2] \\Big ]\\nonumber \\\\&&\\hspace{35.56593pt}+|C^A_\\chi |^2\\Big [2(F^A_1)^2M M^{\\prime }(\\omega ^2-1)+F^A_1[F^A_4(M^2(1-2\\omega )+2MM^{\\prime }-M^{\\prime 2})\\nonumber \\\\&&\\hspace{71.13188pt} -2(M-M^{\\prime })(\\omega ^2-1)(F^A_3 M+F^A_2 M^{\\prime })]+(\\omega -1)[(F^A_4)^2M^2\\nonumber \\\\&&\\hspace{71.13188pt}-2(F^A_3 M+F^A_2M^{\\prime })F^A_4(M^{\\prime }-M\\omega )+(\\omega ^2-1)(F^A_3M+F^A_2M^{\\prime })^2] \\Big ]\\Big \\rbrace ,\\nonumber \\\\ \\\\\\widetilde{W}_{3\\chi }&=&-\\frac{2{\\rm Re}(C^V_\\chi C^{A*}_\\chi )M}{3M^{\\prime }}\\Big \\lbrace F^V_4[F^A_1(\\omega +1)+F^A_4]+(\\omega -1)F^V_1[F^A_4-2F^A_1(\\omega +1)]\\Big \\rbrace ,\\\\\\widetilde{W}_{4\\chi }&=& \\frac{M^2}{3M^{\\prime 2}}\\Big \\lbrace \\ |C^V_\\chi |^2\\Big [F^V_1[F^V_4(1+2\\omega )+2F^V_3(\\omega ^2-1)]\\nonumber \\\\&&\\hspace{71.13188pt} +(\\omega +1)[(F^V_4)^2+2F^V_3F^V_4\\omega +(F^V_3)^2(\\omega ^2-1)]\\Big ]\\nonumber \\\\&&\\hspace{31.2982pt}+|C^A_\\chi |^2 \\Big [F^A_1[F^A_4(1-2\\omega )-2F^A_3(\\omega ^2-1)]\\nonumber \\\\&&\\hspace{71.13188pt}+(\\omega -1)[(F^A_4)^2+2F^A_3F^A_4\\omega +(F^A_3)^2(\\omega ^2-1)]\\Big ]\\Big \\rbrace ,\\\\\\widetilde{W}_{5\\chi }&=& \\frac{2M}{3M^{\\prime }}\\Big \\lbrace \\ |C^V_\\chi |^2\\Big [-(\\omega ^2-1){ [}(F^V_1+F^V_2)(F^V_1+F^V_3)+F^V_2F^V_3\\omega ]\\nonumber \\\\&&\\hspace{71.13188pt}+F^V_4[F^V_1+(F^V_3-F^V_2\\omega )(\\omega +1)]\\nonumber \\\\&&\\hspace{71.13188pt}-\\frac{M}{M^{\\prime }}\\lbrace F^V_1[2F^V_3(\\omega ^2-1)+F^V_4(1+2\\omega )]\\nonumber \\\\&&\\hspace{71.13188pt}+(\\omega +1)[(F^V_4)^2+2F^V_3F^V_4\\omega +(F^V_3)^2(\\omega ^2-1)]\\rbrace \\Big ]\\nonumber \\\\&&\\hspace{29.87547pt}+|C^A_\\chi |^2\\Big [-(\\omega ^2-1)[(F^A_1-F^A_2)(F^A_1+F^A_3)+F^A_2F^A_3\\omega ]\\nonumber \\\\&&\\hspace{71.13188pt}-F^A_4[F^A_1 {-} (F^A_3-F^A_2\\omega )(\\omega -1)]\\nonumber \\\\&&\\hspace{71.13188pt}+\\frac{M}{M^{\\prime }}\\lbrace F^A_1[2F^A_3(\\omega ^2-1)+F^A_4(2\\omega -1)]\\nonumber \\\\&&\\hspace{71.13188pt}-(\\omega -1)[(F^A_4)^2+2F^A_3F^A_4\\omega +(F^A_3)^2(\\omega ^2-1)]\\rbrace \\Big ]\\Big \\rbrace ,\\\\\\widetilde{W}_{SP\\chi }&=&\\frac{\\omega ^2-1}{3}\\Big [|C^S_\\chi |^2\\left(F^{(3/2)}_S\\right)^2(\\omega +1)+|C^P_\\chi |^2\\left(F^{(3/2)}_P\\right)^2(\\omega -1)\\Big ],$ $\\widetilde{W}_{I1\\chi }&=& \\frac{2}{3M^{\\prime }}\\Big \\lbrace C^V_\\chi C^{S*}_\\chi F_S^{(3/2)}(\\omega +1)\\Big [F^V_1(M+M^{\\prime })(\\omega -1)+(F^V_2M^{\\prime }+F^V_3M)(\\omega ^2-1)\\nonumber \\\\&&\\hspace{28.45274pt}+F^V_4(M\\omega -M^{\\prime })\\Big ]+C^A_\\chi C^{P*}_\\chi F_P^{(3/2)}(\\omega -1)\\Big [F^A_1(M^{\\prime }-M)(\\omega +1)\\nonumber \\\\&&\\hspace{28.45274pt}+(F^A_2M^{\\prime }+F^A_3M)(\\omega ^2-1)+F^A_4(M\\omega -M^{\\prime })\\Big ]\\Big \\rbrace 
,\\\\\\widetilde{W}_{I2\\chi }&=& \\frac{2M}{3M^{\\prime }}\\Big \\lbrace -C^V_\\chi C^{S*}_\\chi F_S^{(3/2)}(\\omega +1)\\Big [F^V_1(\\omega -1)+F^V_3(\\omega ^2-1)+F^V_4\\omega \\Big ]\\nonumber \\\\&&\\hspace{32.72049pt}+C^A_\\chi C^{P*}_\\chi F_P^{(3/2)}(\\omega -1)\\Big [F^A_1(\\omega +1)-F^A_3(\\omega ^2-1)-F^A_4\\omega \\Big ]\\Big \\rbrace ,\\\\\\widetilde{W}_{I3\\chi }&=&-\\frac{2}{3M^{\\prime }}\\Big \\lbrace C^S_\\chi C^{T*}_\\chi F_S^{(3/2)}(\\omega +1)\\Big [M[F^T_5+\\omega F^T_6+(F^T_2+F^T_4)(\\omega -1)]\\nonumber \\\\&&\\hspace{35.56593pt}+M^{\\prime }[F^T_7-(\\omega -1)(F^T_1(\\omega +1)+F^T_3)]\\Big ]\\nonumber \\\\&&\\hspace{35.56593pt}+C^P_\\chi C^{T*}_\\chi F_P^{(3/2)}(\\omega -1)M[-F^T_5+F^T_4(\\omega +1)]\\Big \\rbrace ,\\\\\\widetilde{W}_{I4\\chi }&=&\\frac{1}{3M^{\\prime 2}}\\Big \\lbrace C^V_\\chi C^{T*}_\\chi \\Big [-F^V_1\\lbrace F^T_5 M(M^{\\prime }+M(2\\omega +1))+F^T_6 M(M^{\\prime }\\omega +M(2\\omega +1))\\nonumber \\\\&&\\hspace{78.24507pt}+M^{\\prime }[(2MF^T_2-2MF^T_3-2(M+M^{\\prime })F^T_1)(\\omega ^2-1)\\nonumber \\\\&&\\hspace{78.24507pt}+F^T_7(M(2+\\omega )+M^{\\prime })]\\rbrace -2(F^V_2M^{\\prime }+F^V_3M)(\\omega +1)\\nonumber \\\\&&\\hspace{106.69783pt}\\times \\lbrace -F^T_1M^{\\prime }(\\omega ^2-1)+[M(F^T_2+F^T_4)-M^{\\prime }F^T_3](\\omega -1)\\nonumber \\\\&&\\hspace{128.0374pt}+M(F^T_5+F^T_6\\omega )+M^{\\prime }F^T_7\\rbrace \\nonumber \\\\&&\\hspace{78.24507pt}-F^V_4\\lbrace 2F^T_1M^{\\prime }(\\omega +1)[M^{\\prime }-M\\omega ]+F^T_2M[M(1+2\\omega )-M^{\\prime }(2+\\omega )]\\nonumber \\\\&&\\hspace{78.24507pt}+F^T_3M^{\\prime }(M^{\\prime }-M\\omega )+F^T_4M[M(1+2\\omega )-M^{\\prime }]\\nonumber \\\\&&\\hspace{78.24507pt}+M^2[F^T_5+2F^T_6(\\omega +1)]\\rbrace \\Big ]\\nonumber \\\\&&\\hspace{28.45274pt}+C^A_\\chi C^{T*}_\\chi M\\Big [F^A_1\\lbrace M^{\\prime }(\\omega +1)[-2(F^T_2+F^T_3)(\\omega -1)+F^T_6+F^T_7]\\nonumber \\\\&&\\hspace{85.35826pt}+F^T_5(M^{\\prime }+M(2\\omega -1))\\rbrace \\nonumber \\\\&&\\hspace{85.35826pt}-2(F^A_2M^{\\prime }+F^A_3M)(\\omega -1)[F^T_4(\\omega +1)-F^T_5]\\nonumber \\\\&&\\hspace{85.35826pt}+F^A_4\\lbrace MF^T_5+M^{\\prime }[F^T_6+F^T_7+(\\omega -1)(F^T_2+F^T_3)]\\nonumber \\\\&&\\hspace{85.35826pt}+F_4(M(1-2\\omega )+M^{\\prime })\\rbrace \\Big ]\\Big \\rbrace ,\\\\\\widetilde{W}_{I5\\chi }&=&\\frac{M}{3M^{\\prime 2}}\\Big \\lbrace C^V_\\chi C^{T*}_\\chi \\Big [F^V_1\\lbrace (F^T_5+F^T_6)M(1+2\\omega )+M^{\\prime }[F^T_7(\\omega +2)\\nonumber \\\\&&\\hspace{71.13188pt}-2(F^T_1+F^T_3)(\\omega ^2-1)]\\rbrace +2F^V_3(\\omega +1)\\lbrace M^{\\prime }[-F^T_1(\\omega ^2-1)\\nonumber \\\\&&\\hspace{71.13188pt}-F^T_3(\\omega -1)+F^T_7]+M[F^T_5+(F^T_2+F^T_4)(\\omega -1)+F^T_6\\omega ]\\rbrace \\nonumber \\\\&&\\hspace{71.13188pt}+F^V_4\\lbrace -M^{\\prime }[2F^T_1\\omega (\\omega +1)+F^T_3\\omega ]+M[F^T_5+2F^T_6(1+\\omega )\\nonumber \\\\&&\\hspace{71.13188pt}+(F^T_2+F^T_4)(1+2\\omega )]\\rbrace \\Big ]\\nonumber \\\\&&\\hspace{35.56593pt}+C^A_\\chi C^{T*}_\\chi \\Big [-F_1^A \\lbrace F^T_5M(2 \\omega -1)+M^{\\prime }(\\omega +1)[-2F^T_3(\\omega -1)+F^T_7]\\rbrace \\nonumber \\\\&&\\hspace{86.78099pt}+2F_3^A M(\\omega -1)[F^T_4(\\omega +1)-F^T_5]\\nonumber \\\\&&\\hspace{86.78099pt}-F_4^A\\lbrace F^T_5M+M^{\\prime }[F^T_3(\\omega -1)+F^T_7]+F^T_4M(1-2\\omega )\\rbrace \\Big ]\\Big \\rbrace ,$ $\\widetilde{W}_{I6\\chi }&=&\\frac{1}{3MM^{\\prime }}\\Big \\lbrace C^V_\\chi C^{T*}_\\chi M\\Big [F^V_1(\\omega -1)\\lbrace F^T_5[M^{\\prime }-M(1+2\\omega )]+(\\omega +1)[2F^T_4(M-M^{\\prime })\\nonumber 
\\\\&&\\hspace{99.58464pt}-M^{\\prime }[-2(F^T_2+F^T_3)(\\omega -1)+F^T_6+F^T_7]]\\rbrace \\nonumber \\\\&&\\hspace{99.58464pt}+F^V_4\\lbrace (\\omega +1)[F^T_4(M^{\\prime }-M)+M^{\\prime }[(F^T_2+F^T_3)(1-\\omega )\\nonumber \\\\&&\\hspace{99.58464pt}+2(F^T_6+F^T_7)]]+F^T_5[M^{\\prime }+M(2+\\omega )]\\rbrace \\Big ]\\nonumber \\\\&&\\hspace{42.67912pt}-C^A_\\chi C^{T*}_\\chi \\Big [F^A_1(\\omega +1)\\lbrace 2F^T_2M(\\omega -1)(M-M^{\\prime }\\omega )+2F^T_3M^{\\prime }(\\omega -1)(M\\omega -M^{\\prime })\\nonumber \\\\&&\\hspace{99.58464pt}+2F^T_4M(M+M^{\\prime })(\\omega -1)+F^T_5M[M^{\\prime }+M(1-2\\omega )]\\nonumber \\\\&&\\hspace{99.58464pt}-F^T_6M(M-M^{\\prime }\\omega )+F^T_7M^{\\prime }(M^{\\prime }-M\\omega )\\rbrace \\nonumber \\\\&&\\hspace{99.58464pt}+F^A_4\\lbrace -F^T_2M(\\omega -1)(M-M^{\\prime }\\omega )+F^T_3M^{\\prime }(\\omega -1)(M^{\\prime }-M\\omega )\\nonumber \\\\&&\\hspace{99.58464pt}-F^T_4M(M+M^{\\prime })(\\omega -1)+F^T_5M[M^{\\prime }+M(\\omega -2)]\\nonumber \\\\&&\\hspace{99.58464pt}-F^T_6M(M-M^{\\prime }\\omega )+F^T_7M^{\\prime }(M^{\\prime }-M\\omega )\\rbrace \\Big ]\\Big \\rbrace ,\\\\\\widetilde{W}_{I7\\chi }&=&\\frac{1}{3M^{\\prime }}\\Big \\lbrace - C^V_\\chi C^{T*}_\\chi \\Big [F^V_1(\\omega -1)\\lbrace (\\omega +1)[2F^T_4M-M^{\\prime }[F^T_7-2F^T_3(\\omega -1)]]-F^T_5M(1+2\\omega )\\rbrace \\nonumber \\\\&&\\hspace{85.35826pt}+F^V_4\\lbrace (\\omega +1)[-F^T_4M+M^{\\prime }[2F^T_7-F^T_3(\\omega -1)]]+F^T_5M(2+\\omega ) \\rbrace \\Big ]\\nonumber \\\\&&\\hspace{35.56593pt}+C^A_\\chi C^{T*}_\\chi \\Big [F^A_1(\\omega +1)\\lbrace 2[(F^T_2+F^T_4)M+F^T_3M^{\\prime }\\omega ](\\omega -1)+F^T_5M(1-2\\omega )\\nonumber \\\\&&\\hspace{85.35826pt}-F^T_6M-F^T_7M^{\\prime }\\omega \\rbrace -F^A_4\\lbrace [(F^T_2+F^T_4)M+F^T_3M^{\\prime }\\omega ](\\omega -1)\\nonumber \\\\&&\\hspace{85.35826pt}+F^T_5M(2-\\omega )+F^T_6M+F^T_7M^{\\prime }\\omega \\rbrace \\Big ]\\Big \\rbrace ,\\\\\\widetilde{W}_{1\\chi }^T&=&\\frac{|C^T_\\chi |^2}{3M^2}\\Big \\lbrace M^2\\Big [2\\omega (F^{T}_5)^2+2(\\omega +1)F^T_5(F^T_6\\omega +F^T_2(\\omega -1))+(\\omega +1)[\\omega ^2((F^{T}_4)^2\\nonumber \\\\&&\\hspace{35.56593pt}+(F^T_6+F^T_2+F^T_4)^2)-2\\omega (F^T_2+F^T_4)(F^T_6+F^T_2+F^T_4)+F^T_2(F^T_2+2F^T_4)]\\Big ]\\nonumber \\\\&&\\hspace{35.56593pt}+2MM^{\\prime }(\\omega +1)[F^T_5+\\omega F^T_6-(F^T_2+F^T_4)(1-\\omega )]\\nonumber \\\\&&\\hspace{49.79231pt}\\times [F^T_7-(\\omega -1)(F^T_1(1+\\omega )+F^T_3)]\\nonumber \\\\&&\\hspace{35.56593pt}+M^{\\prime 2}(\\omega +1)[F^T_7-(\\omega -1)(F^T_1(1+\\omega )+F^T_3))]^2\\Big \\rbrace ,\\\\\\widetilde{W}_{3\\chi }^T&=&\\frac{|C^T_\\chi |^2}{3M^{\\prime 2}}\\Big \\lbrace M^2\\Big [-2\\omega (F^T_5)^2+F^T_5(F^T_6+F^T_2(1+2\\omega )+4\\omega F^T_4)+F^T_6[F^T_6(1+\\omega )\\nonumber \\\\&&\\hspace{28.45274pt}+(F^T_2+F^T_4)(1+2\\omega )]\\Big ]+MM^{\\prime }\\Big [-2\\omega ^2[F^T_6F^T_1+F^T_2(F^T_1+F^T_3)+F^T_4(F^T_1+2F^T_3)]\\nonumber \\\\&&\\hspace{28.45274pt}-\\omega F^T_6(2F^T_1+F^T_3)+2F^T_2(F^T_1+F^T_3)+2F^T_4(F^T_1+2F^T_3)\\nonumber \\\\&&\\hspace{28.45274pt}+F^T_5[-2F^T_7(2+\\omega )-2F^T_1(1+\\omega )-F^T_3(3+2\\omega -4\\omega ^2)]\\nonumber \\\\&&\\hspace{28.45274pt}+F^T_7[F^T_2(\\omega +2)+F^T_4(2\\omega +3)]\\Big ]-M^{\\prime 2}\\Big [2(\\omega +1)(F^T_7)^2+F^T_7[2(\\omega +1)F^T_1\\nonumber \\\\&&\\hspace{28.45274pt}+F^T_3(3-2\\omega ^2)]-(\\omega ^2-1)[(\\omega +1)(F^T_1)^2+2F^T_1F^T_3-2(\\omega -1)(F^T_3)^2]\\Big ]\\Big \\rbrace ,$ $\\widetilde{W}_{2\\chi }^T&=&-\\frac{|C^T_\\chi |^2}{3M^2M^{\\prime 2}}\\Big \\lbrace 
M^4\\Big [2\\omega (F^T_5)^2-F^T_5(F^T_6+F^T_2(1+2\\omega )+4\\omega F^T_4)-F^T_6[F^T_6(1+\\omega )\\nonumber \\\\&&\\hspace{56.9055pt}+(F^T_2+F^T_4)(1+2\\omega )]\\Big ]+M^3M^{\\prime }\\Big [2(F^T_5)^2-4(F^T_4)^2+F^T_5[4F^T_6(\\omega +1)\\nonumber \\\\&&\\hspace{56.9055pt}+2F^T_7(2+\\omega )+2F^T_1(1+\\omega )+2F^T_2(1+2\\omega )+F^T_3(3+2\\omega -4\\omega ^2)+4F^T_4]\\nonumber \\\\&&\\hspace{56.9055pt}+2\\omega ^2[(F^T_6)^2+F^T_6(F^T_1+2(F^T_2+F^T_4))+2(F^T_4)^2+F^T_2(F^T_1+F^T_3)\\nonumber \\\\&&\\hspace{56.9055pt}+F^T_4(F^T_1+2(F^T_2+F^T_3))]-2F^T_2(F^T_1+F^T_3)-2F^T_4(F^T_6+F^T_1+2(F^T_2+F^T_3))\\nonumber \\\\&&\\hspace{56.9055pt}+\\omega F^T_6(2(F^T_6+F^T_1+F^T_2)+F^T_3)-F^T_7[(\\omega +2)F^T_2+F^T_4(2\\omega +3)]\\Big ]\\nonumber \\\\&&\\hspace{56.9055pt}+M^2M^{\\prime 2}\\Big [-\\omega ^3[(F^T_1)^2+4F^T_1(F^T_6+F^T_2+F^T_4)-2((F^T_2)^2+(F^T_3)^2)]\\nonumber \\\\&&\\hspace{56.9055pt}-\\omega ^2[(F^T_1)^2+2F^T_1(2F^T_5+F^T_3)+2((F^T_2)^2+(F^T_3)^2)+2F^T_3(F^T_7+2F^T_2)\\nonumber \\\\&&\\hspace{56.9055pt}+2F^T_6(2F^T_1+F^T_2+2F^T_3)+4F^T_4(F^T_2+F^T_3)]\\nonumber \\\\&&\\hspace{56.9055pt}+\\omega [(F^T_6)^2+2F^T_6(2F^T_7-F^T_2)+2((F^T_7)^2-(F^T_2)^2-(F^T_3)^2)+(F^T_1)^2\\nonumber \\\\&&\\hspace{56.9055pt}+2F^T_1(-2F^T_5+F^T_7+2F^T_2)+4F^T_7F^T_2+4F^T_4(F^T_7+F^T_1)]\\nonumber \\\\&&\\hspace{56.9055pt}+(F^T_6)^2+(F^T_1)^2+2((F^T_7)^2+(F^T_2)^2+(F^T_3)^2)\\nonumber \\\\&&\\hspace{56.9055pt}+F^T_5(F^T_6+2F^T_7-3F^T_2-2F^T_3)+F^T_6(4F^T_7+F^T_2+2F^T_3+F^T_4)\\nonumber \\\\&&\\hspace{56.9055pt}+F^T_7(2F^T_1+2F^T_2+3F^T_3+2F^T_4)+2F^T_3(F^T_1+2F^T_2)+4F^T_4(F^T_2+F^T_3)\\Big ]\\nonumber \\\\&&\\hspace{56.9055pt}+MM^{\\prime 3}\\Big [2F^T_2F^T_3\\omega ^2+2(F^T_1)^2\\omega (\\omega -1)(\\omega +1)^2+\\omega (-F^T_7F^T_2+F^T_6F^T_3\\nonumber \\\\&&\\hspace{56.9055pt}-2F^T_7F^T_3)-2F^T_2F^T_3-F^T_7(2F^T_2+F^T_4)+F^T_5(F^T_3+2F^T_1(\\omega +1))\\nonumber \\\\&&\\hspace{56.9055pt}+2(\\omega +1)F^T_1[(\\omega -1)(F^T_2+F^T_4)+\\omega (F^T_6-2(F^T_7-F^T_3(\\omega -1)))]\\Big ]\\nonumber \\\\&&\\hspace{56.9055pt}+M^{\\prime 4}\\Big [F^T_7(F^T_3+2F^T_1(\\omega +1))-(\\omega ^2-1)F^T_1(F^T_1(\\omega +1)+2F^T_3)\\Big ]\\Big \\rbrace ,\\\\\\widetilde{W}_{4\\chi }^T&=&\\frac{|C^T_\\chi |^2}{3MM^{\\prime 2}}\\Big \\lbrace M^3\\Big [2\\omega (F^T_5)^2-F^T_5(F^T_6+F^T_2(1+2\\omega )+4\\omega F^T_4)-F^T_6[F^T_6(1+\\omega )\\nonumber \\\\&&\\hspace{49.79231pt}+(F^T_2+F^T_4)(1+2\\omega )]\\Big ]+M^2M^{\\prime }\\Big [(F^T_5)^2+F^T_5[-4F^T_3\\omega ^2\\nonumber \\\\&&\\hspace{49.79231pt}+2\\omega (F^T_6+F^T_7+F^T_1+F^T_2+F^T_3)+2F^T_6+4F^T_7+2F^T_1\\nonumber \\\\&&\\hspace{49.79231pt}+F^T_2+3F^T_3+2F^T_4]+\\omega ^2[(F^T_6)^2+2F^T_6(F^T_1+F^T_2+F^T_4)]\\nonumber \\\\&&\\hspace{49.79231pt}+2(\\omega ^2-1)[(F^T_4)^2+F^T_4(F^T_1+F^T_2+2F^T_3)+F^T_2(F^T_1+F^T_3)]\\nonumber \\\\&&\\hspace{49.79231pt}+\\omega F^T_6(2F^T_1+F^T_2+F^T_3)+F^T_6(\\omega F^T_6-F^T_4)\\nonumber \\\\&&\\hspace{49.79231pt}-F^T_7[(\\omega +2)F^T_2+(2\\omega +3)F^T_4]\\Big ]+MM^{\\prime 2}\\Big [-\\omega ^3[F^T_1(F^T_1+2(F^T_6+F^T_2+F^T_4))\\nonumber \\\\&&\\hspace{49.79231pt}-2(F^T_3)^2]-\\omega ^2[2F^T_5F^T_1+2F^T_6(F^T_1+F^T_3)+2F^T_7F^T_3+F^T_1(F^T_1+2F^T_3)\\nonumber \\\\&&\\hspace{49.79231pt}+2F^T_3(F^T_2+F^T_3+F^T_4)]+\\omega [2F^T_7(F^T_6+F^T_7+F^T_1+F^T_2+F^T_4)-2(F^T_3)^2\\nonumber \\\\&&\\hspace{49.79231pt}+F^T_1(-2F^T_5+F^T_1+2(F^T_2+F^T_4))]+F^T_7(2F^T_7+F^T_5+2F^T_6+2F^T_1+F^T_2\\nonumber \\\\&&\\hspace{49.79231pt}+3F^T_3+F^T_4)+F^T_1(F^T_1+2F^T_3)+F^T_3(2F^T_3-F^T_5+F^T_6+2F^T_2+2F^T_4)\\Big ]\\nonumber 
\\&&\hspace{49.79231pt}+M^{\prime 3}\omega \Big [(\omega ^2-1)F^T_1(F^T_1(\omega +1)+2F^T_3)-F^T_7(F^T_3+2F^T_1(\omega +1))\Big ]\Big \rbrace .$ As shown in Refs.", "[64], [70], one has the general constraint $2M^2\widetilde{W}^T_{1\chi }+p^2\widetilde{W}^T_{2\chi }+q^2\widetilde{W}^T_{3\chi }+2p\cdot q\widetilde{W}^T_{4\chi }=0,$ which allows one to eliminate $\widetilde{W}^T_{1\chi }$ in terms of the other three SFs.", "In fact, as shown in Refs.", "[64], [70], the term in $\widetilde{W}^T_{1\chi }$ of the hadron tensor does not contribute when contracted with the corresponding lepton tensor." ] ]
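For numerical implementations of these structure functions, the constraint just quoted can be used directly to eliminate $\widetilde{W}^T_{1\chi }$. The following minimal Python sketch is added here for illustration only; the function and variable names are not from the original reference:

```python
# Illustrative helper (not part of the original paper): solve the quoted linear
# constraint  2 M^2 W1T + p^2 W2T + q^2 W3T + 2 (p.q) W4T = 0  for W1T.
def w1T_from_constraint(M, p2, q2, p_dot_q, w2T, w3T, w4T):
    """Return W1T given the other three tensor structure functions.

    M       : mass of the initial baryon (typically p2 = M**2 on shell)
    p2, q2  : squared four-momenta p^2 and q^2
    p_dot_q : scalar product p.q
    """
    return -(p2 * w2T + q2 * w3T + 2.0 * p_dot_q * w4T) / (2.0 * M**2)
```

Since the $\widetilde{W}^T_{1\chi }$ term drops out when contracted with the lepton tensor, this elimination has no effect on physical observables.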
2207.10529
[ [ "First row CKM unitarity" ], [ "Abstract In this talk I briefly review the precision test of the first row CKM unitarity, focusing mainly on $V_{ud}$ and $V_{us}$ extracted from pion, kaon, neutron and superallowed nuclear beta decays.", "I will discuss the current status of several important Standard Model theory inputs to these processes, and the need for future improvements." ], [ "Introduction", "This article serves a a contribution to the Proceedings of the 20$^{\\textrm {th}}$ Conference on Flavor Physics and CP Violation (FPCP 2022) based on a remote talk I gave.", "Most of the contents can be found in a recent review paper [1], with only several minor updates.", "Despite being one of the most successful physics theory ever, the Standard Model (SM) of particle physics falls short in explaining some cosmological observations such as dark matter, dark energy and matter-antimatter asymmetry, and does not address a number of theory-driven questions including the hierarchy problem and the unification of forces.", "This leads to world-wide programs to search for physics beyond the Standard Model (BSM) in all energy scales, and the precision test of the first row Cabibbo-Kobayashi-Maskawa (CKM) matrix [2], [3] unitarity, $|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2=1$ , represents one of the most powerful tools for that purpose at low energies.", "Due to the smallness of $|V_{ub}|^2$ , at the current precision level it reduces to the Cabibbo unitarity $|V_{ud}|^2+|V_{us}|^2=1$ , so the test of the unitarity relation is equivalent to checking the mutual consistencies in the extraction of the Cabibbo angle $\\theta _C=\\cos ^{-1}|V_{ud}|=\\sin ^{-1}|V_{us}|$ from different experiments.", "Figure: The current status of |V ud ||V_{ud}| and |V us ||V_{us}| obtained from superallowed nuclear beta decays (red band), semileptonic kaon decays (blue band) and leptonic kaon/pion decays (green band).", "The black line represents |V ud | 2 +|V us | 2 =1|V_{ud}|^2+|V_{us}|^2=1, i.e.", "the first row CKM unitarity requirement.Due to the limitation of time, I will concentrate mainly on decays of pion, kaon, neutron and $J^P=0^+$ nuclei.", "The values of $|V_{ud}|$ and $|V_{us}|$ extracted from these decay processes are summarized in Fig.REF .", "If different experiments are consistent with each other and satisfy the SM prediction, then there should be a common overlapping region between all the colored bands and the black line; but clearly, such region does not exist.", "From the figure we observe several interesting anomalies: For instance, the distance between the blue + red region and the black line signifies the breaking of the first row CKM unitarity by combining $|V_{ud}|$ from superallowed nuclear decays (i.e.", "beta decays of $0^+$ nuclei) and $|V_{us}|$ from semileptonic kaon decays ($K_{\\ell 3}$ ) [4]: $|V_{ud}|_{0^+}^2+|V_{us}|_{K_{\\ell 3}}^2-1=-0.0021(7)~.$ Meanwhile, the distance between the red + blue region and the red + green region may be interpreted as an inconsistency in the measurement of $|V_{us}|$ from $K_{\\ell 3}$ and leptonic kaon decays ($K_{\\mu 2}$ ) [5]: $|V_{us}|=\\left\\lbrace \\begin{array}{ccc}0.22308(55) & & K_{\\ell 3}\\\\0.2252(5) & & K_{\\mu 2}\\end{array}\\right.~.$ Both anomalies are at the level of $3\\sigma $ , and provide interesting hints for BSM physics.", "Table: Error budget of |V ud | 0 + 2 |V_{ud}|_{0^+}^2 and |V us | K ℓ3 2 |V_{us}|_{K_{\\ell 3}}^2.The conclusions above are based on a combination of experimental measurements and SM theory inputs.", "To get a 
feeling for these numbers, let us take a closer look at the unitarity violation with $|V_{ud}|_{0^+}$ and $|V_{us}|_{K_{\ell 3}}$ , which is summarized in Table REF .", "The current significance level is $3.2\sigma $ , and there are five major sources of uncertainty prohibiting us from claiming a discovery: (i) $\delta |V_{ud}|^2_{0^+}$ , exp: the experimental errors in the superallowed decay half life; (ii) $\delta |V_{ud}|^2_{0^+}$ , RC: theory errors from the single-nucleon radiative corrections (RC) in free neutron and nuclear beta decay; (iii) $\delta |V_{ud}|^2_{0^+}$ , NS: theory errors from the nuclear-structure-dependent corrections in superallowed decays; (iv) $\delta |V_{us}|^2_{K_{\ell 3}}$ , exp + th: the combined experimental + theory (non-lattice) errors in $K_{\ell 3}$ ; and (v) $\delta |V_{us}|^2_{K_{\ell 3}}$ , lat: the lattice errors in the $K\rightarrow \pi $ transition form factor at zero momentum transfer.", "In the $V_{ud}$ determination, the dominant uncertainty comes from theory, whereas in $V_{us}$ the experimental and theory uncertainties are comparable.", "In this talk I will focus more on the theory ones; a short numerical illustration of how these errors combine into the quoted significance is appended at the end of this article." ], [ "Inputs in the nucleon/nuclear sector", "Beta decays of the free neutron and of nuclear systems are primary avenues to obtain $|V_{ud}|$ , and we may start by discussing the theory inputs in the single nucleon sector.", "The major source of theory uncertainty is the so-called $\gamma W$ -box diagram, where the nucleon exchanges a $W$ -boson and a photon with the lepton (see Fig.REF ).", "Its precise determination is challenging because the loop integral probes all momentum scales from infrared to ultraviolet.", "In particular, at $Q\sim 1$  GeV the strong interaction governed by Quantum Chromodynamics (QCD) becomes non-perturbative and results in a large hadronic uncertainty, which has represented a major theory challenge over the past four decades [6].", "The best treatment before 2018 consisted of dividing the loop integral into different regions according to $Q^2$ ; at large $Q^2$ perturbative QCD is applicable, at small $Q^2$ the dominance of the elastic contribution is assumed, while at intermediate $Q^2$ an interpolating function is constructed to connect the high and low $Q^2$ results [7].", "In 2018 a dispersion relation (DR) treatment [8], [9] was introduced to relate the loop integral to experimentally-measurable structure functions (note that the normalization of $F_3^{(0)}$ in the literature may differ by a factor of 2): $\Box _{\gamma W}=\frac{2\alpha }{\pi }\int _0^\infty \frac{dQ^2}{Q^2}\frac{M_W^2}{M_W^2+Q^2}\int _0^1dx\frac{1+2r}{(1+r)^2}F_3^{(0)}~,$ where $x$ is the Bjorken variable, $r=\sqrt{1+4m_N^2x^2/Q^2}$ , and $F_3^{(0)}$ is a parity-odd, spin-independent structure function resulting from the product of an isosinglet vector current and an isotriplet axial current.", "It can be related, barring some model-dependence, to a similar structure function $F_3^{\nu p+\bar{\nu }p}$ obtained from inclusive $\nu p/\bar{\nu }p$ scattering experiments.", "Making use of existing data, Refs.", "[8], [9] reduced the theory uncertainty in $\Box _{\gamma W}$ substantially, and at the same time a large shift of the central value of $\Box _{\gamma W}$ was observed.", "It reduced the value of $|V_{ud}|$ from 0.97420(21) in early 2018 to 0.97370(14) in late 2018, and unveiled a tension in the first row CKM unitarity.", "This finding was later confirmed by several independent studies [10], [11], [12], [13].", "A major limiting factor of the DR treatment is the low quality of
the neutrino scattering data in the most interesting region of $Q^2\sim 1$  GeV$^2$ .", "To overcome this limitation, there is an ongoing program to calculate the box diagram directly using lattice QCD.", "The first attempt in this direction was made in Ref.", "[14], where a simpler pion axial $\gamma W$ -box diagram was computed from first principles to 1% precision by combining the 4-loop pQCD prediction at large $Q^2$ and lattice calculations at small $Q^2$ .", "It led to a significant reduction of the theory uncertainty in the pion semileptonic decay ($\pi _{e3}$ ), making it the theoretically cleanest avenue for the $V_{ud}$ extraction.", "Also, it provides indirect implications for the nucleon box diagram through a Regge exchange picture [11].", "A next step is to compute the nucleon box diagram on the lattice; this may proceed in the same way in which the pion box diagram was calculated, or with alternative approaches, e.g.", "the Feynman-Hellmann theorem [15].", "Next we proceed from a free neutron to nuclear systems.", "Superallowed beta decays of $J^P=0^+$ , $I=1$ nuclei currently provide the best measurement of $V_{ud}$ , mainly for two reasons: First, since the nuclei are spinless, at tree level they probe only the conserved vector current, whose matrix element is completely fixed by isospin symmetry.", "Second, a large number of superallowed transitions have been measured, with 15 among them having a lifetime precision of 0.23% or better [16].", "This provides a huge gain in statistics.", "The advantages above come with a price, namely the nuclear-structure-dependent theory uncertainties.", "The master formula for the $V_{ud}$ extraction from superallowed decays reads: $|V_{ud}|_{0^+}^2=\frac{2984.43~s}{\mathcal {F}t(1+\Delta _R^V)}$ where $\Delta _R^V$ denotes the single-nucleon RC we discussed before.", "The nuclear-structure-dependent corrections are lumped into the “corrected” $ft$ -value: $\mathcal {F}t=ft(1+\delta _\text{R}^{\prime })(1+\delta _\text{NS}-\delta _\text{C}).$ There are three types of nucleus-dependent corrections: (1) the “outer” correction $\delta _\text{R}^{\prime }$ , (2) $\delta _\text{NS}$ , which encodes the nuclear structure effects in the “inner” RC, and (3) the isospin-breaking (ISB) correction $\delta _\text{C}$ .", "The first is well under control, so I will focus on the next two.", "$\delta _\text{NS}$ originates from the difference between the nuclear and single-nucleon axial $\gamma W$ -box diagrams, i.e.", "$\Box _{\gamma W}^\text{nucl}=\Box _{\gamma W}^n+\left[\Box _{\gamma W}^\text{nucl}-\Box _{\gamma W}^n\right].$ The term in the square bracket gives rise to $\delta _\text{NS}$ after integrating over the phase space.", "There are two ways that a non-zero difference could occur: (1) the single-nucleon absorption spectrum at low energies is distorted by nuclear corrections, and (2) the two gauge bosons may couple to two distinct nucleons in the nucleus, which does not have a counterpart in the single nucleon sector.", "Earlier studies of $\delta _\text{NS}$ made use of a nuclear shell model [17]; in particular, a quenching factor was used to account for the reduced strength of the Born contribution in the nuclear medium [18].", "However, it was recently pointed out that such calculations missed some very important nuclear effects, for instance the contribution from a quasi-free nucleon in a nucleus [9].", "It was also argued that, unlike in the free-nucleon case, the electron energy-dependence in the nuclear box diagram may be
non-negligible [19].", "A simple Fermi gas model suggested that these two new nuclear corrections carry opposite signs and partially cancel each other, resulting in an inflated theory uncertainty in $\delta _\text{NS}$ .", "This leads to the present quoted value of $|V_{ud}|=0.97373(31)$  [16], where the dominant uncertainty comes from $\delta _\text{NS}$ .", "In the future, direct computations of the nuclear $\gamma W$ -box diagram using ab-initio methods are expected to reduce this uncertainty.", "Finally, the ISB correction $\delta _\text{C}$ , which mainly originates from the Coulomb interaction between protons, shifts the tree-level charged weak matrix element away from the group-theory prediction of $\sqrt{2}$ .", "This correction is important in terms of aligning the $\mathcal {F}t$ values of different superallowed transitions, i.e.", "to ensure that the right hand side of Eq.", "(REF ) is nucleus-independent.", "Again, this correction has been studied systematically by Hardy and Towner using the nuclear shell model, which achieved an impressive alignment of the $\mathcal {F}t$ values [20], [21], [22].", "But at the same time their results have also been questioned in several aspects, including theoretical inconsistencies [23], [24], [25] and the failure of other nuclear theory calculations [26], [27], [28], [29], [30], [31] to reproduce them; the latter generically predict smaller values of $\delta _\text{C}$ (which would lead to smaller $|V_{ud}|$ and further intensify the first row unitarity violation).", "In the future, a more model-independent approach is highly desirable to pin down this correction." ], [ "Inputs in the kaon/pion sector", "Next I will discuss the extraction of $V_{us}$ from kaon decays.", "In the leptonic decay channel, it is usually the ratio between the kaon and pion leptonic decay rates $R_A\equiv \Gamma _{K_{\mu 2}}/\Gamma _{\pi _{\mu 2}}$ that is analyzed [32], which returns the ratio $|V_{us}/V_{ud}|$ : $\frac{|V_{us}|f_{K^+}}{|V_{ud}|f_{\pi ^+}}=\left[\frac{ M_{\pi ^+}}{M_{K^+}}R_A\right]^{1/2}\frac{1-m_\mu ^2/M_{\pi ^+}^2}{1-m_\mu ^2/M_{K^+}^2}(1-\delta _\text{EM}/2).$ The advantage is that several SM corrections common to the kaon and pion channels cancel out in the ratio, making it theoretically cleaner than the individual channels.", "The two main theory inputs to the formula above are as follows.", "First, $f_{K^+}/f_{\pi ^+}$ , namely the ratio between the charged kaon and the charged pion decay constants, is taken from lattice QCD calculations.", "The global averages with different numbers of active quark flavors ($N_f$ ) agree with each other and can be found in the FLAG review [33].", "Here we just quote the most precise average, which comes from $N_f=2+1+1$ : $N_f=2+1+1:f_{K^+}/f_{\pi ^+}=1.1932(21)~\text{\cite {Bazavov:2017lyh,Dowdall:2013rya,Carrasco:2014poa,Miller:2020xhy}}~.$ The second input is the residual electromagnetic RC which does not fully cancel between the numerator and the denominator.", "Chiral Perturbation Theory (ChPT) provides a solid prediction of this quantity at leading order because it is independent of poorly-constrained low energy constants (LECs) [38], [39]: $\delta _\text{EM}=\delta _\text{EM}^K-\delta _\text{EM}^\pi =-0.0069(17).$ This result was recently confirmed by a direct lattice QCD calculation [40], which serves as a strong proof of the reliability of the theory input.", "With these inputs we obtain: $|V_{us}/V_{ud}|=0.23131(41)_\text{lat}(24)_\text{exp}(19)_\text{RC}.$ Usually, this is later
combined with $|V_{ud}|$ obtained from superallowed decays to obtain the value of $|V_{us}|$ , i.e.", "the green + red region in Fig.REF .", "Next we proceed to $K_{\ell 3}$ , which is another excellent avenue for the precision measurement of $V_{us}$ .", "There are six independent channels for this decay, which correspond to $K_L$ , $K_S$ and $K^+$ decaying into $e^+$ and $\mu ^+$ respectively, and measurements of the branching ratios exist in all channels [41].", "To extract $|V_{us}|$ one makes use of the following master formula: $\Gamma _{K_{\ell 3}}&=&\frac{G_F^2|V_{us}|^2M_K^5C_K^2}{192\pi ^3}S_\text{EW}|f_+(0)|^2I_{K\ell }^{(0)}\nonumber \\&&\times \left(1+\delta _\text{EM}^{K\ell }+\delta _\text{SU(2)}^{K\pi }\right).$ I will now discuss the theory inputs that appear in the formula above.", "First, $C_K$ is a trivial isospin factor that equals 1 for $K_{\ell 3}^0$ and $1/\sqrt{2}$ for $K_{\ell 3}^+$ .", "Next, $S_\text{EW}$ is a channel-independent multiplicative factor that encodes the short-distance electroweak RC.", "Of course how it separates from the “long-distance” part is merely a choice, and for a standard analysis one always uses the value $S_\text{EW}=1.0232(3)_\text{HO}$  [42].", "While these two are not an issue, the truly non-trivial theory inputs are as follows.", "First, $|f_+(0)|$ is the $K^0\rightarrow \pi ^-$ transition form factor at zero momentum transfer (this channel is chosen just by convention).", "In the $\text{SU(3)}_f$ limit it is exactly 1, but the fact that $m_s\ne \hat{m}$ introduces a small correction.", "This constant is computed with lattice QCD, and here we quote the FLAG averages at different $N_f$  [33]: $N_f=2+1+1&:&f_+(0)=0.9698(17)~\text{\cite {Carrasco:2016kpy,Bazavov:2018kjg}}\nonumber \\N_f=2+1&:&f_+(0)=0.9677(27)~\text{\cite {Bazavov:2012cd,Boyle:2015hfa}}\nonumber \\N_f=2&:&f_+(0)=0.9560(57)(62)~\text{\cite {Lubicz:2009ht}}~.\nonumber \\$ Usually only the $N_f=2+1+1$ and $N_f=2+1$ results are used for the $|V_{us}|$ determination.", "But even these are not without ambiguity: For instance, a recent calculation from the PACS collaboration at $N_f=2+1$ with two lattice spacings gives $f_+(0)=0.9615(10)(^{+47}_{-2})(5)$ , significantly smaller than the FLAG average  [48].", "If this value is used, then there will be no observable difference between the $K_{\ell 3}$ and $K_{\mu 2}$ determinations of $|V_{us}|$ .", "It is therefore important to resolve the discrepancy between different lattice results and make sure that there are no large hidden systematic errors in those calculations.", "Table: $K_{\ell 3}$ phase space factors from the dispersive parameterization. Next we need the phase-space factor $I_{K\ell }^{(0)}&=&\int _{m_\ell ^2}^{(M_K-M_\pi )^2}\frac{dt}{M_K^8}\bar{\lambda }^{3/2}\left(1+\frac{m_\ell ^2}{2t}\right)\left(1-\frac{m_\ell ^2}{t}\right)^2\nonumber \\&&\times \left[\bar{f}_+^2(t)+\frac{3m_\ell ^2\Delta _{K\pi }^2}{(2t+m_\ell ^2)\bar{\lambda }}\bar{f}_0^2(t)\right]$ (where $\bar{\lambda }=[t-(M_K+M_\pi )^2][t-(M_K-M_\pi )^2]$ and $\Delta _{K\pi }=M_K^2-M_\pi ^2$ ), which probes the $t$ -dependence of the rescaled $K\pi $ form factors $\bar{f}_+(t)$ and $\bar{f}_0(t)$ .", "It is usually obtained by fitting to the $K_{\ell 3}$ Dalitz plot with specific parameterizations of the form factors.", "Among them, the dispersive parameterization currently quotes the smallest uncertainty (at the 0.1% level) and is usually taken as the standard input [49]; we quote
their results in Table REF .", "Figure: Long-distance electromagnetic RC in $K_{\ell 3}$; self-energy diagrams are not shown. Table: Results of $\delta _{\text{EM}}^{K\ell }$ in units of $10^{-3}$, from Sirlin's representation and ChPT. Next we have $\delta _\text{EM}^{K\ell }$ , which denotes the long-distance electromagnetic RC to the decay rate.", "It consists of virtual photon loop corrections and real bremsstrahlung corrections, see Fig.REF .", "The previous state-of-the-art calculation of this correction comes from ChPT [50], where the two major sources of uncertainties are (1) the discarded terms of higher chiral order, and (2) the unknown LECs.", "Recently a series of re-analyses of $\delta _\text{EM}^{K\ell }$ has been performed [51], [52], [5] based on a combination of Sirlin's representation of the electroweak RC [6], [53] and ChPT [54], which allows a resummation of the most important terms to all orders in the chiral power counting; in addition, the use of the most recent lattice QCD inputs effectively pinned down the LECs [55], [56].", "While agreeing with the ChPT result, this re-analysis substantially reduces the theory uncertainty by nearly an order of magnitude, reaching the level of $10^{-4}$ .", "We compare the outcomes of these two determinations in Table REF .", "Finally, the ISB correction $\delta _\text{SU(2)}^{K^+\pi ^0}\equiv \left(\frac{f_+^{K^+\pi ^0}(0)}{f_+^{K^0\pi ^-}(0)}\right)^2-1$ measures the difference of the $K\rightarrow \pi $ transition form factor between the $K^+$ and $K^0$ channels.", "Upon neglecting small electromagnetic contributions, it is expressed in terms of the quark mass parameters $m_s$ and $\hat{m}$  [57]: $\delta _\text{SU(2)}^{K^+\pi ^0}=\frac{3}{2}\frac{1}{Q^2}\left[\frac{\hat{M}_K^2}{\hat{M}_\pi ^2}+\frac{\chi _{p^4}}{2}\left(1+\frac{m_s}{\hat{m}}\right)\right]$ where $\hat{M}_{K,\pi }$ are the meson masses in the isospin limit, $Q^2=(m_s^2-\hat{m}^2)/(m_d^2-m_u^2)$ , and $\chi _{p^4}$ is a calculable coefficient.", "The most recent lattice QCD inputs give $Q=23.3(5)$ and $m_s/\hat{m}=27.42(12)$ at $N_f=2+1$  [58], [59], [60], [61], [62], which return $\delta _\text{SU(2)}^{K^+\pi ^0}=0.0457(20)$ .", "On the other hand, phenomenological inputs based on $\eta \rightarrow 3\pi $ return a somewhat larger value of $\delta _\text{SU(2)}^{K^+\pi ^0}=0.0522(34)$  [63]; the discrepancy between these two determinations is not yet fully understood.", "In what follows we adopt the lattice determination.", "Table: Channel-dependent and averaged values of $|V_{us}f_{+}(0)|$. With the inputs above, we may determine the value of $|V_{us}f_+(0)|$ from each of the six channels as well as averages over different channels [4], [5], which we summarize in Table REF .", "The averages over the $Ke$ and $K\mu $ channels agree with each other within their respective uncertainties, and do not show signatures of lepton flavor non-universality.", "Supplementing with the $N_f=2+1+1$ lattice average of $f_+(0)$ gives $|V_{us}|_{K_{\ell 3}}=0.22308(39)_\text{lat}(39)_K(3)_\text{HO}$ .", "From Table REF it appears that experimental uncertainties dominate over the non-lattice theory uncertainties, but one still needs to further scrutinize all the theory inputs to make sure that the present $V_{us}$ anomaly does not come from some unexpected, large SM corrections.", "Finally, I will briefly discuss a relatively new idea to determine $V_{us}/V_{ud}$ .", "Ref.", "[64] suggested that, in addition to
the axial ratio $R_A=\Gamma (K_{\mu 2})/\Gamma (\pi _{\mu 2})$ , one may also use the vector ratio $R_V\equiv \Gamma (K_{\ell 3})/\Gamma (\pi _{e 3})$ as an alternative avenue to determine the ratio $V_{us}/V_{ud}$ , which may shed new light on the $V_{us}$ anomaly.", "The short-distance SM corrections as well as some possible BSM corrections that affect $K_{\ell 3}$ and $\pi _{e 3}$ in the same way cancel out in the ratio, which limits the possible explanations if the anomaly persists.", "The cancellation of the long-distance electromagnetic RC in $R_V$ is not as good as in $R_A$ (for example, the $\mathcal {O}(e^2p^2)$ LECs do not fully cancel out), but this is not an issue anymore due to the recent improvements of the $K_{\ell 3}$ and $\pi _{e3}$ RC we described above.", "As a consequence, $R_V$ is theoretically cleaner than $R_A$ , as can be seen from the following formula: $\left|\frac{V_{us}f_+^K(0)}{V_{ud}f_+^\pi (0)}\right|&=&0.22216(64)_{\text{BR}(\pi _{e3})}(39)_K(2)_{\tau _{\pi ^+}}(1)_{\text{RC}_\pi }\nonumber \\\left|\frac{V_{us}f_{K^+}}{V_{ud}f_{\pi ^+}}\right|&=&0.27600(29)_\text{exp}(23)_\text{RC}.$ The first line is from $R_V$ and the second line from $R_A$ .", "The major limiting factor, however, comes from experiments, in particular the large uncertainty in the $\pi _{e3}$ branching ratio.", "The current best measurement comes from the PIBETA experiment in 2004 [65], [64]: $\text{BR}(\pi _{e3})=1.038(6)\times 10^{-8},$ so there is a lot of room for improvement.", "In fact, partially motivated by the new idea of $R_V$ and the theory progress on the RC, a next-generation rare pion decay experiment (PIONEER) aims to improve the BR($\pi _{e3}$ ) precision by a factor of 3 or more [66], [67].", "This will make $R_V$ competitive with $R_A$ in the determination of $V_{us}/V_{ud}$ ." ], [ "Summary", "To summarize, I described several anomalies at the level of $3\sigma $ that have been observed in the measurements of the first row CKM matrix elements $V_{ud}$ and $V_{us}$ from various beta decay processes.", "They provide interesting hints of new physics, so it is important to confirm them with higher precision by improving the SM theory inputs.", "In the $V_{ud}$ sector, we need to improve the RC in the single-nucleon and nuclear systems using lattice QCD and ab-initio methods respectively, and we need a more model-independent determination of the ISB corrections in the nuclear wavefunctions.", "In the $V_{us}$ sector, we need better lattice inputs for the kaon/pion decay constants and the $K\pi $ transition form factor, and further improvements of the RC in leptonic and semileptonic kaon decays.", "The phase space factor in $K_{\ell 3}$ and the ISB correction to the $K^+_{\ell 3}$ channels also need to be better understood.", "Successful reduction of all these theory uncertainties, say by a factor of two, could increase the significance of the anomalies to more than 5$\sigma $ assuming the central values remain unchanged.", "Of course there are also many desirable experimental improvements, for example better measurements of the $K_{\ell 3}$ and $\pi _{e3}$ branching ratios, the neutron lifetime, and the neutron axial coupling constant $g_A$ .", "I thank Xu Feng, Daniel Galviz, Mikhail Gorchtein, Lu-Chang Jin, Peng-Xiang Ma, William J. Marciano, Ulf-G. Meißner, Hiren H. Patel and Michael J.
Ramsey-Musolf for collaborations in related topics.", "This work is supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) and the NSFC through the funds provided to the Sino-German Collaborative Research Center TRR110 “Symmetries and the Emergence of Structure in QCD” (DFG Project-ID 196253076 - TRR 110, NSFC Grant No.", "12070131001)." ] ]
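As a closing numerical illustration of the error budget discussed in the Introduction (this short sketch is an addition for clarity and is not part of the original proceedings), the quoted deficit and its significance follow from simple Gaussian error propagation of the central values cited above:

```python
import math

# Gaussian propagation for Delta_CKM = |Vud|^2 + |Vus|^2 - 1, using the values
# quoted in the text: |Vud| from superallowed decays, |Vus| from K_l3.
Vud, dVud = 0.97373, 0.00031
Vus, dVus = 0.22308, 0.00055

delta = Vud**2 + Vus**2 - 1.0
d_delta = math.hypot(2.0 * Vud * dVud, 2.0 * Vus * dVus)

print(f"Delta_CKM = {delta:.4f} +/- {d_delta:.4f}")        # approx -0.0021 +/- 0.0007
print(f"significance = {abs(delta) / d_delta:.1f} sigma")  # approx 3.2 sigma
```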
2207.10492
[ [ "Log Barriers for Safe Black-box Optimization with Application to Safe\n Reinforcement Learning" ], [ "Abstract Optimizing noisy functions online, when evaluating the objective requires experiments on a deployed system, is a crucial task arising in manufacturing, robotics and many others.", "Often, constraints on safe inputs are unknown ahead of time, and we only obtain noisy information, indicating how close we are to violating the constraints.", "Yet, safety must be guaranteed at all times, not only for the final output of the algorithm.", "We introduce a general approach for seeking a stationary point in high dimensional non-linear stochastic optimization problems in which maintaining safety during learning is crucial.", "Our approach called LB-SGD is based on applying stochastic gradient descent (SGD) with a carefully chosen adaptive step size to a logarithmic barrier approximation of the original problem.", "We provide a complete convergence analysis of non-convex, convex, and strongly-convex smooth constrained problems, with first-order and zeroth-order feedback.", "Our approach yields efficient updates and scales better with dimensionality compared to existing approaches.", "We empirically compare the sample complexity and the computational cost of our method with existing safe learning approaches.", "Beyond synthetic benchmarks, we demonstrate the effectiveness of our approach on minimizing constraint violation in policy search tasks in safe reinforcement learning (RL)." ], [ "Introduction", "Probabilistic inference has become a core technology in AI, largely due to developments in graph-theoretic methods for the representation and manipulation of complex probability distributions .", "Whether in their guise as directed graphs (Bayesian networks) or as undirected graphs (Markov random fields), probabilistic graphical models have a number of virtues as representations of uncertainty and as inference engines.", "Graphical models allow a separation between qualitative, structural aspects of uncertain knowledge and the quantitative, parametric aspects of uncertainty...", "Remainder omitted in this sample.", "See http://www.jmlr.org/papers/ for full paper.", "We would like to acknowledge support for this project from the National Science Foundation (NSF grant IIS-9988642) and the Multidisciplinary Research Program of the Department of Defense (MURI N00014-00-1-0637)." ], [ "Appendix A.", "In this appendix we prove the following theorem from Section 6.2: Theorem Let $u,v,w$ be discrete variables such that $v, w$ do not co-occur with $u$ (i.e., $u\\ne 0\\;\\Rightarrow \\;v=w=0$ in a given dataset ${\\cal D}$ ).", "Let $N_{v0},N_{w0}$ be the number of data points for which $v=0, w=0$ respectively, and let $I_{uv},I_{uw}$ be the respective empirical mutual information values based on the sample ${\\cal D}$ .", "Then $N_{v0} \\;>\\; N_{w0}\\;\\;\\Rightarrow \\;\\;I_{uv} \\;\\le \\;I_{uw}$ with equality only if $u$ is identically 0.", "Proof.", "We use the notation: $P_v(i) \\;=\\;\\frac{N_v^i}{N},\\;\\;\\;i \\ne 0;\\;\\;\\;P_{v0}\\;\\equiv \\;P_v(0)\\; = \\;1 - \\sum _{i\\ne 0}P_v(i).$ These values represent the (empirical) probabilities of $v$ taking value $i\\ne 0$ and 0 respectively.", "Entropies will be denoted by $H$ .", "We aim to show that $\\frac{\\partial I_{uv}}{\\partial P_{v0}} < 0$ ....", "Remainder omitted in this sample.", "See http://www.jmlr.org/papers/ for full paper." ] ]
2207.10415
[ [ "Joint HST, VLT/MUSE and XMM-Newton observations to constrain the mass\n distribution of the two strong lensing galaxy clusters: MACS J0242.5-2132 &\n MACS J0949.8+1708" ], [ "Abstract We present the strong lensing analysis of two galaxy clusters: MACS J0242.5-2132 (MACS J0242, $z=0.313$) and MACS J0949.8+1708 (MACS J0949, $z=0.383$).", "Their total matter distributions are constrained thanks to the powerful combination of observations with the Hubble Space Telescope and the MUSE instrument.", "Using these observations, we precisely measure the redshift of six multiple image systems in MACS J0242, and two in MACS J0949.", "We also include four multiple image systems in the latter cluster identified in HST imaging without MUSE redshift measurements.", "For each cluster, our best-fit mass model consists of a single cluster-scale halo, and 57 (170) galaxy-scale halos for MACS J0242 (MACS J0949).", "Multiple images positions are predicted with a $rms$ 0.39 arcsec and 0.15 arcsec for MACS J0242 and MACS J0949 models respectively.", "From these mass models, we derive aperture masses of $M(R<$200 kpc$) = 1.67_{-0.05}^{+0.03}\\times 10^{14} M_{\\odot}$, and $M(R<$200 kpc$) = 2.00_{-0.20}^{+0.05}\\times 10^{14} M_{\\odot}$.", "Combining our analysis with X-ray observations from the XMM-Newton Observatory, we show that MACS J0242 appears to be a relatively relaxed cluster, while conversely, MACS J0949 shows a relaxing post-merger state.", "At 200 kpc, X-ray observations suggest the baryon fraction to be respectively $f_b = 0.115^{+0.003}_{-0.004}$ and $0.053^{+0.007}_{-0.006}$ for MACS J0242 and MACS J0949.", "MACS J0242 being relaxed, its density profile is very well fit by a NFW distribution, in agreement with X-ray observations.", "Finally, the strong lensing analysis of MACS J0949 suggests a flat dark matter density distribution in the core, between 10 and 100 kpc, and not following a NFW profile.", "This appears consistent with X-ray observations." 
], [ "Introduction", "One of the most promising avenues towards understanding the nature of dark matter is to study its gravitational influence on the Universe's large-scale structure, particularly within the most massive galaxy clusters.", "These gravitationally bound clusters act as the largest natural laboratories, allowing not only to observe the large-scale baryonic physics, but also to indirectly probe dark matter thanks to the effect of gravitational lensing.", "Gravitational lensing is the phenomenon of optical distortion of background images, occurring when a massive foreground object – like a cluster, the “lens” – is on its line-of-sight.", "Gravitational lenses act as magnifying telescopes of objects in the background, creating in some cases multiple images of a same source, and allowing observers to study objects in the distant Universe [51].", "For all these reasons, since the first discovery of the gravitational giant arc of Abell 370 [35], [81] to the modern surveys of galaxy clusters and gravitational lenses such as the Cluster Lensing And Supernovae survey with Hubble [71], the Hubble Frontier Fields [57], the REionization LensIng Cluster Survey [15], the SDSS Giant Arcs Survey [79] and the Beyond the Ultra-deep Frontier Fields And Legacy Observation programme [82], gravitational lensing has emerged as a field of cosmology, capable of bringing key information to comprehend the structure formation and the nature of dark matter.", "In particular, the study of a system of multiple images originating from one source through gravitational lensing allows one to constrain the mass distribution within the lens, and to characterise the dark matter density profile within it.", "The descriptive potential of gravitational lensing has already been showcased at multiple occasions such as in [72], [40], [43], [16], [17], [18], [19], [20], [32], [11], [86].", "Using the combination of high resolution images taken with the Hubble Space Telescope (HST) and the Dark Energy Survey (DES) for photometric analysis in the one hand, and the Multi-Unit Spectroscopic Explorer [4] for spectroscopy in the other, we were able to securely identify cluster members and multiple images systems.", "This combination has proven to be particularly successful over the past few years [84], [53], [54], [41], [45], [46], [33], [59], [12].", "In this paper, we repeat a similar exercise, looking at two galaxy clusters, MACS J0242.5$-$ 2132 and MACS J0949.8$+$ 1708 (i.e.", "RXC J0949.8+1707), hereafter MACS J0242 and MACS J0949 respectively, initially discovered by the MAssive Cluster Survey [21].", "We combined multi-band HST and ground-based imaging with spectrocopy from VLT/MUSE with the lensing modelling technique presented in detail in [73] which makes use of the publicly available Lenstool software [52], [48].", "We then confront our lensing results to the intra-cluster gas distribution observed by the XMM-Newton X-ray Observatory.", "It is common practice to use the combined baryonic analysis of the X-ray signal and the Sunyaev-Zel'dovich effect (SZ) to understand the thermodynamics of galaxy clusters.", "One can then reconstruct the total matter density of galaxy clusters by making a number of hypotheses such as hydrostatic equilibrium or polytropic temperature distribution [83].", "Furthermore, as the analysis of multi-wavelengths observations (optical, Sunyaev-Zel'dovich effect, X-rays) characterises the thermodynamics of the intra-cluster medium [78], a careful comparison between these and a strong lensing 
analysis can provide clues about the possible differences between expected and observed baryon and dark matter distributions.", "As an example, the study in merging galaxy clusters of the offset between the positions of the centres of the dark matter, the luminous galaxies and the X-ray emission can be used to constrain the cross-section of self-interacting dark matter [85].", "In fact, simulations of colliding clusters suggest that the cold dark matter (CDM) distribution remains bound to the luminous distribution, while in SIDM scenarios dark matter lags behind baryonic matter [60], [75], [76].", "For instance, [76] present SIDM simulations with anisotropic scattering, yielding an offset between the galaxies' centre and that of the DM smaller than 10 kpc for an interaction cross-section $\sigma / m = 1$ cm$^2$ .g$^{-1}$ .", "This approach was pioneered in [14] and [8], and has since become more and more popular, as shown in, e.g.", "[64], [36], [61], [62], [42], [44].", "In this article, we focus on the lensing-based mass reconstructions of the two clusters.", "Utilising the ICM detected in the X-rays to infer the dark matter halo profile, we compare the results of our lensing reconstruction with the XMM-Newton X-ray data from [10], processed following the X-COP pipeline [29] for these two clusters.", "We present a broader context for such comparisons, i.e.", "new models of the baryonic matter distribution rooted in lensing analysis to constrain the electron densities of galaxy clusters, in a companion paper (Allingham et al.", "in prep.).", "Our paper is organised as follows.", "In Section , we present the observations used for our analysis.", "The methods to extract multiple image candidates, and to build cluster galaxy catalogues are presented in Section .", "The lensing reconstruction method is introduced in Section , the mass models are described in Section , and conclusions are presented in Section .", "Throughout this paper, we assume the $\Lambda $ CDM cosmological model, with $\Omega _m=0.3$ , $\Omega _{\Lambda }=0.7$ , and $H_{0}=70$  km/s/Mpc.", "All magnitudes use the AB convention system [69]." ], [ "Data", "To determine the cluster mass distributions as robustly as possible, we include both imaging and spectroscopic information when constructing lens models.", "This combination is especially powerful, allowing us to identify and confirm individual components of the model (such as multiple-image constraints and cluster members), while simultaneously rejecting interlopers along the line of sight.", "We complement the HST and VLT/MUSE observations with XMM-Newton X-ray Observatory observations to cross-check our lensing model results."
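To make the adopted cosmology concrete, the angular diameter distances and proper scales at the two cluster redshifts follow directly from the parameters stated above; the short snippet below is a purely illustrative aside and not part of the original analysis pipeline:

```python
# Illustrative only: angular diameter distances and proper scales at the two
# cluster redshifts, under the cosmology adopted in this paper
# (flat LambdaCDM, Omega_m = 0.3, H0 = 70 km/s/Mpc).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

for name, z in [("MACS J0242", 0.313), ("MACS J0949", 0.383)]:
    d_a = cosmo.angular_diameter_distance(z)
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    print(f"{name}: D_A = {d_a:.0f}, scale = {kpc_per_arcsec:.2f}")
```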
], [ "Hubble Space Telescope", "As part of the MACS survey [21], both targets in our study have publicly available HST data.", "Snapshot (1200s) imaging of MACS J0242 taken with the Wide Field Planetary Camera 2 [39] exist for both the F606W and F814W bands (PID:11103, PI: Ebeling), supplemented by an additional 1200s image taken with the Advanced Camera for Surveys [26] in F606W (PID: 12166, PI: Ebeling).", "Similarly, shallow imaging for MACS J0949 have been taken with the ACS in both F606W (PID:10491, PI: Ebeling) and F814W (PID: 12166, PI: Ebeling).", "Archival processed versions of these datasets are available from the Hubble Legacy Archivehttps://hla.stsci.edu/.", "Following the initial MACS data, MACS J0949 was subsequently observed as part of the RELICS survey [15] – under the name RXC J0949.8+1707 – and thus there are additional data sets for this cluster.", "Specifically, ACS imaging in F435W, F606W and F814W provide wider, deeper coverage of the cluster field in optical bands, while coverage in F105W, F125W, F140W and F160W bands using the Wide Field Camera 3 [49] provide information in the near-IR regime.", "These data are also publicly availablehttps://archive.stsci.edu/prepds/relics/, and therefore in this work combine all of the imaging (save for the F435W band, which is too low S/N for our purposes) to create our master data set.", "A summary of all available HST imaging can be found in Table REF ." ], [ "DESI Legacy Survey", "Since the available HST imaging for MACS J0242 are shallow and colour information is limited to a WFPC2-sized footprint, we complement these data with additional multi-band ground-based imaging from the DESI legacy archive.", "To enhance the HST data as much as possible, we extract cutout images in three optical bands – g, r and z, see [1].", "The images are centred around the MACS J0242 brightest cluster galaxy (BCG) located at ($\\alpha = 40.6497\\deg $ , $\\delta = -21.5406\\deg $ ), and extend over a full ACS field of view.", "Combining the space- and ground-based information allow us to improve our galaxy selection function during lens modelling (see section ).", "The DESI data are summarised in Table REF .", "Table: Summary of the HST observations used in this analysis for MACS J0242 and MACS J0949.Table: Summary of the DESI observations used in this analysis for MACS J0242.Table: Summary of MUSE observations for MACS J0242 and MACS J0949.", "Columns 1 to 3 indicate respectively the name of the cluster, its average redshift, and the ID of the ESO programme.", "For each pointing, we then give the observation date in column 4, the right ascension, R.A., and declination, Dec., of the centre of the field of view in columns 5 and 6, the total exposure time in column 7, and the FWHM of the seeing during the observations in column 8.Figure: Composite DES colour image of MACS J0242.The gas distribution obtained from XMM-Newton observations is shown with dashed green contours.", "In cyan, we highlight the positions of the multiple images used to constrain the mass model, and which are listed in Table .", "Critical lines for a source at z=3.0627z=3.0627 (redshift of system 1) are shown in red.The MUSE field of view is shown in pink.Figure: Composite colour HST image of MACS J0949.The critical lines of system 1, at redshift 4.89024.8902, are shown in red.The gas distribution obtained thanks to XMM-Newton observations are shown with dash green contours.", "In cyan, we highlight the positions of the multiple images used to constrain the mass model.", "They 
are listed in Table .", "In pink, we display the MUSE field of view.", "In addition to imaging, our lensing reconstruction makes use of the Multi Unit Spectroscopic Explorer [4] observations at the Very Large Telescope.", "Such observations are invaluable for obtaining redshift information.", "Both clusters were observed with MUSE as part of the filler large programme “A MUSE Survey of the Most Massive Clusters of Galaxies - the Universe's Kaleidoscopes” (PI: Edge).", "The data for each cluster consist of a single MUSE pointing, divided into a series of three exposures of 970 seconds.", "To reduce the effects of bad pixels, cosmic rays, and other systematics, each successive exposure is rotated by 90 degrees, and a small ($\sim 0.05 $ ) dither pattern is applied.", "We reduce the raw data following the procedure detailed in [74].", "Details of the observations for both clusters are summarised in Table REF ." ], [ "X-ray data", "We searched the XMM-Newton archive for publicly available observations of the two systems of interest.", "MACS J0242 was observed for a total of 70 ks (OBSID:0673830101), and MACS J0949 for a total of 36 ks (OBSID:0827340901).", "We analysed the two observations using XMMSAS v17.0 and the most up-to-date calibration files.", "We used the XMMSAS tools mos-filter and pn-filter to extract light curves of the observations and filter out periods of enhanced background induced by soft proton flares.", "After flare filtering, the available clean exposure time is 61 ks (MOS) and 53 ks (PN) for MACS J0242, and 35 ks (MOS) and 34 ks (PN) for MACS J0949.", "In this section, we present the key steps to obtain cluster galaxy catalogues and (candidate) background multiple image systems for both MACS J0242 and MACS J0949: from source extraction to the selection of galaxies and the identification of cluster galaxies specifically, using both the multi-band imaging in hand for the two clusters as well as the spectroscopy from VLT/MUSE." 
], [ "Spectroscopic analysis", "We here present the analysis of the spectroscopic observations described in Section REF .", "In spite of the field of view of the MUSE cubes, $1\\times 1$ , being smaller than that of HST or DES, we can still access the redshift of a large number of foreground, cluster and background galaxies.", "In order to detect specifically multiple image systems, we use MUSELET (MUSE Line Emission Tracker), a package of MPDAF (Muse Python Data Analysis Framework) which removes the constant emission from bright galaxies in the field, and is optimised for the detection of the faintest objects.", "For more details about the technique, we refer the reader to [5] and [70].", "We go through each of the 3681 slices of this subtracted MUSE datacube, and identify the bright detections.", "We complete this technique with CatalogueBuilder [74] for a thorough and systematic analysis.", "The latter embeds the MUSELET analysis, but also uses a modified version of MARZ [37], which is better tuned to the resolution and spectral profiles specific to MUSE data.", "CatalogueBuilder also uses the position data of the deepest field available (in this case HST/ACS).", "These make it easier to confirm the likely source of the multiple images which we are looking for.", "Using the spectroscopic information, we adjust with our own custom redshifting routine the detected spectra to the known absorption lines, and notably [OII], [OIII] and Ly-$\\alpha $ .", "We then obtain catalogues containing coordinates and redshifts, such as Tables REF and REF .", "We also consider multiple detections within a radius of $< 0.5$ and a redshift separation of $\\delta z < 0.05$ to be a unique object.", "All redshifts are supposed known with a precision estimated to $\\delta z = 0.0001$ .", "We can associate to these detections Signal-to-Noise (S/N) ratios.", "As we also know the type of pattern the absorption lines should match, we can use the S/N ratio and spectral patterns to define different confidence levels.", "We only keep in all catalogues, including for example in Section , detections judged to be “good” or “excellent” (identifiers 3 and 4 in MARZ and CatalogueBuilder).", "In the case of several detections representing a same object, we merge them keeping the best quality of detection.", "The distribution of redshifts in each cluster is shown in Fig.", "REF for the full MUSE frame.", "We measure 36 and 96 good spectroscopic redshifts in MACS J0242 and MACS J0949 respectively.", "Due to the small statistics, this distribution is not Gaussian but it is sufficient to constrain the redshift of the clusters, which we estimate to be $0.300 \\le z \\le 0.325$ and $0.36 \\le z \\le 0.41$ for MACS J0242 and MACS J0949 respectively.", "For the current analysis, we define the redshift of each cluster by that of their BCG, i.e.", "respectively $0.3131$ and $0.383$ for MACS J0242 and MACS J0949 respectively.", "Figure: Redshift distribution of all MUSE detected objects.", "Top row: Cluster MACS J0242.", "Objects identified as being in the cluster are shown in green, while foreground and background objects are shown in blue and yellow respectively.", "We highlight Lyman-α\\alpha emitters in red.", "At last, objects within the Milky Way (stars, etc.)", "are displayed in purple.", "Left panel: Redshift distribution of objects located at small redshifts z<1z < 1.", "– Right panel: Redshift distribution of all objects with a measured redshift.Bottom row: Cluster MACS J0949.", "Left panel: Redshift distribution of objects 
located at small redshifts $z < 1$.", "– Right panel: Redshift distribution of all objects with a measured redshift.", "We first align all images from a given instrument (HST/ACS, HST/WFC3, HST/WFPC2 and DESI) to the same WCS coordinates, and pixelate them accordingly to allow for a direct colour comparison of detected objects.", "In order to extract all detected objects from the multi-band imaging in hand for each cluster, we run the SExtractor software [7] in dual-image mode, for each pass-band of each instrument.", "For each instrument, we adopt a reference pass-band and a reference position.", "The former sets the magnitude of each detection, while the latter sets its location.", "The number of bands per instrument as well as the reference pass-band used are listed in Tables REF and REF for MACS J0242 and MACS J0949 respectively.", "For each instrument, we then apply several cuts and selection criteria to the output catalogues from SExtractor.", "This allows us to build a complete multi-band catalogue composed only of galaxies.", "We summarise the different steps of this process: (i) All detections without reliable magnitude measurements (i.e. MAG$\_$AUTO$=$-99) and incomplete (or corrupted) data are removed from all catalogues.", "This includes corrupted isophotal data and memory overflows that occur during deblending or extraction.", "(ii) All objects with a stellarity greater than $0.2$ are removed as they are likely to be stars rather than galaxies.", "We additionally mask all detections very close to bright stars.", "(iii) For a given cluster, only objects detected in all pass-bands are kept.", "(iv) All objects with a Signal-to-Noise ratio (S/N) smaller than 10 are removed.", "Tables REF and REF list the number of detections remaining after each of these criteria is applied, for each instrument, for MACS J0242 and MACS J0949 respectively.", "Table: Number of detections (Nod) after each of the source extraction selections listed in Sect.  for MACS J0242.", "Table: Number of detections (Nod) after each of the source extraction selections listed in Sect.  for MACS J0949." ], [ "Spectroscopic redshift identification", "Now that we have a galaxy catalogue for each instrument, we can match our detections with the spectroscopic redshift measurements from VLT/MUSE.", "In order to ensure that a MUSE detection corresponds to a photometric one, we compare the positions measured by SExtractor in the different filters for all objects, using the Haversine function, where the Haversine angle between two positions $(\alpha_1, \delta_1)$ and $(\alpha_2, \delta_2)$ reads $\mathcal{H} = 2 \arcsin \sqrt{ \sin^2 \left( \frac{\delta_2 - \delta_1}{2} \right) + \cos \delta_1 \cos \delta_2 \sin^2 \left( \frac{\alpha_2 - \alpha_1}{2}\right) }$.", "If the separation angle between objects from the spectroscopic and the photometric catalogues is smaller than $0.5$ , we consider the detections to be of the same object, and hence associate the spectroscopic redshift with the photometric detection.", "Through this step, we attribute a spectroscopic redshift to 20, 25, and 25 sources in the DES, HST/WFPC2 and HST/ACS catalogues for MACS J0242.", "In the case of MACS J0949, we attribute a spectroscopic redshift to 54 and 49 sources in the HST/ACS and HST/WFC3 catalogues respectively." 
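To make the cross-matching step concrete, the following sketch (a minimal illustration, not the code used in this work) matches a photometric catalogue to the MUSE spectroscopic detections using the Haversine angle given above; the $0.5$ matching radius is assumed here to be in arcseconds, and the array names are hypothetical.

```python
import numpy as np

def haversine_angle(ra1, dec1, ra2, dec2):
    """Angular separation (radians) between sky positions given in radians."""
    return 2.0 * np.arcsin(np.sqrt(
        np.sin(0.5 * (dec2 - dec1))**2
        + np.cos(dec1) * np.cos(dec2) * np.sin(0.5 * (ra2 - ra1))**2))

def match_spec_to_phot(phot_ra, phot_dec, spec_ra, spec_dec, spec_z,
                       max_sep_arcsec=0.5):
    """Attach a spectroscopic redshift to each photometric source if a
    spectroscopic detection lies within max_sep_arcsec; otherwise NaN."""
    max_sep = np.deg2rad(max_sep_arcsec / 3600.0)
    z_out = np.full(len(phot_ra), np.nan)
    sra, sdec = np.deg2rad(spec_ra), np.deg2rad(spec_dec)
    for i, (ra, dec) in enumerate(zip(np.deg2rad(phot_ra), np.deg2rad(phot_dec))):
        sep = haversine_angle(ra, dec, sra, sdec)
        j = int(np.argmin(sep))
        if sep[j] < max_sep:
            z_out[i] = spec_z[j]
    return z_out
```

The nearest spectroscopic neighbour is used when several fall within the matching radius, which mirrors the "closest match wins" behaviour implied by the small separation threshold.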
], [ "Cluster galaxy selection", "The next step is the identification of cluster galaxies specifically.", "For that we are using colour-magnitude selections for each clusters.", "The first step consists in applying the red sequence technique [31].", "Using the catalogues after source extraction selections and spectroscopic redshift identification, we compute for both clusters a series of colour-magnitude (CM) diagrams.", "We compute these for each instrument.", "As each pass-band represents a magnitude, we can respectively compute 3 and 1 CM diagrammes for DES and HST/WFPC2 for MACS J0242 (none for HST/ACS as only one band is available), and 1 and 6 for HST/ACS and HST/WFC3 for MACS J0949.", "As shown in Fig.", "REF , cluster members are expected to follow a main sequence (magenta line).", "To calibrate our selections, we use spectroscopically confirmed cluster members.", "We then remove all detections with a magnitude exceeding $m_{\\mathrm {max}}$ , which varies depending on instruments and filters.", "For MACS J0242, we have $m_{\\mathrm {max}} = 22$ for HST/WFPC2, 23.5 for DES/z, and 24.5 for DES/r.", "For MACS J0949, we have $m_{\\mathrm {max}} = 21.5$ for HST/WFC3 and 22.5 for HST/ACS.", "We then perform a linear regression and obtain the main sequence.", "We give in Appendix  the fits for all colour-magnitudes used for both clusters.", "Figure: Colour-magnitude diagrams.", "Top row: Cluster MACS J0242.", "Left panel: Instrument HST/WFPC2 – m F814W m_{\\mathrm {F814W}} vs (m F606W - F814W m_{\\mathrm {F606W}} - _{\\mathrm {F814W}}).", "Right panel: Instrument DES – m z m_{\\mathrm {z}} vs (m g -m z m_{\\mathrm {g}} - m_{\\mathrm {z}}).", "Grey filled circles (with their error bars) have successfully passed all selections described in Section .", "The magenta line represents the main sequence regression.", "Blue, gold and red dots represent spectroscopic detections of foreground, cluster and background objects respectively.", "Bottom row: Cluster MACS J0949.", "Left panel: Instrument HST/ACS – m F814W m_{\\mathrm {F814W}} vs (m F606W -m F814W m_{\\mathrm {F606W}} - m_{\\mathrm {F814W}}).", "Right panel: Instrument HST/WFC3 – m F160W m_{\\mathrm {F160W}} vs (m F105W -m F160W m_{\\mathrm {F105W}} - m_{\\mathrm {F160W}}).Galaxies selected as cluster members are galaxies which have a colour within $2 \\sigma _{C}$ of the main red sequence for HST/ACS and HST/WFC3, and within $3 \\sigma _C$ for HST/WFPC2 and DES.", "$\\sigma _{C}$ is the weighed colour standard deviation of the spectroscopically confirmed cluster galaxy sample.", "These limits are highlighted as black rectangles in Fig.", "REF .", "For an instrument with more than 2 pass-bands, we can compute more than one CM diagram, and thus only retain cluster member identifications compatible with all colour-magnitude diagram selections.", "We summarise in Tables REF and REF for MACS J0242 and MACS J0949 respectively, the number of galaxies identified as cluster members per instrument once these colour-magnitude selections are applied.", "In some cases, spectroscopically confirmed cluster galaxies fall outside the colour-magnitude selection.", "These objects are ultimately conserved in our cluster galaxy catalogue.", "However, we do not include them in the CM cut counts, to show the effect of the photometric selection." 
], [ "Instrument catalogue combination", "We now assemble the galaxy catalogues for each instrument before merging these into a final cluster galaxy catalogue for each cluster.", "We match the coordinates of sources with the already defined $0.5$ separation angle.", "MACS J0242 and MACS J0949 were imaged with different instruments, and thus have different coverage.", "We define the camera of reference as the camera with the highest resolution.", "In the case of the both clusters, it is HST/ACS, but the reference band is chosen as F814W for MACS J0242, and F606W for MACS J0949.", "MACS J0242 was observed with HST/ACS in only one band.", "Moreover, MACS J0242 was observed with HST/WFPC2 in 2 pass-bands, but the shape of the camera field of view does not cover the entire ACS field of view.", "MACS J0242 has DES observations in 3 pass-bands, covering a wide field of view.", "However the quality of these observations is lower than the ones we have from space.", "We therefore require for a given cluster member selected galaxy in HST/ACS to be at least present in DES or WFPC2 in order to be included into the final cluster member catalogue.", "MACS J0949 was imaged with HST/ACS and WFC3 cameras.", "HST/WFC3 has a smaller field of view than ACS.", "We detected multiply imaged systems out of the WFC3 field of view.", "In order to account for the gravitational effect of individual galaxies on these systems, we include all galaxies detections from at least one camera to our galaxies catalogue.", "Finally, cluster galaxies located at a distance larger than 40from the cluster centre and with a magnitude difference to the BCG of $\\Delta m > 4$ are ignored.", "Due to their small mass, these galaxies would only have a very small impact on the strong lensing configurations observed." ], [ "Cluster galaxy catalogues", "Sect.", "REF describes all the steps for the identification of cluster members, including colour-magnitude selections as well as spectroscopic identifications.", "All galaxies identified as cluster members and used for our lensing modelling are listed in Appendix, in Tables  REF and REF for MACS J0242 and MACS J0949.", "Our final catalogues include 58 and 170 galaxies for MACS J0242 and MACS J0949 respectively." 
], [ "Multiple image systems", "In Section REF , we described the preliminary steps leading to the multiple image system catalogue.", "At this point, this is simply a catalogue of reliable detections with redshift $z > 0.6$ .", "The second step in the identification of multiple image systems is to look for similarities between these detections, starting with their spectra.", "We then look at their positions and see if they are compatible with a lensing geometry.", "The MUSE field of view being narrower than the HST one, one can also look at the colour and morphology of possible multiple images.", "If all criteria are satisfied for a given set of multiple images, we consider them as a multiple image system.", "In Fig.", "REF , we show a colour composite HST image of four MUSE detections, 4 multiple images of the same galaxy located at redshift $z = 4.89$ .", "In the case of MACS J0949, we force extract emission from the MUSE cube corresponding to the location of multiple images previously identified by the RELICS collaboration (obtained through private communication); we only reveal marginal identification as explained in Sec.", "REF .", "The final list of system used in this analysis is presented in Table REF .", "Figure: The four multiple images of System 1 detected in MACS J0949 with VLT/MUSE observations.", "Labelled cyan circles show the positions of the multiple images and correspond to the peak of the Lyman-α\\alpha emission.", "The green contours show flux density levels at 1.5001.500, 2.1252.125 and 4.000×10 -20 4.000 \\times 10^{-20} erg s -1 ^{-1} cm -2 ^{-2} Å -1 ^{-1}.", "This narrow-band image is shown in Fig.", "." ], [ "Strong Lensing Mass modelling", "The mass distribution of each cluster is reconstructed using the Lenstool softwarehttps://projets.lam.fr/projects/lenstool/wiki [52], [48], in its parametric mode.", "The optimisation is performed in the image plane with a Markhov Chain Monte-Carlo algorithm (MCMC) assuring the sampling of parameter space.", "It optimises the predicted positions of multiple images while fitting an underlying mass distribution composed of large-scale halo(s) to describe the overall cluster potential, and small-scale halos to account for local perturbers such as cluster galaxies.", "For both clusters, we describe any potential using a dual Pseudo-Isothermal Elliptical Matter Distribution [50] which, as described in [23], has two different pivot scales: a core radius, which describes the potential evolution due to the baryonic matter content, and a cut radius that describes the dark matter potential.", "A dPIEMD potential is described by seven parameters (excluding the redshift): the central coordinates, the ellipticity $e$ , the position angle $\\theta $ , the core and cut radii, $r_{\\mathrm {core}}$ and $r_{\\mathrm {cut}}$ respectively, and a fiducial central velocity dispersion $\\sigma $ .", "The fiducial central velocity dispersion in Lenstool $\\sigma $ relates to the true three dimensional central velocity dispersion with $\\sigma _0 = \\sqrt{3/2} \\sigma $ , as detailed in [6], Appendix C. 
"For each cluster, we assume a single large-scale dark matter halo to describe the overall cluster potential.", "It is described by a large velocity dispersion, a large core radius and a large cut radius.", "We optimise all the parameters of this potential except the cut radius, which we fix to a value $>$ 1 Mpc, as it is located far from the strong lensing region and thus cannot be constrained by multiple images alone.", "The position of each cluster halo is allowed to vary within 10 of the cluster centre, i.e. the position of the BCG.", "The ellipticity of the halo is limited to values $<0.8$.", "The cut radius is fixed to 1.5 Mpc for both MACS J0242 and MACS J0949, as our investigation to model the ICM through lensing shows that this value provides a better fit to the X-ray observations (see our companion paper, Allingham et al. in prep.).", "This value is in agreement with [13], taking into consideration the higher mass range of the clusters we are exploring here.", "The BCG of each cluster is also modelled independently, using a dPIEMD potential.", "The BCG has a strong gravitational influence in the cluster core, and will thus impact the geometry of multiple images quite strongly [66].", "We fix its $r_{\mathrm {core}}$ to a small value: 0.30 kpc for MACS J0242 and 0.25 kpc for MACS J0949.", "The BCG positions, position angles and ellipticities are fixed to the shape parameters output by SExtractor.", "Finally, we only optimise their velocity dispersion and cut radius.", "Each individual cluster member is modelled by its own dPIEMD potential.", "Their positions, ellipticities and position angles are obtained with the photometric extraction.", "We again assume a small but non-zero value for $r_{\mathrm {core}}$.", "Their cut radii and velocity dispersions are optimised using their magnitude and assuming the Faber-Jackson scaling relation [25].", "All cluster member cut radii and velocity dispersions are rescaled with respect to a single set of reference parameters ($r_{\mathrm {cut}, 0}$ , $\sigma _{0}$ ).", "This allows us to optimise each cluster galaxy potential using a remarkably small number of parameters.", "$r_{\rm cut}$ and $\sigma $ are allowed to vary between 1 and 50 kpc, and 100 and 300 km.s$^{-1}$ respectively.", "As mentioned earlier, the Faber-Jackson relation is scaled to a reference magnitude $mag_{0}$ ; we use the reference pass-band of the main camera for each cluster, ACS/F814W ($mag_{0}=20.0205$ ) and ACS/F606W ($mag_{0}=19.5085$ ) for MACS J0242 and MACS J0949 respectively.", "As the centre of the cluster-scale halo and the BCG are aligned, the $r_{\mathrm {core}}$ , $r_{\mathrm {cut}}$ and $\sigma $ parameters of both potentials are degenerate.", "Due to the limited number of lensing constraints, we proceed incrementally to model the potential, in order to narrow the parameter space.", "First, we include the BCG in the scaling relation of the cluster galaxies and optimise the cluster-scale halo and the scaling relation parameters as described above.", "Second, we run a model with the BCG optimised independently, only optimising $r_{\rm cut}$ and $\sigma $ as explained above.", "However, in this case the cluster-scale halo parameters are allowed to vary within a restricted range, defined by Gaussian priors centred on the best-fit values obtained from the first model.", "This way, we can limit the degeneracy between the cluster-scale and BCG halos, and obtain physical values to describe the BCG potential.", "Finally, we added a completely free 
dPIEMD potential south of the main cluster halo of MACS J0949.", "This structure has already been included in the public RELICS models and corresponds to the location of the three candidate multiply-imaged systems 4, 5 and 6, as shown in Fig. REF .", "We optimised their redshifts as well as the potential parameters, and to prevent non-physically high values we imposed Gaussian priors on $r_{\mathrm {core}}$ , $r_{\mathrm {cut}}$ and the velocity dispersion." ], [ "MACS J0242", "In MACS J0242, we detected six systems of multiple images with MUSE.", "Their positions and redshifts are given in Table REF .", "We provide the best-fit parameters of our model in Table REF .", "The fixed values are highlighted by an asterisk.", "Our best-fit model predicts multiple image positions with an $rms$ of 0.39 from the observed positions.", "Table: List of multiple images detected with VLT/MUSE in MACS J0242.", "We here list their ID, coordinates, R.A. and Decl., given in degrees (J2000), and their measured spectroscopic redshift $z$.", "The geometry of the cluster is typical of a relaxed cool-core cluster.", "The density profiles peak in the centre, and the transition between the BCG and the DM halo appears to be very smooth, as illustrated in Fig. REF .", "No other significant structure is identified.", "The inclusion of an external shear component does not provide a significant improvement to the mass model, i.e. an $rms$ of 0.38 compared to 0.39 for our best-fit mass model.", "Table: Best-fit parameters of the strong lensing mass models for MACS J0242 and MACS J0949.", "We here list the central coordinates, $\Delta _{\alpha }$ and $\Delta _{\delta }$ in arcsec, relative to the centre, the ellipticity, $e$, the position angle in degrees, $\theta$, the core radius in kpc, $r_{\mathrm {core}}$, the cut radius in kpc, $r_{\mathrm {cut}}$, and the velocity dispersion in km.s$^{-1}$, $\sigma$, for each component of the model.", "The centres are taken to be respectively $(\alpha _c, \delta _c) = (40.649555, -21.540485)$ deg and $(\alpha _c, \delta _c) = (147.4659012, 17.1195939)$ deg for MACS J0242 and MACS J0949.", "The asterisks highlight parameters which are fixed during the optimisation.", "Figure REF shows the surface density profile, $\Sigma$, with a $68 \%$ confidence interval around the best-fit profile, as a function of the distance to the cluster centre.", "The inner part of the profile, $R \lesssim 50$  kpc, is dominated by the BCG potential, i.e. the baryonic component, while at larger radii the dark matter distribution takes over.", "This pivot scale of about 50 kpc corresponds to the core radius of the DM halo, and to the separation between the two different regimes of the dPIEMD potential.", "In order to compare our results with the X-ray data, we extrapolate the masses $M_{\Delta , c}$ enclosed within an overdensity $\Delta $ using $R_{\Delta } = \left\lbrace R \;|\;\frac{M(< R)}{\frac{4}{3} \pi R^3} = \Delta \cdot \rho _c (z) \right\rbrace ,$ where $\rho _c$ is the critical density at the cluster redshift, and $M(< R)$ the total mass enclosed within a given radius, $R$.", "At large radii ($R > 200$ kpc), the strong lensing mass reconstruction only provides an estimate of the true mass distribution, as there are no strong lensing constraints to precisely and accurately estimate the mass distribution in the outskirts.", "It therefore only provides a pure extrapolation of the inner core mass distribution, and only a 
weak-lensing analysis would provide a precise mass estimate in this region of the cluster; however, this is beyond the scope of this analysis.", "We also compute $M_{2D}(R<200$  kpc), the integrated mass within a radius of 200 kpc.", "This mass is a direct output of the lensing mass reconstruction.", "These values are all listed in Table REF .", "Figure: Top row: Cluster MACS J0242.", "Left panel: Surface mass density profile derived from the best-fit mass model.", "Shaded regions show the $68 \%$ confidence interval. – Right panel: Volume mass density.", "The reconstruction of the XMM-Newton observations is shown in black, given with $1\sigma$ error bars in yellow.", "The green and red curves – with error bars – represent respectively the BCG and DM halo reconstructions, and the full cluster is shown in blue.", "The magenta dashed line represents the NFW fit of the Lenstool reconstruction.", "The cyan line shows the fit to the X-ray data.", "Bottom row: Cluster MACS J0949.", "Red: Our model, with $68 \%$ confidence interval.", "Blue: Lenstool model from RELICS.", "We note that error bars were obtained on a different sample (2,000 realisations for our model, 100 for RELICS).", "Green: Glafic RELICS model, realised under the same conditions.", "– Right panel: Volume mass density.", "The reconstruction of the XMM-Newton data is shown in black, given with $1\sigma$ error bars in yellow.", "The green and red curves represent respectively the BCG and DM halo reconstructions, and the full cluster is shown in blue.", "The magenta dashed line represents the NFW fit to the Lenstool reconstruction.", "The cyan line shows the fit to the X-ray data." ], [ "MACS J0949", "In MACS J0949, we identified several objects located behind the cluster with the MUSE observations.", "However, most of them appear to be singly lensed.", "Using the techniques described in Section , we detected a multiple image system in the MUSE field at redshift $z = 4.8902$.", "This system, system 1, is composed of five multiple images, including four in the field, and one counterpart, 1.3, located outside the MUSE field of view and detected in the HST imaging.", "We also detect a fifth image, image 1.5, located close to the BCG of the cluster.", "Images 1.4 and 1.5 (see Fig. REF ), straddling the central critical curve of the cluster, allow us to set stringent constraints on the inner slope of the mass density profile [77], [67].", "Careful inspection of the HST images allowed us to detect secondary, fainter emission knots for four multiple images in system 1 (all except the central one, which is hidden by the emission of the BCG).", "This is shown in Fig. REF .", "The MUSE spectroscopic analysis of these three images, which compose system 2, shows a faint Ly-$\alpha$ peak for all of them, allowing us to measure a redshift of $4.8844$, very close to that of system 1.", "We interpret system 2 either as part of the same galaxy, or as a companion galaxy of system 1's source.", "The Ly-$\alpha$ halo of system 1 is extended, and its potential secondary emission peak coincides with the system 2 emission knots.", "We include 4 multiple images of system 2 as additional constraints in our mass model; the fifth image being demagnified, we refrain from including it.", "The coordinates and redshifts of the multiply imaged systems are given in Table REF .", "We give a list of the singly imaged objects in Appendix .", "The inspection of HST images also led to the discovery of system 3, composed of two multiple images.", "These faint 
detections in the South of the cluster are also present in the MUSE field.", "A faint and a priori inconclusive detection of Ly-$\alpha$ (see Fig. REF ) is consistent with the redshift optimisation of this system using only system 1, or systems 1 and 2, as constraints.", "This would place the system's redshift at 5.8658.", "However, the stack of the spectra has a S/N ratio $< 2$, and the MUSE data are sensitive to sky residuals in the wavelength range of the putative Ly-$\alpha$ line.", "We therefore decide not to use this as a redshift constraint, but to leave the redshift free during the model optimisation.", "Figure: Spectra of images 3.1 and 3.2 of cluster MACS J0949 obtained by VLT/MUSE.", "We can observe a faint signal, possibly Ly-$\alpha$.", "Blue: spectrum of 3.1; Red: spectrum of 3.2; Green: summed spectra.", "The measured redshift would be $5.8658$.", "However, the confidence level of our measurements is low due to high sky noise at this wavelength.", "Finally, we detect three candidate multiply lensed images in the South of the HST field of view, in a region not covered by the MUSE observations.", "We included these three candidate systems, 4, 5 and 6, in our mass model, leaving their redshifts as free parameters.", "Their detection implies the presence of a Southern halo, as described in Section .", "For systems 3, 4, 5 and 6, our best-fit mass model gives the respective redshifts: $4.85_{-0.70}^{+1.52}$ , $3.76_{-0.80}^{+1.57}$ , $3.63_{-0.74}^{+1.67}$ and $3.57_{-1.08}^{+0.35}$ .", "Table: List of the multiple images detected with VLT/MUSE in MACS J0949.", "We here list their ID, coordinates, R.A. and Decl. given in degrees (J2000), and their measured spectroscopic redshift $z$.", "Similarly to MACS J0242, we model the mass distribution of the cluster-scale halo and the BCG separately.", "The best-fit mass model parameters are listed in Table REF ; the model gives an $rms$ of 0.15.", "As for MACS J0242, although the degeneracy between the cluster-scale halo and the BCG is still present, the BCG optimisation converges.", "The addition of an external shear component does not improve the mass model, and gives an $rms$ of $0.16$.", "The $rms$ is particularly small, which may be explained by the small number of constraints in our model.", "Indeed, as shown in e.g. [47], a larger number of constraints may increase the value of the $rms$ but could also improve the accuracy of the model.", "Similarly to MACS J0242, we compute integrated and 3D masses for MACS J0949.", "These are listed in Table REF and discussed further in Sect. REF .", "We compare our model of MACS J0949 to the two publicly available models from the RELICS collaboration (https://archive.stsci.edu/prepds/relics/).", "Comparing the surface density profiles, we find a $1\sigma$ agreement between the model presented in this article and the Lenstool RELICS model, as can be seen in Fig. REF .", "As for the RELICS model obtained using the Glafic lensing algorithm [68], its density profile is in agreement with our model, although the most stringent constraints (in the $R \in [40, 100]$ kpc region) yield a slightly smaller surface density.", "The overall profile from the Lenstool RELICS public release model presents a flatter density profile and an excess in mass beyond 80 kpc (coincident with the Einstein radius of system 1).", "This could be partially explained by the more massive structure in the South of the cluster, which is slightly offset from the bright Southern galaxy surrounded by systems 4, 5, 
and 6 as mentioned before ($M_{2D}(< 100$  kpc $) = 13.02 \times 10^{12} M_{\odot }$ compared to $M_{2D}(< 100$ kpc $) = 7.65 \times 10^{12} M_{\odot }$ for our model).", "We report a very good agreement between the spectroscopic redshifts measured from the MUSE observations and the photo-$z$ used by the RELICS team (obtained through private communication with K. Sharon).", "Our model presents a significantly lower $rms$ of $0.15$, in comparison to $0.58$.", "The reconstructed mass distribution appears to be more elliptical than the X-ray surface brightness obtained with XMM-Newton, as shown in Fig. REF .", "The 3D density profile is presented in Fig. REF .", "It confirms the inflexion point in the density profile at $r \simeq 100$  kpc, and therefore suggests that the cluster is still undergoing relaxation.", "Looking at the galaxy distribution within the cluster, we can identify a bright galaxy ($L = (4.46 \pm 0.26) \times 10^{11} L_{{\odot }}$ ) in the North-East region of the cluster, close to the BCG.", "It is galaxy 3 in Table REF .", "We investigated whether this galaxy could be the former BCG of another cluster which would have merged with MACS J0949 in the past.", "However, the X-ray distribution of MACS J0949 does not present any excess correlated with this massive galaxy, and the lensing configurations do not favour a second cluster-scale halo either.", "Therefore, our analysis strongly suggests a single dominant cluster-scale dark matter component.", "The strong lensing analyses give us an estimate of the total mass enclosed in each cluster.", "From the photometric observations we have in hand, we can also derive stellar mass estimates for each cluster.", "We use models published in [9] to estimate the K-band luminosity, $L_K$, from the reference pass-band of each camera [38], [56] (we take the K-band reference here to be the KPNO Flamingos Ks filter).", "We then apply the power-law relation derived by [3] for red quiescent galaxies, $\log _{10}\left[ \frac{M_{\star }}{M_{\odot }} \frac{L_{{\odot }}}{L_K} \right] = a z + b,$ with parameters $\lbrace a, b\rbrace = \lbrace -0.18, +0.07\rbrace$.", "The stellar masses measured for both clusters are given in Table REF .", "Table: Mass and radius measurements for MACS J0242 and MACS J0949.", "All error bars show a $68 \%$ confidence interval.", "We here list $M_{\star }$, the stellar mass, $M_{\mathrm {2D}} (R < 200 \mathrm {kpc})$, the mass obtained in projection on the plane of the cluster within a radius of 200 kpc, and $M_{\Delta }$ and $R_{\Delta }$, defined in eq. 
().", "Masses are given in 10 14 M ⊙ 10^{14} M_{\\odot } and distances in kpc.To measure the stellar masses, we consider the catalogues output from Section .", "We convert the magnitudes measured in the catalogue into their K-band luminosities, and sum these to get an estimate of the stellar luminosity of a cluster.", "With equation (REF ), one can easily convert the magnitudes into a stellar mass estimate of a galaxy cluster.", "In order to have a theoretical reference, we compare our estimates with the stellar mass predicted using the formula derived by [30].", "This relationship, established for poor clusters, with redshifts $0.1 \\le z \\le 1$ , relates the total mass of the cluster to its stellar fraction ($M_{\\star }/M_{500}$ here) using the relation: $f^{\\star }_{500} = 0.05^{+0.001}_{-0.001} \\left( \\frac{M_{500}}{5 \\times 10^{13} M_{\\odot }} \\right)^{-0.37 \\pm 0.04}.$ Let us notice the high ($\\sim 50\\%$ ) logarithmic scatter in the data fitting this relationship.", "For MACS J0242, the field of view considered is quite large (DES: 182), as we consider all galaxy in HST/WFPC2 or DES, and thus our cluster member catalogue is assumed to be relatively complete.", "We measure a stellar mass $M_{\\star } = (6.484 \\pm 0.615) \\times 10^{12} M_{\\odot }$ for MACS J0242.", "Let us notice these error bars are only associated to the error on the measured magnitude.", "We obtain a difference between our measured value and the predicted value of $M_{\\star , {\\rm Giodini}} = (1.190 \\pm 0.168) \\times 10^{13} M_{\\odot }$ .", "We may explain this discrepancy by the variable conditions for selecting a galaxy within the galaxy catalogue.", "Indeed, the field of view being different between WFPC2, ACS and DES, as well as the poorer imaging quality of the latter instrument, we expect our error bars to be far larger than those computed given the error on the measured magnitude.", "For MACS J0949, we require that a galaxy is detected in either HST/ACS or HST/WFC3 to include it in the final catalogue.", "Because the field of view of WFC3 is smaller than that of ACS, a large number of selected cluster member galaxies are weakly constrained, as ACS only contains two bands here.", "This method is adapted to our lensing analysis, the main goal of this paper, as galaxies far from the cluster centre are particularly important to constrain the southern halo.", "However, when considering the stellar content of the cluster, we might be selecting too many galaxies.", "Our analysis yields $M_{\\star } = (1.392 \\pm 0.137) \\times 10^{13} M_{\\odot }$ .", "Similarly to MACS J0242, we compare our measurement with the predicted value following the [30] formula.", "We obtain a stellar mass $M_{\\star , {\\rm Giodini}}=(1.801 \\pm 0.289) \\times 10^{13} M_{\\odot }$ .", "This difference can give us an estimate of the overestimation of our cluster member catalogue.", "We summarise the estimated stellar fractions for both clusters, $f^{\\star }_{500} = M_{\\star }/M_{500}$ , as well as the predicted values with the [30] formula in Table REF .", "Table: Comparison between the star fractions f 500 ☆ =M ☆ /M 500 f^{\\star }_{500} = M_{\\star }/M_{500} measured with this work, and the predictions from the formula." 
], [ "Analysis procedure", "We used the X-COP analysis pipeline [29] to analyse the data and compute the hydrostatic mass profiles of the two systems.", "We extracted X-ray photon images in the [0.7-1.2] keV band, which maximises the signal-to-background ratio.", "To estimate the non X-ray background, we used the unexposed corners of the MOS detectors to estimate the cosmic-ray-induced flux at the time of the observations.", "The difference between the scaled high-energy count rates inside and outside the field of view were then used to estimate the residual soft proton contribution, which was next modelled following the method described in [28].", "To determine the spectroscopic temperature profile of the two systems, we extracted spectra in logarithmically spaced concentric annuli centred on the surface brightness peak.", "The sky background emission was measured in regions located well outside of the cluster's virial radius and described by a three-component model including the cosmic X-ray background, the local hot bubble, and the galactic halo.", "The sky background spectrum was then rescaled appropriately to the source regions and added as an additional model component.", "Finally, the source spectrum was modelled by a single-temperature APEC model [80] absorbed by the Galactic $N_{H}$ , which was fixed to the HI4PI value [34]." ], [ "Hydrostatic mass reconstruction", "We used the publicly available Python package hydromasshttps://github.com/domeckert/hydromass [22] to deproject the X-ray data and recover the mass under the hypothesis of hydrostatic equilibrium.", "The X-ray surface brightness and spectroscopic temperature profiles are fitted jointly using a Navarro-Frenk-White profile [65] to recover the X-ray mass profile.", "The technique employed here is similar to the method described in [24], in which the gas density profile and the parametric mass profile are used to integrate the hydrostatic equilibrium equation and predict the 3D pressure and temperature profiles.", "The 3D temperature profile is then projected along the line of sight using spectroscopic-like weights [63] and adjusted onto the observed spectroscopic temperature profile.", "The model temperature and gas density profiles are convolved with the XMM-Newton PSF to correct for the smearing introduced by the telescope's spatial resolution, in particular in the cluster's central regions." ], [ "MACS J0242", "MACS J0242 exhibits all the features of a relaxed, cool-core cluster.", "Its X-ray morphology is regular and it shows a pronounced surface brightness peak, a central temperature drop, and a metal abundance peak in its core.", "The dynamical state of the cluster is best gauged from the X-ray emission, but the optical emission lines of the BCG is an additional, relatively faithful tracer of the presence of a cool core.", "The NFW mass reconstruction returns a mass $M_{500} = (3.7 \\pm 0.2)\\times 10^{14}\\,M_\\odot $ .", "For an average temperature of 4.5 keV, this is in agreement with the expectations of mass-temperature relations [58].", "The cluster appears to be highly concentrated, with a fitted NFW concentration $c_{200} = 8.2 \\pm 0.5$ ." 
], [ "MACS J0949", "MACS J0949 exhibits a regular X-ray morphology with no obvious large substructure.", "However, its brightness distribution is relatively flat, it shows a high central entropy and central cooling time, and no temperature drop in its core.", "Therefore, MACS J0949 is not a relaxed cool-core cluster, but its regular morphology indicates that it is not strongly disturbed either.", "Such properties are typical of post-merger clusters in the process of relaxation after a merging event.", "The hydrostatic mass profile is well described by an NFW model with $c_{200} = 5.3_{-1.0}^{+1.3}$ and $M_{500} = 7.4_{-1.2}^{+1.4}\\times 10^{14} M_\\odot $ .", "Its hydrostatic gas fraction $f_{gas,500} = 0.155_{-0.014}^{+0.016}$ is consistent with the Universal baryon fraction [2]." ], [ "The particular case of MACS J0949", "On Fig.", "REF , we display the extracted emission of images 1.1 and 2.1 detected in MACS J0949 from the MUSE narrow-band centred on $\\lambda = 715.869$ nm within a yellow box.", "We then infer the emission in the source plane ($z=4.8902$ ), before projecting it back to the image plane with our lens model, to obtain a re-lensed prediction.", "The other multiple images on the MUSE field, 1.2, 1.4, 1.5, 2.2 and 2.4 are correctly predicted.", "Their Lyman-$\\alpha $ detections are also listed in Table REF .", "Images 1.4 and 1.5 emission appear to be connected.", "This is simply due to the extended source emission of system 1 and 2, as a number of faint multiple images of system 2 are predicted between 1.4 and 1.5, in agreement to the MUSE observations on the narrow-band.", "Figure: MACS J0949 reconstruction of the full image plane of system 1 from the unique extended emission images 1.1 and 2.1.", "Their region, highlighted with the yellow box is cut out and deprojected into the source plane, and casted back in the image plan to produce the full system.", "We clearly observe a continuous emission between the North-East image 1.4 and the central one 1.5.We display in green the contours of the Ly-α\\alpha extended emission from the VLT/MUSE narrow-band image centered at 715.869 nm and 1.625 nm wide, showing the four detected multiple images of system 1, and three of system 2 (see Fig.", "for more details).The last images 1.3 and 2.3 of these systems are located outside of the VLT/MUSE field of view.", "The critical lines are displayed in red, for redshift z=4.8902z = 4.8902 of system 1.", "The pink overlay represents the MUSE narrow-band contours.In the cluster core, we observe four bright and massive galaxies, of comparable magnitude to the BCGThe maximum magnitude separation between these five galaxies being 0.29 on the reference band ACS/F814W.. 
"We could speculate that all of these bright galaxies have been the BCGs of former galaxy clusters.", "However, the X-ray observations show a diffuse emission centred on the BCG and thus do not provide any evidence of recent merger events.", "Nonetheless, the presence of numerous BCG-bright galaxies is consistent with MACS J0949 being in a post-merger state.", "Our interpretation of the dynamical state of MACS J0949 and its lensing power could be further constrained with additional spectroscopic or imaging observations.", "A clear identification of the spectroscopic redshift of system 3, and of additional systems, would particularly assist in constraining the dark matter halo ellipticity, core radius and velocity dispersion.", "We also detected with high confidence three images at redshifts $z = 0.84722$ , $0.84727$ and $0.84920$ (see Section ) in the South West, South and North of the BCG respectively.", "However, they are not predicted to be multiply imaged, and we only list them in the Appendix for consistency." ], [ "Dynamical state of MACS J0242 & MACS J0949", "Our strong lensing mass models are constrained respectively by 6 and 2 spectroscopically confirmed multiple image systems for MACS J0242 and MACS J0949.", "In MACS J0949, we also include four multiply imaged systems without a confirmed spectroscopic redshift.", "The mass distribution is modelled using the Lenstool software, and includes one cluster-scale halo for each cluster, and 58 and 170 cluster galaxies for MACS J0242 and MACS J0949 respectively.", "Our best-fit mass models yield an $rms$ on the multiple image positions of $0.39$ and $0.15$ for MACS J0242 and MACS J0949 respectively.", "Using the XMM-Newton X-ray data from [10], processed with the X-COP pipeline [29], we compare the ICM to the reconstructed dark matter density.", "The combination of the lensing mass reconstructions with the X-ray analyses of the ICM and the MUSE spectroscopy shows that MACS J0242 is in a cool-core, relaxed dynamical state, compatible with an NFW profile, while MACS J0949 has a flat distribution between radii of 50 and 100 kpc because it is still undergoing relaxation, being in a post-merger dynamical state.", "We note that degeneracies between the BCG and the dark matter halo could hinder the Lenstool optimisations, and could thus affect our conclusion regarding the morphology of the dark matter distribution in these clusters [55].", "We find an important difference between the 3D lensing mass extrapolation and the X-ray observations for both galaxy clusters, with a larger extrapolated mass from lensing.", "For MACS J0242, we find $M_{1000} = 4.628_{-0.342}^{+0.298} \times 10^{14} M_{\odot }$ and $M_{500} = 5.954_{-0.455}^{+0.400} \times 10^{14} M_{\odot }$, at sizeable distances of 9.8 and 9.6 $\sigma$ respectively from the X-ray values.", "As for MACS J0949, $M_{1000} = 8.848_{-2.215}^{+0.000} \times 10^{14} M_{\odot }$ and $M_{500} = 1.148_{-0.341}^{+0.000} \times 10^{15} M_{\odot }$, at 4.3 and 2.8 $\sigma$ respectively from the XMM-Newton modelled values.", "We can compare this latter value with the one found with Planck SZ data, $M_{500} = (8.24 \pm 0.46) \times 10^{14} M_{\odot }$ [27].", "Assuming an NFW profile, the same analysis yielded $M_{2D}(< 200 \mathrm {kpc}) = 1.59_{-0.00}^{+0.38} \times 10^{14} M_{\odot }$.", "However, we note that the parameter space of the lensing optimisation of MACS J0949 is quite large, and the mass profile of this cluster could vary under the same constraints while the $rms$ remains low.", "The 
Lenstool and Glafic RELICS models provide respectively $M(R < 200 \mathrm {kpc}) = 1.84_{-0.03}^{+0.03} \times 10^{14} M_{\odot }$ and $M(R < 200 \mathrm {kpc}) = 1.85_{-0.07}^{+0.08} \times 10^{14} M_{\odot }$, in good agreement with the value of $2.00_{-0.20}^{+0.05} \times 10^{14} M_{\odot }$ obtained with our model.", "The ellipticity of the clusters obtained with our lensing mass model is not recovered by the X-ray analysis.", "This discrepancy could be explained by the weak X-ray signal or the small number of lensing constraints.", "Having established the total matter density distribution in two galaxy clusters through lens models, we lay the foundations of our companion paper (Allingham et al. in prep.).", "In this forthcoming paper, we describe a new method using analytical models of galaxy cluster potentials to predict the ICM distribution, and in the foreseeable future to put constraints on interacting dark matter." ], [ "Acknowledgements", "JA would like to thank Markus Mosbech for comments and discussions.", "JA is supported by the International Postgraduate Research Scholarship in Astroparticle Physics/Cosmology at the University of Sydney.", "MJ and DJL are supported by the United Kingdom Research and Innovation (UKRI) Future Leaders Fellowship `Using Cosmic Beasts to Uncover the Nature of Dark Matter' (grant number MR/S017216/1).", "DJL is partially supported by ST/T000244/1 and ST/W002612/1.", "The authors acknowledge the Sydney Informatics Hub and the use of the University of Sydney high performance computing cluster, Artemis.", "This work is based on observations taken by the RELICS Treasury Program (GO 14096) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.", "GM acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No MARACHAS - DLV-896778.", "ACE acknowledges support from STFC grant ST/P00541/1.", "The galaxy and spectroscopic detection catalogues and the lens models are available upon reasonable request to the corresponding author." ], [ "Spectroscopic detections of interest", "We present additional good spectroscopic detections in the background of both clusters, MACS J0242 and MACS J0949, in Tables REF and REF respectively.", "Table: Spectroscopic detections of multiple images in MACS J0242.", "Coordinates are in degrees (J2000).", "The reference for right ascension and declination is taken to be the centre of the cluster.", "Table: Spectroscopic detections of multiple images in MACS J0949.", "Coordinates are in degrees (J2000).", "The reference for R.A. 
and declination is taken to be the centre of the cluster.", "We present in Tables REF and REF (respectively for clusters MACS J0242 and MACS J0949) a few cluster members in their final catalogue format: their positions and all geometrical components (semi-major and semi-minor axes $a$ and $b$, rotation angle $\theta$), as well as their magnitudes, come from the photometric analysis, while the redshifts are obtained through spectroscopy.", "Table: A few of the brightest cluster members in the cluster MACS J0242.", "Coordinates are in degrees (J2000).", "We recall that the reference coordinates are $(40.649555, -21.540485)$ deg.", "Table: A few of the brightest cluster members in the cluster MACS J0949.", "Coordinates are in degrees (J2000).", "We recall that the reference coordinates are $(\alpha _c, \delta _c) = (147.4659012, 17.1195939)$.", "Magnitudes are given in the reference band ACS/F814W." ], [ "Additional information on colour-magnitude diagram selections", "We here provide the equation of the main red colour sequence for both galaxy clusters, MACS J0242 and MACS J0949, following the process described in Section REF .", "We also provide all the additional colour-magnitude diagrams we can plot.", "Tables REF and REF provide respectively the equations of the main colour sequences of clusters MACS J0242 and MACS J0949, and the weighted colour standard deviation of the spectroscopically confirmed cluster galaxy sample, $\sigma _C$.", "The height of the selection box is $2\sigma _{C}$ away from the main red sequence for HST/ACS and HST/WFC3, and $3\sigma _C$ for HST/WFPC2 and DES.", "Table: Equations of the main colour sequences and standard deviations on colours for all colour-magnitude diagrams of MACS J0242.", "$m_1$ represents the magnitude on the abscissa.", "Associated plots are Fig. , and .", "Figure: Colour-magnitude diagram for MACS J0242, instrument DES.", "The colour is ($m_{\mathrm {r}} - m_{\mathrm {z}}$), and the magnitude $m_{\mathrm {z}}$.", "Figure: Colour-magnitude diagram for MACS J0242, instrument DES.", "The colour is ($m_{\mathrm {g}} - m_{\mathrm {r}}$), and the magnitude $m_{\mathrm {z}}$.", "Table: Equations of the main colour sequences and standard deviations on colours for all colour-magnitude diagrams of MACS J0949.", "$m_1$ represents the magnitude on the abscissa.", "Associated graphs are Fig. , , , , and .", "Figure: Colour-magnitude diagram for MACS J0949, instrument HST/WFC3.", "The colour is ($m_{\mathrm {F140W}} - m_{\mathrm {F160W}}$), and the magnitude $m_{\mathrm {F160W}}$.", "Figure: Colour-magnitude diagram for MACS J0949, instrument HST/WFC3.", "The colour is ($m_{\mathrm {F125W}} - m_{\mathrm {F160W}}$), and the magnitude $m_{\mathrm {F160W}}$.", "Figure: Colour-magnitude diagram for MACS J0949, instrument HST/WFC3.", "The colour is ($m_{\mathrm {F125W}} - m_{\mathrm {F140W}}$), and the magnitude $m_{\mathrm {F140W}}$.", "Figure: Colour-magnitude diagram for MACS J0949, instrument HST/WFC3.", "The colour is ($m_{\mathrm {F105W}} - m_{\mathrm {F140W}}$), and the magnitude $m_{\mathrm {F140W}}$.", "Figure: Colour-magnitude diagram for MACS J0949, instrument HST/WFC3.", "The colour is ($m_{\mathrm {F105W}} - m_{\mathrm {F125W}}$), and the magnitude $m_{\mathrm {F125W}}$." ] ]
2207.10520
[ [ "Multi-Event-Camera Depth Estimation and Outlier Rejection by Refocused\n Events Fusion" ], [ "Abstract Event cameras are bio-inspired sensors that offer advantages over traditional cameras.", "They operate asynchronously, sampling the scene at microsecond resolution and producing a stream of brightness changes.", "This unconventional output has sparked novel computer vision methods to unlock the camera's potential.", "Here, the problem of event-based stereo 3D reconstruction for SLAM is considered.", "Most event-based stereo methods attempt to exploit the high temporal resolution of the camera and the simultaneity of events across cameras to establish matches and estimate depth.", "By contrast, this work investigates how to estimate depth without explicit data association by fusing Disparity Space Images (DSIs) originated in efficient monocular methods.", "Fusion theory is developed and applied to design multi-camera 3D reconstruction algorithms that produce state-of-the-art results, as confirmed by comparisons with four baseline methods and tests on a variety of available datasets." ], [ "Multimedia Material", "Video: https://youtu.be/Ewhkcsu7S4E" ], [ "Introduction", "Intelligent navigation in our complex 3D world relies on robust and efficient visual perception, which is challenging for autonomous robots.", "However, humans use vision very efficiently to navigate 3D environments, even in novel scenarios.", "Inspired by human vision, neuromorphic spike-based sensing and processing has been recently investigated for robot vision [1], [2] and retinal implants [3].", "Event cameras, such as the Dynamic Vision Sensor [4], [5], [6] (DVS), are neuromorphic sensors that acquire visual information very differently from traditional cameras.", "They sample the scene asynchronously, producing a stream of spikes, called “events”, that encode the time, location and sign of per-pixel brightness changes.", "Event cameras possess outstanding properties compared to traditional cameras: very high dynamic range (HDR), high temporal resolution ($\\approx $ ), temporal redundancy suppression and low power consumption.", "These properties offer potential to tackle challenging scenarios for standard cameras (high speed and/or HDR) [7], [8], [9], [10], [11].", "However, this calls for novel methods to process the unconventional output of event cameras in order to unlock their capabilities [2].", "In this work, we tackle the problem of event-based stereo 3D reconstruction for Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM).", "An efficient SLAM system is critical for the navigation of autonomous intelligent agents (like field robots) in challenging unstructured environments, especially in extreme ones like offshore drilling, nuclear power plants, etc.", "[12].", "An event-based SLAM system has the potential to efficiently overcome difficult scenarios in these applications [13], [14], [15].", "Our work is inspired by EVO [13], which is the state of the art in event-based monocular VO.", "The effectiveness of EVO is largely due to its mapping module, Event-based Multi-View Stereo (EMVS) [10], which enables 3D reconstruction without the need to recover image intensity, without having to explicitly solve for data association between events, and without the need of a GPU (it is fast on a standard CPU –e.g., speed of 1.2ev//core [10]).", "Additionally, EMVS admits an interpretation in terms of event refocusing or event alignment (contrast maximization) [16], which is the state of the art 
framework to tackle other vision problems [17], [18], [19], [20], [21], [22], [23], [24], [25].", "Our goal is to extend EMVS to the multi-camera setting (i.e., two or more event cameras in a multi-view configuration sharing a common clock), and in particular to the stereo setting, in order to benefit from these advantages and connections (Fig.", "REF ).", "In the process, we revisit the event simultaneity assumption used in stereo depth estimation and develop a theory of fusion of refocused events, which could be useful in other problems, such as feature or camera tracking [26].", "Figure: Semi-dense depth maps estimated by event-based monocular  and stereo methods.Stereo is beneficial for more accurate estimationand outlier removal compared to monocular depth estimation (e.g., traffic sign in the center).Depth is pseudo-colored, from red (close) to blue (far).Color frames are only shown for visualization.Data from .In summary, our contributions are: Simple, efficient and extensible solutions to the problem of event-based stereo 3D reconstruction for SLAM using a correspondence-free approach.", "We investigate early event data fusion strategies in two orthogonal directions: between cameras (“spatial stereo”, sec:method:fusionacrosscameras) and along time (“temporal stereo”, sec:method:fusiontime).", "The investigation of several functions to fuse refocused events (using e.g., Generalized means, sec:method:fusion) and its application to the two mentioned directions.", "A comprehensive experimental evaluation on five publicly available datasets and comparing against several baseline methods, producing state-of-the-art results (sec:experim).", "We also show how the method can naturally handle multi event-camera setups with linear complexity.", "This research aims at developing robust multi-camera visual perception systems for the navigation of artificial intelligent systems in challenging environments, like stereo depth perception for SLAM and attention in robots [28], [29]." 
], [ "Related Work", "What: Stereo depth estimation using event cameras has been an interesting problem ever since the first event camera was invented by Mahowald and Mead in the 1990s [30].", "As such, they simultaneously designed a stereo chip [31] to implement Marr and Poggio's cooperative stereo algorithm [32].", "This approach has inspired a lot of literature that focuses on 3D reconstruction over short time intervals (“instantaneous stereo”) [33], [34], [35].", "These methods work well with stationary cameras in uncluttered scenes (where events are caused only by few moving objects), thus enabling 3D reconstruction of sparse, dynamic scenes.", "For a detailed survey on these stereo methods we refer to [36], [37].", "In contrast, stereo event-based 3D reconstruction for VO/SLAM has been addressed recently [38], [15].", "It assumes a static world and known camera motion (e.g., from a tracking method) to assimilate events over longer time intervals, so as to increase parallax and produce more accurate semi-dense depth maps.", "Some other works estimate depth by combining an event camera with other devices, such as light projectors [8], [39], [40] or a motorized focal lens [41], which are different from our hardware setup and application.", "How: Depth estimation with stereo event cameras is predominantly based on exploiting the epipolar constraint and the assumption of temporal coincidence of events across retinas, namely that a moving object produces events of same timestamps on both cameras [42], [39], [43].", "This aims at exploiting the high temporal resolution and redundancy suppression of event cameras to establish event matches across image planes and then triangulate.", "It is also known as event simultaneity or temporal consistency [38], and it is analogous to photometric consistency in traditional cameras.", "This assumption does not strictly hold [44], [45], and so it is relaxed to account for temporal noise (jitter and delay).", "Essentially event simultaneity is exploited to solve the data association problem (establishing event matches), which is a well-known difficult problem due to the little information carried by each event and their dependency with motion direction (changing “appearance” of events [46], [2]).", "The above ideas are used in the mapping module of [15], the state-of-the-art stereo 3D reconstruction method for VO/SLAM.", "In this method temporal consistency is measured across space-time neighborhoods of events by first converting the events into time surfaces (TSs) [47] and then comparing their spatial neighborhoods.", "Stereo point matches are established and provide depth estimates which are fused in a probabilistic way using multiple TSs to produce a more accurate semi-dense inverse depth map.", "In contrast, we investigate a new way of doing stereo, without explicitly using event simultaneity and hence without establishing event matches.", "Therefore, to the best of our knowledge, we completely depart from previous event-based stereo methods.", "Inspired by [10], we circumvent the data association task by leveraging the sparsity of events (event cameras naturally highlight edges, which are sparse, in hardware) and by exploiting the continuous set of camera viewpoints at which events are available.", "This provides a rich collection of back-projected rays through the events to estimate scene structure.", "Our contributions pertain to the processing (e.g., fusion) of such back-projected rays or “refocused events”, which has not been considered before 
(since [10] and newer approaches [48] do not consider data fusion, e.g., across cameras)." ], [ "Event-based Stereo Depth Estimation", "This section reviews how an event camera works (sec:method:eventcam) and the monocular method EMVS (sec:method:emvs) before presenting our stereo depth estimation approach.", "Two main event fusion directions are presented: fusion of camera views (sec:method:fusionacrosscameras) using one of several functions (sec:method:fusion), and fusion of multiple time intervals (sec:method:fusiontime).", "Then, we revisit the event simultaneity assumption (sec:method:shuffling) and analyze the computational complexity of the approach (sec:complexity).", "Figure: Our method takes as input the events from two or more synchronized, rigidly attached event cameras and their poses, and estimates the scene depth.Using Space Sweeping, it builds ray density Disparity Space Images (DSIs) from each camera data and fuses them into one DSI (sec:method:fusion), from which the 3D structure of the scene is extracted in the form of a semi-dense depth map, which may be cast into a point could.Fusion across cameras (sec:method:fusionacrosscameras) is represented with yellow lines,and temporal fusion (sec:method:fusiontime) with red lines.The optional shuffling block (“S”) is presented in sec:method:shuffling." ], [ "How an Event Camera Works", "Event cameras, such as the Dynamic Vision Sensor (DVS) [4], are bio-inspired sensors that capture pixel-wise brightness changes, called events, instead of brightness images.", "An event $e_k \\doteq (\\mathbf {x}_k, t_k, p_{k})$ is triggered when the logarithmic brightness $L$ at a pixel exceeds a contrast sensitivity $\\theta >0$ , $L(\\mathbf {x}_k,t_k) - L(\\mathbf {x}_k, t_k-\\Delta t_k) = p_k \\, \\theta ,$ where $\\mathbf {x}_k\\doteq (x_k, y_k)^{\\top }$ , $t_k$ (in ) and $p_{k} \\in \\lbrace +1,-1\\rbrace $ are the spatio-temporal coordinates and polarity of the brightness change, respectively, and $t_k-\\Delta t_k$ is the time of the previous event at the same pixel $\\mathbf {x}_k$ .", "Hence, each pixel has its own sampling rate, which depends on the visual input.", "Assuming constant illumination, pixels produce events proportionally to the amount of scene motion and texture." ], [ "EMVS: Monocular 3D Reconstruction", "The problem of monocular depth estimation with an event camera consists of estimating the 3D structure of the scene given the events and the camera poses (i.e., position and orientation) as the sensor moves through the scene.", "The method in [10] solves this problem, called EMVS, in two main steps: it builds a Disparity Space Image (DSI) using a space sweeping approach [49] and then detects local maxima of the DSI.", "The key idea is that, as the camera moves, events are triggered at an almost continuous set of viewpoints, which are used to back-project events into space in the form of rays (a DSI).", "The local maxima of the ray density (where many rays intersect, as shown in fig:dsiproj:monitor) are candidate locations for the 3D edges that produce the events.", "Specifically, the DSI is discretized on a projective voxel grid defined at a reference view, and local maxima are detected along viewing rays, thus producing a semi-dense depth map.", "Events are processed in packets of about 0.21 events.", "The key benefits of EMVS are its simplicity, accuracy, efficiency (real-time, with $\\sim $ 1.2Mev/s throughput per CPU core [10]) and that it estimates depth without explicit data association." 
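To make the space-sweep idea concrete, the following is a minimal sketch (not the reference EMVS implementation) of how a ray-density DSI can be accumulated and how depth is read out. It assumes undistorted event pixels, per-depth-plane homographies already computed from the camera pose of the event packet being processed, and simple nearest-voxel voting instead of the bilinear voting used in [10]; all names are illustrative.

```python
import numpy as np

def accumulate_dsi(events_px, homographies, dsi):
    """Back-project one packet of events into a ray-density DSI.

    events_px   : (N, 2) undistorted pixel coordinates of the events.
    homographies: list of 3x3 planar homographies, one per depth plane,
                  mapping event pixels into the reference view at that depth.
    dsi         : (N_Z, H, W) voxel grid of ray counts, updated in place.
    """
    n_z, height, width = dsi.shape
    pts = np.hstack([events_px, np.ones((len(events_px), 1))])  # homogeneous
    for z in range(n_z):
        warped = (homographies[z] @ pts.T).T
        warped = warped[:, :2] / warped[:, 2:3]                 # dehomogenise
        u = np.round(warped[:, 0]).astype(int)
        v = np.round(warped[:, 1]).astype(int)
        ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        np.add.at(dsi[z], (v[ok], u[ok]), 1.0)                  # count rays

def depth_from_dsi(dsi, depth_values):
    """Depth = arg max of the ray density along each viewing ray; the maximum
    itself serves as a confidence value for later thresholding."""
    best_plane = np.argmax(dsi, axis=0)
    confidence = np.max(dsi, axis=0)
    return depth_values[best_plane], confidence
```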
], [ "Fusion Across Cameras", "We consider the problem of depth estimation from two synchronized and calibrated event cameras rigidly attached.", "Hence, the input consists of a stereo event stream and poses, and the desired output is a (semi-dense) depth map or, equivalently, a 3D point cloud with the scene structure (fig:block-diagram).", "Challenges and Proposed Architecture: A naive solution to the problem consists of running two instances of EMVS, one per camera, and fusing the resulting point clouds into a single one, including post-processing to mitigate redundant 3D points.", "This is a late-fusion approach, which greatly ignores the benefits that arise from having two cameras observing the same scene.", "[t!]", "Stereo event fusion across cameras [1] Input: stereo events in interval $[0,T]$ , camera trajectory, camera calibration (intrinsic and extrinsic).", "Define a single reference view (RV) for both DSIs, coinciding with the left camera pose at say $t=T/2$ .", "Create 2 DSIs by back-projecting events from each camera.", "Fusion: compute the pointwise harmonic mean of the DSIs.", "Extract depth and confidence maps from the fused DSI $f$ : $Z^\\ast (x,y) \\doteq \\arg \\max f(\\mathbf {X}(x,y))$ , $c^\\ast (x,y) \\doteq \\max f(\\mathbf {X}(x,y))$ .", "By contrast, we seek to perform fusion earlier in the processing pipeline: at the DSI stage.", "Hence the first technical challenge is to define the DSI.", "EMVS defines a DSI per camera, located at a reference view (RV) along the camera's trajectory.", "However, the fusion of DSIs at two different RVs is prone to resampling errors.", "Thus it is key to define a common DSI location for both cameras.", "The second challenge is to investigate sensible fusion strategies.", "Our approach includes, as a particular case, that of back-projecting the events from both cameras into a single DSI and simply counting rays.", "This is equivalent to the scenario of a single event camera that moves twice through the scene, with different motions, but uses the same DSI to aggregate rays.", "It doubles the ray count in the DSI, but summation discards valuable information for fusion, such as how many rays originate in each camera: given 8 rays at a point, it is preferable to have 4 rays from each camera than an unbalanced situation (a 3D edge seen only by one camera).", "To deal with the above challenges we define two DSIs at a common reference view: having one DSI per camera allows us to preserve the origin of the event data, and having geometrically aligned DSIs avoids resampling errors during fusion.", "Without loss of generality, let the RV be a point along the trajectory of the left camera.", "We investigate how to compare and fuse the ray densities from each event camera.", "fig:block-diagram shows the block diagram of our stereo approach.", "For now, assume there is $N_s=1$ DSI per camera ($N_s>1$ is presented in sec:method:fusiontime).", "First, the aligned DSIs are populated with back-projected events from each camera, then they are fused (combined) into a single one (e.g., using a voxel-wise harmonic mean or other similarity score (sec:method:fusion)), and finally local maxima are extracted to produce a semi-dense depth map.", "The steps are specified in Alg.", "REF .", "Figure: Intuitive Example: Monocular vs.", "Stereo method on planar rock scene and 1D motion along the camera's XX axis.Plots of the evolution of the DSI projections (see text) for different methods (rows) as time increases (columns).The 3D edge patterns in the DSI (in 
yellow) are less localized in EMVS (top row) than in stereo Alg.", "(bottom).Intuitive Example.", "To illustrate key differences between EMVS (monocular) and the stereo Alg.", "REF we use a sequence acquired with two event cameras [50] performing a 1D motion (translation along the $X$ axis, using a linear slider).", "As input to EMVS we use the data from the left camera.", "fig:experim:monovsstereo shows the evolution of the DSIs as time progresses, i.e., as the camera rig moves and more events are acquired and back-projected onto the DSIs (and fused in the stereo case).", "We plot projections of the DSI along its three coordinate axes (like in fig:dsiproj:monitor) at the reference viewpoint RV.", "As fig:experim:monovsstereo shows, the stereo DSI (bottom row) converges faster to the 3D structure of the scene than the monocular DSI (top row).", "This is specially noticeable in the top views: only one set of nearly parallel rays, poor for triangulation, is visible in the monocular case.", "By contrast, the top views of the stereo DSIs show two sets of rays, one from each camera: the rays from the left camera are nearly straight, whereas the rays from the right camera are curved due to inverse depth parametrization of the DSI grid and the fact that the DSI is projective.", "In both scenarios, the rays intersect at multiple voxels and as time progresses the true intersection locations dominate over others, i.e., the 3D structure emerges by event refocusing [10], [17].", "Additionally, in the stereo case refocusing is combined with the proposed fusion functions (sec:method:fusion) to speed-up the emergence and better highlight 3D structure.", "In Alg.", "REF this is achieved by the harmonic mean, which deemphasizes the non-intersecting parts of the rays.", "Output of the Stereo Method.", "fig:output:stereo shows the output of Alg.", "REF on two scenes.", "After DSI fusion, Alg.", "REF extracts a depth map by locating the DSI maxima along each viewing ray (through RV pixel $\\mathbf {x}= (x,y)^\\top $ ).", "Letting $f:\\mathbb {R}^3 \\rightarrow \\mathbb {R}_{\\ge 0}$ be the fused DSI, its maxima provide the confidence or “contrast” map $c(\\mathbf {x}) = f(\\mathbf {x},Z^\\star (\\mathbf {x}))$ and the depth map $Z^\\star (\\mathbf {x})$ .", "Adaptive Gaussian Thresholding (AGT) selects the pixels with highest local value, thus making the depth maps semi-dense.", "A median filter is applied to remove isolated points.", "The front-view projection of the DSI in fig:dsiproj:monitor corresponds to the confidence map, which is called this way because it is used in AGT to “select the most confident pixels in the depth map” [10] (since voxels with many ray intersections are more likely to capture true 3D points than voxels with few ray intersections).", "Figure: Output of Stereo Alg.", "on two scenes.Top: our method produces a semi-dense depth map of the scene(color coded from red (close) to blue (far), overlaid on a grayscale frame of the DAVIS ),and a confidence map (Bottom) with the maximum DSI value along each reference view pixel,in negated scale (bright = small; dark = large)." 
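For illustration, the fusion and read-out steps of Alg. REF can be sketched as follows, assuming the two DSIs have already been accumulated on a common reference-view grid (e.g., as in the previous sketch). OpenCV's adaptive Gaussian thresholding and median filter are used to obtain a semi-dense map; the kernel sizes, offset and eps below are illustrative choices rather than the values used by the authors.

```python
import numpy as np
import cv2

def fuse_and_extract(dsi_left, dsi_right, depths, eps=1e-6,
                     agt_block=5, agt_offset=-2.0, median_ksize=5):
    """Fuse two aligned DSIs (harmonic mean) and extract a semi-dense depth map."""
    # Voxel-wise harmonic mean; eps avoids division by zero in empty voxels.
    fused = 2.0 * dsi_left * dsi_right / (dsi_left + dsi_right + eps)

    # Depth = arg max of the fused ray density along each viewing ray,
    # confidence = the maximum itself.
    best = np.argmax(fused, axis=0)
    conf = np.max(fused, axis=0)
    depth_map = depths[best]

    # Adaptive Gaussian thresholding on the normalised confidence map keeps
    # only locally dominant pixels, yielding a semi-dense map.
    conf_u8 = cv2.normalize(conf, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    mask = cv2.adaptiveThreshold(conf_u8, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, agt_block, agt_offset)

    semi_dense = np.where(mask > 0, depth_map, 0.0).astype(np.float32)
    # Median filter removes isolated points.
    semi_dense = cv2.medianBlur(semi_dense, median_ksize)
    return semi_dense, conf
```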
], [ "DSI Fusion Functions", "DSI fusion is the central part of our method (fig:block-diagram).", "It takes two ray density DSIs on the same region of space as input (one per camera) and produces a merged DSI, which is then used to extract depth information (candidate locations of 3D edges).", "So, what are sensible ways to compare two DSIs?", "Ray density DSIs have very different statistics from natural images, hence standard similarity metrics for image patches may not be the most appropriate ones [51].", "Formally, let $\\mathcal {E}_l=\\lbrace e^l_{k}\\rbrace _{k=1}^{N_e^l}$ and $\\mathcal {E}_r=\\lbrace e^r_{k}\\rbrace _{k=1}^{N_e^l}$ be stereo events over some time interval $[0,T]$ , and ${f}_l,{f}_r:V\\subset \\mathbb {R}^3 \\rightarrow \\mathbb {R}_{\\ge 0}$ be the ray densities (DSIs) defined over a volume $V$ : ${f}_l(\\mathbf {X}) = \\sum _{k=1}^{N_e^l}\\delta \\bigl (\\mathbf {X}-\\mathbf {X}^{\\prime }_k(e^l_{k})\\bigr ),$ where $\\mathbf {X}^{\\prime }_k(e^l_{k}) = (\\mathbf {x}^{l\\prime \\top }_k,Z)^\\top $ is a 3D point on the back-projected ray through event $e^l_{k}$ , at depth $Z$ with respect to the reference view (RV).", "Events are transferred to RV using the continuous motion of the cameras and candidate depth values $Z\\in [Z_{\\min },Z_{\\max }]$ : $\\mathbf {x}^{l\\prime }_k = \\mathbf {W}\\bigl (e^l_{k}, \\mathtt {P}^l(t_k),\\mathtt {P}_{v}, Z \\bigr ),$ where $\\mathtt {P}^l(t)$ is the pose of the left event camera at time $t$ and $\\mathtt {P}_v$ is the pose of RV.", "The warp $\\mathbf {W}$ corresponds to the planar homography induced by a plane parallel to the image plane of RV and at the given depth $Z$ .", "In a coordinate system adapted to RV (i.e., $\\mathtt {P}_v=(\\mathtt {I}, \\mathbf {0})$ ), the planar homography is given by the $3\\times 3$ homogeneous matrix $\\mathtt {H}_\\mathbf {W}\\sim \\bigl (\\mathtt {R}+ \\frac{1}{Z}\\mathbf {t}\\mathbf {e}_3^\\top \\bigr )^{-1},$ where $\\mathtt {P}^l(t_k)=(\\mathtt {R},\\mathbf {t})$ and $\\mathbf {e}_3=(0,0,1)^\\top $ .", "A similar formula applies to compute ${f}_r$ from $\\mathcal {E}_r$ and the corresponding camera poses.", "In practice, DSIs are discretized over a projective voxel grid with $N_Z$ depth planes in $[Z_{\\min },Z_{\\max }]$ , and the Delta $\\delta $ in (REF ) is approximated by bilinear voting [52], [10].", "Hence, each voxel counts the number of event rays that pass through it.", "Next, letting $u=f_l(\\mathbf {X})$ , $v=f_r(\\mathbf {X})$ be the values of the DSIs at a 3D point $\\mathbf {X}$ , we seek to define a fused value $g(\\mathbf {X})$ .", "For simplicity, we consider metrics operating in a point-wise (i.e., voxel-wise) manner, i.e., with a slight abuse of notation: $g(\\mathbf {X}) \\equiv g\\bigl (f_l(\\mathbf {X}),f_r(\\mathbf {X}) \\bigr ) = g(u,v).$ The metrics considered are the following: $A(u,v) & \\doteq (u+v)/2 & \\text{Arithmetic mean}\\\\G(u,v) & \\doteq \\sqrt{uv} & \\text{Geometric mean}\\\\H(u,v) & \\doteq 2/(u^{-1}+v^{-1}) & \\text{Harmonic mean}\\\\\\text{RMS}(u,v) & \\textstyle \\doteq \\sqrt{\\frac{1}{2}(u^2+v^2)} & \\text{Quadratic mean}\\\\\\min (u,v) & & \\text{Minimum}\\\\\\max (u,v) & & \\text{Maximum}$ They are special cases of the Generalized mean (power mean or Hölder mean) and satisfy an order (see tab:fusionmetrics): $\\min \\le H \\le G \\le A \\le \\operatorname{RMS} \\le \\max ,$ where the equal sign holds if and only if $u=v$ .", "Eq.", "(REF ) also establishes a qualitative order of the depth maps obtained after DSI fusion with the 
corresponding function.", "The arithmetic mean $A$ (i.e., averaging ray densities) corresponds to the above-mentioned particular case of counting the back-projected stereo events on a single DSI.", "Hence, functions performing worse than this case are not pursued.", "Figure: Fusion functions considered, for u=[0,4]u=[0,4], v=1v=1.What are the requirements for a good fusion function (REF )?", "Intuitively, given two ray densities defined on the same volume, a fusion function should emphasize the regions of high ray density on both DSIs and deemphasize the rest.", "It is not sufficient for one of the two densities to be large at a point $\\mathbf {X}$ to signal the presence of a 3D edge; both densities have to be similar and large at $\\mathbf {X}$ .", "The arithmetic mean $A$ and its dominant functions (e.g., RMS and $\\max $ in (REF )) do not satisfy this “AND” logic, whereas the geometric mean, harmonic mean and $\\min $ functions do satisfy it (e.g., a large value $G(u,v)$ can only be achieved if both $u$ and $v$ are large).", "Mathematically, this requirement is well described by the concavity properties of the function (plot in tab:fusionmetrics).", "The arithmetic mean $A$ and functions below it are concave (assuming non-negative inputs).", "Further, $G$ , $H$ and $\\min $ are strictly concave.", "As the experiments will show, fusion using $G$ still lets notorious outliers pass.", "Functions $H$ and $\\min $ deemphasize considerably more than $G$ .", "The $\\min $ function saturates strictly, treating values $u>v$ as $u=v$ , hence clipping and discarding potentially beneficial information about ray density values.", "The harmonic mean $H$ shows strong concavity and varies smoothly with both input arguments, without discarding information.", "$H$ is dominated by the minimum of its arguments, $\\min (u,v)\\le H(u,v)\\le 2\\min (u,v)$ (in terms of the plot in tab:fusionmetrics ($v=1$ ), the green curve is bounded: $H(u,1)\\le 2$ ).", "The goal of the present work is to introduce and study fusion functions rather than to select a single “best one”.", "To narrow the discussion we often use a subset of the fusion functions.", "Remark.", "Interpretation in terms of Contrast Maximization: The proposed stereo fusion method is related to contrast/focus maximization [16], [17].", "The depth slices of the DSIs count refocused events (warped by back-projection), i.e., they constitute so-called images of warped events (IWEs) [16].", "The fused DSI can be interpreted as a similarity score between refocused events.", "Since fusion functions such as $H$ try to emphasize DSI regions with large and similar values, stereo Alg.", "REF tries to maximize the similarity score between refocused events (in-focus effect) on both cameras, jointly.", "The confidence map registers the maximum focus similarity score at each viewing ray of the fused DSI.", "Remark.", "More fusion functions: Additional means exist beyond those in tab:fusionmetrics, such as the contraharmonic mean, logarithmic mean, quasi-arithmetic mean, arithmetic-geometric mean, Heronian mean and weighted generalized means.", "However they are not covered for the sake of brevity and to avoid clutter.", "In some cases, the order relation (REF ) can be extended to justify their limited practical interest.", "Seeking more fusion functions, one could combine functions in tab:fusionmetrics with non-linear transformations of the input DSIs, in a homomorphic filtering fashion.", "For example, the $A$ -mean of the log-DSIs is related to the $G$ -mean of 
the DSIs, which has a stronger concavity than the $A$ -mean of the DSIs.", "The same idea can be applied to other functions to increase concavity: the $G$ -mean of the log-DSIs, $G(\\log (1+u),\\log (1+v)))$ , has stronger concavity than the $G$ -mean of the original DSIs.", "The logarithm plays down large DSI values, thus deemphasizing differences between corresponding DSIs before fusion.", "For simplicity, we restrict the study to the functions in tab:fusionmetrics.", "Remark.", "Loose connection with prior fusion work: A method for fusing 3D representations called “temporally synchronized event disparity volumes” was proposed in [53], where two binary data volumes $I_L, I_R$ were fused using an intersection-over-union (IoU) cost.", "It resembles the $H$ mean: $\\text{IoU}=\\frac{\\sum _{\\mathbf {x}\\in W} I_L(\\mathbf {x},d) \\cap I_R(\\mathbf {x},d)}{\\sum _{\\mathbf {x}\\in W} I_L(\\mathbf {x},d) \\cup I_R(\\mathbf {x},d)}\\;\\text{ vs. }\\; H=2\\frac{uv}{u+v},$ where the product in the numerator (“intersection”) acts as an “AND” condition and the sum in the denominator (“union”) acts as a normalization factor.", "However, note that the IoU in [53] is computed by aggregating binary data $I_L, I_R$ over spatial windows $W$ (of $32\\times 32$ pixels), whereas $H$ (tab:fusionmetrics) is computed voxel-wise, without spatial aggregation (i.e., it has higher spatial resolution), on continuous ray densities." ], [ "Temporal Fusion", "The functions presented in sec:method:fusion can be used to fuse any pair of aligned DSIs.", "Moreover, the functions can be extended to handle more than two inputs: they allow us to fuse an arbitrary number of registered DSIs, with a complexity that is linear in the number of DSIs.", "The DSIs may be populated by events from different cameras or, as we also investigate, from different time intervals.", "The main idea is to split an interval into multiple sub-intervals, build the DSI for each of them and fuse all DSIs into a single one (fig:block-diagram).", "This strategy can be applied regardless of the number of cameras in the system, hence it represents an independent axis of variation.", "Moreover, the same technique enables camera- and time- fusion, which we collectively call Alg.", "REF .", "The key lines of Alg.", "REF that change with respect to Alg.", "REF are lines 3 and 4.", "Given $N$ fusion functions ($N=6$ in tab:fusionmetrics) there are $2N^2$ possible fusion schemes considering the choice of temporal fusion function, across-camera fusion function and the order of application (Alg.", "REF ).", "For brevity we reduce the analysis to the comparison of $N=2$ fusion functions: $A$ and $H$ , which yield 8 possible fusion schemes.", "Let $A_t$ denote the fusion operation along the time ($t$ ) axis using the arithmetic mean ($A$ ).", "Likewise, $H_c$ is the fusion operation along the camera ($c$ ) axis using the harmonic mean ($H$ ).", "Then, $A_t \\circ H_c$ first applies $H_c$ (producing as many DSIs as sub-intervals) and then $A_t$ .", "Out the of 8 possibilities, there are only 6 distinct ones due to commutativity: $A_c \\circ A_t = A_t \\circ A_c \\;\\text{ and }\\; H_c \\circ H_t = H_t \\circ H_c.$ The four remaining fusion combinations are: $A_t\\circ H_c,\\;\\; A_c\\circ H_t,\\;\\; H_t\\circ A_c\\;\\; \\text{ and }\\; H_c\\circ A_t.$ Clearly, $A_c\\circ A_t$ is equivalent to the approach of summing all stereo events into a single DSI, and $H_t \\circ H_c$ is very restrictive because only edges seen by all cameras in all subintervals will 
survive.", "Alg.", "REF is also a particular case of Alg.", "REF ($H_c\\circ A_t$ ).", "[t] Stereo event fusion across cameras and time [1] Input: stereo events in interval $[0,T]$ , camera trajectory, camera calibration (intrinsic and extrinsic).", "Define a single reference view (RV) for all DSIs.", "Divide the interval $[0,T]$ into $N_s$ sub-intervals (of equal size or equal number of events).", "Create $2N_s$ DSIs by back-projecting events from each subinterval and camera.", "$S_2 \\circ S_1$ : Two fusion axes (cameras and time).", "If $S_2\\equiv A_t$ and $S_1 \\equiv H_c$ , compute first the $S_1$ fusion ($H$ -mean of two corresponding DSIs, on the same sub-interval); then compute the $S_2$ fusion ($A$ -mean of all sub-interval DSIs).", "Extract depth and confidence maps from the fused DSI." ], [ "Is Event Simultaneity Needed in Stereo?", "Data association is a fundamental problem in event-based vision [2].", "In stereo, event simultaneity is a cornerstone assumption to resolve data association (i.e., find corresponding points) and subsequently infer depth.", "A thought-provoking discovery made while developing our method is that across-camera fusion does not need to be done on corresponding intervals (step 4 in Alg.", "REF ).", "We tested our method with the shuffling block in fig:block-diagram enabled and it still produced good results (see sec:experim:shuffled).", "Hence, the proposed stereo method foregoes the event simultaneity assumption.", "The explanation is that given the camera poses, events are transformed into a representation (i.e., the DSI) where event simultaneity is not as critical as in the instantaneous stereo problem (sec:relatedwork).", "The camera poses serve as a proxy allowing us to reproject stereo events to a common DSI and fuse them, even if the DSIs are well separated in time.", "The DSI representation is sufficient to produce 3D reconstructions.", "Stereo is not solved by matching events, but by comparing possibly non-simultaneous DSIs (each DSI spans several thousands of events)." 
], [ "Complexity Analysis", "Let us analyze the complexity of the proposed stereo methods in comparison with the monocular case.", "The main steps of the methods are: DSI creation (event back-projection), DSI fusion, maxima detection along viewing rays of the DSI, and thresholding (AGT).", "If $N_e$ is the number of events, $N_p$ is the number of pixels in the reference view, $N_Z$ is the number of depth planes in the DSI, and $N_k$ is the number of pixels in the AGT kernel (e.g., $5\\times 5$ ), then the complexity of [10] is $O(\\underbrace{N_e N_Z}_{\\text{DSI creation}} + \\underbrace{N_Z N_p}_{\\text{arg max}} + \\underbrace{N_p N_k}_{\\text{AGT}}).$ In the case of Alg.", "REF with $N_c$ cameras, assuming that each camera produces $N_e$ events, there are $N_c$ DSIs to build and fuse.", "Hence, the complexity is: $O(\\underbrace{N_c N_e N_Z}_{\\text{DSI creation}} + \\underbrace{N_c N_Z N_p}_{\\text{DSI fusion}} + \\underbrace{N_Z N_p}_{\\text{arg max}} + \\underbrace{N_p N_k}_{\\text{AGT}}).$ In the case of Alg.", "REF with $N_s$ subintervals, there are $N_s$ DSIs per camera, but each one with $N_e/N_s$ events, and so the complexity of DSI creation does not change.", "Only the fusion step becomes more expensive: $O(\\underbrace{N_c N_e N_Z}_{\\text{DSI creation}} + \\underbrace{N_s N_c N_Z N_p}_{\\text{DSI fusion}} + \\underbrace{N_Z N_p}_{\\text{arg max}} + \\underbrace{N_p N_k}_{\\text{AGT}}).$" ], [ "Experiments", "To assess the performance of our method we test on a wide variety of real-world and synthetic sequences, which are introduced in sec:experim:datasets.", "sec:experim:fusionfunctions compares functions for fusion across cameras.", "sec:experim:methods compares our method with three state-of-the-art methods on MVSEC and UZH data.", "sec:experim:timefusion evaluates temporal fusion and sub-interval shuffling.", "Then, we evaluate on higher resolution data: driving dataset DSEC (sec:experim:dsec), 1Mpixel VIO dataset TUM-VIE (sec:experim:tumvie) and analyze the sensitivity with respect to the camera's spatial resolution (sec:experim:multires).", "We also present trinocular examples (sec:experim:morethantwocams), and analyze runtime (sec:experim:runtime) and sensitivity with respect to the sparsity of the input events (sec:experim:varyingcontrastthreshold).", "Finally, sec:experim:summary summarizes the findings and sec:limitations discusses limitations of the method." 
], [ "Datasets and Evaluation Metrics", "Datasets.", "We evaluate our stereo methods on sequences from five publicly available datasets [38], [54], [55], [27], [56] and a simulator.", "Sequences from [38], [55] were acquired with a hand-held stereo or trinocular event camera in indoor environments.", "Sequences in the MVSEC dataset [54] were acquired with a stereo event camera mounted on a drone while flying indoors.", "The sequences in the DSEC dataset [27] were recorded with event cameras on a car that drove through Zurich's surroundings.", "The TUM-VIE dataset was recorded with the sensor rig mounted on a helmet, and its sequences contain indoor and outdoor scenes.", "The simulator [57], [58] provides synthetic sequences using an ideal event camera model and scenes built using CAD models.", "Table: Parameters of stereo or trinocular event-camera rigs used in the experiments.Ground Truth.", "Some datasets contain ground truth poses from a motion-capture system, which we use as input to all tested methods.", "If camera poses are not available (e.g., TUM-VIE), we compute them using data from the sensor rig (e.g., a visual-inertial odometry algorithm).", "Some datasets, such as MVSEC and DSEC, contain ground truth depth for quantitative assessment of the 3D reconstruction methods.", "Depth is given by a LiDAR operating at 1020.", "The event camera pixels corresponding to points outside the LiDAR's field of view (FOV) or points close to the sensor rig may not have a LiDAR depth value.", "Rigs and Calibration.", "The main geometric parameters of the event cameras used in the above datasets are summarized in tab:stereo-rig-params.", "The stereo rigs in [38], [54] consist of two Dynamic and Active Pixel Vision Sensors (DAVIS) [50].", "The DAVIS comprises a frame-based and an event-based sensor on the same pixel array, thus calibration (intrinsic and extrinsic) is achieved using the intensity frames, and then it is applied to the events.", "The datasets whose cameras output only events (EVIMO2, DSEC and TUM-VIE), are calibrated by converting events to frames and calibrating the latter (e.g., using [59]).", "All methods work on undistorted coordinates.", "Metrics.", "The performance of the proposed method is quantitatively characterized using several standard metrics on the datasets with ground truth depth (i.e., MVSEC and DSEC).", "We provide mean and median errors between the estimated depth and the ground truth one (median errors are more robust to outliers than mean errors).", "We also report the number of reconstructed points, the number of outliers (bad-pix [60]), the scale invariant depth error (SILog Err), the sum of absolute value of relative differences in depth (AErrR), and $\\delta $ -accuracy values on the percentage of points whose depth ratio with respect to ground truth is within some threshold (see [61]).", "We also provide precision, recall and F1-score curves [62].", "Precision is the percentage of estimations that are within a certain error from the ground truth.", "Recall (e.g., completeness or reconstruction density) is the percentage of ground truth points that are within a certain error from the estimations.", "The F1 score is the harmonic mean of precision and recall, which is dominated by the smallest of them.", "Since the depth maps obtained are semi-dense (while the ground truth is often more dense), recall often dominates." ] ]
2207.10494
[ [ "NusaCrowd: A Call for Open and Reproducible NLP Research in Indonesian\n Languages" ], [ "Abstract At the center of the underlying issues that halt Indonesian natural language processing (NLP) research advancement, we find data scarcity.", "Resources in Indonesian languages, especially the local ones, are extremely scarce and underrepresented.", "Many Indonesian researchers do not publish their dataset.", "Furthermore, the few public datasets that we have are scattered across different platforms, thus makes performing reproducible and data-centric research in Indonesian NLP even more arduous.", "Rising to this challenge, we initiate the first Indonesian NLP crowdsourcing effort, NusaCrowd.", "NusaCrowd strives to provide the largest datasheets aggregation with standardized data loading for NLP tasks in all Indonesian languages.", "By enabling open and centralized access to Indonesian NLP resources, we hope NusaCrowd can tackle the data scarcity problem hindering NLP progress in Indonesia and bring NLP practitioners to move towards collaboration." ], [ "What is Figure: NO_CAPTION", "Natural language processing (NLP) resources in Indonesian languages, especially the local language ones, are extremely scarce and underrepresented in the research community.", "This introduces bottlenecks to Indonesian NLP research, restraining it from opportunities and hindering its progress.", "In response to this issue, several Indonesian and NLP communities have sourced various types of datasets to also be available in Indonesian languages [22], [13], [6], [14], [15], [2], [21], [1], [12], [19].", "However, a significant mass of the local resources is scattered across different platforms [9], [5], [7], and a lack of access to public datasets still persists [3].", "Figure: Open access to the datasheets collected is provided through NusaCatalogue, and the dataloader scripts to retrieve the resources are implemented in NusaCrowd Data Hub.To address this vital problem, inspired by other open collaboration projects [18], [17], [8], [4], [16], [20], [10], [11], we take a step and initiate NusaCrowd, a joint movement to collect and centralize NLP datasets in Indonesian and various Indonesia's local languages, and engage the linguistics community in collaboration.", "Figure: The outline of public Indonesian NLP datasheet submission to NusaCatalogue.Powered by the collective effort of our contributors, NusaCrowd aims to increase the accessibility of these datasets and promote reproducible research on Indonesian languages through three fundamental facets: 1) Curated public corpora datasheet sourcinghttps://indonlp.github.io/nusa-catalogue/, 2) Open-access centralized data hubhttps://github.com/IndoNLP/nusa-crowd, and 3) Promoting private-to-public data access.", "We maintain the quality of the contributions, both the consolidation efforts and the programmatic means, by enforcing a quality control with a mix of automatic and manual evaluation schemes.", "NusaCrowd is currently open for contribution, the movement is held from 25 June 2022 to 18 November 2022.", "Let's bring Indonesian NLP research one step forward together." 
], [ "Contributing in Figure: NO_CAPTION", "Together, contributors in NusaCrowd drive Indonesian NLP forward by developing a multi-faceted solution to improve data accessibility and research reproducibility.", "To assist our widespread open collaboration and collective progression, we formulate three main ways of contributing in NusaCrowd in the following sections, each corresponds to a fundamental aspect in NusaCrowd's main objective." ], [ "Submit public Figure: NO_CAPTION", "We encourage contributors to register the datasheets of public datasets on NusaCataloguehttps://indonlp.github.io/nusa-catalogue/ by submitting them through an online form at https://forms.gle/31dMGZik25DPFYFd6.", "NusaCatalogue is a public datasheet catalogue website, inspired by [4], in which we list all datasets collected in NusaCrowd.", "We build NusaCatalogue to improve the discoverability of Indonesian NLP datasets and to assist users in searching and locating Indonesian NLP datasets based on their metadata.", "A datasheet is a dataset metadata which contains various information about the dataset, including but not limited to: dataset name, original resource URL, relevant publication, supported tasks, and dataset licence.", "The datasheet will be reviewed and scored within a week or two.", "The contribution point calculation for the public Indonesian NLP corpora datasheet is based on three criteria: 1) whether the relevant dataset is previously public or not, 2) dataset quality, and 3) language rarity.", "Details on the scoring mechanism will be explained in §REF .", "Once the datasheet passes the review, we will notify the responsible contributor and list the approved dataset's datasheet on NusaCatalogue and also reported on NusaCrowd data hub task listhttps://github.com/orgs/IndoNLP/projects/2 so its dataloader could be implemented.", "The complete flow of how to submit the public Indonesian NLP corpora datasheet is shown in Figure REF .", "Figure: The outline of dataloader implementation for NusaCrowd data hub." 
], [ "Implement dataloader(s) for Figure: NO_CAPTION", "A large-scale centralized data hub has to be equipped with the capability of a simple and standardized programmatic data access that spans across diverse resources, regardless of their separate hosting locations, distinct data structures or formats, and different configurations.", "For this purpose, building NusaCrowd data hub requires a few key elements: datasheet documentation (§REF ), task schema standardization to support common NLP tasks, and dataloader implementation.", "While the datasheets become the backbone of NusaCrowd and standardized task schemas compose the skeleton, the dataloader implementation is the heart of NusaCrowd data hub.", "Each dataset requires a specific dataloader script tailored to its source task type, structure, and configuration to enable easy loading and enforce interoperability.", "To centralize all of the Indonesian NLP resources, NusaCrowd needs a large number of proper dataloaders to be implemented.", "Therefore, we invite all collaborators to contribute through creating these dataloaders via NusaCrowd's GitHubhttps://github.com/IndoNLP/nusa-crowd.", "Firstly, a contributor can view the task list in NusaCrowd Github project, then choose the dataset they want to implement by assigning themself to the related issue.", "Afterwards, the contributor can start setting up the environment needed for development.", "To help with the dataloader implementation, we provide a template script specifying all the parts the contributor needs to complete, and task schemas for common NLP tasks, e.g., knowledge base, question answering, text classification, text-to-text, text pairs, question answering, and more.", "We also equip NusaCrowd's repository with several working dataloader scripts that the contributor can check for examples.", "To fill in the details of the dataset in the dataloader, such as its source URL or its publication, the contributor can refer to the corresponding datasheet recorded in NusaCatalogue.", "The contributor can ensure that their dataloader is implemented correctly through a manual inspection, a direct attempt of execution, and a unit test provided in the repository.", "We also encourage the contributor to tidy up their code accordingly with our formatter before they make a pull request to submit their changes.", "The implemented dataloader then will go through a code review process by NusaCrowd maintainers.", "If an adjustment in the code is required, the maintainer will request some changes and provide their feedback on a comment on the respective pull request, so the contributor will be able to improve the dataloader accordingly.", "Once two maintainers give their approvals, the dataloader will be merged to NusaCrowd repository.", "The flow overview of the dataloader implementation is depicted in Figure REF .", "A comprehensive guide for contributing through dataloader implementation can be accessed herehttps://github.com/IndoNLP/nusa-crowd/blob/master/DATALOADER.md." 
], [ "Provide information on private Figure: NO_CAPTION", "Many studies in Indonesian NLP use private datasets, which in turn hampers the research reproducibility.", "Therefore, the last method to contribute is to list research papers of Indonesian NLP in which the data is not shared publicly.", "We then contact the author to participate in open data access and ask for their approval to include the dataset in NusaCrowd.", "Contribution points for the authors that release their data to public will follow the scoring defined in §REF .", "The paper lister will also be awarded with a contribution point.", "As far as we know, there is no data or analysis yet on why local researchers prefer not to share their dataset publicly.", "Some reasons that we are aware of include: 1) not accustomed to open data and open research, 2) restricted by the university or funding policy, or 3) keep the data private as property.", "By listing research with private datasets, we can additionally ask the authors about their consideration to improve our understanding on this matter.", "Steps to provide information on private Indonesian NLP datasets are illustrated in Figure REF ." ], [ "Contribution Point", "To support fairness and transparency for all of our contributors, we establish a scoring system of which co-authorship eligibility will be decided from.", "To be eligible as a co-author in the upcoming NusaCrowd publication, a contributor needs to earn at least 10 contribution points.", "The score is aggregated from all the contributions made by the contributors.", "In order to earn contribution points, we introduce three different methods to contribute (for the method details, see §): 1) submitting public Indonesian NLP corpora datasheet, 2) implementing dataloader(s) for NusaCrowd data hub, and 3) provide information on non-public Indonesian NLP datasets.", "The point for each type of contribution is described in the following paragraphs.", "Figure: A glance at the contribution matrix used for recapitulation.", "The contributor list is clipped for simplicity." 
], [ "Public Figure: NO_CAPTION", "A contributor can help to register public NLP corpora in NusaCrowd.", "For any datasheet listed, the contributor is eligible for +2 contribution point as a referrer.", "To support the development of local language datasets, we provide additional contribution points according to the rarity of the dataset language.", "Specifically, a contributor of any Sundanese (sun), Javanese (jav), or Minangkabau (min) dataset, will receive +2 contribution points, while a contributor of any other local language dataset will be granted +3 contribution points.", "In addition, to encourage more diverse NLP corpora, we provide additional +2 contribution points for tasks that are considered rare.", "Based on our observation, we find that the common NLP tasks in Indonesian languages include: machine translation (MT), language modeling (LM), sentiment analysis (SA), and named entity recognition (NER).", "All other NLP tasks are considered rare and are eligible for the +2 contribution points.", "Lastly, we also notice that publicly available Indonesian NLP corpora involving another modality (e.g., speech or image) are very scarce, for instance: speech-to-text or automatic speech recognition (ASR), text-to-speech (TTS) or speech synthesis, image-to-text (e.g., image captioning), text-to-image (e.g., controllable image generation), etc.", "To encourage more coverage over these data, we will give additional +2 contribution points for the relevant datasheets submitted.", "We understand that dataset quality can vary a lot.", "To support fairness in scoring datasets with different qualities, for any dataset that does not pass a certain minimum standard, 50% penalty will be applied.", "This penalty affects any dataset that is collected with: 1) crawling without manual validation process, 2) machine or heuristic-rule labelled dataset without manual validation, and 3) machine-translated dataset without manual validation." ], [ "Implementing Figure: NO_CAPTION", "A contributor can help to implement a dataloader for any dataset listed on the task list in NusaCrowd GitHub project (see §REF ).", "As a rule of thumb, one dataloader implementation is generally worth 3 contributions points.", "However, there are some exceptions where a dataloader can be worth more.", "The contribution points will be counted once the respective pull request is merged to master." ], [ "Finding and opening private Figure: NO_CAPTION", "Contributors can help to list research papers introducing non-public Indonesian NLP dataset.", "For every private dataset listed, the corresponding contributor will be eligible for +1 contribution point.", "While for the original author of the dataset, if the author agrees to make the dataset publicly available, the author will be eligible for +3 contribution points.", "Note that we might request the author to clean up their data until it is proper for public release (i.e.", "formatting, consistency, or additional filtering).", "In addition to the contribution points obtained from publicly releasing the dataset, the author is also eligible for additional points from the datasheet listing, as mentioned in §REF , when the datasheet is submitted to NusaCatalogue." ], [ "Other ways to contribute in Figure: NO_CAPTION", "Other than the previously mentioned contributions methods, we also open for other forms of contribution, subject to NusaCrowd's open discussion.", "To get more details on the open discussion, please join our Slack and Whatsapp group (see §)." 
], [ "Contribution point recapitulation", "The total contribution point for all contributors will be recapped every week by a maintainer, and a contribution matrix will be published and updated on a weekly basis.", "The contribution matrix and more detailed information to the contribution point can be accessed on the following linkhttps://docs.google.com/spreadsheets/d/e/2PACX-1vS3Kbi9s3o_V-lyFRHeONOI7jFnMlUswqKj-D6cpgiSYOSxbijC4DIrjAstqxj-H-EI6I2lFhhyKe5s/pubhtml.", "The final score will be recapped in November and the finalized contribution matrix (see Figure REF for an example) will be published along with the research paper.", "Figure: Timeline of the NusaCrowd movement.", "The datasheet collection and dataloader implementation start from 25 June 2022 to 2 October 2022.", "Then NusaCrowd will continue with experiments, paper writing, and point recapitulation.", "The paper will be submitted to a top-level computational linguistics conference, ACL 2023.Figure: Join NusaCrowd's Slack (https://join.slack.com/t/nusacrowd/shared_invite/zt-1b61t06zn-rf2rWw8WFZCpjVp3iLXf0g), Whatsapp group (https://chat.whatsapp.com/Jn4nM6l3kSn3p4kJVESTwv), and Github (https://github.com/IndoNLP/nusa-crowd)." ], [ "Dataset Licensing and Ownership", "NusaCrowd does not make a clone or copy the submitted dataset.", "The owner and copyright holder will remain to the original data owner.", "All data access policy will follow the original data licence without any modification from NusaCrowd.", "NusaCrowd simply downloads and reads the file from the original publicly available data source location when creating the dataloader, and, in addition, the NusaCatalogue datasheet also directly points to the original data site and publication." ], [ "Timeline", "The current NusaCrowd movement is started from 25 June 2022 and will be closed on 18 November 2022.", "The registration of datasheets and dataloaders will be completed on 2 October 2022.", "From October onwards, we will focus more on preparing extension and set of experiments to show the benefit of having NusaCrowd platform.", "In addition, the contribution point for each contributors (see Figure REF for an example) and the research paper will also be finalized by early November, followed by the final submission of the paper to the Association of Computational Linguistics (ACL) 2023 conference.", "The detailed phase for the paper development is shown in Figure REF ." ], [ "Summary", "NLP resources in Indonesian languages, especially the local ones, are extremely low-resource and underrepresented in the research community.", "There are multitudes factors causing this limitation.", "Here we solve this problem by initiating the largest Indonesian NLP crowd sourcing efforts, NusaCrowd.", "In the spirit of fairness, openness, and transparency; NusaCrowd comes with various ways to contribute towards openness and standardization in Indonesian NLP, while at the same time, introducing a scoring mechanism that provides equal chance for all contributors to show the best out of their contributions.", "We hope that, NusaCrowd can bring a new perspective to all Indonesian NLP practitioners to focus more on openness and collaboration through code and data sharing, complete documentation, and community efforts." 
], [ "Call for participation", "We invite all Indonesian NLP enthusiasts to participate in NusaCrowd.", "For any inquiry and further information, contributors can join our community channel on Slack or Whatsapp Group (see Figure REF ).", "Let's work together to advance the progress of Indonesian language NLP!", "Figure: NO_CAPTIONFigure: NO_CAPTIONFigure: NO_CAPTION" ], [ "Acknowledgements", "Thank you for all initiators, without whom NusaCrowd initiative would never be possible, and salute to all contributors for the amazing efforts in NusaCrowd." ] ]
2207.10524
[ [ "Online Localisation and Colored Mesh Reconstruction Architecture for 3D\n Visual Feedback in Robotic Exploration Missions" ], [ "Abstract This paper introduces an Online Localisation and Colored Mesh Reconstruction (OLCMR) ROS perception architecture for ground exploration robots aiming to perform robust Simultaneous Localisation And Mapping (SLAM) in challenging unknown environments and provide an associated colored 3D mesh representation in real time.", "It is intended to be used by a remote human operator to easily visualise the mapped environment during or after the mission or as a development base for further researches in the field of exploration robotics.", "The architecture is mainly composed of carefully-selected open-source ROS implementations of a LiDAR-based SLAM algorithm alongside a colored surface reconstruction procedure using a point cloud and RGB camera images projected into the 3D space.", "The overall performances are evaluated on the Newer College handheld LiDAR-Vision reference dataset and on two experimental trajectories gathered on board of representative wheeled robots in respectively urban and countryside outdoor environments.", "Index Terms: Field Robots, Mapping, SLAM, Colored Surface Reconstruction" ], [ "Introduction", "Embedded architectures for autonomous robots performing exploration missions are continuously improving, such that localisation and map construction in a previously uncharted environment becomes possible with limited human intervention [1].", "Many SLAM-based localisation methods have been proposed and extensively evaluated on reference datasets.", "They rely either on monocular cameras or stereovision [2], [3], [4], or on 2D or 3D LiDAR scanners [5], [6], [7], [8], with some recent attempts to combine both types of sensors [9].", "Several computationally-efficient online mapping and 3D reconstruction approaches have been proposed in parallel [10], while the specific issue of the colourisation of mesh or point cloud representations has been investigated independently [11], [12].", "However, there remains the need to further evaluate in realistic conditions the behaviour and performances of full systems which are able to combine online localisation, mapping and mesh colourisation on real ground robots equipped with 3D LiDAR and vision sensors [13].", "Early work on the subject of real-time 3D mesh reconstruction in [14] introduced real-time 3D surface mesh reconstruction in an urban environment using stereo camera images and pose estimation obtained by fusing GNSS measurements and visual odometry.", "The method proposed in [15] performs real-time 3D mapping of house-sized indoor environments through the application of Truncated Signed Distance Function [16] and dynamic voxel hashing [17], with Visual-Inertial Odometry (VIO) as localisation source.", "While 3D mesh reconstruction was mainly intended to be used as a visualisation tool for human operators, later work has focused on the aspect of surfacic mesh mapping for navigation purposes, arguing that surface-based maps contain more dense information compared to sparse point clouds.", "This makes them more exploitable for autonomous driving or robot navigation (either autonomous or remotely-operated).", "Limitation of memory usage and computational cost along with scalability of the 3D mesh reconstruction solution are major concerns when addressing the deployment of such systems on the field.", "In [18], a method has been presented to carry out online 3D large-scale mesh reconstruction 
using manifold mapping and monocular-camera-based localisation applied to urban mapping and evaluated on the KITTI autonomous driving dataset [19].", "An online localisation and dense scalable 3D map reconstruction architecture has been presented in [20] with grayscale coloring.", "It implements surfel based methods with the use of RGB-D, stereo and monocular cameras.", "Texture projection on reconstructed meshes is addressed in [21] and [22], however the presented methods consist of a post-processing of the whole data and are thus performed offline.", "In [23], a multi-robot system using stereovision-based localisation and 3D TSDF manifold mapping with grayscale coloring has been shown to run in real-time in an indoor environment.", "A real-time approach for creating and maintaining a colourised or textured surface reconstruction from RGB-D sensors has been introduced in [24], with a special focus on memory management for scalability.", "Figure: Example of online colored mesh rendered by the proposed systemThe Online Localisation and Colored Mesh Reconstruction (OLCMR) architecture proposed in the present paper is a complete system that performs both localisation and colored mesh reconstruction in real-time on board of a ground robot equipped with a 3D LiDAR scanner and multiple cameras.", "Figure REF depicts an example of reconstruction produced by this system.", "The designed system builds upon recent open-source implementations of LiDAR-based SLAM and 3D mesh reconstruction methods that are summarized in Section  in perspective with related work, along with a description of the adaptations that were necessary to tackle the common objective pursued here.", "The overall performances of the proposed OLCMR architecture have then been evaluated on the handheld Newer College reference benchmark [25], [26] and are reported in Section .", "The results and computational needs achieved on two experimental datasets acquired with tele-operated ground robots in urban and countryside outdoor environments are given in Section  to demonstrate the versatility and wide applicability of the system.", "The results on these three different test cases include the evaluation of localisation (with and without loop-closure) and mapping accuracy with respect to independent reference models.", "Illustrations of mesh coloration are also presented for each dataset alongside images taken from the robot camera to highlight the quality of the whole reconstruction." ], [ "OLCMR Architecture description", "The system architecture (Figure REF ) has been designed to process data on-board of a ground robot for online missions in diverse uncharted environments.", "The main requirement was to be able to combine data from a 3D Laser scanner, one or several monocular cameras and an IMU, given intrinsic and extrinsic calibration parameters of these sensors (using e.g. 
[27]).", "The main objective is to compute online a dense colored 3D reconstruction with sufficient localisation accuracy, in order to be provided during the autonomous robot mission.", "The architecture is intended to function in various types of scenarios presenting challenging characteristics such as GNSS-denied, unstructured surroundings, dim or varying lights, uneven terrain.", "It has been chosen to rely on a LiDAR-based SLAM (Lidarslam_ROS2)[28] for localisation to be light-invariant and to navigate in environments with possibly long ranges to points of interest, thus ruling out most of vision-based methods [4], [6].", "The choice of the LiDAR as the main localization and mapping sensor also readily provides a 3D point cloud without any additional computational cost for the embedded CPU.", "IMU measurements and kinematic odometry (based on wheel encoders) are additionally used to robustify the robot localisation through EKF sensor fusion.", "The camera images are intended to enrich the produced 3D mesh by incorporating color levels or any kind of visually extracted data (e.g., semantic classification in a future perspective) through their projection in the 3D dense reconstruction using their relationship with the LiDAR point cloud.", "The TSDF-based mapping method Voxblox [29] is used to obtain this 3D dense reconstruction of the environment The processing required by OLCMR is entirely performed on CPU (see related evaluation results in Section ).", "The architecture is implemented under ROS2 Galactic, with the 3D mesh reconstruction running under ROS Noetic and communicating through a ROS1/ROS2 bridge [30].", "Figure: OLCMR architecture overview" ], [ "Localisation", "Reviews of open-source ROS SLAM implementations based on the use of stereo cameras, depth cameras or LiDAR are proposed in [7], [4], [3], [5].", "For the previously stated reasons, our choice for a SLAM implementation has been restricted to the LiDAR based approaches.", "Since OLCMR is supposed to operate in complex outdoor environments (possibly unstructured and with uneven terrain) and relies on a 3D point cloud for mapping purposes, 2D LiDAR based approaches have been excluded.", "Recent SLAM implementations are often composed of two distinct parts.", "The SLAM front-end allows real-time localisation of the robot relative to its close environment, relying on high-rate sensors, either proprioceptive or external.", "The back-end estimates and corrects the localisation drift induced by the front-end over time.", "It run at a lower rate and relies either on absolute measurements such as GNSS as in LIO-SAM [31], or landmarks of known absolute position or on the recognition of previously visited areas (loop-closure).", "The vast majority of LiDAR-based SLAM front-end implementations are centered on the use of a scan-to-scan or scan-to-submap matching algorithm.", "The most used approach for scan-to-scan matching is Iterative Closest Point (ICP) [32], which however presents significant drawbacks in the studied context where the LiDAR clouds are composed of a vast number of points, depending on the sensor angular resolution and beam number (typical numbers being 16, 32, 64, 128).", "Thus, the sole application of an iterative, exhaustive algorithm such as ICP for scan matching can be suboptimal requiring a consequent amount of calculation to converge and grant satisfying results.", "A widely used solution to that issue is the extraction of interest points for each LiDAR scan using geometric properties of the cloud's 
distribution, as in LEGO-LOAM [33].", "The ICP matching is then performed only using these more relevant points thus requiring way less computational capacity.", "However, the interest point extraction is still a costly process and requires the point cloud to present a somehow organised distribution for them to be extracted efficiently.", "As an alternative, a stochastic method based on normal approximation of the point cloud distribution named Normal Distribution Transform (NDT) was introduced in [34].", "This makes the SLAM algorithm appropriate for evolution in unstructured environment as well as in geometrically rich ones, and requires limited computational power with no feature point extraction processing.", "For these reasons, the Lidarslam_ROS2 open-source SLAM implementation of a LiDAR scan-to-scan matching has been integrated in the proposed architecture.", "In order to reduce the impact of locations that could be difficult to map (e.g.", "corridors with repeating geometrical patterns or bare flat fields) and for the SLAM to converge faster, the scan matcher takes a prior transform and differentiates it from the latest prior to use as initial guess for the transform between two scans.", "The prior choice is left to the discretion of the user.", "In the current case, the prior must present a good trade-off between accuracy and computational cost.", "It has thus been chosen to estimate it by fusing the forward velocity from wheeled odometry and the orientation and angular velocities from the IMU using an Extended Kalman Filter.", "The chosen method requires a significant overlap between successive scans, which could make it unsuitable to fast moving vehicles such as autonomous cars.", "In order to avoid some local failures in the scan-to-scan pose estimation (typically happening during fast rotations or very jittery motions), we proposed the following procedure: when the estimated transform from the last scan pose is greater than a threshold (set to 0.5 m) from the prior pose, this prior is used as final estimated transform and the associated scan is not registered into the map.", "The LidarSlam back-end performs loop-closure detection by comparing the current scan to stored key-frames using NDT and pose graph optimisation with the g$^{2}$ o framework [35]." 
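To make the prior-gated scan registration described above concrete, a minimal sketch is given below. The helpers `ekf_prior_pose` (EKF fusion of wheel-odometry forward speed and IMU attitude) and `ndt_match` (the NDT scan-to-scan matcher) are hypothetical placeholders standing in for the corresponding Lidarslam_ROS2 components, not its actual API; only the 0.5 m rejection threshold is taken from the text.

```python
# Illustrative sketch (not the actual Lidarslam_ROS2 code) of the prior-gated
# NDT scan-to-scan registration described above.
import numpy as np

MAX_PRIOR_DEVIATION = 0.5  # metres, rejection threshold quoted in the text


def register_scan(scan, prev_scan, prev_pose, prev_prior, ekf_prior_pose, ndt_match):
    """Return (new_pose, registered) for one incoming LiDAR scan."""
    # Prior pose from EKF fusion of wheel-odometry forward speed and IMU attitude.
    prior = ekf_prior_pose()                        # 4x4 homogeneous transform
    # Initial guess = motion of the prior since the previous scan.
    init_guess = np.linalg.inv(prev_prior) @ prior
    # NDT scan-to-scan matching seeded with that initial guess.
    delta = ndt_match(prev_scan, scan, init_guess)  # 4x4 estimated relative transform
    # Gate: if the estimated motion strays too far from the prior, trust the prior
    # and do not insert this scan into the map (fast rotations, jittery motion).
    deviation = np.linalg.norm(delta[:3, 3] - init_guess[:3, 3])
    if deviation > MAX_PRIOR_DEVIATION:
        return prev_pose @ init_guess, False        # pose from prior, scan skipped
    return prev_pose @ delta, True                  # pose from NDT, scan registered
```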
], [ "3D Dense Reconstruction", "The idea of producing a dense representation of the explored environment from the LiDAR point-clouds instead of using the sparse map produced by the SLAM implementation is driven by two different needs.", "In the first place, a point-cloud map is not the most adequate for visualisation by a human operator as the structure of mapped objects remains ambiguous when looking at it, especially in cluttered environments.", "Secondly, a dense representation is much more adapted to navigation purposes as it allows the robot to infer the terrain traversability at any coordinates without requiring further interpolation and therefore navigate in the full exploration space.", "Various methods for 3D dense representation of the environment have been developed by the robotic and computer vision community.", "Since OLCMR is intended to function in real-time, offline methods such as Structure from Motion and Poisson surface reconstruction have been dismissed.", "Voxel-based visualisation produced by methods such as Octomap [36] is well-suited for inclusion in autonomous navigation loops but less in terms of visualisation.", "Online surfacic methods such as the previously mentioned TSDF [16] use the information of sensor position relative to points in the cloud, thus removing the ambiguity of surface orientation and allowing to identify free space between the sensor and the mapped points.", "For these reasons, the 3D mesh reconstruction relies on the open-source implementation of the Voxblox ROS package [29] to build incrementally a surfacic mesh from each LiDAR scan using the generated point-cloud and the current robot pose (given by the SLAM/EKF process described in Section REF ) before fusing it in the globally reconstructed mesh." 
], [ "2D-3D color re-projection", "A dedicated process handles the colourisation of the LiDAR points with the corresponding pixel values from the RGB camera images, for further inclusion in a 3D colored mesh.", "The projection of the 3D points into the camera images is computed geometrically [37] as follows.", "For each camera, the coordinates of the 3D LiDAR points are expressed into the camera frame, with $R$ and $t$ respectively being the rotation matrix and translation vector between the LiDAR ($L$ ) and camera ($C$ ) frames.", "$\\begin{bmatrix}x_i & y_i & z_i\\end{bmatrix}_C^T = R .", "\\begin{bmatrix}x_i & y_i & z_i\\end{bmatrix}_L^T + t$ The coordinates are normalised by the point depth values.", "$\\begin{bmatrix}x^{\\prime }_i \\\\y^{\\prime }_i \\\\\\end{bmatrix} = \\begin{bmatrix}1/z_i & 0 \\\\0 & 1/z_i \\\\\\end{bmatrix} .", "\\begin{bmatrix}x_i \\\\y_i \\\\\\end{bmatrix}_C$ The coordinates are projected into the image plane using the intrinsic camera matrix, where the parameters $p_x$ , $p_y$ , $c_u$ and $c_v$ come from the camera calibration process.", "$\\begin{bmatrix}u \\\\v \\\\1\\end{bmatrix} = \\begin{bmatrix}p_x & 0 & c_u \\\\0 & p_y & c_v \\\\0 & 0 & 1\\end{bmatrix} .", "\\begin{bmatrix}x^{\\prime }_i \\\\y^{\\prime }_i \\\\1\\end{bmatrix}$ Let $V(u,v)$ be the value (e.g.", "RGB color) of the pixel of coordinates $u,v$ .", "Because cameras are affected by distortion, undistortion is applied to compute the image $U$ using the distortion model and parameters given during calibration.", "If the resulting coordinates are inside the image bounds, the RGB value of the pixel of coordinates $[u,v]$ is allocated to the corresponding 3D LiDAR point.", "The color field is denoted as $C$ .", "$C(x_i,y_i,z_i) = U(u,v)$ Each surface reconstructed by the TSDF is then colored by the Voxblox pipeline with a recursive average filtering of its vertex colors.", "For the evaluations of Section , the full calibration parameters were available in the reference dataset.", "For the robot acquisitions from Section , the calibration process was performed using the Kalibr ROS package [27].", "The intrinsic parameters and distortion model are estimated for each camera using a collection of images of an AprilTag grid of known dimensions.", "Kalibr exploits IMU measurements and the camera overlaps to estimate their extrinsic parameters.", "The transform between the LiDAR and the cameras was determined manually using the CAD model of the robots.", "Note that it is well known that combined LiDAR-vision systems can be very sensitive to the calibration between these two sensors for re-projection or SLAM purposes [9].", "Robot-held public datasets such as [38] or [39] do not contain all the required sensors at once (e.g.", "LiDAR and cameras).", "Vehicle-based datasets such as KITTI [19] are not adapted to our architecture, with large moving objects and reduced overlap between successive scans.", "The performances of the OLCMR architecture have thus been evaluated in terms of localisation precision and 3D reconstruction quality with the Newer College Dataset [25] and its 2021 extension [26], and a qualitative assessment of the mesh colourisation is also presented.", "The Newer College dataset extension provides LiDAR scans, IMU measurements and monocular images gathered from 4 cameras aboard a handheld device along various trajectories inside New College, Oxford.", "Ground-truth for the evaluation of SLAM and 3D reconstruction are provided using a tripod-mounted survey LiDAR and ICP registration, and 
many modern SLAM solutions have been evaluated on this dataset.", "For these reasons, this dataset has been deemed to be relevant for the evaluation of the OLCMR architecture performances.", "Although the architecture has been developed to function optimally aboard a ground robot, a few changes allowed it to be efficient while treating the data gathered from these handheld trajectories.", "The robot kinematic odometry used as prior has been replaced with a constant forward speed of 1.0 $m/s$ and the 128 beam LiDAR point cloud has been down-sampled 10 times for SLAM input to limit its CPU usage.", "This section presents these evaluation results with relevant comparison to state-of-the-art.", "All evaluations were performed on a Intel Xeon(R) W-2123 8 core 3.60GHz CPU with 16 GB of RAM." ], [ "Localisation Evaluation", "The localisation building blocks (LiDAR SLAM and EKF management of prior) of the proposed architecture have been evaluated on this dataset.", "The goal of this evaluation is to ensure that this localisation is sufficiently precise to be used for the overall colored mesh reconstruction.", "Following the localisation evaluation protocol proposed in [40], the Relative Pose Error over 10 m (RPE) and the Absolute Trajectory Error (ATE) are respectively computed for the 2021 quad-easy and 2020 short-experiment datasets.", "They are compared to state-of-the art SLAM evaluations on the same trajectories (as respectively reported in [41] and [42]).", "Fig.", "REF shows the trajectories estimated by the SLAM scan matcher and loop closure optimiser superimposed on the ground truth trajectory, as well as yaw errors.", "Table REF summarizes the SLAM performances.", "The localisation performance is consistent with the best currently available SLAM methods and the loop closure yields a small performance improvement, since the front-end SLAM presented only a small drift in this richly-textured environment.", "Table: SLAM evaluation on the Newer College datasetFigure: Newer College localisation evaluation, top shows trajectories estimated after scan matching (left) and loop closure optimisation (right) superposed to the ground truth trajectory.", "Bottom shows Absolute Yaw Error smoothed over 50 samples for same trajectories." ], [ "3D Reconstruction Evaluation", "The Newer College dataset offers ground-truth 3D meshes of the visited area used to evaluate the quality of our 3D mesh reconstruction.", "After running the mapping pipeline of the architecture (composed of the LiDAR SLAM and Voxblox) on the Newer College dataset, the resulting uncolored mesh is treated and compared to the ground truth model using the CloudCompare open-source softwareCloudCompare website : https://www.danielgm.net/cc/.", "Both meshes are sampled into dense point clouds which are then manually stripped of aberrant points produced by reflective surfaces (e.g.", "windows) by applying a planar cut-off beyond said surfaces.", "The closest point error between the two resulting point clouds is performed using the M3C2 method described in [48].", "Points that have no relevant match are removed.", "90% of our reconstruction model points show a distance error to the ground truth model lesser than 0.54 m. 
As a comparison, [8] states that 90% of the points from their reconstructed model show a distance error lesser than 0.50 m on a similar dataset.", "The results are illustrated in Figure REF .", "This validates the soundness of the overall LiDAR-based localisation and dense mapping algorithms incorporated in the architecture.", "The projection of the camera grayscale levels has also been carried out as detailed in Section REF , and the associated qualitative result is given in Figure REF .", "The CPU and RAM usage have also been monitored during the processing of the trajectories (see Figure REF ).", "It turns out that the CPU needs have been successfully adjusted to avoid the saturation of available resources with limited growth over time, and that the RAM requirements increase quite linearly which is a usual behaviour of SLAM systems as the map size increases.", "By performing a linear fit to the RAM usage, the maximum duration of a mission with similar settings is roughly estimated to 3142 s. Figure: Newer College reconstruction comparison: Top left shows the reference 3D model from the dataset, bottom left shows the output of our architecture and right shows the absolute reconstruction error point cloud with histogram, output of MC3C2 absolute distance computed on the Newer College dataset.Mean = 0.236 m, Std = 0.248 mFigure: 3D mesh with mono camera projection, output of the application of the complete OLCMR architecture on the Newer College Dataset.", "Left shows the complete reconstructed mesh, right shows a comparison between a camera frame and a corresponding view of the reconstructed mesh.Figure: CPU (mean: 84.4 %, std: 11.7 %) and RAM (max: 23 %) usage of OLCMR while running on the Newer College dataset.", "Mean CPU usage and linear regression of RAM usage are displayed on relevant figures." 
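Before moving on to the field experiments, the 2D-3D color re-projection described earlier can be made concrete with the short sketch below: LiDAR points are expressed in the camera frame, projected with the pinhole model, and, when they fall inside the (already undistorted) image, take the color of the hit pixel. This is an illustrative re-implementation of those equations, not the OLCMR source code, and it omits the distortion handling performed during calibration.

```python
# Sketch of the 2D-3D colour re-projection: R, t, px, py, cu, cv come from the
# LiDAR-camera calibration; image_undist is the undistorted RGB image.
import numpy as np


def colorize_points(points_lidar, image_undist, R, t, px, py, cu, cv):
    colors = np.full((points_lidar.shape[0], 3), np.nan)   # RGB per 3D point
    pts_cam = points_lidar @ R.T + t                        # LiDAR -> camera frame
    h, w, _ = image_undist.shape
    for i, (x, y, z) in enumerate(pts_cam):
        if z <= 0:                                          # behind the camera
            continue
        u = px * (x / z) + cu                               # pinhole projection
        v = py * (y / z) + cv
        if 0 <= int(u) < w and 0 <= int(v) < h:
            colors[i] = image_undist[int(v), int(u)]        # C(x_i, y_i, z_i) = U(u, v)
    return colors
```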
], [ "Evaluation on Field Robots Trajectories", "The proposed method obtained good performances on the Newer College dataset, which contains handheld trajectories acquired in a highly-textured environment.", "To further evaluate the OLCMR architecture in situations closer to its goal applications, we acquired two dedicated experimental datasets produced by tele-operating wheeled ground robots along predefined trajectories and gathering relevant perception data.", "These two datasets were respectively produced on-board of a Robotnik Summit-XL robot in a urban environment and an Agilex Scout robot in a countryside environment.", "These two platforms are four-wheel differentially driven, equipped with on-board CPUs running Ubuntu and ROS, and their respective sensor suites are summarized in Table REF .", "OLCMR components main parameters for each dataset are referenced in Table REF .", "The data processing has been performed on the same computer as for the reference dataset evaluation from Section , which presents similar computational power as the robots' embedded computers.", "Table: Sensors embedded on ONERA-DTIS Summit and Scout robotsFigure: ONERA-DTIS Agilex Scout and Robotnik Summit XL robot setups" ], [ "Localisation Evaluation", "The localisation global performance has been evaluated with respect to the total drift of the trajectories, so this drift is estimated by the difference between the last pose and the first pose computed given the fact that robots were operated in order for the actual ending pose to be approximately equal to the actual starting pose.", "The characteristics of the trajectories and the values of the approximated final APE and loop-closure total corrections are summarized in Table REF , and Figure REF presents the robot estimated trajectories.", "The overall drift without loop-closure remains between 1 and 2 percents, with a higher value in the less-textured environment.", "These degradations compared to the previous dataset could be interpreted by the transition from high-quality handheld sensors to robot-mounted lower-grade IMU and LiDAR sensors.", "These performances remain acceptable to carry out the online 3D dense model rendering process.", "Table: SLAM evaluation on field datasetsFigure: Robot trajectories on field datasets.", "Top with the Summit robot, bottom with the Scout robot." 
], [ "3D Reconstruction Evaluation", "Images have been acquired independently from the robot setups in the two new test environments using handheld camera devices, respectively a stereo bench composed of 2 uEye IDS 1241LE-M monocular cameras for the Summit (urban) dataset and a HERO7 GoPro for the Scout (countryside) dataset.", "An offline photogrammetric 3D reconstruction has then been obtained using the Structure-From-Motion Colmap software [49].", "The reconstruction errors between the mapping obtained with the OLCMR architecture using the robots embedded sensors and these Colmap-generated reference models have been analyzed using CloudCompare (see Figure REF ).", "The colored rendering mesh is presented in Figure REF .", "The CPU and RAM usage evaluations (Figure REF ) show a similar behaviour as on the Newer College dataset, which seem to be compatible with on-board actual deployment for trajectories lengths of one-kilometer order of magnitude.", "Figure: Reconstruction error point cloud map and histogram, output of M3C2 absolute distance computed from Summit (top) and Scout (bottom).Figure: Mesh rendering for Summit robot (urban) dataset (top) and Scout robot (countryside) dataset (bottom).For each dataset, example of picture with corresponding view of the OLCMR colored mesh and bird-eye view of the global colored mesh are shown.Figure: CPU (mean: 81.782 %, std: 11.7 %) and RAM (max: 29 %) usage of OLCMR while on the Summit robot data.", "Mean CPU usage and linear regression of RAM usage are displayed on relevant figures.Table: OLCMR open-source components main parameters for each experiment" ], [ "Conclusions and Perspectives", "An architecture running in real-time combining LiDAR-based localisation with 3D dense mapping and mesh colourisation using multiple cameras has been proposed in this paper.", "It is based on recent open-source ROS package implementations of a LiDAR SLAM and a TSDF-based mapping algorithms, with specific developments to assemble the overall pipeline.", "The full system has been thoroughly evaluated on datasets exhibiting different characteristics, namely the Newer College handheld benchmark and two dedicated trajectories acquired with wheeled ground robots in urban and countryside environments.", "The architecture performed well in all of these conditions, which make it suitable for future field deployment of tele-operated or autonomous robotic exploration.", "The loop-closure is currently used solely by the localisation stack.", "To improve the development of the OLCMR architecture, it could also be integrated within the mapping stack in a manifold framework such as [23].", "Additional layers could be added to this perception architecture, e.g.", "for semantic navigation in unknown environments.", "The next step towards that goal would be to implement semantic segmentation on camera images before projecting them into the 3D space in order to create semantic maps that could be used by the robot for safer and better autonomous navigation." ] ]
2207.10489
[ [ "Machine Learning assisted excess noise suppression for\n continuous-variable quantum key distribution" ], [ "Abstract Excess noise is a major obstacle to high-performance continuous-variable quantum key distribution (CVQKD), which is mainly derived from the amplitude attenuation and phase fluctuation of quantum signals caused by channel instability.", "Here, an excess noise suppression scheme based on equalization is proposed.", "In this scheme, the distorted signals can be corrected through equalization assisted by a neural network and pilot tone, relieving the pressure on the post-processing and eliminating the hardware cost.", "For a free-space channel with more intense fluctuation, a classification algorithm is added to classify the received variables, and then the distinctive equalization correction for different classes is carried out.", "The experimental results show that the scheme can suppress the excess noise to a lower level, and has a significant performance improvement.", "Moreover, the scheme also enables the system to cope with strong turbulence.", "It breaks the bottleneck of long-distance quantum communication and lays a foundation for the large-scale application of CVQKD." ], [ "Introduction", "The development of digitalization and intelligentization in modern society is based on massive information exchange.", "Therefore, how to realize the secure transmission of information is an important development direction in modern communication.", "Quantum key distribution (QKD) [1], [2], [3], [4], [5] combined with the One-time pad (OTP) can achieve theoretically unconditional secure communication.", "It can be implemented in the discrete-variable scheme and continuous-variable scheme (CVQKD) [6], [7], and the latter has the advantages of a high secret key rate (SKR) and low cost.", "Long-distance transmission [8] and high SKR [9], [10] are the two critical goals of CVQKD, which are limited by the practical transmission loss of the quantum channel, excess noise, and reconciliation efficiency [11], [12].", "Therefore, maintaining high reconciliation efficiency [13], [14] and low excess noise is a major obstacle to the realization of remote CVQKD.", "Excess noise refers to the sum of the variances of all possible noise sources in Alice's system (imperfect modulation), channel transmission (phase fluctuations), and Bob's system (imperfect detection, electronic noise), which results in a serious system security threats [15], [16], [17], [18], [19].", "There are two solutions to eliminate its influence.", "One is to improve the tolerance of the system, such as the two-way protocol [20] and the phase noise model proposed in [21].", "The other is to suppress excess noise at a lower level which is mainly realized by tracking and compensating for phase noise [22], [23], [24], [25], [26].", "A phase estimation protocol based on the theoretical security and Bayes’ theorem is studied in [22], which can achieve a well-motivated confidence interval of the estimated eigenphase without the strong reference pulse propagation.", "In the work [23], [24], the fast and slow phase drift can be both estimated by using the improved vector Kalman filter carrier phase estimation algorithm, and thus the phase estimation error can be tracked in real-time and be almost approximate to the theoretical mean square error limit.", "An implementation of a machine learning framework based on an unscented Kalman filter is explored in [26] for estimation of phase noise, enabling CVQKD systems with low hardware 
complexity which can work on diverse transmission lines.", "The above studies have achieved a great excess noise suppression effect, but they all approach the problem by improving the accuracy and real-time performance of the phase noise estimation and compensation, rather than by addressing the noise at its source.", "The free-space channel fluctuates more strongly due to the atmospheric turbulence effect, and research on free-space excess noise suppression is also in progress [27], [28], [29].", "This work proposes an excess noise suppression scheme for CVQKD based on equalization assisted by machine learning algorithms.", "Channel equalization is an anti-fading measure taken to improve the transmission performance of communication systems over fluctuating channels; therefore, equalization can fundamentally reduce the excess noise caused by channel fluctuation.", "The scheme obtains the correction coefficients by applying a neural network to the pilot tone, performs parameter estimation to judge whether the transmission is safe, and then applies the trained correction coefficients to the signal following the pilot tone at the same stage, so as to reduce the impact of channel fluctuation on the signal quality, suppress excess noise, and improve system performance.", "This correction performance is verified experimentally in a Gaussian modulated coherent state (GMCS) CVQKD system [30], [31] over a 10 km fiber channel, which achieves a more stable and lower excess noise.", "A free-space communication process experiences turbulence of different intensities; thus, the quality of the received variables varies greatly, which affects the training of the correction coefficients.", "A K-Nearest Neighbor (KNN) classification algorithm is therefore employed to divide the received variables into three classes according to their quality.", "For each class, distinctive correction coefficients are trained to better correct the signals and improve the overall performance of the system.", "Compared with the unclassified equalization scheme, the classified scheme has a better correction effect, especially in medium and strong turbulence; thus, this scheme enables the CVQKD system to curb the negative impact of transmission fluctuation."
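As a hedged illustration of the "estimate, then decide whether it is safe to proceed" step mentioned above, the sketch below estimates the channel transmittance and excess noise from the known pilot data under the linear channel model y = sqrt(ηT)·x + z with noise variance σ² = N₀ + ηTε + ν_el (the model recalled in the appendix of this paper). The acceptance threshold `eps_max` is an illustrative choice, not a value taken from the paper.

```python
# Sketch of pilot-based parameter estimation and the security gate that decides
# whether the trained correction may be reused on the signal pulses.
import numpy as np


def estimate_channel(x_pilot, y_pilot, eta, N0, nu_el):
    t_hat = np.sum(x_pilot * y_pilot) / np.sum(x_pilot**2)   # ML estimate of t = sqrt(eta*T)
    sigma2_hat = np.mean((y_pilot - t_hat * x_pilot) ** 2)   # residual noise variance
    T_hat = t_hat**2 / eta
    eps_hat = (sigma2_hat - N0 - nu_el) / (eta * T_hat)      # from sigma^2 = N0 + eta*T*eps + nu_el
    return T_hat, eps_hat


def transmission_is_safe(T_hat, eps_hat, eps_max=0.1):
    # Accept the block (and reuse the trained correction on the signal pulses)
    # only if the estimated excess noise stays below the chosen threshold.
    return 0.0 < T_hat <= 1.0 and eps_hat < eps_max
```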
], [ "CVQKD system with equalization", "As shown in Fig.", "REF , Alice modulates the quantum signals and then dispatches them to Bob via a quantum channel whose feature is transmittance distribution $P(T)$ .", "After taking over the quantum signal, Bob conducts the coherent detection of the received signals and acquires the raw key variables.", "A practical detector is featured by an efficiency $\\eta $ and a noise $\\upsilon _{el} $ on account of detector electronics.", "Detected signals deviate due to the random fluctuation of the quantum channel.", "Before Bob extracts secret keys, he first preprocesses the received variables with equalization to make the processed received variables close to the ideal variables as possible, thus, there is only atmospheric inherent attenuation in the quantum channel and no atmospheric turbulence so as to make up for the bad impact of atmospheric turbulence on the transmission signal.", "Figure: EB model of CVQKD system with equalization.", "(BS: Beam splitter, HD: Homodyne detection, η\\eta : detection efficiency, TT: transmission rate, ε\\varepsilon : excess noise.", "The ellipse model is used to describe the beam wangdering caused by the beam passing through the free-space channel, where (W 1 ,W 2 ,φ)(W_{1},W_{2},\\phi )) indicates the direction and size of the beam, aa is aperture radius, and r 0 r_{0} is the center of beam.Specifically, Alice employs two single-mode squeezed vacuum states $\\left| \\nu \\right\\rangle $ and $\\left| - \\nu \\right\\rangle $ to prepare a two-mode squeezed $\\left| \\psi \\right\\rangle _{AB} $ .", "Then she measures one half of it randomly and sends another one to Bob through a quantum channel.", "The state $\\left| \\psi \\right\\rangle _{AB} $ after transmission through free-space channel is described as $\\left| \\psi \\right\\rangle _{AB_{1} } =\\left\\lbrace \\mathbf {I} \\otimes U_{trans(L)} \\right\\rbrace \\left| \\psi \\right\\rangle _{AB },$ where $U_{trans(L)}$ indicates the effect of channel on quantum state.", "$\\mathbf {I}$ means that the first module is saved at Alice and is not transmitted through the channel, so as to maintain its original state.", "The second mode undergoes channel transmission and is affected by $U_{trans(L)}$ .", "The covariance matrix $\\gamma _{AB_{1} } $ of $\\left| \\psi \\right\\rangle _{AB_{1} } $ is written as $\\begin{split}&\\gamma _{AB_{1} } =\\left( \\mathbf {I} \\otimes U_{trans(L)} \\right) ^{T} \\gamma _{AB } \\left( \\mathbf {I} \\otimes U_{trans(L)} \\right)=\\\\&\\begin{pmatrix}V& 0& \\Lambda cos(\\Delta \\varphi ) & -\\Lambda sin(\\Delta \\varphi )\\\\0& V &-\\Lambda sin(\\Delta \\varphi ) &-\\Lambda cos(\\Delta \\varphi ) \\\\\\Lambda cos(\\Delta \\varphi )& -\\Lambda sin(\\Delta \\varphi )& T \\left(V+\\frac{1}{T}-1+\\varepsilon \\right) & 0\\\\-\\Lambda sin(\\Delta \\varphi )& -\\Lambda cos(\\Delta \\varphi ) & 0 &T \\left(V+\\frac{1}{T}-1+\\varepsilon \\right)\\end{pmatrix}\\end{split},$ where $\\Lambda =\\sqrt{A_{\\alpha }^{eq}}\\sqrt{T^{th}} \\sqrt{V^{2}-1 }$ , $T^{th} $ is theoretical channel transmission, $A_{\\alpha }$ , $\\Delta \\varphi $ , $T$ and $\\varepsilon $ are amplitude attenuation, phase drift, and practical channel transmittance and excess noise.", "$V$ is variance of $\\left| \\psi \\right\\rangle _{AB }$ .", "When the sampled variable is corrected by the equalization, it is equivalent to correcting the quantum signal to make it closer to the ideal data.", "$\\left| \\psi \\right\\rangle _{AB_{2} } = U_{eq} \\left| \\psi \\right\\rangle 
_{AB_{1} } = \\left\\lbrace \\mathbf {I}\\otimes U_{eq}U_{trans}(L) \\right\\rbrace \\left| \\psi \\right\\rangle _{AB},$ where $U_{eq}$ indicates the correction effect on quantum signal correction.", "Thus, the covariance matrix $\\gamma _{AB_{2} } $ of $\\left| \\psi \\right\\rangle _{AB_{2} } $ is rewritten as $\\footnotesize {\\begin{split}&\\gamma _{AB_{2} } =\\left( \\mathbf {I} \\otimes U_{eq)} \\right) ^{T} \\gamma _{AB_{1} } \\left( \\mathbf {I} \\otimes U_{eq} \\right)=\\\\&\\begin{pmatrix}V& 0&\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}cos (\\Delta \\varphi -\\Delta \\varphi ^{eq} ) & -\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}sin (\\Delta \\varphi -\\Delta \\varphi ^{eq} )\\\\0& V &-\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}sin (\\Delta \\varphi -\\Delta \\varphi ^{eq} ) &-\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}cos (\\Delta \\varphi -\\Delta \\varphi ^{eq} ) \\\\\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}cos(\\Delta \\varphi -\\Delta \\varphi ^{eq} ) & -\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}sin(\\Delta \\varphi -\\Delta \\varphi ^{eq} ) & T^{eq} \\left(V+\\frac{1}{T^{eq}}-1+\\varepsilon ^{eq} \\right) & 0\\\\-\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}sin(\\Delta \\varphi -\\Delta \\varphi ^{eq} ) & -\\frac{ \\Lambda }{ \\sqrt{A_{\\alpha }^{eq}}}cos(\\Delta \\varphi -\\Delta \\varphi ^{eq} ) & 0 &T^{eq} \\left(V+\\frac{1}{T^{eq}}-1+\\varepsilon ^{eq} \\right)\\end{pmatrix}\\end{split}},$ where $A_{\\alpha }^{eq}$ , $\\Delta \\varphi ^{eq}$ , $T^{eq}$ and $\\varepsilon ^{eq}$ are amplitude attenuation, phase drift, channel transmittance and excess noise with equalization correction.", "Under ideal conditions, $\\frac{A_{\\alpha }^{eq}}{A_{\\alpha }} =1$ and $\\Delta \\varphi ^{eq}=\\Delta \\varphi $ , which means perfect correction." 
], [ "Establishment of dataset", "The experimental setup is shown in Fig.", "REF .", "Figure: Experimental setup.", "(CW: continuous wave, LO:local oscillator, MOD: modulation module.", "θ th \\theta ^{th}, θ\\theta , and θ eq \\theta ^{eq} represent phase relationship between signal and pilot at the sender, the receiver, and after correction, respectively.", "y i (i=1,2,...,8)y_{i} (i=1, 2, ..., 8) is sampled point of received signal pulse, T s T_{s} is sampling interval, ω\\omega and bb are the weight and bias of the neural network.", ")On Alice's side, she prepares two independent Gaussian random variables $X_{A} $ and $P_{A}$ which obey the same zero-centered Gaussian distribution $\\mathcal {N} (0,V_{A} )$ , where $V_{A}$ is the modulation variance.", "Then she sends modulated coherent state $\\left| \\alpha _{s} \\right\\rangle =\\left|X_{A} +iP_{A} \\right\\rangle $ as quantum signal, and sends another classical coherent state $\\left| \\alpha _{P} \\right\\rangle =\\left|X^{P} _{A} +iP^{P}_{A} \\right\\rangle $ as pilot tone in the next time bin to Bob where $(X^{P} _{A},P^{P} _{A})$ is publicly known [32], [33].", "Repeating this process many times, interleaved signal pulses and pilot tone are simultaneously transmitted from Alice to Bob.", "After the signal is transmitted through the fluctuating channel, it will produce intensity attenuation and phase fluctuation for quantum signals.", "On Bob's side, he employs a local oscillator (LO) to perform homodyne detection with a pilot tone pulse and with a signal pulse, respectively.", "After the pilot tone is corrected by the neural network, its parameters $(T^{eq} ,\\varepsilon ^{eq} )$ are estimated.", "If the transmission is evaluated as safe, the trained correction coefficient can also be used for the signal pulse at the same stage, which is helpful for the next key generation." 
], [ "Performance analysis", "The theoretical transmittance $T^{th}$ is mainly related to the transmission distance $L$ .", "However, the transmission fluctuation produces more excess noise in the system, which makes the practical transmittance $T$ less than $T^{th}$ .", "In order to make practical channel transmission performance as close to ideal channel transmission performance as possible at the same distance, it is necessary to suppress the negative impact of the channel fluctuation on the signal.", "$T^{th}$ can be obtained by $T^{th}=10^{-\\alpha _{f}L/10 },$ where $\\alpha _{f}(dB/km)$ is the attenuation coefficient.", "Using the received variables of the known pilot tone, the neural network can be applied to predict the ideal received values quickly and accurately, and $T^{th}$ provides a target for it.", "The purpose of using neural network at Bob side is $f(\\mathbf {y} ) = \\mathbf {\\omega } ^{T} \\mathbf {y}_{i} +b \\approx {y}^{\\prime } = t^{th}x,$ where $\\mathbf {y}$ is an $8 \\times 1$ vector representing eight points on a pilot tone pulse acquired by the oscilloscope at one time, $ \\mathbf {\\omega }$ is the correction coefficient vector trained by the neural network, $f(\\mathbf {y} )$ represents the corrected value, $ {y}^{\\prime } $ is the targeted received variable, $t^{th}$ is the parameter in the linear channel model that is provided by $T^{th}$ $(t^{th} =\\sqrt{\\eta T^{th} }$ , $\\eta $ is detection efficiency), and $x$ is Alice modulated variable, $b$ is bias of neural network.", "Figure: Equalization correction via neural network.", "(OSP: optimum sampling point, T s T_{s} is time delay during sampling, ω i \\omega _{i} and b i b_{i} are weights and bias trained by neural network, t=ηT t=\\sqrt{\\eta T} represents practical channel state, t th =ηT th t^{th}=\\sqrt{\\eta T^{th}} represents ideal channel state, y i (i=1,2...,8){y}_{i}(i=1,2...,8) is sampling points on a pilot tone pulse acquired by the oscilloscope at one time, y ' {y}^{\\prime } refers to targeted received variable, zz represents Gaussian noise, and xx is Alice modulated variable.", ")The principle of equalization correction via a neural network is shown in Fig.", "REF , where the neural network has 8 input neurons, 1 output neuron, and one hidden layer.", "A received pilot tone pulse is sampled into eight points $(y_{1}, y_{2},...,y_{8})$ , which are the input of the neural network.", "After training the neural network, the predicted received variable $f(\\mathbf {y})$ is obtained.", "By comparing with the expected variable ${y}^{\\prime }$ , if the error is within a reasonable range, the neural network is applied to the signal pulse following the pilot tone at the same stage.", "If the error exceeds the acceptable range, the neural network will be retrained, and the weight and bias $(\\omega _{i},b_{i} )$ will be adjusted, so that the equalization system has a certain adaptive ability.", "The results of signal correction and suppression of system excess noise by equalization are shown in Fig.", "REF .", "Figure: Excess noise suppression by equalization.", "Let Alice modulated variable be $x$ and Bob received variable be $y$ (without equalization) and $f(\\mathbf {y}) $ (with equalization).", "It can be seen that $f(\\mathbf {y}) $ is closer to $x$ in Fig.", "REF .", "The correlation $\\gamma _{xf(\\mathbf {y}) }$ between $f(\\mathbf {y}) $ and $x$ enhances than that $\\gamma _{xy } $ between $y$ and $x$ according to parameter estimation.", "In Fig.", "REF , the excess noise with 
equalization $\\varepsilon ^{eq}$ and without equalization $\\varepsilon $ are calculated, respectively, and the former features a more concentrated distribution and a lower mean value.", "Based on the $(x,y)$ and $(x, f(\\mathbf {y}) )$ , the practical system parameters $(T,\\varepsilon )$ , the equalized system parameters $(T^{eq} ,\\varepsilon ^{eq} )$ , and theoretical system parameters $(T^{th},\\varepsilon ^{th} )$ are calculated in Tab.", "REF .", "The transmission loss $(1-T^{eq}) $ between $f(\\mathbf {y}) $ and $x$ is mainly caused by the inevitable loss of the channel.", "And the improvement of system performance is shown in Fig.", "REF .", "It can be seen that equalization is of great help to improve the system performance, which is close to the theoretical SKR.", "In the farthest CVQKD experiment using phase compensation [8], the excess noise suppression rate is 78%.", "This proposed scheme achieves 72% without additional phase compensation software and hardware.", "Table: Comparisons of transmission and excess noise.Figure: System performance enhancement of the proposed equalization scheme.", "(From left to right, the first purple dot, the second green dot, and the last blue dot describe the system performance without equalization, with equalization, and under ideal conditions, respectively.", "The red line is the SKR under different transmission environments.)" ], [ "Establish data set through Monte Carlo", "In the atmospheric channel, the theoretical transmittance $T^{fs}$ can be obtained according to Lambert's law [34], $T^{fs}=\\mathrm {exp} (-\\alpha _{\\lambda } L),$ where $\\alpha _{\\lambda }$ stands for the wavelength-dependent coefficient of atmospheric decay that changes greatly under different weather conditions.", "The empirical formula of $\\alpha _{\\lambda } $ and air visibility $V$ is [35] $\\alpha _{\\lambda } =\\frac{3.91}{V} \\times \\left( \\frac{\\lambda }{0.55} \\right) ^{-q} ,$ where $q$ is a constant.", "Thus, the typical value of $\\alpha _{\\lambda } $ under different weather conditions can be obtained.", "Three simulated received variables $y_{weak} $ , $y_{medium} $ and $y_{strong} $ under weak turbulence, medium turbulence, and strong turbulence are constructed according to different $\\alpha _{\\lambda } $ , respectively, as shown in Tab.", "REF .", "And statistical relationships between simulated received variables and Alice modulated variable $(x, y_{weak})$ , $(x, y_{medium})$ , and $(x, y_{strong})$ are shown in Fig.", "REF .", "Table: α λ \\alpha _{\\lambda }, corresponding transimission and received variables under different turbulences.When passing through weak turbulence, the relationship of $(x,y_{weak})$ shows good linear characteristics.", "In this turbulence condition, the attenuation degree is less than that of a fiber channel at the same distance, which also shows the advantages of a free-space channel over a fiber channel and becomes the primary choice for long-distance secret key distribution.", "In medium turbulence case, the linear relationship of $(x,y_{medium})$ decreases.", "And the linear relationship of $(x,y_{strong})$ under strong turbulence is the worst, therefore, security and long-distance secret key distribution under medium-to-strong turbulence are the bottlenecks that need to be broken through." 
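A sketch of how the three simulated free-space datasets can be constructed is shown below: the attenuation coefficient follows the visibility formula above, the transmittance follows Lambert's law, and the received variables follow the linear channel model y = sqrt(ηT)·x + z. The visibility values, the exponent q, the wavelength and the per-class noise strengths are illustrative stand-ins, not the parameters used in the paper.

```python
# Monte Carlo construction of illustrative weak/medium/strong turbulence datasets.
import numpy as np


def attenuation_coefficient(visibility_km, wavelength_um=1.55, q=1.3):
    return (3.91 / visibility_km) * (wavelength_um / 0.55) ** (-q)   # 1/km


def free_space_transmittance(alpha_lam, L_km):
    return np.exp(-alpha_lam * L_km)                                 # Lambert's law


rng = np.random.default_rng(0)
eta, L_km, V_A = 0.6, 10.0, 4.0
x = rng.normal(0.0, np.sqrt(V_A), 100_000)                           # Alice's Gaussian modulation

datasets = {}
for label, visibility_km, sigma_z in [("weak", 20.0, 0.3), ("medium", 5.0, 0.8), ("strong", 1.0, 1.5)]:
    T_fs = free_space_transmittance(attenuation_coefficient(visibility_km), L_km)
    z = rng.normal(0.0, sigma_z, x.size)        # stronger turbulence -> more noise (illustrative)
    datasets[label] = np.sqrt(eta * T_fs) * x + z
```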
], [ "Performance analysis", "However, it can also be seen that even though it passes through the strong turbulence with the worst transmission characteristics, some of the variables received by Bob still belong to the part with excellent quality (the part overlapping with the yellow ones), and some belong to the part with ordinary quality (the part overlapping with the orange ones), and the rest belong to the part with bad quality.", "Therefore, in order to correct variables more accurately, we classify them according to the quality of received variables and then equalize the classified variables with high consistency.", "First, as shown in Fig.", "REF , the training labels are determined according to the ellipse fitting model [36], where the variables are labeled according to these boundaries.", "The variables within the blue ellipse are considered excellent variables, which are most suitable for extracting secret keys.", "the variables between the blue ellipse and the red ellipse are considered ordinary variables, the variables between the red ellipse and the green ellipse are considered bad quality variables, which occupy the largest proportion and have the most important impact on system performance, and the variables beyond the green circle are considered variables that need to be discarded.", "Figure: Correlation between the received variables constructed under different turbulence and Alice modulation variables.", "(Yellow circles, orange circles, and blue circles are the received variables through weak turbulence, medium turbulence, and strong turbulence.", ")Figure: Determine the classification labels according to the quality of the received variables.", "(Blue dots, orange dots, and green dots represent received variables through weak turbulence, medium turbulence, and strong turbulence, respectively.", "Variables in the black ellipse, between the black ellipse and the red ellipse, between the green ellipse and the red ellipse, and outside the green ellipse are labeled as excellent variables, ordinary variables, bad variables, and discarded variables.", ")Second, train the KNN classifier.", "KNN is considered to be one of the simplest classification algorithms and the most commonly used classification algorithms.", "The performances of the classifier on training set and test set are shown in Fig.", "REF , and the accuracy is 96.8% (K=5).", "Figure: ROC curve on test set.According to the above indicators (Accuracy, TPR, FNR, ROC Curve, and AUC) of the training set and test set, the classifier is trustworthy.", "Figure: Correction effect without classification.Third, for different variable classes, the correction coefficients are obtained by different neural networks.", "It can be seen that when the variables are classified, the corrected results of the neural network under different turbulence channels are excellent, maintained at more than 90% (Figs.", "REF - REF ).", "However, if the classification algorithm is not employed and the prediction results of all variables are directly corrected, the prediction results are reduced, about 70% and more (Fig.", "REF ), which also proves the importance of early classification.", "The system performance is shown as Fig.", "REF .", "It can be seen that the atmospheric attenuation model has the least impact on the system performance, followed by weak turbulence and medium turbulence, and strong turbulence has the greatest influence.", "The scheme is more universal and systematic with the assistance of a classification algorithm.", "Among 
them, the performance improvement of the system under medium-to-strong turbulence is the most obvious.", "Figure: Effect of equalization on system performance under different turbulence conditions.", "(The solid red line represents the performance of the system considering only atmospheric attenuation.", "The blue lines, rose-red lines, and black lines represent the system performance under weak turbulence, medium turbulence, and strong turbulence, respectively.", "Among them, the solid lines and the dotted lines are the cases without equalization and with equalization, respectively.)" ], [ "Conclusion", "This work shows the excess noise suppression and system performance enhancement based on equalization in a GMCS CVQKD experiment with a 10km fiber link through the pilot tone and neural network.", "Its results show that the correction effect of equalization is excellent, which can reduce excess noise to the theoretical value, and improve the system performance.", "For the more complex volatility of the free-space channel, the KNN classification algorithm is added, and the scheme can be achieved more efficiently.", "For different classes of received variables, the trained correction coefficients are more targeted, which can cope with various turbulence, thus, this scheme fully considers the characteristics of the large change range and fast change speed of the free-space channel.", "Compared with the unclassified equalization scheme, the classified scheme has a better correction effect, especially in medium-to-strong turbulence.", "Compared with other schemes, it focuses on the source of excess noise, fundamentally solves the problem of superfluous excess noise and boosts system performance, and omits phase tracking and compensation, reducing the hardware complexity and relieving the pressure on the system post-processing.", "It can be implemented on different links, which lays a foundation for the implementation of large-scale CVQKD in the future." 
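As a supplementary illustration (placed here so as not to interrupt the preceding analysis), the classification stage described in the free-space performance analysis can be sketched as follows: received variables are labelled by quality and a K = 5 KNN classifier is trained to reproduce those labels. The `quality_label` function is only a simple stand-in for the paper's ellipse-fitting boundaries, and the toy data are not the experimental variables.

```python
# Illustrative sketch of the KNN classification of received variables by quality.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def quality_label(x, y, t_hat):
    """Placeholder for the ellipse-based labels (0=excellent, 1=ordinary, 2=bad,
    3=discard): here simply binned by deviation from the fitted channel line."""
    return np.digitize(np.abs(y - t_hat * x), [0.5, 1.5, 3.0])


# x: Alice's modulation, y: Bob's received variables (toy data for illustration).
rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 20_000)
y = 0.5 * x + rng.normal(0.0, 1.0, x.size)

t_hat = np.sum(x * y) / np.sum(x**2)
features = np.column_stack([x, y])
labels = quality_label(x, y, t_hat)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=1)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", knn.score(X_te, y_te))
```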
], [ "Acknowledgements", "This work is supported by the National Natural Science Foundation of China (Grant No.", "62071381), Shaanxi Provincial Key R&D Program General Project (2022GY-023), and ISN 23rd Open Project (ISN23-06) supported by the State Key Laboratory of Integrated Services Networks (Xidian University).", "As an indispensable component of the global quantum communication networks, the free-space CVQKD is drawing much more attention.", "The atmospheric channel is an anisotropic time-varying medium composed of a variety of gas molecules and suspended particles, and its effect on the beam mainly includes the beam extinction caused by the absorption and scattering of atmospheric molecules, also called the atmospheric attenuation effect, and the random fluctuations in signal phase and amplitude caused by changes in atmospheric density and humidity caused by the motion of atmospheric molecules also called the atmospheric turbulence effect.", "When beams pass through the atmospheric turbulence channel, they may deviate from the established transmission direction during propagation, resulting in beam wandering and broadening.", "The interference of refracted beam of different intensities in the receiving aperture section brings about the loss of phase, resulting in deformation and scintillation.", "These phenomena can be described by the elliptical beam model [37].", "According to sub-channel theory [38], it is assumed that in a sub-channel $i$ -th with a transmittance of $T_{i}$ and an excess noise of $\\varepsilon _{i} $ , Alice modulation variable is $X^{i} =\\left\\lbrace x_{1}, x_{2},...,x_{N} \\right\\rbrace $ and Bob measurement variable is $Y^{i} =\\left\\lbrace y_{1}, y_{2},...,y_{N} \\right\\rbrace $ , and the correlation data of the two $\\left\\lbrace \\left( x_{j}^{i} , y_{j}^{i} \\right)_{j=1, 2, ..., N}^{i=1, 2, ..., M} \\right\\rbrace $ ($\\mathit {M} $ is the number of sub-channel, and $\\mathit {N}$ is the number of quantum state transmitted in each sub-channel) meets the following relationship $y_{j}^{i} =t_{i}x_{j}^{i} +z_{i},$ where $t_{i}=\\sqrt{\\eta T_{i} }$ , $\\eta $ is detection efficiency.", "The variance of Gaussian noise $z_{i}$ is $\\sigma _{i}^{2} =N_{0i}+ \\eta T_{i}\\varepsilon _{i}+\\nu _{el} $ , $N_{0i}$ represents the shot noise in $i$ -th channel.", "According to the GMCS protocol, Alice modulates the information on the amplitude $A$ and phase $\\varphi $ of pulses by amplitude modulation and phase modulation, thus, $X^{i}$ can be further described as $\\begin{split}X^{i} = A^{i} cos\\varphi ^{i},\\end{split}$ and the Gaussian coherent state passes through the linear channel model of Eq.", "(REF ), $Y^{i}$ can be described as $\\begin{split}Y^{i} = \\sqrt{\\eta } A_{\\alpha }^{i} A^{i}cos(\\varphi ^{i}+\\Delta \\varphi ^{i} )+A_{n}^{i} ,\\end{split}$ where $A_{\\alpha }^{i}$ and $\\Delta \\varphi ^{i}$ represent the amplitude attenuation parameter and phase fluctuation caused by free-space quantum channel propagation.", "$A_{n}^{i}$ represents the amplitude of noise.", "According to the Maximum likelihood estimation, parameters are estimated as $\\begin{split}\\hat{t_{i} } =\\frac{\\sum _{j=1}^{m}x_{j}^{i} y_{j}^{i} }{\\sum _{j=1}^{m} x_{j}^{i2} } ,\\hat{\\sigma } _{i} =\\frac{1}{m}\\sum _{j=1}^{m} (y_{j}^{i}-\\hat{t }_{i}x_{j}^{i}),\\end{split}$ with the confidence interval $\\begin{split}\\bigtriangleup t_{i}=z_{\\varepsilon _{PE}/2 } \\sqrt{\\frac{\\hat{\\sigma } _{i} }{mV_{A} } } ,\\bigtriangleup \\hat{\\sigma } _{i} =z_{\\varepsilon _{PE}/2 
}\\frac{\\hat{\\sigma } _{i}\\sqrt{2} }{\\sqrt{m} },\\end{split}$ where $m$ is the number of signals for parameter estimation, and $V_{A}$ is modulated variance.", "When considering the effects of channel fluctuations to the system, the Eq.", "(REF ) is rewritten as $\\begin{split}\\hat{t} _{i}&=\\frac{E\\left[ \\sqrt{\\eta }A_{\\alpha }^{i}(A^{i})^2 cos\\varphi ^{i} cos(\\varphi ^{i} +\\Delta \\varphi ^{i} ) \\right] }{\\hat{V}_{A} }=\\sqrt{\\eta }(E\\left[A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right]-E\\left[A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right] ),\\\\\\hat{\\sigma } _{i}^{2}&=E\\left[ (Y^{i})^2 \\right] -2\\hat{t} _{i} E\\left[ X^{i}Y^{i} \\right] +\\hat{t}_{i}^{2} E\\left[( X^{i})^2 \\right] \\\\&=\\hat{V} _{A} \\left\\lbrace \\eta \\left( E\\left[ (A_{\\alpha }^{i})^2 cos^{2}\\Delta \\varphi ^{i} \\right] + E\\left[ (A_{\\alpha }^{i})^{2} sin^{2}\\Delta \\varphi ^{i} \\right]\\right.\\right.\\left.-2E\\left[A_{\\alpha }^{i}sin \\Delta \\varphi ^{i} \\right]E\\left[A_{\\alpha }^{i}cos \\Delta \\varphi ^{i} \\right] \\right) \\\\&-2\\hat{t}_{i} \\sqrt{\\eta }(E\\left[A_{\\alpha }^{i}cos \\Delta \\varphi ^{i} \\right]\\left.", "-E\\left[A_{\\alpha }^{i}sin \\Delta \\varphi ^{i} \\right]) + \\hat{t} _{i}^{2} \\right\\rbrace +E\\left[(A_{\\alpha }^{i})^{2} \\right] .\\end{split}$ Beam extinction.", "Under the atmospheric turbulent channel, according to the elliptical model, the beam wandering causes a random displacement between the beam and the receiving plane, and the amplitude attenuation of the transmitted beam depends on the beam profile at the moment of reception and the displacement of its centroid relative to the center of the receiving aperture.", "In general, under weak turbulence conditions, the amplitude attenuation caused by atmospheric turbulence is described by a log-normal distribution.", "Under medium-strong turbulence conditions, the probability distribution function of amplitude attenuation caused by atmospheric turbulence is described by Gamma-Gamma distribution [39], $\\begin{split}P_{w} (I)=\\frac{1}{I\\sqrt{2\\pi \\sigma _{I}^{2}(\\mathbf {r} ,L) } } \\mathrm {exp} \\left\\lbrace -\\frac{\\left[\\mathrm { ln}(1/\\left\\langle I(\\mathbf {r} ,L) \\right\\rangle ) +\\sigma _{I}^{2}(\\mathbf {r} ,L)/2 \\right]^{2} }{2\\sigma _{I}^{2}(\\mathbf {r} ,L)} \\right\\rbrace ,\\\\P_{m-s} (I)=\\frac{2(\\alpha \\beta )^{(\\alpha + \\beta )/2} }{\\Gamma (\\alpha )\\Gamma (\\beta )} I^{\\frac{(\\alpha + \\beta )}{2}-1 }J_{\\alpha - \\beta }(2\\sqrt{\\alpha \\beta I} ),\\end{split}$ where $\\mathbf {r}$ represents the relative position between the centroid of the signal beam and the center of the receiving aperture, $\\sigma _{I}^{2}(\\mathbf {r} ,L)$ represents scintillation index, $\\Gamma (\\bullet )$ represents the Gamma function, $J_{(\\alpha -\\beta )}(\\bullet ) $ is the second type of correction Bezier function.", "The effective numbers of large-scale turbulence and small-scale turbulence during the scattering process is represented as $\\alpha =\\left[\\mathrm {exp} (\\sigma _{\\mathrm {lnx} }^{2} )-1 \\right]^{-1} $ and $\\beta =\\left[\\mathrm {exp} (\\sigma _{\\mathrm {lny} }^{2} )-1 \\right]^{-1}$ , respectively.", "Phase fluctuation.", "Generally speaking, it is assumed that the phase fluctuations caused by turbulent atmospheres follow a Gaussian distribution with variance $\\sigma _{\\Delta \\varphi }^{2} $ .", "The probability density function of $\\Delta \\varphi $ can be expressed as $\\rho (\\Delta \\varphi )=\\frac{1}{\\sqrt{2\\pi } \\sigma 
_{\\Delta \\varphi } } \\mathrm {exp}\\Big (-\\frac{({\\Delta \\varphi }^{2})}{2\\sigma _{\\Delta \\varphi }^{2} } \\Big ).$ Thus, the feature function $M(\\omega )$ under the Fourier transform of $\\rho (\\Delta \\varphi )$ is $M(\\omega )=\\mathrm {exp} \\left( -\\frac{\\omega ^{2}\\sigma _{\\Delta \\varphi }^{2} }{2} \\right) ,$ and $\\sigma _{\\Delta \\varphi }^{2}$ can be described by Zernike polynomial as [40] $\\sigma _{\\Delta \\varphi }^{2} = C_{J}\\left( \\frac{2a}{d_{0} } \\right) ^{2},$ where $a$ is receive aperture radius, $d_{0}$ [41] is used to describe the wavefront coherent diameter of the spatial correlation of phase fluctuations within the receiving plane, and $C_{J}$ is determined by Zernike polynomial [42], [43].", "Each part in Eq.", "(REF ) is described as $\\begin{aligned}E\\left[ A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right] &= \\frac{M(1)+M(-1)}{2} \\int _{\\mathfrak {B}} \\sqrt{I(\\mathbf {r_{i}} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} }\\\\&= \\text{exp}\\Big (-\\frac{(\\sigma _{\\Delta \\varphi }^{i})^{2} }{2}\\Big )\\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i}} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} },\\\\E\\left[ A_{\\alpha }^{i} sin\\Delta \\varphi ^{i} \\right] &= \\frac{-j[M(1)+M(-1)]}{2} \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i}} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} },\\\\E\\left[ (A_{\\alpha }^{i})^{2} cos^{2}\\Delta \\varphi ^{i} \\right] &= \\frac{2+[M(2)+M(-2)]}{4} \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i}} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} } \\\\&=\\frac{1+\\mathrm {exp\\Big (-(\\sigma _{\\Delta \\varphi }^{i})^{2} \\Big )} }{2} \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i}} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} },\\\\E\\left[ (A_{\\alpha }^{i})^{2} sin^{2}\\Delta \\varphi ^{i} \\right] &= \\frac{2-[M(2)-M(-2)]}{2} \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i}} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} }\\\\&= \\frac{1-\\mathrm {exp\\Big (-(\\sigma _{\\Delta \\varphi }^{i})^{2} \\Big )} }{2} \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i}} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} }.\\end{aligned}$ Thus, Eq.", "(REF ) can be smplified to $\\begin{split}\\hat{t} _{i}&=\\sqrt{\\eta } (E\\left[ A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right] -E\\left[ A_{\\alpha }^{i} sin\\Delta \\varphi ^{i} \\right])=\\mathrm {exp}\\Big (-\\frac{(\\sigma _{\\Delta \\varphi }^{i})^{2} }{2}\\Big ) \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i }} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} },\\\\\\sigma _{i}^{2}&=\\hat{V}_{A}\\eta \\left\\lbrace E[(A_{\\alpha }^{i})^{2}cos^{2}\\Delta \\varphi ^{i} ]+E[(A_{\\alpha }^{i})^{2}sin^{2}\\Delta \\varphi ^{i} ] \\right.", "\\left.-(E[A_{\\alpha }^{i}cos\\Delta \\varphi ^{i}])^2 \\right\\rbrace +E[(A_{\\alpha }^{i})^{2}] .\\end{split}$ Thus, for sub-channel $i_{th}$ , $\\begin{split}\\hat{T} _{i}&=\\frac{\\hat{t} _{i}}{\\eta } =\\frac{\\left[ \\mathrm {exp}-\\Big (\\sigma _{\\Delta \\varphi }^{i})^{2}\\Big ) \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i }} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} } \\right]^2 }{\\eta },\\\\\\hat{\\varepsilon } _{i}& =\\frac{\\hat{\\sigma }_{i}^{2}-N_{0i}-\\nu _{el} }{\\hat{t} _{i}} +\\hat{V} _{A} \\left\\lbrace \\frac{E\\left[ (A_{\\alpha }^{i})^{2} cos^{2}\\Delta \\varphi ^{i} \\right] }{(E\\left[ A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right])^{2} }+\\frac{E\\left[ (A_{\\alpha }^{i})^{2} sin^{2}\\Delta \\varphi ^{i} \\right] }{(E\\left[ A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right])^{2} 
}-1 \\right\\rbrace .\\end{split}$ Taking all subchannels into consideration, the estimated values of atmospheric channel transmittance $\\left\\langle \\hat{T}\\right\\rangle $ and excess noise $ \\hat{\\varepsilon } $ are $\\begin{split}\\left\\langle \\hat{T} \\right\\rangle &=\\sum _{i=1}^{M}p_{i}T_{i} = \\frac{1}{\\eta } \\sum _{i=1}^{M} p_{i} \\left[ \\mathrm {exp}\\Big (-(\\sigma _{\\Delta \\varphi ^{i}) }^{2}\\Big ) \\int _{\\mathfrak {B} } \\sqrt{I(\\mathbf {r_{i }} ,L)}p(I(\\mathbf {r_{i}} ,L))d\\mathbf {r_{i} } \\right]^2,\\\\\\hat{\\varepsilon }& =\\sum _{i=1}^{M} p_{i}\\varepsilon _{i}=\\sum _{i=1}^{M} p_{i}\\Bigg ( \\frac{\\hat{\\sigma }_{i}^{2}-N_{0i}-\\nu _{el} }{\\hat{t}_{i} }+\\hat{V} _{A} \\left\\lbrace \\frac{E\\left[ (A_{\\alpha }^{i})^{2} cos^{2}\\Delta \\varphi ^{i} \\right] }{(E\\left[ A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right])^{2} }+\\frac{E\\left[ A_{\\alpha }^{i} sin^{2}\\Delta \\varphi ^{i} \\right] }{(E\\left[ A_{\\alpha }^{i} cos\\Delta \\varphi ^{i} \\right])^{2} }-1 \\right\\rbrace \\Bigg ).\\end{split}$ Thus, the asymptotic secret key rate of the free-space CVQKD system in reverse reconciliation is expressed as $K(\\left\\langle \\hat{T } \\right\\rangle ,\\hat{\\varepsilon } )=(1-\\text{FER}) \\left[ \\beta _{R} I_{AB}(\\left\\langle \\hat{T } \\right\\rangle ,\\hat{\\varepsilon } )-\\chi _{BE} (\\left\\langle \\hat{T } \\right\\rangle ,\\hat{\\varepsilon })-\\bigtriangleup (n)\\right],$ where $\\text{FER} \\in \\left[ 0,1 \\right] $ , representing the frame-error rate, and $\\beta _{R} \\in \\left[ 0,1 \\right] $ , representing the efficiency of information reconciliation.", "$ I_{AB}$ means mutual information between Alice and Bob, $\\chi _{BE}$ means the maximum amount of information that Eve can steal from Bob limited by the Holevo bound, and $\\bigtriangleup (n)$ is related to the security parameter of the privacy amplification." ] ]
2207.10444
[ [ "Supergranular Fractal Dimension and Solar Rotation" ], [ "Abstract We present findings from an analysis of the fractal dimension of solar supergranulation as a function of latitude, supergranular cell size and solar rotation, employing spectroheliographic data in the Ca II K line of solar cycle no.", "23.", "We find that the fractal dimension tends to decrease from about 1.37 at the equator to about 1 at 20 degree latitude in either hemisphere, suggesting that solar rotation rate has the effect of augmenting the irregularity of supergranular boundaries.", "Considering that supergranular cell size is directly correlated with fractal dimension, we conclude that the mechanism behind our observation is that solar rotation influences the cell outflow strength, and thereby cell size, with the latitude dependence of the supergranular fractal dimension being a consequence thereof." ], [ "Introduction", "Supergranules, the large convective eddies discovered by Hart in the year 1950 and later characterized by [16] are believed to be visible manifestations of sub-photospheric convection currents.", "Typically, these cellular patterns have a horizontal flow velocity in the range of 0.3 to 0.4 km/s, an autocorrelation length scale of around 30 Mm and a lifetime of about 24 hour [32].", "The supergranular pattern as a whole tends to be irregularly surface filling [17] and has an estimated lifetime of about 2 days [7].", "While their horizontal flows may reach  300-400m/s, their upflows are an order of magnitude slower.", "Unlike granules, they are not thought to be truly convective, which explains why they are better observed in Dopplergrams than in intensitygrams.", "Indeed, this is a reason why they were initially discovered through Doppler images.", "It is known that supergranular cell boundaries coincide with the chromospheric networks, attributed to magnetic fields flushed to the cell boundaries by the horizontal flow (Simon and Leighton, 1964).", "The size and flow spectrum associated with supergranulation include smaller cells in such a way that the spectrum of supergranules leads to the spectrum of granulation [12] and has a dependence on Solar cycle phase and total irradiance [18].", "Here it may be noted that both Doppler signals and the spectral component due to granules are visible in SDO/HMI data [38].", "A number of researchers have noted the effects of interaction between solar activity and the supergranular magnetic network.", "Based on an analysis of spectroheliograms spanning seven consecutive solar maxima, [33] claim that the chromospheric network cell size is smaller at the solar maximum phase than at the solar minimum phase.", "This is in consonance with the findings of [13] on the chromospheric network variability, of [3] on the network geometry, and of [29], who study magnetic field influence on network scale, but differs from a study on the related velocity and magnetic fields [37], and [21], who have reported larger network cells areas in higher magnetic activity regions.", "The supergranular rotation rate at the solar equator has been reported by various authors and found to be about 3$\\%$ more than the surface plasma's rotation rate, a phenomenon termed as 'supergranular superrotation' [6], [2], but it should be noted that this is probably a projection effect and not a genuine wave phenomenon [10].", "Based on a time-distance helioseismology analysis of the SOHO-MDI, the pattern of supergranulation is found to be oscillatory [7], generating waves with a time period 
between six and nine days.", "The apparent superrotation may be explained by the fact that the waves are largely prograde.", "The fractal dimension is a useful mathematical representation for describing the complexity of geometrical structures and for understanding the underlying dynamics [19].", "An object is called a fractal if it displays self-similarity at different scales.", "Fractal analysis has been used to study the turbulence of the magnetoconvection of solar magnetic fields [15], [36].", "Fractal analysis has also been used in solar surface studies, such as of dopplergrams [20] and of Ca II K filtergrams of SoHO MDI [26] and of KSO data [5], [27].", "The fractal nature of supergranulation was studied in detail by [24] and its relation to solar activity by [25], where the role of turbulence on the complexity of the cell was indicated.", "Pic du Midi data was used to calculate the granulation pattern's fractal dimension [31], which was the first application of fractal dimension investigation to a solar surface phenomenon.", "They obtained a fractal dimension of 2 for large granules and $1.25$ for smaller ones.", "[4] used fractal analysis to explain the turbulent origin of supergranulation.", "They chose an intensity threshold, produced a binary image representing the chromospheric network, and used a medial axis transform (skeleton) of the binary image to extract the geometrical properties of the cells.", "To calculate the degree of circularity of supergranular cells, [35] used the tessellation method on the supergranulation pattern." ], [ " Data and Analysis", "This analysis uses the quiet region data (in both quiescent and active phases) of the solar cycle no.", "23 (covering the years 1996 to 2008) from the Kodaikanal Solar Observatory (KSO) archives (https://kso.iiap.res.in).", "Figure REF depicts data obtained during the active phase of this cycle.", "The KSO's dual telescope is equipped with a Ca II K spectroheliograph with a spectral dispersion of 7 $Å$ /mm near 3930 $Å$ .", "It employs a 6 cm image obtained with a Cooke photovisual triplet of 30 cm, onto which sunlight is reflected by a 460 mm diameter Foucault siderostat.", "Light with a bandwidth of 0.5 $Å$ is admitted by the exit slits.", "The images are suitably time-averaged to remove the effects of p-mode oscillations.", "Well-formed supergranular cells within an angular distance of $20^\circ $ are selected by visual inspection, where the restriction is made to minimize projection effects, cf.", "[27].", "Figure REF is part of a full-disk image in which we highlight a few regions where we are able to visually identify well-defined cells.", "Figure: Spectroheliogram of Ca II K from KSO, indicating supergranule selections, taken during cycle no.", "23, in particular the active phase of October, 2000.", "The image orientation is N-S.", "Per day the setup generates 144 images with a post-averaged time cadence of 10 min.", "As the image resolution is 2 arcsec, which is twice the granular scale, it is expected that our results are insensitive to granular effects.", "About 400 well-defined cells were extracted from quiet regions within the belt between $20^\circ $ N and S.
The area-perimeter relation obtained from them forms the basis for deriving the fractal dimension [22].", "Figure: Ca II spectroheliogram scan: mean-shifted profile of a selected supergranule showing two crests, which stand for the cell boundary.", "If the peak position was ambiguous, one could potentially try to use a Gaussian profile to fit the cell wall.", "However, as in the above case, the position could be unambiguously determined.", "The cell area and perimeter are obtained with multiple such scans.", "(The negative values correspond to points with below-mean intensity.)", "The methodology was manual and not automated.", "It goes briefly as follows: first, the visually identified cell is subjected, using IDL software, to a “two-dimensional tomography”, i.e., multiple sequential scans, such as shown in Figure  REF .", "In each scan, the cell boundaries define the area included in the scan, which is added to obtain a consolidated area, while the locus of boundaries across scans determines the cell perimeter.", "Our analysis, based on direct visual inspection, yields a cell size in consonance with other works which employ methods that track individual cells [24], [8].", "The latter reference, employing a tessellation procedure based on the steepest-gradient algorithm, obtained a characteristic cell diameter in the range 13-18 Mm, which is half of the cell scale obtained using methods such as the autocorrelation method or spherical harmonics decomposition [12].", "The cause of this discrepancy is a matter under current investigation, to be reported elsewhere.", "Figure REF gives the area vs perimeter plot for the analyzed cells, demonstrating a power-law relationship.", "If $P$ and $A$ denote the cell's perimeter and area, respectively, then the fractal dimension $D$ is obtained according to: $\begin{aligned}\centering D\delta \log (A) = 2\delta \log (P).\end{aligned}$ For perfect circles or squares, for which the area increases quadratically as a function of the perimeter, the fractal dimension is $D = 1$ .", "The more the cell structure deviates from regularity by being denticulate (i.e., the boundaries are craggy and rugged), the greater the perimeter length needed to enclose a given area, and thereby the more the fractal dimension increases towards 2.", "Figure: Log-log plot of supergranular perimeter vs area in units of Mm and Mm$^2$, respectively.", "The displayed data consists of about 130 cells, corresponding to the first data point in Figure .", "The chosen region of study, which subtends about 30$^\circ $ about the image center, should contain approximately 300 cells per image.", "Thus, in principle, a greater number of cells can be employed than used in this study.", "In automated methods of cell extraction, such as the steepest-gradient-based tessellation technique of one of the authors here (e.g., [35]) or autocorrelation-based extraction of the cell scale (e.g., [30], by the same authors), a greater region can be mechanically covered for study.", "However, such methods require a degree of interpretation, such as (in the former case) whether the extracted cells are precisely supergranules or include other cell-like regions of smaller or larger scale.", "In the latter case, the autocorrelation scale may be enhanced as an artefact of open cells, which lack a well-defined boundary.", "The present manual method has the advantage of visually selecting well-defined cells, but being time-consuming, yields fewer cells
in a given time.", "Further, the present method may involve a selection effect in that it may be biased towards cells of smaller size.", "This is because apparently they tend to be better defined than larger cells, which tend to have more broken / diffuse boundary walls.", "Ideally, it would be apt to develop a supervised machine-learning algorithm that is trained by the present visual inspection method." ], [ " Cell size, fractal dimension and rotation", "In a first analysis, we look at how the fractal dimension varies with supergranular length scale.", "We have considered four size ranges, combining data across all latitudes.", "Figure  REF depicts a broad but well-established dependence of fractal dimension on the area of the supergranular cells and is shown in different ranges of the supergranular cell area.", "For those cells whose area is below 100 Mm$^2$ , the fractal dimension is found to be about 1, meaning that they are quite regular in shape.", "On the other hand, for an area between (100-200) Mm$^2$ , (200-300) Mm$^2$ , (300-400) Mm$^2$ the fractal dimension is found to be about 1.38, 1.5 and 1.68 respectively indicating a more irregular-shaped perimeter.", "Figure: Supergranular fractal dimension dependence on area, showing that larger cells are more irregular shaped.", "Here, the area parameter is grouped into bands of size 100 Mm 2 ^2, which is large enough to enable inclusion of a statistically significant number of data points.", "In passing, it may be mentioned that this behavior is in agreement with the findings of Meunier (1999) for active regions.The choice of a band of 100 Mm$^2$ to classify the cells is rather arbitrary, but found to be convenient for our data set.", "Thus our main observation here is that the smaller supergranular cells are more regular shaped than larger ones.", "This agrees with the result reported by [35], who find that larger cells have less regular boundaries (quantified through a “circularity” parameter).", "This feature is attributed to the idea that supergranular outflows become choppier at larger distances, reflected in the irregularity of the swept-out magnetic fields.", "Here, it is of interest to note that [4] used fractal analysis to explain the turbulent origin of supergranulation." 
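To make the area-perimeter procedure above concrete, the following is a minimal sketch (with illustrative function and variable names, not the authors' IDL code) that estimates $D$ from measured (area, perimeter) pairs via a least-squares fit in log-log space, optionally within 100 Mm$^2$ area bands as in the figure above.

```python
import numpy as np

def fractal_dimension(areas, perimeters):
    """Estimate D from D * dlog(A) = 2 * dlog(P): D/2 is the slope of log P versus log A."""
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

def fractal_dimension_by_area_band(areas, perimeters, band=100.0):
    """Group cells into area bands of width `band` (Mm^2) and fit D separately in each band."""
    areas, perimeters = np.asarray(areas), np.asarray(perimeters)
    labels = (areas // band).astype(int)
    return {int(b): fractal_dimension(areas[labels == b], perimeters[labels == b])
            for b in np.unique(labels) if np.count_nonzero(labels == b) > 2}
```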
], [ "Latitude, Solar rotation and fractal dimension", "In a second analysis, the fractal dimension is computed for the latitude belts (0-3), (3-6), (6-9), (9-12), (12-15), (15-18) and (18-21), the data comprise cells from both hemispheres.", "In this case, cells are not sifted according to size.", "Columns $\\#$ 2 and $\\#$ 3 of Table REF gives the latitude range and corresponding fractal dimension.", "It shows that at lower latitudes, the estimated fractal dimension is higher than that at the higher latitudes (cf.", "[28]).", "The result is given in Table  REF .", "The data of Figure REF and Table REF together suggests that supergranular cell sizes fall slightly at higher latitudes in the selected belt, in agreement with the observation of [30].", "Table: The individual perimeter-vs-area plots are used to obtain fractal dimension for each latitude belt, plotted in Figure .", "For each latitude belt, the fractal dimension derived is based on about 50-100 cells.The latitudinal dependence of supragranular fractal dimension suggests a connection to solar differential rotation and possibly to supergranular superrotation.", "The cellular rotation rate, as determined by [11], is: $\\begin{aligned}\\centering \\Omega (\\theta ,\\lambda )/2\\pi = [1+g(\\lambda )](454- 51\\sin ^2\\theta - 92 \\sin ^4\\theta ),\\end{aligned}$ where $\\lambda $ is latitude and $g(\\lambda ) = \\tanh (\\lambda /31)[2.3-\\tanh ((\\lambda -65)/20)] /73.3$ is expressed in Mm and g($\\lambda $ ) is a dimensionless quantity.", "With a typical value $\\lambda = 32$ Mm, the value of $(1+g (\\lambda ))$ turns out to be about 1.017.", "The value of rotation for the mid-belt is given in Table REF .", "Eq.", "(REF ) shows that the rotation rate falls off as one moves away from the equator in either hemisphere, similar latitudinal dependence of the fractal dimension.", "In order to connect the observation given by Eq.", "(REF ) to our data, we shall assume a simple linear relation between fractal dimension and rotation given by $D = a + b(\\Omega /2\\pi )$ , for certain real parameters $a$ and $b$ .", "The form of Eq.", "(REF ) leads us to the relation $\\begin{aligned}\\centering D = 1.34 - 3.5 \\sin ^2 \\theta - 6.3 \\sin ^4\\theta .\\end{aligned}$ which is found to provide a reasonable fit to the data of Table REF .", "Using Eqs.", "(REF ) and (REF ) to eliminate $\\theta $ , we obtain: $\\begin{aligned}\\centering D = -29.6 + 0.067(\\Omega /2\\pi )\\end{aligned}$ plotted in Figure REF .", "Other slightly different versions of the dependence Eq.", "(REF ) are possible, e.g., [14], and accordingly we may obtain slight variations of Eq.", "(REF ).", "Figure: Variation of fractal dimension with rotation from the data of Table and Eq.", "().", "The linear fit comes from assuming a linear relation between the two variables, and requiring a best fit subject to the constraints of Eqs.", "() and ().Two causes may be at play working hand in hand to produce the rotational dependence of fractal dimension, given by Eq.", "(REF ).", "First is that, as we reported above, cell sizes fall towards higher latitudes (Table REF ), which may be a rotational effect and can be understood as follows.", "The differential rotation through the dynamo action causes an enhancement of quiet sun magnetic fields at higher latitudes.", "This field enhancement is expected to have a constricting influence on cell size [33], leading to smaller cells at higher latitudes, as confirmed by [30].", "And as we show later (below in Eq.", "(REF )), larger cells are 
expected to have a greater fractal dimension.", "By virtue of Eq.", "(REF ), we know that the rotation speed falls towards higher latitudes.", "These considerations provide a basis for the observed direct correlation between $D$ and the rotation rate.", "Another possible cause is related to the fact that when the radial outflow of a supergranule encounters the ambient plasma at the cell boundary, the fluidic stress, and hence turbulence, is expected to be relatively less where the plasma rotation speed is lower, assuming a uniform outflow speed across the latitudes.", "Correspondingly, the cell boundaries at latitudes associated with slower rotation, namely the higher latitudes, are expected to be less corrugated, or in other words, to have a lower fractal dimension, as we find in Table REF ." ], [ "Conclusion & Discussions", "We have found that the fractal dimension for supergranulation is directly correlated with supergranular cell size (Figure REF ), but anti-correlated with latitude (Table  REF ).", "Taking into account the observed quartic polynomial relationship between solar rotation and the sine of the latitude, Eq.", "(REF ), we have proposed a simple dependence of fractal dimension on solar rotation.", "We now briefly and qualitatively consider the question of a potential underlying mechanism to explain this behavior, which we hope to understand more quantitatively in a future work.", "The latitude dependence of the fractal dimension $D$ is expected to be influenced by its dependence on the scale of supergranulation and the quiet Sun magnetic field distribution.", "We now discuss the nature of these two dependences.", "With regard to the latter, we remark that magnetic flux tubes, “frozen” into the plasma, have a constricting property, essentially because charged particles are not allowed to cut across field lines.", "This is due to the Lorentz force, given by $\vec{F}_{\rm L} \propto \vec{v} \times {\bf B}$ , where ${\bf B}$ and $\vec{v}$ represent magnetic field intensity and velocity, respectively.", "Indeed, the flow of plasma across a field line is forbidden in the limit of extremely high electrical conductivity because it would generate enormous eddy currents [1].", "We now speculate on a potential qualitative scenario that can account for our results.", "Assuming that supergranules are convective cells, the magnetic field is expected to accumulate at the supergranular edges thanks to the above magnetohydrodynamical feature.", "The larger number of flux tubes transported to the edges of larger cells by convective motions, together with the associated solar rotation, may be a key factor in determining how strongly the supergranular outflow pushes against the ambient plasma, resulting in smaller cells at higher latitudes in the chosen latitudinal range.", "Since the cell wall is formed by a heating of the overlying plasma by the magnetic flux swept by the supergranular convective flow, larger cells typically show more fluctuations and discontinuities in the cell wall, and hence a larger fractal dimension.", "This may explain the direct correlation between cell size and the fractal dimension (Figure REF ).", "We propose a simple model that tries to capture the above idea.", "For the turbulent medium described by Kolmogorov theory applied to the solar convection associated with supergranulation, we expect the relation between the horizontal speed $v_{\rm horiz}$ and the cell size $L$ to be given by: $v_{\rm horiz} = \eta ^{1/3} \times L^{1/3},$ where $\eta $ is connected to the plasma
injection rate [23].", "Letting $T = L/v_{\rm horiz}$ represent the time that a plasma fluid element takes to traverse from the point of upflow at the cell center to the boundary, and $\delta _{\rm horiz}$ represent the standard deviation in the horizontal velocity, we may then estimate that the standard deviation induced in $L$ is given by $\delta _L = T \delta _{\rm horiz} = \eta ^{-1/3}L^{2/3}\delta _{\rm horiz},$ which implies that the larger the cell size, the greater the spread of the cell boundary.", "[23] estimate using SOHO dopplergram data that $\eta , \delta _{\rm horiz}$ and the mean value of $L$ are, respectively, $2.89 \times 10^{-6}$ km$^2$ s$^{-3}$ , 74.1 m/s and 33.7 Mm.", "Substituting these values into the right hand side of Eq.", "(REF ), we obtain about 5.4 Mm for $\delta _L$ , which is close to the value of the standard deviation in $L$ of 8.96 Mm reported by [23].", "It is not unreasonable to assume that the standard deviations mentioned above, obtained over many cells, also indicate the variation of the corresponding variables over different times and positions in a given cell.", "Under this assumption, Eq.", "(REF ) can be interpreted as asserting that the boundaries of larger cells show greater fluctuation, and thus, by extension, greater fractal dimension, consistent with the plot in Figure REF .", "Our result appears to support previous studies [34], [35], which report that larger cells have a more craggy perimeter.", "[30] have reported a decrease in the autocorrelation scale of supergranules as one moves to higher latitudes until $\pm 20^\circ $ , and an increase thereafter until $\pm 30^{\circ }$ .", "In conjunction with Figure REF , this would suggest that the fractal dimension must have an analogous latitude dependence, with minima around $\pm 20^\circ $ .", "Thus, whilst $D$ has the expected behavior at the lower latitudes, it appears that other factors must be invoked to explain its behavior farther up.", "Here we note that quiet Sun fields are reported to show enhancements around the equator and $\pm 30^{\circ }$ [9].", "This, in light of the preceding argument, would be consistent with the data of Table REF , except that we would expect a dip in $D$ close to the equator.", "In conclusion, it appears that the latitude dependence of $D$ that we find is the resultant of the somewhat conflicting constraints imposed by the cell scale and the quiet Sun magnetic field distribution.", "We may conclude that further study, using a different method of cell statistics analysis to process a larger number of cells, is needed to unravel the detailed behavior of $D$ as a function of latitude.", "It will be of interest to try to quantitatively obtain Eq.", "(REF ) based on these considerations, which would then lead to Eq.", "(REF ) in conjunction with Eq.", "(REF ).", "In future works, we propose to return to the same data, but using other approaches, such as autocorrelation, spectral analysis or automated tessellation.", "Here it is worth noting that a turbulent origin of supergranulation has been studied, and in particular [4] have used fractal analysis in this context.", "In the theory of turbulent energy cascade, the Kolmogorov spectrum for energy as a function of wave number $k$ , given by $k^{-\frac{5}{3}}$ , implies that the variance of temperature varies with length scale as $r^{2/3}$ , while the variance of pressure varies as $r^{4/3}$ [24].", "[19] showed that the fractal dimension of an isosurface is given by $D = D_E - \langle \zeta
\rangle /2 $ , where $D_E$ is the Euclidean dimension of the object (here 2, for supergranulation) and $\langle \zeta \rangle $ is the exponent in the functional form of the variance for the given quantity.", "Accordingly, for isotherms and isobars we find $D=5/3 \approx 1.66$ and $D = 4/3 \approx 1.33$ , respectively.", "Our data in Table REF show that, at each latitude, the fractal structure of supergranulation is closer to an isobaric than to an isothermal pattern.", "It would be interesting to study whether the assumed linear behavior that underlies Eq.", "(REF ) is related to this." ], [ "Acknowledgement", "We thank the Indian Institute of Astrophysics (IIA) for providing the Ca-K filtergram data, and Fiaz for providing technical help with image handling.", "We are grateful to Prof. J. Singh for his valuable suggestions and support." ] ]
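As a quick numerical check of the $\delta_L$ estimate quoted in the discussion above (a sketch using the values from [23], converted to SI units; not part of the original analysis):

```python
# delta_L = eta**(-1/3) * L**(2/3) * delta_horiz, with the values quoted from [23]
eta = 2.89e-6 * 1e6        # 2.89e-6 km^2 s^-3 converted to m^2 s^-3
L = 33.7e6                 # 33.7 Mm in m
delta_horiz = 74.1         # m/s
delta_L = eta ** (-1.0 / 3.0) * L ** (2.0 / 3.0) * delta_horiz
print(delta_L / 1e6)       # ~5.4 (Mm), to be compared with the 8.96 Mm scatter reported in [23]
```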
2207.10490
[ [ "Communication Lower Bounds and Optimal Algorithms for Multiple\n Tensor-Times-Matrix Computation" ], [ "Abstract Multiple Tensor-Times-Matrix (Multi-TTM) is a key computation in algorithms for computing and operating with the Tucker tensor decomposition, which is frequently used in multidimensional data analysis.", "We establish communication lower bounds that determine how much data movement is required to perform the Multi-TTM computation in parallel.", "The crux of the proof relies on analytically solving a constrained, nonlinear optimization problem.", "We also present a parallel algorithm to perform this computation that organizes the processors into a logical grid with twice as many modes as the input tensor.", "We show that with correct choices of grid dimensions, the communication cost of the algorithm attains the lower bounds and is therefore communication optimal.", "Finally, we show that our algorithm can significantly reduce communication compared to the straightforward approach of expressing the computation as a sequence of tensor-times-matrix operations." ], [ "Introduction", "The Tucker tensor decomposition is a low-rank representation or approximation that enables significant compression of multidimensional data.", "The Tucker format consists of a core tensor, which is much smaller than the original data tensor, along with a factor matrix for each mode, or dimension, of the data.", "Computations involving Tucker-format tensors, such as tensor inner products, often require far fewer operations than with their full-format, dense representations.", "As a result, the Tucker decomposition is often used as a dimensionality reduction technique before other types of analysis are done, including computing a CP decomposition [8], for example.", "A 3-way Tucker-format tensor can be expressed using the tensor notation $T̰ = G̰ \\times _1 {\\mathbf {\\mathbf {A}}}^{(1)} \\times _2 {\\mathbf {\\mathbf {A}}}^{(2)} \\times _3 {\\mathbf {\\mathbf {A}}}^{(3)}$ , where $G̰$ is the 3-way core tensor, ${\\mathbf {\\mathbf {A}}}^{(n)}$ is a tall-skinny factor matrix corresponding to mode $n$ , and $\\times _n$ denotes the tensor-times-matrix (TTM) operation in the $n$ th mode [17].", "Here, $T̰$ is the full-format representation of the tensor that can be constructed explicitly by performing multiple TTM operations.", "We call this collective operation the Multi-TTM computation, which is the focus of this work.", "Multi-TTM is a fundamental computation in the context of Tucker-format tensors.", "When the Tucker decomposition is used as a data compression tool, Multi-TTM is exactly the decompression operation, which is necessary when the full format is required for visualization [18], for example.", "In the case of full decompression, the input tensor is small and the output tensor is large.", "One of the quasi-optimal algorithms for computing the Tucker decomposition is the Truncated Higher-Order SVD algorithm [27], [19], in which each factor matrix is computed as the leading left singular vectors of a matrix unfolding of the tensor.", "In this algorithm, the smaller core tensor is computed via Multi-TTM involving the larger data tensor and the computed factor matrices.", "When the computational costs of the matrix SVDs are reduced using randomization, Multi-TTM becomes the overwhelming bottleneck computation [22], [25].", "Since the overall size of multidimensional data grows quickly, there have been many recent efforts to parallelize the computation of the Tucker decomposition and the 
operations on Tucker-format tensors [2], [9], [21], [11], [4].", "There has also been recent progress in establishing lower bounds on the communication costs of parallel algorithms for tensor computations, including the Matricized-Tensor Times Khatri-Rao product (MTTKRP) [5], [6], [28] and symmetric tensor contractions [24].", "However, to our knowledge, no communication lower bounds have been previously established for computations relating to Tucker-format tensors.", "In this work, we prove communication lower bounds for a class of Multi-TTM algorithms.", "Additionally, we provide a parallel algorithm that attains the lower bound to within a constant factor and is therefore communication optimal.", "To minimize the number of arithmetic operations in a Multi-TTM computation, the TTM operations should be performed in sequence, forming temporary intermediate tensors after each step.", "One of the key observations of this work is that when Multi-TTM is performed in parallel, this approach may communicate more data than necessary, even if communication-optimal algorithms are used for each individual TTM.", "By considering the Multi-TTM computation as a whole, we can devise parallel algorithms that can communicate less than this TTM-in-Sequence approach, often with negligible increase in computation.", "Our proposed algorithm provides greatest benefit when the input and output tensors vary greatly in size.", "The main contributions of this paper are to establish communication lower bounds for the parallel load balanced Multi-TTM computation; propose a communication optimal parallel algorithm; show that in many typical scenarios, the straightforward approach based on a sequence of TTM operations communicates more than performing Multi-TTM as a whole.", "The rest of the paper is organized as follows.", "sec:relatedWork describes previous work on communication lower bounds for matrix multiplication and some tensor operations.", "In sec:notations, we present our notations and preliminaries for the general Multi-TTM computation.", "To reduce the complexity of notations, we first focus on 3-dimensional Multi-TTM computation for which we present communication lower bounds and a communication optimal algorithm in sec:3dLowerBounds and sec:3dUpperBounds, respectively.", "In sec:experiments, we validate the optimality of the proposed algorithm and show that it significantly reduces communication compared to the TTM-in-Sequence approach with negligible increase in computation in many practical cases.", "We present our general results in sec:genLowerBounds,sec:genUpperBounds, and propose conclusions and perspectives in sec:conclusions." 
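To fix ideas, the following is a minimal NumPy sketch (illustrative only, not the implementation used in this work) of the TTM-in-Sequence approach referred to above, in which the tensor is contracted with one factor matrix at a time and an intermediate tensor is formed after each mode.

```python
import numpy as np

def ttm(T, A, mode):
    """Tensor-times-matrix in one mode: returns T x_mode A^T (contracts dim n_mode with rows of A)."""
    T = np.moveaxis(T, mode, 0)
    front, rest = T.shape[0], T.shape[1:]
    out = A.T @ T.reshape(front, -1)                 # (r_mode x n_mode) @ (n_mode x prod(rest))
    return np.moveaxis(out.reshape((A.shape[1],) + rest), 0, mode)

def multi_ttm_in_sequence(X, factor_matrices):
    """Sequence of TTMs: each step transforms one mode, reusing the previous intermediate."""
    Y = X
    for mode, A in enumerate(factor_matrices):
        Y = ttm(Y, A, mode)
    return Y
```

Computing a Tucker core from a data tensor corresponds to applying this with tall factor matrices, so each intermediate is smaller than its predecessor; the communication tradeoff against performing the Multi-TTM as a whole is the subject of the following sections.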
], [ "Related Work", "A number of studies have focused on communication lower bounds for matrix multiplication, starting with the work by Hong and Kung [14] to determine the minimum number of I/O operations for sequential matrix multiplication using red-blue pebble game.", "Irony et al.", "[15] extended this work for the parallel case.", "Demmel et al.", "[13] studied memory independent communication lower bounds for rectangular matrix multiplication based on aspect ratios of matrices.", "Recently, Smith et al.", "[23] and Al Daas et al.", "[1] have tightened communication lower bounds for matrix multiplication.", "Ballard et al.", "[3] extended communication lower bounds of the matrix multiplication for any computations that can be written as 3 nested loops.", "Christ et al.", "[12] generalized the method to prove communication lower bounds of 3 nested loop computations for arbitrary loop nesting.", "We apply their approach to our Multi-TTM definition.", "There is limited work on communication lower bounds for tensor operations.", "Solomonik et al.", "[24] proposed communication lower bounds for symmetric tensor contraction algorithms.", "Ballard et al.", "[5] proposed communication lower bounds for MTTKRP computation with cubical tensors.", "This work is extended in [6] to handle varying tensor dimensions.", "A sequential lower bound for tile-based MTTKRP algorithms is proved by Ziogas et al.", "[28].", "We use some results from [5], [6] to prove communication lower bounds for Multi-TTM." ], [ "Notations and Preliminaries", "In this section, we present our notations and basic lemmas for $d$ -dimensional Multi-TTM computation.", "In sec:3dLowerBounds,sec:3dUpperBounds,sec:experiments, we focus on $d=3$ , i.e., $Y̰$ = $X̰ \\times _1 {{\\mathbf {\\mathbf {A}}}^{(1)}}^{\\sf T}\\times _2 {{\\mathbf {\\mathbf {A}}}^{(2)}}^{\\sf T}\\times _3 {{\\mathbf {\\mathbf {A}}}^{(3)}}^{\\sf T}$ .", "We present our general results in sec:genLowerBounds,sec:genUpperBounds.", "We use boldface uppercase Euler script letters to denote tensors ($X̰$ ) and boldface uppercase letters with superscripts to denote matrices (${\\mathbf {\\mathbf {A}}}^{(1)}$ ).", "We use lowercase letters with subscripts to denote sizes ($n_1$ ) and add the prime symbol to them to denote the indices ($n_1^\\prime $ ).", "We use one-based indexing throughout and $[d]$ to denote the set $\\lbrace 1,2,\\cdots , d\\rbrace $ .", "To improve the presentation, we denote the product of elements having the same lowercase letter with all subscripts by the lowercase letter only ($n_1\\cdots n_d$ by $n$ and $r_1\\cdots r_d$ by $r$ ).", "We denote the product of the $i$ rightmost terms with the capital letter with subscript $i$ , $N_i=\\prod _{j=d-i+1}^dn_j$ and $R_i = \\prod _{j=d-i+1}^d r_i$ , thus $n=N_d$ , and $n_d=N_1$ .", "Let $Y̰\\in \\mathbb {R}^{r_1\\times \\cdots \\times r_d}$ be the $d$ -mode output tensor, $X̰\\in \\mathbb {R}^{n_1\\times \\cdots \\times n_d}$ be the $d$ -mode input tensor, and ${\\mathbf {\\mathbf {A}}}^{(k)} \\in \\mathbb {R}^{n_k\\times r_k}$ be the matrix of the $k$ th mode.", "Then the Multi-TTM computation can be represented as $Y̰= X̰\\times _1 {{\\mathbf {\\mathbf {A}}}^{(1)}}^{\\sf T}\\cdots \\times _d {{\\mathbf {\\mathbf {A}}}^{(d)}}^{\\sf T}$ .", "Without loss of generality and to simplify notation, we consider that the input tensor $X̰$ is larger than the output tensor $Y̰$ , or $n\\ge r$ .", "This corresponds to computing the core tensor of a Tucker decomposition given computed factor matrices, for 
example.", "However, the opposite relationship where the output tensor is larger (e.g., $X̰= Y̰\\times _1 {{\\mathbf {\\mathbf {A}}}^{(1)}} \\cdots \\times _d {{\\mathbf {\\mathbf {A}}}^{(d)}}$ ) is also an important use case, corresponding to forming an explicit representation of a (sub-)tensor of a Tucker-format tensor.", "Our results extend straightforwardly to this case.", "We also assume without loss of generality that the tensor modes are ordered in such a way that $n_1r_1 \\le n_2r_2 \\le \\cdots \\le n_dr_d$ .", "Let $X̰$ be an $n_1\\times \\cdots \\times n_d$ tensor, $Y̰$ be an $r_1\\times \\cdots \\times r_d$ tensor, and ${\\mathbf {\\mathbf {A}}}^{(j)}$ be an $n_j \\times r_j$ matrix for $j\\in [d]$ .", "Multi-TTM computes $Y̰= X̰\\times _1 {{\\mathbf {\\mathbf {A}}}^{(1)}}^{\\sf T}\\cdots \\times _d {{\\mathbf {\\mathbf {A}}}^{(d)}}^{\\sf T}$ where for each $(r_1^\\prime ,\\ldots ,r_d^\\prime ) \\in [r_{1}] \\times \\cdots \\times [r_d]$ , $Y̰(r_1^\\prime ,\\ldots ,r_d^\\prime ) = \\sum _{\\lbrace n_k^\\prime \\in [n_k]\\rbrace _{k \\in [d]}} X̰(n_1^\\prime ,\\ldots ,n_d^\\prime ) \\prod _{j \\in [d]} {\\mathbf {\\mathbf {A}}}^{(j)}(n_j^\\prime ,r_j^\\prime ).$ Let us consider an example when $d=2$ .", "In this scenario, the input and output tensors are in fact matrices ${\\mathbf {\\mathbf {X}}},{\\mathbf {\\mathbf {Y}}}$ , and ${\\mathbf {\\mathbf {Y}}} = {\\mathbf {\\mathbf {A}}}^{(1){\\sf T}}{\\mathbf {\\mathbf {X}}}{\\mathbf {\\mathbf {A}}}^{(2)}$ .", "As mentioned earlier, Multi-TTM computation can be performed as a sequence of TTM operations, in this case two matrix multiplications.", "However, we define the Multi-TTM to perform all the products at once for each term of the summation of eq:ourMultiTTMDef.", "Our definition comes at greater arithmetic cost, as partial $(d+1)$ -ary multiplies are not computed and reused, but we will see that this approach can reduce communication cost.", "We describe how the extra computation can often be reduced to a negligible cost in sec:cost-3d.", "We can write pseudocode for the Multi-TTM with the following: $&\\text{for $n_1^\\prime = 1{:}n_1$, \\ldots , for $n_d^\\prime = 1{:}n_d$,}\\\\&\\quad \\text{for $r_1^\\prime = 1{:}r_1$, \\ldots , for $r_d^\\prime = 1{:}r_d$,}\\\\&\\quad \\quad Y̰(r_1^\\prime ,\\ldots ,r_d^\\prime )\\ += X̰(n_1^\\prime ,\\ldots ,n_d^\\prime ) \\,\\cdot {\\mathbf {\\mathbf {A}}}^{(1)}(n_{1}^\\prime ,r_1^\\prime ) \\cdot \\cdots \\cdot {\\mathbf {\\mathbf {A}}}^{(N)}(n_d^\\prime ,r_d^\\prime )\\\\$ A parallel atomic Multi-TTM algorithm computes each term of the summation of eq:ourMultiTTMDef atomically on a unique processor, but it can distribute the $nr$ terms over processors in any way.", "Here atomic computation of a single $(d{+}1)$ -ary multiplication for a parallel algorithm means that all the multiplications of this operation are performed on only one processor, i.e., all $d+1$ inputs are accessed on that processor in order to compute the single output value.", "This assumption is necessary for our communication lower bounds.", "Processors can reorganize their local atomic operations to reduce computational costs without changing the communication or violating parallel atomicity.", "However it is reasonable for an algorithm to break this assumption in order to improve arithmetic costs by reusing partial results across processors, and we compare against such algorithms in sec:experiments." 
], [ "Parallel Computation Model", "We consider that the computation is distributed across $P$ processors.", "Each processor has its own local memory and is connected to all other processors via a fully connected network.", "Every processor can operate on data in its local memory and must communicate to access data of other processors.", "Hence, communication refers to send and receive operations that transfer data from local memory to the network and vice-versa.", "Communication cost mainly depends on two factors – the amount of data communicated (bandwidth cost) and the number of messages (latency cost).", "Latency cost is dominated by bandwidth cost for computations involving large messages, so we focus on bandwidth cost in this work and refer it as communication cost throughout the text.", "We assume the links of the network are bidirectional and that the communication cost is independent of the number of pairs of processors that are communicating.", "Each processor can send and receive at most one message at the same time.", "In our model, the communication cost of an algorithm refers to the cost along the critical path." ], [ "Existing Results", "Our work relies on two fundamental results.", "The first, a geometric result on lattices, allows us to relate the volume of computation to the amount of data accessed by determining the maximum data reuse.", "The result is a specialization of the Hölder-Brascamp-Lieb inequalities [7].", "This result has previously been used to derive lower bounds for tensor computations [5], [6], [12], [16] in a similar way to the use of the Loomis-Whitney inequality [20] in derivations of communication lower bounds for linear algebra [3].", "The result is proved in [12], but we use the statement from [5].", "Here $$ represents a vector of all ones.", "Consider any positive integers $\\ell $ and $m$ and any $m$ projections $\\phi _j:\\mathbb {Z}^\\ell \\rightarrow \\mathbb {Z}^{\\ell _j}$ ($\\ell _j\\le \\ell $ ), each of which extracts $\\ell _j$ coordinates $S_j\\subseteq [\\ell ]$ and forgets the $\\ell -\\ell _j$ others.", "Define $\\mathcal {C} = \\big \\lbrace s̭ \\in [0,1]^m:{\\mathbf {\\mathbf {\\Delta }}}\\cdot s̭\\ge \\big \\rbrace \\text{,}$ where the $\\ell \\times m$ matrix ${\\mathbf {\\mathbf {\\Delta }}}$ has entries ${\\mathbf {\\mathbf {\\Delta }}}_{i,j} = 1 \\text{ if } i\\in S_j \\text{ and } {\\mathbf {\\mathbf {\\Delta }}}_{i,j} = 0 \\text{ otherwise}\\text{.", "}$ If $[s_1\\ \\cdots \\ s_m]^{\\sf T}\\in \\mathcal {C}$ , then for all $F\\subseteq \\mathbb {Z}^\\ell $ , $ |F| \\le \\prod _{j\\in [m]}|\\phi _j(F)|^{s_j}\\text{.", "}$ The second result, a general constrained optimization problem, allows us to cast the communication cost of an algorithm as the objective function in an optimization problem where the constraints are imposed by properties of the computation within the algorithm.", "A version of the result is proved in [6] and used to derive the general communication lower bound for MTTKRP.", "Consider the constrained optimization problem: $\\min \\sum _{j\\in [d]}x_j$ such that $\\frac{nr}{P} \\le \\prod _{j\\in [d]}x_j \\quad \\text{and} \\quad 0\\le x_j \\le k_j \\quad \\text{for all} \\quad 1\\le j\\le d$ for some positive constants $k_1 \\le k_2\\le \\cdots \\le k_d$ with $\\prod _{j\\in [d]}k_j = nr$ .", "Then the minimum value of the objective function is $I\\left(K_I/P\\right)^{1/I} + \\sum _{j\\in [d-I]}k_j$ where we use the notation $K_I=\\prod _{j=d-I+1}^d k_j$ and $1\\le I \\le d$ is defined such that $k_j < 
(K_{d-j+1}/P)^{1/(d-j+1)} \\text{ for } 1 \\le j \\le d-I,\\\\k_\\ell \\ge \\left(K_{d-\\ell +1}/P\\right)^{1/(d-\\ell +1)} \\text{ for } d-I< \\ell \\le d.$ The minimum is achieved at the point $x̭^*$ defined by ${x_j}^*=k_j$ for $1\\le j \\le d-I$ , ${x_\\ell }^*=\\left(K_I/P\\right)^{1/I}$ for $d-I<\\ell \\le d$ .", "While lem:mttkrpOpt can be straightforwardly derived from the previous work, we provide a proof in app:sec:optLemmaProof for completeness.", "We represent it in this form to be directly applicable to all the constrained optimization problems in this paper.", "The lower bound and constraint on the products of the upper bounds are derived from the Multi-TTM computation.", "The additional constraint on the product of the upper bounds implies that there is always a feasible solution to the optimization problem for $P\\ge 1$ .", "We can note that the $d$ conditions are examined to determine the value $I$ .", "We calculate the ranges of $P$ for each $I$ based on these conditions in lemma:matrixOptimalSolutions,lemma:tensorOptimalSolutions,lemma:genMatrixOptimalSolutions." ], [ "Lower Bounds for 3-dimensional Multi-TTM", "We obtain the lower bound results for 3D tensors in this section, presented as theorem:lb:3DMultiTTM.", "The lower bound is independent of the size of the local memory of each processor, similar to previous results for matrix multiplication [1], [13] and MTTKRP [5], [6], and it varies with respect to the number of processors $P$ relative to the matrix and tensor dimensions of the problem.", "The crux of the proof considers a single processor that performs $1/P$ th of the computation and has access to $1/P$ th of the data.", "Finding the lower bound on the data that processor must communicate is reduced to solving a constrained optimization problem: we seek to minimize the number of elements of the matrices and tensors that the processor must access in order to execute its computation subject to structure constraints of Multi-TTM.", "The most important constraint derives from lem:hbl, which relates a subset of the computation within a Multi-TTM algorithm to the data it requires.", "The other constraints provide upper bounds on the data required from each array.", "The upper bounds are necessary to establish the tightest lower bounds in the cases where $P$ is small.", "We show that the optimization problem can be separated into two independent problems, one for the matrix data and one for the tensor data.", "lemma:matrixOptimalSolutions,lemma:tensorOptimalSolutions state the two constrained optimization problems along with their analytic solutions, both of which follow from lem:mttkrpOpt.", "That is, setting $d=3$ , $k_1=n_1r_1$ , $k_2=n_2r_2$ and $k_3=n_3r_3$ in lem:mttkrpOpt, we obtain lemma:matrixOptimalSolutions.", "Similarly, setting $d=2$ with $k_1=r$ and $k_2=n$ , we obtain lemma:tensorOptimalSolutions.", "We recall here that $r=r_1r_2r_3$ and $n=n_1n_2n_3$ .", "Consider the following optimization problem: $\\min _{x,y,z} x+y+z$ such that $\\frac{nr}{P} &\\le xyz \\\\0 &\\le \\phantom{y}x\\phantom{z} \\le n_1r_1 \\\\0 &\\le \\phantom{x}y\\phantom{z} \\le n_2r_2 \\\\0 &\\le \\phantom{x}z\\phantom{y} \\le n_3r_3,$ where $n_1r_1 \\le n_2r_2 \\le n_3r_3$ , and $n_1,n_2,n_3,r_1,r_2,r_3,P \\ge 1$ .", "The optimal solution $({x}^*,{y}^*,{z}^*)$ depends on the relative values of the constraints, yielding three cases: if $P < \\frac{n_3r_3}{n_2r_2}$ , then ${x}^*=n_1r_1$ , ${y}^*=n_2r_2$ , ${z}^*=\\frac{n_3r_3}{P}$ ; if $\\frac{n_3r_3}{n_2r_2}\\le P < 
\frac{n_2n_3r_2r_3}{n_1^2r_1^2}$ , then ${x}^*=n_1r_1$ , ${y}^*={z}^*= \big (\frac{n_2n_3r_2r_3}{P}\big )^{\frac{1}{2}}$ ; if $\frac{n_2n_3r_2r_3}{n_1^2r_1^2} \le P$ , then ${x}^*={y}^*={z}^*= \big (\frac{nr}{P}\big )^{\frac{1}{3}}$ ; which can be visualized as follows.", "[Figure: a number line over $P$ with thresholds 1, $\frac{n_3r_3}{n_2r_2}$ and $\frac{n_2n_3r_2r_3}{n_1^2r_1^2}$ , indicating the optimal $({x}^*,{y}^*,{z}^*)$ in each of the three regimes listed above.] Consider the following optimization problem: $\min _{u,v} u+v$ such that $\frac{nr}{P} &\le uv \\0 &\le \;u\; \le r \\0 &\le \;v\; \le n,$ where $n\ge r$ , and $n,r,P \ge 1$ .", "The optimal solution $({u}^*,{v}^*)$ depends on the relative values of the constraints, yielding two cases: if $P < \frac{n}{r}$ , then ${u}^*= r$ , ${v}^* = \frac{n}{P}$ ; if $ \frac{n}{r} \le P$ , then ${u}^*={v}^*= \big (\frac{nr}{P}\big )^{\frac{1}{2}}$ ; which can be visualized as follows.", "[Figure: a number line over $P$ with thresholds 1 and $\frac{n}{r}$ , indicating the optimal $({u}^*,{v}^*)$ in each of the two regimes listed above.]" ], [ "Communication Lower Bounds for Multi-TTM", "We now state the lower bounds for 3-dimensional Multi-TTM.", "After this, we also present a corollary for cubical tensors.", "Any computationally load balanced atomic Multi-TTM algorithm that starts and ends with one copy of the data distributed across processors involving 3D tensors with dimensions $n_1, n_2, n_3$ and $r_1, r_2, r_3$ performs at least $A+B-\left(\frac{n}{P}+\frac{r}{P} +\sum _{j=1}^3 \frac{n_jr_j}{P}\right)$ sends or receives where $A &= {\left\lbrace \begin{array}{ll} n_1r_1 + n_2r_2 + \frac{n_3r_3}{P} & \text{ if } P<\frac{n_3r_3}{n_2r_2} \\ n_1r_1 + 2\left(\frac{n_2n_3r_2r_3}{P}\right)^{\frac{1}{2}} &\text{ if } \frac{n_3r_3}{n_2r_2}\le P < \frac{n_2n_3r_2r_3}{n_1^2r_1^2} \\ 3\left(\frac{nr}{P}\right)^{\frac{1}{3}} &\text{ if } \frac{n_2n_3r_2r_3}{n_1^2r_1^2} \le P\end{array}\right.}", "\\B &= {\left\lbrace \begin{array}{ll} r + \frac{n}{P} & \text{ if } P < \frac{n}{r} \\ 2\left(\frac{nr}{P}\right)^{\frac{1}{2}} &\text{ if } \frac{n}{r} \le P\text{.}\end{array}\right.", "}$ Let $F$ be the set of loop indices associated with the 4-ary multiplications performed by a processor.", "As we assumed the algorithm is computationally load balanced, $|F| = nr/P$ .", "We define $\phi _{X̰}(F)$ , $\phi _{Y̰}(F)$ and $\phi _j(F)$ to be the projections of $F$ onto the indices of the arrays $X̰, Y̰$ , and ${\mathbf {\mathbf {A}}}^{(j)}$ for $1\le j\le 3$ which correspond to the elements of the array that must be accessed by the processor.", "We use lem:hbl to obtain a lower bound on the number of array elements that
must be accessed by the processor.", "The computation involves 5 arrays (2 tensors and 3 matrices) with 6 loop indices (see the atomic Multi-TTM definition in sec:notations), hence the $6\times 5$ matrix corresponding to the projections above is given by ${\mathbf {\mathbf {\Delta }}} = \begin{bmatrix}{\mathbf {\mathbf {I}}}_{3\times 3} & {\mathbf {1}}_3 & {\mathbf {0}}_3\\ {\mathbf {\mathbf {I}}}_{3\times 3} & {\mathbf {0}}_3 & {\mathbf {1}}_3 \end{bmatrix}\text{.", "}$ Here ${\mathbf {1}}_3$ and ${\mathbf {0}}_3$ represent the 3-dimensional vectors of all ones and zeros, respectively, and ${\mathbf {\mathbf {I}}}_{3\times 3}$ represents the $3\times 3$ identity matrix.", "We recall from lem:hbl that ${\mathbf {\mathbf {\Delta }}}_{i,j}=1$ if loop index $i$ is used to access array $j$ and ${\mathbf {\mathbf {\Delta }}}_{i,j}=0$ otherwise.", "The first three columns of ${\mathbf {\mathbf {\Delta }}}$ correspond to matrices and the remaining two columns correspond to tensors.", "In this case, we have $\mathcal {C} = \big \lbrace s̭ \in [0,1]^5:{\mathbf {\mathbf {\Delta }}}\cdot s̭\ge {\mathbf {1}}\big \rbrace \text{.", "}$ Recall that ${\mathbf {1}}$ represents a vector of all ones.", "Here ${\mathbf {\mathbf {\Delta }}}$ is not full rank, therefore, we consider all vectors $v̭ \in \mathcal {C}$ such that ${\mathbf {\mathbf {\Delta }}}\cdot v̭={\mathbf {1}}$ .", "Such a vector $v̭$ is of the form $[a\; a\; a\; 1\text{-}a\; 1\text{-}a]$ where $0\le a\le 1$ .", "Therefore, we obtain $\frac{nr}{P} \le \Big (\prod _{j\in [3]}|\phi _j(F)|\Big )^a \big (|\phi _{X̰}(F)||\phi _{Y̰}(F)|\big )^{1\text{-}a}\text{ for all } 0\le a\le 1.$ The above constraint is equivalent to $\frac{nr}{P} \le \prod _{j\in [3]}|\phi _j(F)|$ and $\frac{nr}{P} \le |\phi _{X̰}(F)||\phi _{Y̰}(F)|$ .", "To see this equivalence note that the forward direction is implied by setting $a=0$ and $a=1$ .", "For the opposite direction, taking the first of the two constraints to the power $a$ and the second to the power $1-a$ and then multiplying the two terms yields the original.", "Clearly a projection onto an array cannot be larger than the array itself, thus $|\phi _{X̰}(F)| \le n$ , $|\phi _{Y̰}(F)|\le r$ , and $|\phi _j(F)|\le n_jr_j$ for $1\le j \le 3$ .", "As the constraints related to projections of matrices and tensors are disjoint, we solve them separately and then sum the results to get a lower bound on the set of elements that must be accessed by the processor.", "We obtain a lower bound on $A$ , the number of elements of the matrices that must be accessed by the processor, by using lemma:matrixOptimalSolutions, and a lower bound on $B$ , the number of elements of the tensors that must be accessed by the processor, by using lemma:tensorOptimalSolutions.", "By summing both, we get the positive terms of the lower bound.", "To bound the sends or receives, we consider how much data the processor could have had at the beginning or at the end of the computation.", "Assuming there is exactly one copy of the data at the beginning and at the end of the computation, there must exist a processor which has access to at most $1/P$ of the elements of the arrays at the beginning or at the end of the computation.", "By employing the previous analysis, this processor must access $A+B$ elements of the arrays, but can only have $\frac{n}{P}+\frac{r}{P} +\sum _{j\in [3]} \frac{n_jr_j}{P}$ elements of the arrays stored.", "Thus it must perform the specified amount of sends or receives.", "We denote the lower bound of theorem:lb:3DMultiTTM by ${\sc LB} $ and use it
extensively in sec:parallelAlgoritm:selectionpiqi while analyzing the communication cost of our parallel algorithm.", "We also state the result for 3-dimensional Multi-TTM computation with cubical tensors, which is a direct application of theorem:lb:3DMultiTTM with $n_1=n_2=n_3=n^{\\frac{1}{3}}$ and $r_1=r_2=r_3=r^{\\frac{1}{3}}$ .", "Any computationally load balanced atomic Multi-TTM algorithm that starts and ends with one copy of the data distributed across processors involving 3D cubical tensors with dimensions $n^{\\frac{1}{3}}\\times n^{\\frac{1}{3}}\\times n^{\\frac{1}{3}}$ and $r^{\\frac{1}{3}}\\times r^{\\frac{1}{3}}\\times r^{\\frac{1}{3}}$ (with $n\\ge r$ ) performs at least $3\\left(\\frac{nr}{P}\\right)^{\\frac{1}{3}} + r - \\frac{3(nr)^{\\frac{1}{3}}+r}{P}$ sends or receives when $P<\\frac{n}{r}$ and at least $3\\left(\\frac{nr}{P}\\right)^{\\frac{1}{3}}+2\\left(\\frac{nr}{P}\\right)^{\\frac{1}{2}} - \\frac{n+3(nr)^{\\frac{1}{3}}+r}{P}$ send or receives when $P \\ge \\frac{n}{r}$ .", "In particular, we note that the lower bound for cubical atomic Multi-TTM algorithms is smaller than that of a TTM-in-Sequence approach for many typical scenarios in the case $P<n/r$ , as we discuss further in sec:experiments." ], [ "Parallel Algorithm for 3-dimensional Multi-TTM", "We organize $P$ processors into a 6-dimensional $p_1 \\times p_2 \\times p_3 \\times q_1 \\times q_2 \\times q_3$ logical processor grid.", "We arrange the grid dimensions such that $p_1$ , $p_2$ , $p_3$ , $q_1$ , $q_2$ , $q_3$ evenly distribute $n_1$ , $n_2$ , $n_3$ , $r_1$ , $r_2$ , $r_3$ , respectively.", "A processor coordinate is represented as $(p_1^\\prime , p_2^\\prime , p_3^\\prime , q_1^\\prime , q_2^\\prime , q_3^\\prime )$ , where $1\\le p_{k}^\\prime \\le p_k$ , $1\\le q_{k}^\\prime \\le q_k$ for $k=1,2,3$ .", "To be consistent with our notation, we denote $p_1p_2p_3$ and $q_1q_2q_3$ by $p$ and $q$ .", "$X̰_{p_1^\\prime p_2^\\prime p_3^\\prime }$ denotes the subtensor of $X̰$ owned by processors $(p_1^\\prime , p_2^\\prime ,$ $p_3^\\prime , *, *, *)$ .", "Similarly, $Y̰_{q_1^\\prime q_2^\\prime q_3^\\prime }$ denotes the subtensor of $Y̰$ owned by processors $(*, *, *, q_1^\\prime , q_2^\\prime , q_3^\\prime )$ .", "${\\mathbf {\\mathbf {A}}}^{(1)}_{p_1^\\prime q_1^\\prime }$ , ${\\mathbf {\\mathbf {A}}}^{(2)}_{p_2^\\prime q_2^\\prime }$ and ${\\mathbf {\\mathbf {A}}}^{(3)}_{p_3^\\prime q_3^\\prime }$ denote submatrices of ${\\mathbf {\\mathbf {A}}}^{(1)}$ , ${\\mathbf {\\mathbf {A}}}^{(2)}$ and ${\\mathbf {\\mathbf {A}}}^{(3)}$ owned by processors $(p_1^\\prime , *, *, q_1^\\prime , *, *)$ , $(*, p_2^\\prime , *, *, q_2^\\prime , *)$ and $(*, *, p_3^\\prime , *, *, q_3^\\prime )$ , respectively.", "We impose that there is one copy of data in the system at the start and end of the computation, and every array is distributed evenly among the sets of processors whose coordinates are different for the corresponding dimensions of the variable.", "For example, $X̰_{111}$ = $X̰(1:\\frac{n_1}{p_1}, 1:\\frac{n_2}{p_2}, 1:\\frac{n_3}{p_3})$ is owned by processors $(1,1,1,*,*,*)$ .", "Similarly, ${\\mathbf {\\mathbf {A}}}^{(1)}_{12}$ = ${\\mathbf {\\mathbf {A}}}^{(1)}(1:\\frac{n_1}{p_1}, \\frac{r_1}{q_1}+1:2\\frac{r_1}{q_1})$ is owned by processors $(1,*,*,2,*,*)$ .", "We assume that data inside these sets of processors is also evenly distributed.", "For example, in the beginning, processor ($1,1,1,2,1,3$ ) owns $\\frac{1}{P}$ th portion of each input variable: $\\frac{p}{P}$ th portion of $X̰_{111}$ , 
$\\frac{p_1q_1}{P}$ th portion of ${\\mathbf {\\mathbf {A}}}^{(1)}_{12}$ , $\\frac{p_2q_2}{P}$ th portion of ${\\mathbf {\\mathbf {A}}}^{(2)}_{11}$ , and $\\frac{p_3q_3}{P}$ th portion of ${\\mathbf {\\mathbf {A}}}^{(3)}_{13}$ .", "fig:dataDistribution illustrates examples of our data distribution model for two of the arrays.", "Figure: Subtensor $X̰_{231}$ is distributed evenly among processors $(2,3,1,*,*,*)$ .", "Similarly, submatrix ${\\mathbf {\\mathbf {A}}}^{(2)}_{31}$ is distributed evenly among processors $(*,3,*,*,1,*)$ .", "[htb] Parallel Atomic 3-dimensional Multi-TTM [1] $X̰$ , ${\\mathbf {\\mathbf {A}}}^{(1)}$ , ${\\mathbf {\\mathbf {A}}}^{(2)}$ , ${\\mathbf {\\mathbf {A}}}^{(3)}$ , $p_1 \\times p_2 \\times p_3 \\times q_1 \\times q_2 \\times q_3$ logical processor grid $Y̰$ such that $Y̰= X̰\\times _1 {{\\mathbf {\\mathbf {A}}}^{(1)}}^{\\sf T}\\times _2 {{\\mathbf {\\mathbf {A}}}^{(2)}}^{\\sf T}\\times _3 {{\\mathbf {\\mathbf {A}}}^{(3)}}^{\\sf T}$ $(p_1^\\prime , p_2^\\prime , p_3^\\prime , q_1^\\prime , q_2^\\prime , q_3^\\prime )$ is my processor id //All-gather input tensor $X̰$ $X̰_{p_1^\\prime p_2^\\prime p_3^\\prime }$ = All-Gather($X̰$ , $(p_1^\\prime , p_2^\\prime , p_3^\\prime , *, *, *)$ ) //All-gather input matrices ${\\mathbf {\\mathbf {A}}}^{(1)}_{p_1^\\prime q_1^\\prime }$ = All-Gather(${\\mathbf {\\mathbf {A}}}^{(1)}$ , $(p_1^\\prime , *, *, q_1^\\prime , *, *)$ ) ${\\mathbf {\\mathbf {A}}}^{(2)}_{p_2^\\prime q_2^\\prime }$ = All-Gather(${\\mathbf {\\mathbf {A}}}^{(2)}$ , $(*, p_2^\\prime , *, *, q_2^\\prime , *)$ ) ${\\mathbf {\\mathbf {A}}}^{(3)}_{p_3^\\prime q_3^\\prime }$ = All-Gather(${\\mathbf {\\mathbf {A}}}^{(3)}$ , $(*, *, p_3^\\prime , *, *, q_3^\\prime )$ ) //Local computations in a temporary tensor $T̰$ $T̰$ = Local-Multi-TTM($X̰_{p_1^\\prime p_2^\\prime p_3^\\prime }$ , ${\\mathbf {\\mathbf {A}}}^{(1)}_{p_1^\\prime q_1^\\prime }$ , ${\\mathbf {\\mathbf {A}}}^{(2)}_{p_2^\\prime q_2^\\prime }$ , ${\\mathbf {\\mathbf {A}}}^{(3)}_{p_3^\\prime q_3^\\prime }$ ) //Reduce-scatter the output tensor in $Y̰_{q_1^\\prime q_2^\\prime q_3^\\prime }$ Reduce-Scatter($Y̰_{q_1^\\prime q_2^\\prime q_3^\\prime }$ , $T̰$ , $(*, *, *, q_1^\\prime , q_2^\\prime , q_3^\\prime )$ ) alg:3dmultittm presents a parallel algorithm to compute 3-dimensional Multi-TTM.", "When it completes, $Y̰_{q_1^\\prime q_2^\\prime q_3^\\prime }$ is distributed evenly among processors $(*, *, *, q_1^\\prime , q_2^\\prime , q_3^\\prime )$ .", "fig:stepsOfParallelAlgorithm shows the steps of the algorithm for a single processor in a $3\\times 3\\times 3\\times 3\\times 3\\times 3$ grid.", "Figure: Steps of alg:3dmultittm for processor $(2,1,1,1,3,1)$ , where $p_1=p_2=p_3=q_1=q_2=q_3=3$ .", "Highlighted areas correspond to the data blocks on which the processor is operating.", "The dark red highlighting represents the input/output data initially/finally owned by the processor, and the light red highlighting corresponds to received/sent data from/to other processors in All-Gather/Reduce-Scatter collectives to compute $Y̰_{131}$ ." 
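To make the distribution and gather patterns concrete, the following short Python sketch (our illustration; the helper names are ours and are not part of the paper's artifacts or of TuckerMPI) maps a processor coordinate to the index ranges of the blocks it holds after the All-Gather steps of alg:3dmultittm, using the block and grid conventions described above.

```python
# Sketch: which index ranges of X, A^(i), and Y a processor touches in
# alg:3dmultittm. Grid coordinates are 1-based; index ranges are half-open
# and 0-based. Assumes each grid dimension evenly divides the array dimension.

def block(dim, parts, coord):
    """Index range of block `coord` when `dim` is split into `parts` pieces."""
    size = dim // parts
    return (coord - 1) * size, coord * size

def gathered_blocks(n, r, p, q, proc):
    """proc = (p1', p2', p3', q1', q2', q3'); n, r, p, q are 3-tuples."""
    pc, qc = proc[:3], proc[3:]
    X_block = [block(n[i], p[i], pc[i]) for i in range(3)]        # X_{p1'p2'p3'}
    A_blocks = [(block(n[i], p[i], pc[i]),                        # rows of A^(i)
                 block(r[i], q[i], qc[i])) for i in range(3)]     # cols of A^(i)
    Y_block = [block(r[i], q[i], qc[i]) for i in range(3)]        # Y_{q1'q2'q3'}
    return X_block, A_blocks, Y_block

# Example: processor (1, 1, 1, 2, 1, 3) gathers X_{111}, A^(1)_{12},
# A^(2)_{11}, and A^(3)_{13}, matching the example in the text.
```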
], [ "Cost Analysis", "Now we analyze computation and communication costs of the algorithm.", "The dimension of the local tensor $X̰_{p_1^\\prime p_2^\\prime p_3^\\prime }$ is $\\frac{n_1}{p_1} \\times \\frac{n_2}{p_2} \\times \\frac{n_3}{p_3}$ , the dimension of the local matrix ${\\mathbf {\\mathbf {A}}}^{(k)}_{p_k^\\prime q_k^\\prime }$ is $\\frac{n_i}{p_i}\\times \\frac{r_i}{q_i}$ for $i=1,2,3$ , and the dimension of the temporary tensor $T̰$ is $\\frac{r_1}{q_1} \\times \\frac{r_2}{q_2} \\times \\frac{r_3}{q_3}$ .", "The local Multi-TTM computation in Line  can be performed as a sequence of TTM operations to mininimize the number of arithmetic operations.", "Assuming the TTM operations are performed in their order, first with ${\\mathbf {\\mathbf {A}}}^{(1)}$ , then with ${\\mathbf {\\mathbf {A}}}^{(2)}$ , and in the end with ${\\mathbf {\\mathbf {A}}}^{(3)}$ , then each processor performs $2\\Big (\\frac{n_1n_2n_3r_1}{p_1p_2p_3q_1} + \\frac{n_2n_3r_1r_2}{p_2p_3q_1q_2} + \\frac{n_3r_1r_2r_3}{p_3q_1q_2q_3}\\Big )$ operations to perform the local computation.", "Communication occurs only in the All-Gather and Reduce-Scatter collectives in Lines , , , and .", "Each processor is involved in one All-Gather involving the input tensor, three All-Gathers involving input matrices and one Reduce-Scatter involving the output tensor.", "Lines , , , and specify simultaneous $\\frac{P}{p}$ , $\\frac{P}{p_1q_1}$ , $\\frac{P}{p_2q_2}$ and $\\frac{P}{p_3q_3}$ All-Gathers respectively, and Line  specifies simultaneous $\\frac{P}{q}$ Reduce-Scatters.", "For simplicity of discussion, we consider that the number of processors involved in the collectives is a power of 2.", "We also assume that communication optimal collective algorithms are used.", "The optimal latency and bandwidth costs of both collectives on $Q$ processors are $\\log (Q)$ and $(1-\\frac{1}{Q})w$ , respectively, where $w$ denotes the words of data in each processor after All-Gather or before Reduce-Scatter collective.", "Each processor also performs $(1-\\frac{1}{Q})w$ computations for the Reduce-Scatter collective.", "We point the reader to [26], [10] for more details on efficient algorithms for collectives.", "Hence the bandwidth costs of Lines , , , in alg:3dmultittm are $(1-\\frac{p}{P}) \\frac{n}{p}$ , $(1-\\frac{p_1q_1}{P}) \\frac{n_1r_1}{p_1q_1}$ , $(1-\\frac{p_2q_2}{P}) \\frac{n_2r_2}{p_2q_2}$ , $(1-\\frac{p_3q_3}{P})\\frac{n_3r_3}{p_3q_3}$ respectively to accomplish All-Gather operations, and the bandwidth cost of performing the Reduce-Scatter operation in Line  is $(1-\\frac{q}{P}) \\frac{r}{q}$ .", "Thus the overall bandwidth cost of alg:3dmultittm for each processor is $\\frac{n}{p} + \\frac{n_1r_1}{p_1q_1} + \\frac{n_2r_2}{p_2q_2} + \\frac{n_3r_3}{p_3q_3} + \\frac{r}{q} - \\left(\\frac{n + n_1r_1 + n_2r_2 + n_3r_3 + r}{P}\\right).$ The latency costs of Lines , , , , are $\\log (\\frac{P}{p})$ , $\\log (\\frac{P}{p_1q_1})$ , $\\log (\\frac{P}{p_2q_2})$ , $\\log (\\frac{P}{p_3q_3})$ , $\\log (\\frac{P}{q})$ respectively.", "Thus the overall latency cost of alg:3dmultittm for each processor is $\\log \\left(\\frac{P}{p}\\right) + \\log \\left(\\frac{P}{p_1q_1}\\right) + \\log \\left(\\frac{P}{p_2q_2}\\right) + \\log \\left(\\frac{P}{p_3q_3}\\right) + \\log \\left(\\frac{P}{q}\\right) = \\log \\left(\\frac{P^5}{p^2q^2}\\right) = 3 \\log (P).$ Due to the Reduce-Scatter operation, each processor also performs $(1-\\frac{q}{P}) \\frac{r}{q}$ computations, which is asymptotically small compared to the computations of Line ." 
], [ "Selection of $p_i$ and {{formula:386d66fc-51c7-4b11-b63b-0bb456ee43ca}} in Algorithm ", "We must select the processor dimensions carefully such that alg:3dmultittm is communication optimal.", "We attempt to select the processor dimensions $p_i$ and $q_i$ in such a way that the terms in the communication cost match the optimal solutions of lemma:tensorOptimalSolutions,lemma:matrixOptimalSolutions.", "In other words, we want to select $p_i$ and $q_i$ such that $\\frac{n_1r_1}{p_1q_1}={x}^*$ , $\\frac{n_2r_2}{p_2q_2}={y}^*$ , and $\\frac{n_3r_3}{p_3q_3}={z}^*$ from lemma:matrixOptimalSolutions, and $\\frac{n}{p}={v}^*,\\frac{r}{q}={u}^*$ from lemma:tensorOptimalSolutions.", "While the optimal values are given as a single term, we have two or three processor grid dimensions we need to fix in order to match each term, and each processor grid dimension appears in two equations.", "In general, we are able to set the processor grid dimensions in a way that is consistent with these equations.", "However, they are subject to additional constraints that are not imposed by the optimization problem.", "Specifically, we have $1\\le p_i\\le n_i$ and $1\\le q_i\\le r_i$ for $1\\le i \\le 3$ .", "The lower bounds are imposed because processor grid dimensions must be at least 1.", "The upper bounds are imposed to ensure that each processor performs its fair share of the computations.", "We assume that $P\\le nr$ , so that every processor has at least one 4-ary multiplication term to compute.", "For simplicity, we assume that the final grid dimensions are integers and perfectly divide the corresponding input and output dimensions.", "However, we also discuss how to handle non-integer grid dimensions for a specific set of inputs in sec:exp:configurations.", "In order to define processor grid dimensions, we begin by determining a set of values that match the lower bound terms and denote these by $\\hat{p_i}, \\hat{q_i}$ with their products denoted $\\hat{p}$ and $\\hat{q}$ .", "Then, we will consider how to adapt $\\hat{p_i}$ and $\\hat{q_i}$ so that the additional constraints are met.", "During the adaption, we maintain the tensor communication costs, modify the matrix communication costs, and then bound the additional costs in terms of communication lower bounds of tensors.", "As $X̰$ and $Y̰$ are 3-dimensional tensors, we have $n_i, r_i \\ge 2$ for all $1\\le i\\le 3$ .", "For better readability, we use the notation ${\\sc O} =\\frac{\\sum _{j \\in [3]}n_jr_j + r + n}{P}$ , the amount of data owned by a single processor at the beginning and end of the algorithm.", "There exist $p_i,q_i$ with $1\\le p_i\\le n_i, 1\\le q_i\\le r_i$ for $i=1,2,3$ such that alg:3dmultittm is communication optimal to within a constant factor.", "We break our analysis into 2 scenarios which are further broken down into cases.", "In each case, we obtain $\\hat{p_i}$ and $\\hat{q_i}$ such that the terms in the communication cost match the corresponding lower bound terms and also satisfy at least one of the two constraints: $1\\le \\hat{p_i} \\le n_i$ or $1\\le \\hat{q_i}\\le r_i$ for $1\\le i \\le 3$ .", "We handle all cases from both scenarios together in the end, and adapt these values to get $p_i$ and $q_i$ which respect both lower and upper bounds.", "$\\bullet $ Scenario I $\\left(P < \\frac{n}{r}\\right)$ : This scenario corresponds to the first case of the tensor term in ${\\sc LB} $ .", "Thus, we set $\\hat{p_i}, \\hat{q_i}$ in such a way that the tensor terms in the communication cost match the tensor terms of ${\\sc 
LB} $ : $ \\hat{p}=P, \\hat{q} = 1.$ This implies $\\hat{q_i} = 1$ for $1\\le i\\le 3$ .", "We break this scenario into 3 cases, each corresponding to a case in the matrix term of ${\\sc LB} $ .", "(Case 1) $P < \\min \\left\\lbrace \\frac{n_3r_3}{n_2r_2}, \\frac{n}{r}\\right\\rbrace $ : Setting the matrix communication costs to the matrix terms in the corresponding case of the lower bound yields $\\frac{n_1r_1}{\\hat{p_1}\\hat{q_1}} = n_1r_1, \\: \\frac{n_2r_2}{\\hat{p_2}\\hat{q_2} } = n_2r_2, \\: \\frac{n_3r_3}{\\hat{p_3}\\hat{q_3}} = \\frac{n_3r_3}{P}.$ Thus, we set $\\hat{p_1} = \\hat{p_2} = \\hat{q_1} = \\hat{q_2} = \\hat{q_3} = 1$ and $\\hat{p_3} = P$ to satisfy eq:s1,eq:s1c1.", "(Case 2) $\\frac{n_3r_3}{n_2r_2} \\le P < \\min \\left\\lbrace \\frac{n_2n_3r_2r_3}{n_1^2r_1^2}, \\frac{n}{r}\\right\\rbrace $ : Setting the matrix communication costs to the matrix terms in the corresponding case of the lower bound yields $ \\frac{n_1r_1}{\\hat{p_1}\\hat{q_1}} = n_1r_1,\\: \\frac{n_2r_2}{\\hat{p_2}\\hat{q_2}} = \\frac{n_3r_3}{\\hat{p_3}\\hat{q_3}} = \\left(\\frac{n_2n_3r_2r_3}{P}\\right)^{1/2}.$ We set $\\hat{p_1} = \\hat{q_1} = \\hat{q_2} = \\hat{q_3} = 1$ , $\\hat{p_2} = n_2r_2\\left(\\frac{P}{n_2n_3r_2r_3}\\right)^{\\frac{1}{2}}$ , and $\\hat{p_3} = n_3r_3\\left(\\frac{P}{n_2n_3r_2r_3}\\right)^{\\frac{1}{2}}$ to satisfy eq:s1,eq:s1c2.", "(Case 3) $\\frac{n_2n_3r_2r_3}{n_1^2r_1^2} \\le P < \\frac{n}{r}$ : Setting the matrix communication costs to match the matrix terms in the corresponding case of the lower bound yields $\\frac{n_1r_1}{\\hat{p_1}\\hat{q_1}} = \\frac{n_2r_2}{\\hat{p_2}\\hat{q_2}} = \\frac{n_3r_3}{\\hat{p_3}\\hat{q_3}} = \\left(\\frac{nr}{P}\\right)^{1/3}.$", "Thus we set $\\hat{q_1}=\\hat{q_2}=\\hat{q_3} = 1$ , $\\hat{p_1} = n_1r_1 \\big (\\frac{P}{nr}\\big )^{\\frac{1}{3}},$ $\\hat{p_2} = n_2r_2 \\big (\\frac{P}{nr}\\big )^{\\frac{1}{3}},$ and $\\hat{p_3} = n_3r_3 \\big (\\frac{P}{nr}\\big )^{\\frac{1}{3}}$ to satisfy eq:s1,eq:s1c3.", "Note that in all the cases of this scenario we have $1\\le \\hat{q_i}=1 < r_i$ for $1\\le i \\le 3$ , but we cannot ensure similar upper bounds for $\\hat{p_i}$ .", "We will adapt processor grid dimensions for both scenarios in the end as they require the same steps.", "$\\bullet $ Scenario II $\\left(\\frac{n}{r}\\le P\\right)$ : This scenario corresponds to the second case of the tensor term in ${\\sc LB} $ .", "Thus, we set $\\hat{p_i}, \\hat{q_i}$ in such a way that $\\frac{n}{\\hat{p}}=\\frac{r}{\\hat{q}}=\\left(\\frac{nr}{P}\\right)^{1/2}.$ Again, we break this scenario into 3 cases, each corresponding to a case in the matrix term of ${\\sc LB} $ .", "(Case 1) $\\frac{n}{r} \\le P < \\frac{n_3r_3}{n_2r_2}$ : Setting the matrix communication costs to the matrix terms in the corresponding case of the lower bound yields $\\frac{n_1r_1}{\\hat{p_1}\\hat{q_1}} = n_1r_1, \\: \\frac{n_2r_2}{\\hat{p_2}\\hat{q_2}} = n_2r_2, \\: \\frac{n_3r_3}{\\hat{p_3}\\hat{q_3}} = \\frac{n_3r_3}{P}.$ Thus we set $\\hat{p_1} = \\hat{q_1} = \\hat{p_2} = \\hat{q_2} = 1$ , $\\hat{p_3} = n\\left(\\frac{P}{nr}\\right)^{1/2}$ , and $\\hat{q_3} = r\\left(\\frac{P}{nr}\\right)^{1/2}$ to satisfy eq:s2,eq:s2c1.", "As $\\frac{n}{r} \\le P \\le nr$ and $r \\le n$ , we have $1\\le \\hat{p_3} \\le n$ and $1 \\le \\hat{q_3} \\le r$ , but cannot ensure $\\hat{p_3}\\le n_3$ or $\\hat{q_3}\\le r_3$ .", "However, $\\hat{p_3}\\hat{q_3}=P < \\frac{n_3r_3}{n_2r_2}$ implies that at least one is satisfied.", "Therefore, we guarantee that for $1\\le i \\le 3$ , $1\\le 
\\hat{p_i}, 1\\le \\hat{q_i}$ and either $\\hat{p_i} \\le n_i$ or $\\hat{q_i} \\le r_i$ .", "(Case 2) $\\max \\left\\lbrace \\frac{n_3r_3}{n_2r_2}, \\frac{n}{r} \\right\\rbrace \\le P < \\frac{n_2n_3r_2r_3}{n_1^2r_1^2}$ : Setting the matrix communication costs to the matrix terms in the corresponding case of the lower bound yields $\\frac{n_1r_1}{\\hat{p_1}\\hat{q_1}} = n_1r_1, \\frac{n_2r_2}{\\hat{p_2}\\hat{q_2}} = \\frac{n_3r_3}{\\hat{p_3}\\hat{q_3}} = \\left(\\frac{n_2n_3r_2r_3}{P}\\right)^{1/2}.$ The equations eq:s2,eq:s2c2 do not uniquely determine $\\hat{p_1},\\hat{p_2},\\hat{p_3},\\hat{q_1},\\hat{q_2},$ and $\\hat{q_3}$ .", "The following is one possible solution: $\\hat{p_1}=\\hat{q_1}=1$ , $\\hat{p_2}=n_2\\left(\\frac{n_1P}{n_2n_3r}\\right)^{1/4}$ , $\\hat{p_3}=n_3\\left(\\frac{n_1P}{n_2n_3r}\\right)^{1/4}$ , $\\hat{q_2}=r_2\\left(\\frac{r_1P}{nr_2r_3}\\right)^{1/4}$ , and $\\hat{q_3}=r_3\\left(\\frac{r_1P}{nr_2r_3}\\right)^{1/4}$ .", "Note that $P < \\frac{n_2n_3r_2r_3}{n_1^2r_1^2}$ implies that $\\hat{p_2} < n_2,$ $\\hat{p_3} < n_3,$ $\\hat{q_2}<r_2,$ and $\\hat{q_3} < r_3$ .", "We are not able to ensure $\\hat{p_2}, \\hat{p_3}, \\hat{q_2}, \\hat{q_3}$ are all at least 1 in this case.", "We will handle both Case 2 and Case 3 together as they require the same analysis.", "(Case 3) $\\max \\left\\lbrace \\frac{n_2n_3r_2r_3}{n_1^2r_1^2}, \\frac{n}{r}\\right\\rbrace \\le P$ : Setting the matrix communication costs to the matrix terms in the corresponding case of the lower bound yields $ \\frac{n_1r_1}{\\hat{p_1}\\hat{q_1}} = \\frac{n_2r_2}{\\hat{p_2}\\hat{q_2}} = \\frac{n_3r_3}{\\hat{p_3}\\hat{q_3}} = \\left(\\frac{nr}{P}\\right)^{\\frac{1}{3}}.$ As in Case 2, the equations eq:s2,eq:s2c3 do not uniquely determine $\\hat{p_i},\\hat{q_i}$ for $1\\le i\\le 3$ .", "We choose a cubical distribution, namely $\\frac{n_1}{p_1}=\\frac{n_2}{p_2}=\\frac{n_3}{p_3}=\\frac{r_1}{q_1} =\\frac{r_2}{q_2} =\\frac{r_3}{q_3}$ and obtain the following solution, $\\hat{p_i} = n_i\\left(\\frac{P}{nr}\\right)^{1/6}$ , $\\hat{q_i} = r_i\\left(\\frac{P}{nr}\\right)^{1/6}$ for $1\\le i\\le 3.$ As $P\\le nr$ we have $\\hat{p_i}\\le n_i$ and $\\hat{q_i}\\le r_i$ for $1\\le i \\le 3$ .", "Again we are not able to ensure $\\hat{p_i}$ and $\\hat{q_i}$ are all greater than 1 for $1\\le i\\le 3$ .", "Now we handle Case 2 and Case 3 of Scenario II here.", "Communication cost for the obtained set of values matches the lower bound, therefore we have $1\\le \\hat{p_i}\\hat{q_i}\\le n_ir_i$ for $1\\le i \\le 3$ , $1\\le \\hat{p} \\le n$ and $1\\le \\hat{q}\\le r$ .", "We perform the following: for a given $i$ , at most one of $\\hat{p_i}$ or $\\hat{q_i}$ can be smaller than 1.", "If either is smaller than one, set it to 1 and update the other so that $\\hat{p_i}\\hat{q_i}$ does not change.", "This, however, might change $\\hat{p}$ and $\\hat{q}$ .", "The increase factor in one of $\\hat{p}$ and $\\hat{q}$ (say $\\hat{q}$ without loss of generality), $f=\\hat{q}_{\\text{new}}/\\hat{q}_{\\text{orig}}$ , is reflected in a decrease in the other ($\\hat{p}$ ).", "As the original $\\hat{q} \\ge 1$ , we can factor $f$ , $f= f_1 \\cdot f_2 \\cdot f_3$ ($f_i\\ge 1$ ) such that $\\hat{q}_i = \\hat{q_i}/f_i \\ge 1$ and $\\hat{p}_i = \\hat{p_i} f_i \\ge 1$ , and hence $\\hat{p}$ and $\\hat{q}$ are back to their original values.", "Note that in the above updates, we always maintain $\\hat{q_i} \\le r_i$ .", "Therefore, it is guaranteed that $1\\le \\hat{q_i} \\le r_i, 1\\le \\hat{p_i}$ for $1\\le i \\le 3$ .", "If the increase factor 
was in $\\hat{p}$ , we would have obtained $1\\le \\hat{p_i} \\le n_i, 1\\le \\hat{q_i}$ for $1\\le i \\le 3$ .", "Therefore, we have $1\\le \\hat{p_i} \\le n_i, 1\\le \\hat{q_i}$ and/or $1\\le \\hat{q_i} \\le r_i, 1\\le \\hat{p_i}$ for $1\\le i \\le 3$ .", "Now for all the cases of both scenarios, it remains to adapt $\\hat{p}_i$ and $\\hat{q}_i$ such that $\\hat{p}_i \\le n_i$ and $\\hat{q}_i\\le r_i$ .", "We know that one of them is guaranteed from the way they are selected in all the cases.", "We now obtain $p_1,p_2,p_3, q_1, q_2, q_3$ from $\\hat{p_i},\\hat{q_i}$ such that both lower and upper bounds are respected, and $p_1p_2p_3=\\hat{p}$ and $q_1q_2q_3=\\hat{q}$ .", "The intuition is to maintain the tensor communication terms in the lower bound.", "Initially we set $p_i=\\hat{p_i}, q_i=\\hat{q_i}$ for $1\\le i \\le 3$ .", "If $\\hat{p_i} > n_i$ for some $i$ , we represent this index with $l$ , and the other two indices with $j$ and $k$ .", "As $\\hat{p} \\le n$ , therefore $\\hat{p_j} \\le n_j$ or/and $\\hat{p_k} \\le n_k$ .", "Without loss of generality, we assume that $\\hat{p_k} \\le n_k$ .", "Now we first update $p_l$ , and then $p_j$ , and in the end $p_k$ with the following expressions: $p_l = \\min \\left\\lbrace n_l, \\frac{\\hat{p}}{p_j p_k}\\right\\rbrace $ , $p_j = \\min \\left\\lbrace n_j, \\frac{\\hat{p}}{p_k p_l}\\right\\rbrace $ , $p_k = \\min \\left\\lbrace n_k, \\frac{\\hat{p}}{p_l p_j}\\right\\rbrace $ .", "The same update can be done if $\\hat{q}_i>r_i$ for some $i$ .", "Now we assess how much additional communication is required for the matrices.", "If $\\nexists i\\in [3]$ such that $\\hat{p_i} > n_i$ or $\\hat{q_i} > r_i$ then $\\sum _{i \\in [3]}\\frac{n_ir_i}{p_iq_i} = \\sum _{i \\in [3]}\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}}$ .", "We can note that due to our particular selections of $\\hat{p_i}$ and $\\hat{q_i}$ , $\\nexists i,j\\in [3]$ such that $\\hat{p_i} > n_i$ and $\\hat{q_j} > r_j$ .", "Suppose $\\exists i\\in [3]$ such that $\\hat{p_i} > n_i$ then $\\hat{p}> 2$ and $\\sum _{i \\in [3]}\\frac{n_ir_i}{p_iq_i} &\\le \\sum _{i \\in [3]} \\max \\left\\lbrace \\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}}, \\frac{r_i}{\\hat{q_i}}\\right\\rbrace && \\text{\\colorbox {shadecolor}{ $q_i=\\hat{q_i}$, and $p_i\\ge \\hat{p_i}$ or $p_i=n_i$}}\\\\&= \\sum _{i \\in [3]} \\Big (\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}} + \\frac{r_i}{\\hat{q_i}} - \\min \\left\\lbrace \\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}}, \\frac{r_i}{\\hat{q_i}}\\right\\rbrace \\Big ) && \\text{\\colorbox {shadecolor}{$\\max \\lbrace a,b\\rbrace = a+b-\\min \\lbrace a,b\\rbrace $}}\\\\& < \\sum _{i \\in [3]}\\big (\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}} + \\frac{r_i}{\\hat{q_i}}\\big ) -2 && \\text{\\colorbox {shadecolor}{$\\hat{p_i}\\hat{q_i} \\le n_ir_i$ and $\\hat{q_i}\\le r_i$}}\\\\&\\le \\sum _{i \\in [3]}\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}}+ \\frac{r}{\\hat{q}} && \\text{\\colorbox {shadecolor}{$\\forall a_i\\ge 1, a_1+a_2+a_3$-$2 \\le a_1a_2a_3$}}\\\\&< \\sum _{i \\in [3]}\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}}+ 2\\big (\\frac{r}{\\hat{q}} - \\frac{r}{\\hat{p}\\hat{q}}\\big )&&\\\\&= \\sum _{i \\in [3]}\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}} + 2\\big (\\frac{r}{\\hat{q}} - \\frac{r}{P}\\big ).&&$ Similarly, if $\\exists i\\in [3]$ such that $\\hat{q_i} > r_i$ we can obtain $\\sum _{i \\in [3]}\\frac{n_ir_i}{p_iq_i} < \\sum _{i \\in [3]}\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}} + 2\\big (\\frac{n}{\\hat{p}} - \\frac{n}{P}\\big )$ .", "Therefore, in all situations, $\\sum _{i \\in [3]} \\frac{n_ir_i}{p_iq_i} +\\frac{r}{q} + 
\\frac{n}{p} - {\\sc O} \\le 3 \\Big (\\sum _{i \\in [3]}\\frac{n_ir_i}{\\hat{p_i}\\hat{q_i}} + \\frac{r}{\\hat{q}} + \\frac{n}{\\hat{p}} - {\\sc O} \\Big ) = 3{\\sc LB} $ ." ], [ "Simulated Evaluation", "In this section, we verify our theoretical claims on particular sets of 3D tensor dimensions using a simulated evaluation.", "In sec:exp-optimality, we demonstrate that the communication cost of alg:3dmultittm matches the lower bound of theorem:lb:3DMultiTTM, and we provide intuition for relationships among the communication costs of the individual tensors and matrices.", "In sec:exp-comparison, we compare the approach of alg:3dmultittm for evaluating Multi-TTM with a TTM-in-Sequence approach, demonstrating realistic scenarios in which alg:3dmultittm communicates significantly less data and performs a negligible amount of extra computation.", "Throughout this section, we restrict to cases where all tensor dimensions and numbers of processors are powers of 2.", "We vary the number of processors $P$ from 2 to ${\\sc P_{\\max }} =\\min \\lbrace n_1r_1, n_2r_2, n_3r_3, n, r\\rbrace $ , which ensures that each processor owns some data of every tensor and matrix.", "The costs of alg:3dmultittm depend on the processor grid, and in these experiments, we perform an exhaustive search for the best configuration.", "We describe in sec:exp:configurations how to adapt the processor grid selection scheme described in sec:parallelAlgoritm:selectionpiqi to obtain integer-valued processor grid dimensions, and we show that we can obtain nearly optimal configurations without exhaustive search." ], [ "Verifying Optimality of alg:3dmultittm", "theorem:optimality:3DMultiTTMAlgorithm states that alg:3dmultittm attains the communication lower bound to within a constant factor, and in this section we verify the result in a variety of scenarios.", "Recall from theorem:lb:3DMultiTTM that the lower bound is $A+B-{\\sc O} $ , where $A &= {\\left\\lbrace \\begin{array}{ll} n_1r_1 + n_2r_2 + \\frac{n_3r_3}{P} & \\text{ if } P<\\frac{n_3r_3}{n_2r_2} \\\\ n_1r_1 + 2\\left(\\frac{n_2n_3r_2r_3}{P}\\right)^{\\frac{1}{2}} &\\text{ if } \\frac{n_3r_3}{n_2r_2}\\le P < \\frac{n_2n_3r_2r_3}{n_1^2r_1^2} \\\\ 3\\left(\\frac{nr}{P}\\right)^{\\frac{1}{3}} &\\text{ if } \\frac{n_2n_3r_2r_3}{n_1^2r_1^2} \\le P\\end{array}\\right.}\\\\B &= {\\left\\lbrace \\begin{array}{ll} r + \\frac{n}{P} & \\text{ if } P < \\frac{n}{r} \\\\ 2\\left(\\frac{nr}{P}\\right)^{\\frac{1}{2}} &\\text{ if } \\frac{n}{r} \\le P\\text{.}\\end{array}\\right.}\\\\{\\sc O} &= \\frac{n_1r_1+n_2r_2+n_3r_3 + r + n}{P}.$ Here, $A$ corresponds to the matrix entries accessed, $B$ corresponds to the tensor entries accessed, and ${\\sc O} $ corresponds to the data owned by a single processor.", "The costs of alg:3dmultittm are given by eq:alg-costs, which we rewrite here as $\\frac{n_1r_1}{p_1q_1} + \\frac{n_2r_2}{p_2q_2} + \\frac{n_3r_3}{p_3q_3} + \\frac{n}{p} + \\frac{r}{q} - {\\sc O},$ where $\\lbrace p_i\\rbrace $ and $\\lbrace q_i\\rbrace $ specify the processor grid dimensions.", "The first three terms correspond to matrix entries and the middle two terms correspond to tensor entries.", "Figure: Matrix and tensor communication costs in LB and alg:3dmultittm for different configurations.", "The sum of LB (Matrix) and LB (Tensor) equals the lower bound (LB), and the sum of Alg. 5.1 (Matrix) and Alg. 5.1 (Tensor) equals the upper bound (Alg. 5.1).", "Lower bounds are almost indistinguishable from the 
corresponding upper bounds.Figure REF shows both components, matrix and tensor communication costs, for three distinct input sizes as we vary the number of processors.", "In these plots, we show both algorithmic costs (upper bounds) and lower bounds, but they are indistinguishable because the largest differences in overall costs we observe are $9\\%$ for fig:lb:allcases at $P=2^{13}$ and $13\\%$ for fig:lb:matrixdominated,fig:lb:genpattern at $P=2$ , verifying theorem:lb:3DMultiTTM for these scenarios.", "In fig:lb:allcases, the input and output tensors have varying dimensions: the input tensor is $2^{12}\\times 2^{13}\\times 2^{19}$ and the output is $2^{8}\\times 2^{13}\\times 2^{11}$ .", "We choose these dimensions so that all five cases of the values of $A$ and $B$ are represented.", "For these inputs, the tensor communication cost dominates the matrix communication for all values of $P$ considered.", "When $P<2^4$ , the first cases for $A$ and $B$ apply, and the algorithm selects a processor grid such that $p_3=P$ , implying that only one tensor and two matrices are communicated.", "In this case, both expressions simplify to $(r+n_1r_1+n_2r_2)(1-1/P)$ , which is why we see initial increase as $P$ increases at the left end of the plot.", "For $2^4\\le P < 2^{12}$ , the second case for $A$ and the first case for $B$ apply, and the algorithm selects a processor grid with $p_2>1$ and $p_3>1$ .", "Here, the matrix communication begins to decrease, but it is dominated by the tensor communication, which is maintained at $r(1-1/P)$ .", "For $2^{12}\\le P$ , the second case for $B$ applies, and we see that tensor communication decreases as $P$ increases (proportional to $P^{-1/2}$ as we see from the lower bound).", "In this regime, the algorithm is selecting grids with both $p>1$ and $q>1$ and communicating both tensors.", "Another transition occurs at $P=2^{16}$ , switching from the second to third case of $A$ , but this change in matrix cost has a negligible effect.", "fig:lb:matrixdominated demonstrates a scenario where the matrix costs dominate the tensor costs: the input tensor is cubical with dimension $2^{12}$ and the output tensor is cubical with dimension $2^4$ .", "Here we scale $P$ only up to $2^{12}$ , the number of entries in the output tensor.", "Because the tensors are cubical, the lower bounds simplify as in corollary:lb:3DMultiTTM:cubicaltensors, and the algorithm chooses processor grids that are as cubical as possible.", "For all values of $P$ in this experiment, the third case of $A$ and the first case of $B$ apply, and the algorithm selects $p_1\\approx p_2\\approx p_3$ and $q=1$ .", "We see that the overall cost is deceasing proportional to $P^{-1/3}$ until the tensor communication cost starts to contribute more significantly.", "fig:lb:genpattern considers cubical tensors with larger dimensions to show a more general pattern.", "For tensor dimensions $n_i=2^{20}$ and $r_i=2^8$ , we observe a transition point where tensor communication overtakes matrix communication.", "Similar to the case of fig:lb:matrixdominated, matrix costs dominate for small $P$ and scale like $P^{-1/3}$ .", "However, for $P\\ge 2^{17}$ , the tensor costs dominate the matrix costs and communication costs scale less efficiently as the first case of $B$ applies.", "We emphasize that for all three of these experiments, the algorithmic costs match the lower bounds nearly exactly for all values of $P$ ." 
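For reference, the recapped bound translates directly into code; the sketch below (ours, not from the paper's artifacts) evaluates $A+B-{\\sc O} $ for 3D problems under the paper's ordering conventions ($n_1r_1\\le n_2r_2\\le n_3r_3$ and $n\\ge r$ ).

```python
def lower_bound_3d(n, r, P):
    """Communication lower bound A + B - O of theorem:lb:3DMultiTTM.
    Assumes n1*r1 <= n2*r2 <= n3*r3 and n >= r."""
    n1, n2, n3 = n
    r1, r2, r3 = r
    N, R = n1 * n2 * n3, r1 * r2 * r3
    if P < n3 * r3 / (n2 * r2):
        A = n1 * r1 + n2 * r2 + n3 * r3 / P
    elif P < n2 * n3 * r2 * r3 / (n1 * r1) ** 2:
        A = n1 * r1 + 2 * (n2 * n3 * r2 * r3 / P) ** 0.5
    else:
        A = 3 * (N * R / P) ** (1 / 3)
    B = R + N / P if P < N / R else 2 * (N * R / P) ** 0.5
    O = (n1 * r1 + n2 * r2 + n3 * r3 + R + N) / P
    return A + B - O
```

Comparing its output with the algorithmic cost helper from the cost-analysis sketch over a grid search reproduces the near-coincidence of the curves described above.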
], [ "Comparing alg:3dmultittm with TTM-in-Sequence", "As mentioned previously, a Multi-TTM computation may be performed as sequence of TTM operations.", "In this TTM-in-Sequence approach, a single matrix is multiplied with the tensor and an intermediate tensor is computed and stored.", "For each remaining matrix, single-matrix TTMs are performed in sequence until the final result is computed.", "This approach can reduce the number of arithmetic operations compared to direct evaluation of atomic expression given in def:amttm.", "The computational cost depends (often significantly) on the order of the TTMs performed.", "The TTM-in-Sequence approach is parallelized in the TuckerMPI library [4].", "We note that theorem:lb:3DMultiTTM does not apply to this parallelization, as it violates the parallel atomicity assumption.", "In this section, we provide a comparison between alg:3dmultittm and the TTM-in-Sequence approach to show that our approach can significantly reduce communication in important scenarios without performing too much extra computation.", "In particular, we observe greatest benefit of alg:3dmultittm when $r$ is very small relative to $n$ (or vice versa) and $P$ is small relative to the ratio of $n$ and $r$ .", "These scenarios occur in the context of computing and using Tucker decompositions for highly compressible tensors that exhibit small multilinear ranks.", "The computational cost of TuckerMPI's algorithm with cubical tensors is the same for all possible orderings of the TTMs.", "In our comparison, we consider that the TTMs are performed in increasing mode order.", "While no single communication lower bound exists for all parallel TTM-in-Sequence algorithms, we show in sec:exp:lbTTM-in-Sequence that TuckerMPI's algorithm attains nearly the same cost as tight matrix multiplication lower bounds [1] applied to each TTM it chooses to perform.", "Thus, no other parallelization of the TTM-in-Sequence approach can reduce communication without breaking the assumptions of the matrix multiplication lower bounds (e.g., using fast matrix multiplication).", "The TuckerMPI parallelization uses a 3D logical processor grid with dimensions $\\tilde{p_1}\\times \\tilde{p_2}\\times \\tilde{p_3}$ .", "When the TTMs are performed in increasing mode order, the overall communication cost of their algorithm is $\\frac{r_1n_2n_3}{\\tilde{p_2}\\tilde{p_3}} + \\frac{n_1r_1}{\\tilde{p_1}} + \\frac{r_1r_2n_3}{\\tilde{p_1}\\tilde{p_3}} +\\frac{n_2r_2}{\\tilde{p_2}} + \\frac{r_1r_2r_3}{\\tilde{p_1}\\tilde{p_2}} + \\frac{n_3r_3}{\\tilde{p_3}} \\phantom{\\frac{r_1r_2n_3}{\\tilde{p_1}\\tilde{p_3}}}\\qquad \\qquad \\\\ \\phantom{\\frac{r_1r_2n_3}{\\tilde{p_1}\\tilde{p_3}}}\\qquad \\qquad - \\frac{r_1n_2n_3+r_1r_2n_3+r_1r_2r_3 + n_1r_1 +n_2r_2+n_3r_3}{P}, $ as specified in [4], though we include the cost of communicating the matrices (their analysis assumes the matrices are already redundantly distributed).", "We use exhaustive search to determine the processor grid that minimizes the cost of eq:TuckerMPI-cost in our comparisons.", "Figure: Communication cost comparison of alg:3dmultittm and TTM-in-Sequence .Figure: Comparison of alg:3dmultittm and the TTM-in-Sequence approach for fixed r 1 =r 2 =r 3 =2 6 r_1=r_2=r_3=2^6 and P=2 12 P=2^{12}." 
], [ "Communication Cost", "To compare communications costs, we perform 4 experiments involving cubical tensors.", "The first three simulated evaluations consider strong scaling and are presented in fig:comp.", "Two of these experiments use the same tensor dimensions as the two cubical examples in fig:lb.", "The first experiment involves an input tensor of dimension $n_i=2^{12}$ and output dimension $r_i=2^4$ (fig:commcostcomparison:12-4), the second has dimensions $n_i=2^{13}$ and $r_i=2^6$ (fig:commcostcomparison:13-6), and the third has the largest dimensions $n_i=2^{20}$ and $r_i=2^8$ (fig:commcostcomparison:25-10).", "fig:commcostcomparison:12-4 shows that alg:3dmultittm performs less communication than TTM-in-Sequence for $P\\le 2^{12}<n/r$ .", "The largest communication reduction occurs at $P=2^{12}$ and is approximately $5\\times $ .", "In the second experiment, we see cases where TTM-in-Sequence performs less communication than alg:3dmultittm and in fact beats the lower bound of theorem:lb:3DMultiTTM (which is possible because it breaks the atomicity assumption).", "alg:3dmultittm is more communication efficient for $P\\le 2^{16}$ , achieving a speedup of up to $2\\times $ , but communicates more for larger $P$ .", "In the third experiment with larger tensors, fig:commcostcomparison:25-10 demonstrates similar qualitative behavior to the first, with alg:3dmultittm outperforming TTM-in-Sequence and a maximum communication reduction of approximately $12\\times $ at $P=2^{21}$ .", "In the fourth experiment, with results shown in fig:commcostcomparison:riPfixed, we fix the output tensor dimension $r_i=2^6$ and number of processors $P=2^{12}$ and vary the input tensor dimension $n_i$ .", "We observe that for $2^6\\le n_i< 2^{12}$ , the TTM-in-Sequence approach communicates less data than alg:3dmultittm.", "For $n_i \\ge 2^{12}$ , alg:3dmultittm communicates less data, and the factor of improvement is maintained at approximately $6\\times $ as $n_i$ scales up." 
], [ "Computation Cost", "Assuming TuckerMPI uses increasing mode order, the parallel computational cost is $2 \\cdot \\frac{r_1n_1n_2n_3+r_1r_2n_2n_3+r_1r_2r_3n_3}{P}=2\\left(\\frac{r^{1/3}n}{P}+\\frac{r^{2/3}n^{2/3}}{P}+\\frac{rn^{1/3}}{P}\\right),$ where the right hand side is simplified under the assumption of cubical tensors.", "In these experiments where $n\\gg r$ , alg:3dmultittm selects a processor grid such that $q=1$ and $p_1\\approx p_2\\approx p_3$ .", "In this case the computation cost given in sec:cost-3d simplifies to $2\\left(\\frac{r^{1/3}n}{P}+\\frac{r^{2/3}n^{2/3}}{P^{2/3}}+\\frac{rn^{1/3}}{P^{1/3}}\\right).$ Note that this cost is much smaller than $4nr/P$ , the cost of evaluating eq:ourMultiTTMDef directly with computational load balance, and it is achieved by performing local computation using a TTM-in-Sequence approach.", "While the first terms of the two computational cost expressions match, we observe greater computational cost from alg:3dmultittm in the second and third terms.", "These terms are lower order when $P\\ll n/r$ , in which case the extra computational cost of alg:3dmultittm is negligible.", "When $P=n/r$ , the extra computational cost is no more than $3\\times $ .", "In the first three experiments, when our approach reduces communication, the extra computational costs were at most $6\\%$ , $30\\%$ , and $7\\%$ , respectively.", "The extra computation required for the greatest reductions in communication in those experiments were $6\\%$ , $2\\%$ , and $7\\%$ .", "For the fourth experiment, the extra computation is approximately $13\\%$ at $n_i=2^{13}$ , where alg:3dmultittm provides communication reduction, and decreases as $n_i$ increases.", "In all these experiments, we see that when alg:3dmultittm provides a reduction in communication costs, the extra computational costs remain negligible." ], [ "Lower Bounds of General Multi-TTM", "We present our lower bound results for $d$ -dimensional tensors in this section.", "Similar to the 3-dimensional lower bound proof, we consider a single processor that performs $1/P$ th of the computation and has access to $1/P$ th of the data.", "We again seek to minimize the number of elements of the matrices and tensors that the processor must access in order to execute its computation subject to the constraints of the structure of Multi-TTM by solving two independent problems, one for the matrix data and one for the tensor data." 
], [ "General Constrained Optimization Problems", "Here we present a generalization of lemma:matrixOptimalSolutions for $d$ dimensions.", "As before, this corollary is a direct result of lem:mttkrpOpt.", "Recall the notation $N_i=\\prod _{j=d-i+1}^dn_j$ and $R_i = \\prod _{j=d-i+1}^d r_i$ .", "Consider the following optimization problem: $\\min _{x̭} \\sum _{i \\in [d]} x_i$ such that $\\frac{nr}{P} \\le \\prod _{i\\in [d]}x_i \\quad \\text{and}\\quad 0 \\le x_i \\le n_ir_i \\quad \\text{for all}\\quad 1\\le i\\le d, $ where $n_i, r_i,P \\ge 1$ and $n_ir_i \\le n_{i+1}r_{i+1}$ .", "The optimal solution $\\mathbf {x} = [{x_1}^*$ $\\cdots $ ${x_d}^*]$ depends on the values of constants, yielding $d$ cases.", "[scale=0.495, every node/.style=transform shape] [thick] (-0.1,0) – (10.5,0); [thick, dashed] (10.5,0) – (14.5,0); [->, thick] (14.5,0) – (25,0) node [below right,scale=1.6] $P$ ; (0, 0.1) – node [below, pastelred, scale=1.6]1(0,-0.1); (5, 0.1) – node [below, pastelred, scale=1.6]$\\frac{N_1R_1}{n_{d-1}r_{d-1}}$ (5,-0.1); (10, 0.1) – node [below, pastelred, scale=1.6] $\\frac{N_2R_2}{(n_{d-2}r_{d-2})^2}$ (10,-0.1); (15, 0.1) – node [below, pastelred, scale=1.6] $\\frac{N_{d-2}R_{d-2}}{(n_2r_2)^{d-2}}$ (15,-0.1); (20, 0.1) – node [below, pastelred, scale=1.6] $\\frac{N_{d-1}R_{d-1}}{(n_1r_1)^{d-1}}$ (20,-0.1); align=left,below, scale=1.5] at (2.5, -0.4) ${x_1}^*=n_1r_1$ $\\qquad \\vdots $ ${x_{d-1}}^*= n_{d-1}r_{d-1}$ ${x_d}^* =\\frac{N_1R_1}{P}$ ; align=left,below, scale=1.5] at (8.15, -0.75) ${x_1}^*=n_1r_1$ $\\qquad \\vdots $ ${x_{d-2}}^*= n_{d-2}r_{d-2}$ ${x_{d-1}}^* = {x_d}^*$ = $\\quad \\left(\\frac{N_2R_2}{P}\\right)^{1/2}$ ; align=center,below, scale=1.5] at (16.5, -0.75) ${x_1}^*=n_1r_1$ ${x_2}^*= \\cdots = {x_d}^*=$ $\\qquad \\quad \\big (\\frac{N_{d-1}R_{d-1}}{P}\\big )^\\frac{1}{d-1}$ ; align=center,below, scale=1.5] at (22.5, -1.25) ${x_1}^*= \\cdots = {x_d}^*$ = $\\qquad \\quad \\big (\\frac{N_dR_d}{P}\\big )^{1/d}$ ; [label=$\\bullet $ ] If $P <\\frac{N_1R_1}{n_{d-1}r_{d-1}}$ , then ${x_j}^* = n_jr_j \\quad \\text{for}\\quad 1\\le j\\le d-1 \\quad \\text{and}\\quad {x_d}^*=\\frac{N_1R_1}{P}.$ If $\\frac{N_{i-1}R_{i-1}}{(n_{d+1-i}r_{d+1-i})^{i-1}} \\le P < \\frac{N_iR_i}{(n_{d-i}r_{d-i})^i}$ for some $i=2,\\cdots , d-1$ , then ${x_j}^* = n_jr_j \\quad \\text{for}\\quad 1\\le j\\le d-i \\quad \\text{and}\\quad {x_{d+1-i}}^*=\\cdots ={x_d}^* =\\left(N_iR_i/P\\right)^{1/i}.$ If $\\frac{N_{d-1}R_{d-1}}{(n_1r_1)^{d-1}} \\le P$ , then ${x_1}^*=\\cdots ={x_d}^* = \\left(N_dR_d/P\\right)^{1/d}.$" ], [ "Communication Lower Bounds", "We now state the lower bounds for the general Multi-TTM computation in the following theorem.", "Any computationally load balanced atomic Multi-TTM algorithm that starts and ends with one copy of the data distributed across processors and involves $d$ -dimensional tensors with dimensions $n_1, n_2, \\ldots , n_d$ and $r_1, r_2, \\ldots , r_d$ performs at least $A+B-\\left(\\frac{n}{P}+\\frac{r}{P} +\\sum _{j=1}^d \\frac{n_jr_j}{P}\\right)$ sends or receives where $A &= {\\left\\lbrace \\begin{array}{ll} \\sum _{j=1}^{d\\text{-}1}n_jr_j + \\frac{N_1R_1}{P} &\\text{ if } P<\\frac{N_1R_1}{n_{d\\text{-}1}r_{d\\text{-}1}}\\text{,}\\\\\\sum _{j=1}^{(d\\text{-}i)} n_jr_j + i\\left(\\frac{N_{i}R_{i}}{P}\\right)^\\frac{1}{i} &\\text{ if } \\frac{N_{i\\text{-}1}R_{i\\text{-}1}}{(n_{d+1\\text{-}i}r_{d+1\\text{-}i})^{i\\text{-}1}} \\le P < \\frac{N_{i}R_{i}}{(n_{d\\text{-}i}r_{d\\text{-}i})^i}\\text{,} \\\\& \\hfill \\text{for some } 2\\le i\\le d-1, 
\\\\d\\left(\\frac{N_dR_d}{P}\\right)^\\frac{1}{d} &\\text{ if } \\frac{N_{d\\text{-}1}R_{d\\text{-}1}}{(n_1r_1)^{d\\text{-}1}} \\le P\\text{.}\\end{array}\\right.", "}\\\\B &= {\\left\\lbrace \\begin{array}{ll} r + \\frac{n}{P} & \\text{ if } P < \\frac{n}{r}\\text{,} \\\\ 2\\left(\\frac{nr}{P}\\right)^{\\frac{1}{2}} &\\text{ if } \\frac{n}{r} \\le P\\text{.}\\end{array}\\right.", "}$ We prove this by applying lemma:genMatrixOptimalSolutions,lemma:tensorOptimalSolutions and extending the arguments of theorem:lb:3DMultiTTM in a straightforward way (though with more complicated notation).", "Interested readers can see the detailed proof in app:sec:genLowerBounds." ], [ "Parallel Algorithm for General Multi-TTM", "We present a parallel algorithm for $d$ -dimensional Multi-TTM computation in this section, which is analogous to alg:3dmultittm.", "We organize $P$ processors into a $2d$ -dimensional logical processor grid with dimensions $p_1 \\times \\cdots \\times p_d \\times q_1 \\times \\cdots \\times q_d$ .", "As before, we consider that $\\forall i \\in [d]$ , $p_i$ and $q_i$ evenly divide $n_i$ and $r_i$ , respectively.", "A processor coordinate is represented as $(p_1^\\prime , \\cdots , p_d^\\prime , q_1^\\prime ,\\cdots , q_d^\\prime )$ , where $\\forall i \\in [d]$ , $1\\le p_{i}^\\prime \\le p_i$ and $1\\le q_{i}^\\prime \\le q_i$ .", "We again impose that there is one copy of data in the system at the beginning and the end of the computation.", "alg:genMultittm presents our proposed parallel algorithm for $d$ -dimensional Multi-TTM computation.", "[H] Parallel Atomic d-dimensional Multi-TTM [1] $X̰$ , ${\\mathbf {\\mathbf {A}}}^{(1)}$ , $\\cdots $ , ${\\mathbf {\\mathbf {A}}}^{(d)}$ , $p_1 \\times \\cdots \\times p_d \\times q_1 \\times \\cdots \\times q_d$ logical processor grid $Y̰$ such that $Y̰= X̰\\times _1 {{\\mathbf {\\mathbf {A}}}^{(1)}}^{\\sf T}\\cdots \\times _d {{\\mathbf {\\mathbf {A}}}^{(d)}}^{\\sf T}$ $(p_1^\\prime , \\cdots , p_d^\\prime , q_1^\\prime , \\cdots , q_d^\\prime )$ is my processor id //All-gather input tensor $X̰$ $X̰_{p_1^\\prime \\cdots p_d^\\prime }$ = All-Gather($X̰$ , $(p_1^\\prime , \\cdots , p_d^\\prime , *, \\cdots , *)$ ) //All-gather all input matrices $i=1,\\cdots , d$ ${\\mathbf {\\mathbf {A}}}^{(i)}_{p_i^\\prime q_i^\\prime }$ = All-Gather(${\\mathbf {\\mathbf {A}}}^{(i)}$ , $(*,\\cdots ,*, p_i^\\prime ,* \\cdots ,*, q_i^\\prime , *)$ ) //Perform local computations in a temporary tensor $T̰$ $T̰$ = Local-Multi-TTM($X̰_{p_1^\\prime \\cdots p_d^\\prime }$ , ${\\mathbf {\\mathbf {A}}}^{(1)}_{p_1^\\prime q_1^\\prime }$ ,$\\cdots $ , ${\\mathbf {\\mathbf {A}}}^{(d)}_{p_d^\\prime q_d^\\prime }$ ) //Reduce-scatter the output tensor in $Y̰_{q_1^\\prime \\cdots q_d^\\prime }$ Reduce-Scatter($Y̰_{q_1^\\prime \\cdots q_d^\\prime }$ , $T̰$ , $(*, \\cdots , *, q_1^\\prime , \\cdots , q_d^\\prime )$ ) The data distribution model and cost analysis of the algorithm are similar to those of alg:3dmultittm and are presented in app:sec:genUpperBounds.", "theorem:optimality:genMultiTTMAlgorithm extends theorem:optimality:3DMultiTTMAlgorithm to the $d$ -dimensional case.", "A detailed proof can be found in app:sec:ddiemsnionalparallelAlgoritm:selectionpiqi.", "There exist $p_i,q_i$ with $1\\le p_i\\le n_i, 1\\le q_i\\le r_i$ for $i=1,\\cdots ,d$ such that alg:genMultittm is communication optimal to within a constant factor.", "We present a comparison of the communication costs of alg:genMultittm with the TTM-in-Sequence approach implemented in TuckerMPI [4] for 
$4/5/6$ -dimensional Multi-TTM computations in app:sec:generalMultiTTM:evaluation.", "Our results are consistent with what we observe for 3-dimensional Multi-TTM computations in sec:experiments.", "When the input tensor is much larger than the output tensor and the number of entries in the output tensor is less than that of the matrices, our algorithm significantly reduces communication compared to the TTM-in-Sequence approach.", "As in the 3D case, when $P\\ll n/r$ , the extra computation is negligible when the TTM-in-Sequence approach is used locally to reduce computation." ], [ "Conclusions", "In this work, we establish communication lower bounds for the parallel Multi-TTM computation and present an optimal parallel algorithm that organizes the processors in a $2d$ -dimensional grid for $d$ -dimensional tensors.", "By judiciously selecting the processor grid dimensions, we prove that our algorithm attains the lower bounds to within a constant factor.", "To verify the theoretical analysis, we simulate Multi-TTM computations using a variety of values for the number of processors, $P$ , the dimension, $d$ , and sizes, $n_i$ and $r_i$ ; compute the communication costs of our algorithm corresponding to each simulation; and compute the optimal communication cost provided by the theoretical lower bound.", "These simulations show that the communication costs of the proposed algorithm are close to optimal.", "When one of the tensors is much larger than the other tensor, which is typical in compression algorithms based on the Tucker decomposition, our algorithm significantly reduces communication costs over the conventional approach of performing the computation as a sequence of tensor-times-matrix operations.", "While applying HBL-inequalities (lem:hbl) to compute lower bounds, we encounter a ${\\mathbf {\\mathbf {\\Delta }}}$ matrix that is not full rank.", "In previous work where ${\\mathbf {\\mathbf {\\Delta }}}$ is square and nonsingular, the tightest constraint is derived from the unique solution to ${\\mathbf {\\mathbf {\\Delta }}}s̭=\\mathbf {1}$ .", "Our approach for Multi-TTM, where ${\\mathbf {\\mathbf {\\Delta }}}$ is rectangular and consistent, is to derive constraints for all vectors $s̭$ that satisfy ${\\mathbf {\\mathbf {\\Delta }}}s̭=\\mathbf {1}$ and then to select only two constraints which represent all constraints.", "We would like to explore how to select such representative constraints directly for general computations that involve a rank-deficient ${\\mathbf {\\mathbf {\\Delta }}}$ from HBL.", "Motivated by the simulated communication cost comparisons, our next goal is to implement the parallel atomic algorithm and verify the performance improvement in practice.", "Further, because neither the atomic nor the TTM-in-Sequence approach is always superior in terms of communication, we wish to explore hybrid algorithms to account for significant dimension reduction in some modes but modest reduction in others.", "Given the computation and communication capabilities of a parallel platform, it would also be interesting to study the computation-communication tradeoff for these two approaches and how to minimize the overall execution time in practice.", "Finally, this work assumes that each processor has enough memory.", "A natural extension is to study communication lower bounds for Multi-TTM computations with limited memory sizes." 
], [ "Proof of lem:mttkrpOpt", "In this section, we prove lem:mttkrpOpt as it is written, instead of relying on the reader to derive this result from [6].", "This proof relies on two additional results.", "The first, [6], states that the first constraint of the optimization problem is quasiconvex [1].", "The second, [1], states that satisfying the Karush-Kuhn-Tucker (KKT) conditions is sufficient for a solution to the optimization problem to be optimal as the optimization problem minimizes a differentiable convex function and the contraints are all differentiable quasiconvex functions.", "[Proof of lem:mttkrpOpt] To begin we note that the objective and all but the first constraint are affine functions, which are differentiable, convex, and quasiconvex.", "The first constraint is differentiable, and it is quasiconvex in the positive orthant by [6].", "Thus the KKT conditions are sufficient to demonstrate the optimality of any solution by  [1], and we will prove the optimality of the solution $\\mathbf {{x}^*}=\\begin{bmatrix}{x_1}^* &{x_2}^* & \\cdots & {x_d}^*\\end{bmatrix}$ by finding dual variables ${\\mu _i}^*$ for $0\\le i\\le d$ such that the KKT conditions are satisfied.", "We now convert the problem to standard notation.", "The minimization objective function is $f(\\mathbf {x}) = \\sum _{j \\in [d]} x_j,$ and the constraints are given by $g_0(\\mathbf {x}) &= \\frac{nr}{P} - \\prod _{j \\in [d]} x_j\\text{,}\\\\g_i(\\mathbf {x}) &= x_i -k_i \\text{ for all }i\\in [d]\\text{.", "}$ Partial derivatives for $j\\in [d]$ are given by $\\frac{\\partial f}{\\partial x_j} (\\mathbf {x}) &=1 \\text{,} \\\\\\frac{\\partial g_0}{\\partial x_j} (\\mathbf {x}) &= -\\prod _{\\ell \\in [d]-\\lbrace j\\rbrace }x_\\ell \\text{,} \\\\\\frac{\\partial g_i}{\\partial x_j} (\\mathbf {x}) &= {\\left\\lbrace \\begin{array}{ll} 1 & \\text{if} \\quad i=j \\\\ 0 & \\text{else}.", "\\end{array}\\right.}", "\\\\$ The KKT conditions of $({\\mathbf {x}}^*, {\\mathbf {\\mu }}^*)$ are: Primal feasibility: $g_i(\\mathbf {{x}^*}) \\le 0$ , for $0\\le i\\le d$ .", "Stationarity: $\\frac{\\partial f}{\\partial x_j}(\\mathbf {{x}^*}) + \\sum _{i=0}^{d}{\\mu _i}^* \\frac{\\partial g_i}{\\partial x_j}(\\mathbf {{x}^*}) = 0$ , for $j\\in [d]$ .", "Dual feasibility: ${\\mu _i}^* \\ge 0$ , for $0\\le i\\le d$ .", "Complementary slackness: ${\\mu _i}^* g_i(\\mathbf {{x}^*})=0$ , for $0\\le i\\le d$ .", "Recall from the statement of lem:mttkrpOpt that $K_I=\\prod _{j=d-I+1}^d k_j$ and $1\\le I \\le d$ is defined such that $k_j < (K_{d-j+1}/P)^{1/(d-j+1)} \\text{ for } 1 \\le j \\le d-I,\\\\k_\\ell \\ge \\left(K_{d-\\ell +1}/P\\right)^{1/(d-\\ell +1)} \\text{ for } d-I< \\ell \\le d.$ We claim that the optimal primal solution is ${x_j}^* = {\\left\\lbrace \\begin{array}{ll} k_j &\\qquad \\text{if } j \\le d-I\\text{,} \\\\ (K_I/P)^{1/I} &\\qquad \\text{if } d-I < j \\le d\\text{.}\\end{array}\\right.", "}$ and the optimal dual solution is ${\\mu _i}^* = {\\left\\lbrace \\begin{array}{ll}\\frac{(K_I/P)^{1/I}}{nr} & \\qquad \\text{if } i=0\\text{,} \\\\\\frac{(K_I/P)^{1/I}}{k_i} -1 &\\qquad \\text{if } 0<i\\le d-I\\text{,}\\\\0&\\qquad \\text{if } d-I < i \\le d\\text{.}\\end{array}\\right.", "}$ We now check that $\\mathbf {{x}^*}$ satisfies the primal feasibility condition.", "By direct verification (and the fact that $nr=\\prod _{j\\in [d]} x_j$ ), we have $g_0({\\mathbf {x}}^*)=0$ .", "Clearly $g_i(\\mathbf {{x}^*})= 0$ for all $i\\in [d-I]$ , as ${x_i}^* = k_i$ for all $i\\in [d-I]$ .", "To see that $g_i(\\mathbf 
{{x}^*})\\le 0$ for $d-I<i\\le d$ , it is sufficient to recall that $(K_I/P)^{1/I}\\le k_{d-I+1}$ by the definition of $I$ , and that ${x_i}^* = (K_I/P)^{1/I}$ and $k_{d-I+1}\\le k_i$ for $d-I<i\\le d$ .", "Stationarity follows from direct verification of the condition for $j\\in [d]$ .", "To check dual feasibility, we note that all the factors of ${\\mu _0}^*$ are positive, thus ${\\mu _0}^* > 0$ .", "To show that ${\\mu _i}^* >0$ for $i\\in [d-I]$ , it is sufficient to show that $k_{d-I} < (K_I/P)^{1/I}$ as $k_1\\le \\cdots \\le k_{d-I}$ .", "This is implied by $k_{d-I} < (K_{I+1}/P)^{1/(I+1)} = (k_{d-I}K_I/P)^{1/(I+1)}$ , where the inequality comes from the definition of $I$ and the equality from the definition of the right products $K_j$ ; raising both sides to the power $I+1$ and cancelling one factor of $k_{d-I}$ gives $k_{d-I}^{I} < K_I/P$ , which is the desired bound.", "Finally, complementary slackness is satisfied because $g_i(\\mathbf {{x}^*}) = 0$ for $0\\le i\\le d-I$ , and ${\\mu _i}^* = 0$ for $d-I < i \\le d$ ." ], [ "Details for Simulated Evaluation of 3-Dimensional Multi-TTM", "In this section, we provide more details for the simulated evaluation of alg:3dmultittm and its comparison to the TTM-in-Sequence approach.", "The analysis of the communication optimality of alg:3dmultittm did not consider integrality constraints on the processor grid dimensions.", "The simulated evaluation in sec:experiments considered all possible processor grid configurations using exhaustive search; we explain in sec:exp:configurations a more efficient process for determining an optimal grid when $P$ is a power of two.", "In sec:exp-comparison, we also compare alg:3dmultittm against the TTM-in-Sequence approach as implemented by TuckerMPI [4].", "We argue in sec:exp:lbTTM-in-Sequence that this implementation is nearly communication optimal given the computation that it performs, validating our comparison against it.", "fig:commcostcomp-bestvfast presents results relevant to both sec:exp:configurations,sec:exp:lbTTM-in-Sequence." 
], [ "Obtaining Integral Processor Grids for alg:3dmultittm", "In order to determine the communication cost of alg:3dmultittm, one must determine the processor grid.", "Obtaining $p_i$ and $q_i$ from the procedure in sec:3dUpperBounds may yield non-integer values.", "The following procedure allows us to convert these to integers under our assumption that all parameters are powers of 2.", "Recall that we consider $P=pq$ with $p=p_1p_2p_3$ and $q=q_1q_2q_3$ .", "If $\\lfloor \\log _2(p)+0.5\\rfloor = \\lfloor \\log _2(p)\\rfloor $ , then we set $p=2^{\\lfloor \\log _2(p)\\rfloor }$ , otherwise we set $p=2^{\\lceil \\log _2(p)\\rceil }$ , distributing the modification evenly between $p_1, p_2,$ and $p_3$ .", "Now, we keep $p=p_1p_2p_3$ constant, and convert each $p_i$ to an integer.", "We set $p_1=2^{\\lfloor \\log _2(p_1)+0.5\\rfloor }$ distributing the changes evenly among $p_2$ and $p_3$ .", "To see that our new value of $p_1$ must still be smaller than $n_1$ , we note that our original $p_1$ was less than $n_1$ which is a power of 2 by our assumption.", "If we increased $p$ in our first step, then distributing the modifications evenly between $p_1, p_2$ and $p_3$ increased them by at most $2^{1/6}$ .", "Thus $p_1\\le n_1$ will imply that $\\lfloor \\log _2(p_1 \\cdot 2^{1/6})+0.5\\rfloor \\le \\log _2(n_1)$ .", "Note that this most recent modification to $p_1$ changes $p_2$ and $p_3$ .", "Then, we set $p_2=2^{\\lfloor \\log _2(p_2)+0.5\\rfloor }$ and adapt $p_3$ accordingly.", "A similar argument to what is used for $p_1$ will show that $p_2$ and $p_3$ are also not larger than their corresponding dimensions.", "Having completed our work on the processor dimensions associated with the first tensor, we set $q=\\frac{P}{p}$ distributing the changes evenly among the $q_i$ , then force each $q_i$ to be an integer following the same procedure as for the $p_i$ .", "We denote the communication cost of alg:3dmultittm for the grid determined using this method by alg:3dmultittm (fast) and the communication cost using exhaustive search by alg:3dmultittm (best).", "We note that this procedure can increase the total number of accessed elements of any variable at most 4 times, but we see in fig:commcostcomp-bestvfast that the communication costs of both procedures are exactly the same for the examples we consider.", "These problems match those presented in fig:commcostcomparison.", "Figure: Communication cost comparison of alg:3dmultittm using best processor grid against fast method and of the TTM-in-Sequence approach implemented by TuckerMPI against the lower bounds.", "alg:3dmultittm (fast) and alg:3dmultittm (best) are the same for all the configurations." 
], [ "TTM-in-Sequence Lower Bounds", "Here we discuss communication lower bounds for the TTM-in-Sequence approach with cubical tensors.", "There has not been any proven bound for this approach other than individual bounds for each TTM (a single matrix multiply) computation, assuming the sequence of TTMs has been specified.", "The sum of individual bounds provides a communication lower bound for this approach.", "We obtain the tightest (and obtainable) lower bound for each TTM from [1], which depends on the relative matrix dimensions and number of processors, and represent the sum by $C_{LB}$ (TTM-in-Seq).", "We also note that $C_{LB}$ (TTM-in-Seq) may not be always attainable as data distributions for two successive TTMs may be non-compatible and require extra communication.", "When the input tensor dimensions are much larger than the output tensor dimensions, most of the computation and communication occurs in the first TTM, so we also consider the communication lower bound of only that matrix multiplication, which also provides a valid lower bound for the entire TTM-in-Sequence computation.", "Recall that we obtain the algorithmic cost of TTM-in-Sequence by exhaustively searching for the best processor grid configuration given the communication costs specified by eq:TuckerMPI-cost.", "fig:commcostcomp-bestvfast shows a comparison of TTM-in-Seq and $C_{LB}$ (TTM-in-Seq) for the tensor dimensions presented in fig:commcostcomparison.", "We can see that the communication costs of TTM-in-Seq are very close to $C_{LB}$ (TTM-in-Seq), the largest differences are $7.9\\%$ for fig:commcostcomp-bestvfast:12-4 at $P=2^5$ , $25\\%$ for fig:commcostcomp-bestvfast:13-6 at $P=2^6$ , and $9.3\\%$ for fig:commcostcomp-bestvfast:20-8 at $P=2^{21}$ .", "Comparing $C_{LB}$ (1st TTM) and $C_{LB}$ (TTM-in-Seq), we see that for these examples at least half the communication of the entire TTM-in-Sequence is required by the first TTM, and it is completely dominated by the first TTM when $P$ is large." ], [ "General Case", "In this section, we provide results and proofs for general multi-TTM computations.", "We present the proof of communication lower bounds, theorem:lb:genMultiTTM, in app:sec:genLowerBounds.", "In app:sec:genUpperBounds, we discuss computation and communication costs of alg:genMultittm, and how to select processor grid dimensions such that our algorithm is communication optimal.", "We present a comparison of the communication costs of our algorithm with a TTM-in-Sequence approach for $3/4/5/6$ -dimensional Multi-TTM computations in app:sec:generalMultiTTM:evaluation." 
], [ "Communication Lower Bounds for General Multi-TTM", "[Proof of theorem:lb:genMultiTTM] Let $F$ be the set of loop indices associated with the $(d{+}1)$ -ary multiplications performed by a processor.", "As we assumed the algorithm is computationally load balanced, $|F| = nr/P$ .", "We define $\\phi _{X̰}(F)$ , $\\phi _{Y̰}(F)$ and $\\phi _j(F)$ to be the projections of $F$ onto the indices of the arrays $X̰, Y̰$ , and ${\\mathbf {\\mathbf {A}}}^{(j)}$ for $1\\le j\\le d$ which correspond to the elements of the arrays that must be accessed by the processor.", "We use lem:hbl to obtain a lower bound on the number of array elements that must be accessed by the processor.", "The matrix corresponding to the projections above is given by ${\\mathbf {\\mathbf {\\Delta }}} = \\begin{bmatrix}{\\mathbf {\\mathbf {I}}}_{d\\times d} & _d & _d\\\\ {\\mathbf {\\mathbf {I}}}_{d\\times d} & _d & _d \\end{bmatrix}\\text{.}", "$ Here $_d$ and $_d$ denote the $d$ -dimensional vectors of all ones and zeros, respectively, and ${\\mathbf {\\mathbf {I}}}_{d\\times d}$ denotes the $d\\times d$ identity matrix.", "As before we define $\\mathcal {C} = \\big \\lbrace s̭ \\in [0,1]^{d+2}:{\\mathbf {\\mathbf {\\Delta }}}\\cdot s̭\\ge \\big \\rbrace \\text{.", "}$ We recall that $$ represents a vector of all ones.", "As in the proof of theorem:lb:3DMultiTTM, ${\\mathbf {\\mathbf {\\Delta }}}$ is not full rank, so we again consider each vector $v̭ \\in \\mathcal {C}$ such that ${\\mathbf {\\mathbf {\\Delta }}}\\cdot v̭=$ .", "Such a vector $v̭$ is of the form $\\begin{bmatrix} a & \\cdots & a & 1-a & 1-a \\end{bmatrix}$ where $0\\le a\\le 1$ .", "Thus, we obtain $\\frac{nr}{P} \\le \\Big (\\prod _{j\\in [d]}|\\phi _j(F)|\\Big )^a \\big (|\\phi _{X̰}(F)||\\phi _{Y̰}(F)|\\big )^{1\\text{-}a}\\text{.", "}$ Similar to the 3D case, the above constraint is equivalent to $\\frac{nr}{P} \\le \\prod _{j\\in [d]}|\\phi _j(F)|$ and $\\frac{nr}{P} \\le |\\phi _{X̰}(F)||\\phi _{Y̰}(F)|$ .", "Clearly a projection onto an array can not be larger than the array itself, thus $|\\phi _{X̰}(F)| \\le n$ , $|\\phi _{Y̰}(F)|\\le r$ , and $|\\phi _j(F)|\\le n_jr_j$ for $1\\le j \\le d$ .", "As the constraints related to the projections of matrices and tensors are disjoint, we solve them separately and then sum the results to get a lower bound on the set of elements that must be accessed by the processor.", "We obtain a lower bound on $A$ , the number of elements of the matrices that must be accessed by the processor by using lemma:genMatrixOptimalSolutions, and a lower bound on $B$ , the number of elements of the tensors that must be accessed by the processor by using lemma:tensorOptimalSolutions.", "By summing both, we get the positive terms of the lower bound.", "To bound the sends or receives, we consider how much data the processor could have had at the beginning or at the end of the computation.", "Assuming there is exactly one copy of the data at the beginning and at the end of the computation, there must exist a processor which has access to at most $1/P$ of the elements of the arrays at the beginning or at the end of the computation.", "By employing the previous analysis, this processor must access $A+B$ elements of the arrays, but can only have $\\frac{n}{P}+\\frac{r}{P} +\\sum _{j\\in [d]} \\frac{n_jr_j}{P}$ elements of the arrays stored.", "Thus it must perform the specified amount of sends or receives.", "As we did previously, we denote the lower bound of theorem:lb:genMultiTTM by ${\\sc LB} $ and use it extensively while 
determining the optimal processor grid dimensions for our algorithm in app:sec:ddiemsnionalparallelAlgoritm:selectionpiqi." ], [ "Parallel Algorithm for General Multi-TTM", "Here we discuss our data distribution model for alg:genMultittm.", "$X̰_{p_1^\\prime \\cdots p_d^\\prime }$ and $Y̰_{q_1^\\prime \\cdots q_d^\\prime }$ denote the subtensors of $X̰$ and $Y̰$ owned by processors $(p_1^\\prime ,\\cdots , p_d^\\prime , *,\\cdots , *)$ and $(*, \\cdots , *, q_1^\\prime ,\\cdots , q_d^\\prime )$ , respectively.", "${\\mathbf {\\mathbf {A}}}^{(i)}_{p_i^\\prime q_i^\\prime }$ denotes the submatrix of ${\\mathbf {\\mathbf {A}}}^{(i)}$ owned by processors $(*,\\cdots ,*, p_i^\\prime ,*,\\cdots , *,\\linebreak q_i^\\prime , *,\\cdots , *)$ .", "We impose that there is one copy of data in the system at the beginning and the end of the computation, and each subarray is distributed evenly among the set of processors which own the data.", "When alg:genMultittm completes, $Y̰_{q_1^\\prime \\cdots q_d^\\prime }$ is distributed evenly among processors $(*, \\cdots , *, q_1^\\prime , \\cdots , q_d^\\prime )$ .", "We recall that $\\prod _{i=1}^dp_i$ and $\\prod _{i=1}^dq_i$ are denoted by $p$ and $q$ , respectively." ], [ "Cost Analysis", "Now we analyze computation and communication costs of the algorithm.", "As before, the local Multi-TTM computation in Line  can be performed as a sequence of TTM operations to mininimize the number of arithmetic operations.", "Assuming the TTM operations are performed in their order, first with ${\\mathbf {\\mathbf {A}}}^{(1)}$ , then with ${\\mathbf {\\mathbf {A}}}^{(2)}$ , and so on until the last is performed with ${\\mathbf {\\mathbf {A}}}^{(d)}$ , then each processor performs $\\sum _{k=1}^d \\left(2\\prod _{i=1}^k \\frac{r_i}{q_i}\\prod _{j=k}^d\\frac{n_j}{p_j}\\right)$ local computations in Line .", "In Line , each processor also performs $(1-\\frac{q}{P}) \\frac{r}{q}$ computations due to the Reduce-Scatter operation.", "Communication occurs only in All-Gather and Reduce-Scatter collectives in Lines , , and .", "Line  specifies $\\frac{P}{p}$ simultaneous All-Gathers, Line  specifies $\\frac{P}{p_iq_i}$ simultaneous All-Gathers in the $i$ th loop iteration, and Line  specifies simultaneous $\\frac{P}{q}$ Reduce-Scatters.", "Each processor is involved in one All-Gather involving the input tensor, $d$ All-Gathers involving input matrices and one Reduce-Scatter involving the output tensor.", "As before, we assume bandwidth and latency optimal algorithms are used for the All-Gather and Reduce-Scatter collectives.", "Hence the bandwidth costs of the All-Gather operations are $(1-\\frac{p}{P}) \\frac{n}{p}$ for Line , and $\\sum _{i=1}^d(1-\\frac{p_iq_i}{P}) \\frac{n_ir_i}{p_iq_i}$ for the $d$ iterations of Line .", "The bandwidth cost of the Reduce-Scatter operation in Line  is $(1-\\frac{q}{P}) \\frac{r}{q}$ .", "Hence the overall bandwidth cost of alg:genMultittm for each processor is $\\frac{n}{p} + \\frac{r}{q} + \\sum _{i=1}^d\\frac{n_ir_i}{p_iq_i} - \\left(\\frac{n+r+\\sum _{i=1}^d n_ir_i}{P}\\right)$ .", "The latency costs are $\\log \\left(\\frac{P}{p}\\right)$ and $\\log \\left(\\frac{P}{q}\\right)$ for Lines  and respectively, and $\\sum _{i=1}^d \\log \\left(\\frac{P}{p_iq_i}\\right)$ for the $d$ iterations of Line .", "Thus the overall latency cost of alg:genMultittm is $\\log \\left(\\frac{P}{p}\\right)+ \\sum _{i=1}^d \\log \\left(\\frac{P}{p_iq_i}\\right) + \\log \\left(\\frac{P}{q}\\right) = d\\log (P).$" ], [ "Selection of $p_j$ and 
{{formula:6ca65408-041a-4dc8-902b-d105c8b13810}} in alg:genMultittm", "Similar to sec:parallelAlgoritm:selectionpiqi, we prove the optimality of alg:genMultittm by selecting $p_j$ and $q_j$ such that the communication cost of alg:genMultittm matches the communication lower bounds of the Multi-TTM computation (Theorem REF ), and then determine how much additional communication is required to ensure the selected $p_j, q_j$ satisfy the additional constraints ($1\\le p_j \\le n_j, 1\\le q_j \\le r_j$ ) necessary for processor grid dimensions.", "As $X̰$ and $Y̰$ are $d$ -dimensional tensors, we have $n_j, r_j \\ge 2$ for $1\\le j\\le d$ .", "For better readability, similar to the previous convention, we denote $\\frac{\\sum _{j \\in [d]}n_jr_j + r +n}{P}$ by ${\\sc O} $ .", "[Proof of theorem:optimality:genMultiTTMAlgorithm] As we did previously, we break our analysis into 2 scenarios which are further broken down into all possible cases.", "In each case, we obtain $\\hat{p_j}$ and $\\hat{q_j}$ such that the terms in the communication cost match the corresponding lower bound terms and satisfy at least one of the two sets of constraints, $1\\le \\hat{p_j} \\le n_j$ , $1\\le \\hat{q_j}$ or $1\\le \\hat{q_j}\\le r_j$ , $1\\le \\hat{p_j}$ for $1\\le j \\le d$ .", "We handle all cases of both scenarios together in the end, and adapt these values to get $p_j$ and $q_j$ which respect both lower and upper bounds for all values of $j$ .", "Then we determine how much additional communication may be required.", "We denote $\\prod _{i=1}^d\\hat{p_i}$ and $\\prod _{i=1}^d\\hat{q_i}$ by $\\hat{p}$ and $\\hat{q}$ .", "$\\bullet $ Scenario I $\\left(P < \\frac{n}{r}\\right)$ : This scenario corresponds to the first case of the tensor term in ${\\sc LB} $ .", "Thus, we set $\\hat{p_j}, \\hat{q_j}$ in such a way that the tensor terms in the communication cost match the tensor terms of ${\\sc LB} $ : $ \\hat{p}=P, \\hat{q} = 1.$ This implies $\\hat{q_j} = 1$ for $1\\le j\\le d$ .", "We break this scenario into $d$ cases parameterized by $I$ : $\\frac{N_{I-1}R_{I-1}}{(n_{d-I+1}r_{d-I+1})^{I-1}} \\le P < \\min \\big \\lbrace \\frac{N_IR_I}{(n_{d-I}r_{d-I})^I}, \\frac{n}{r}\\big \\rbrace $ .", "The cases degenerate to $P < \\min \\big \\lbrace \\frac{N_1R_1}{n_{d-1}r_{d-1}}, \\frac{n}{r}\\big \\rbrace $ , when $I=1$ , and $\\frac{N_{d-1}R_{d-1}}{(n_1r_1)^{d-1}} \\le P < \\frac{n}{r}$ when $I=d$ .", "Setting the matrix communication costs to the matrix terms of the lower bound in the corresponding cases yields $\\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} = n_jr_j\\text{ if } 1\\le j\\le d-I,\\qquad \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} = \\left(\\frac{N_IR_I}{P}\\right)^\\frac{1}{I}\\text{ if }d-I < j \\le d.$ Thus, $\\hat{q_j}=1$ for all $1\\le j\\le d$ , $\\hat{p_j} = 1$ if $1\\le j\\le d-I$ and $\\hat{p_j} = n_jr_j\\big (\\frac{P}{N_IR_I}\\big )^\\frac{1}{I}$ if $d-I < j\\le d$ to satisfy eq:genS1,eq:genS1Ci.", "Note that when $I=1$ , $p_d=P\\ge 1$ , and for the other values of $I$ , $p_j \\ge 1$ because $n_1r_1 \\le \\cdots \\le n_dr_d$ and $\\frac{N_{I-1}R_{I-1}}{(n_{d-I+1}r_{d-I+1})^{I-1}} \\le P$ .", "Additionally we have that $1= \\hat{q_j} < r_j$ for $1\\le j\\le d$ .", "However, we are not able to ensure $\\hat{p_j}\\le n_j$ when $d-I < j\\le d$ .", "We will handle all cases of both scenarios together as they require the same analysis.", "$\\bullet $ Scenario II $\\left(\\frac{n}{r}\\le P\\right)$ : This scenario corresponds to the second case of the tensor term in ${\\sc LB} $ .", "Thus, we set $\\hat{p_i}, 
\\hat{q_i}$ in such a way that $\\frac{n}{\\hat{p}}=\\frac{r}{\\hat{q}}=\\left(\\frac{nr}{P}\\right)^{1/2}.$ Again, we break this scenario into $d$ cases parameterized by $I$ : $\\max \\left\\lbrace \\frac{N_{I-1}R_{I-1}}{(n_{d-I+1}r_{d-I+1})^{I-1}},\\frac{n}{r}\\right\\rbrace \\le P < \\frac{N_IR_I}{(n_{d-I}r_{d-I})^I}\\;\\cdot $ The cases degenerate to $P < \\frac{N_1R_1}{n_{d-1}r_{d-1}}$ , when $I=1$ , and $\\max \\left\\lbrace \\frac{N_{d-1}R_{d-1}}{(n_1r_1)^{d-1}}, \\frac{n}{r}\\right\\rbrace \\le P$ when $I=d$ .", "Setting the matrix communication costs to match the corresponding matrix terms in the lower bound yields $\\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} = n_jr_j\\text{ if }1\\le j\\le d-I, \\qquad \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} = \\left(\\frac{N_IR_I}{P}\\right)^\\frac{1}{I}\\text{ if }d-I < j \\le d.$ Thus we set $\\hat{q_j} = \\hat{p_j} = 1$ for all $1\\le j\\le d-I$ .", "When $2\\le I \\le d$ , the equations above do not uniquely determine $\\hat{p_j},\\hat{q_j}$ for $d-I<j\\le d$ .", "However, setting $\\hat{p_j} = n_j\\left(\\frac{nP}{rN_I^2}\\right)^{1/2I}$ and $\\hat{q_j}=r_j\\left(\\frac{rP}{nR_I^2}\\right)^{1/2I}$ for $d-I < j\\le d$ satisfies equations REF , REF in all cases.", "Note that we cannot ensure lower and upper bounds on $\\hat{p_j}$ and $\\hat{q_j}$ .", "We now look for new solutions to the equations twice.", "First, we ensure that all lower bounds are respected, i.e., $1\\le \\hat{p_j}$ and $1\\le \\hat{q_j}$ , and then we guarantee that all upper bounds of $\\hat{p_j}$ or $\\hat{q_j}$ are satisfied, i.e., $\\hat{p_j} \\le n_j$ or $\\hat{q_j} \\le r_j$ .", "As we compared communication cost of each term with its corresponding lower bound to obtain $\\hat{p_j}$ and $\\hat{q_j}$ , we have $1\\le \\hat{p_j}\\hat{q_j} \\le n_jr_j$ for $1\\le j \\le d$ , $1\\le \\hat{p}\\le n$ and $1\\le \\hat{q}\\le r$ in all $d$ cases.", "However, we may not have $1\\le \\hat{p_j}$ or $1\\le \\hat{q_j}$ for some $j$ .", "We now seek new solutions that are all greater than 1.", "First we will increase all $\\hat{q_j}, \\hat{p_j}$ that are less than 1 in a way that preserves products $\\hat{p_j}\\hat{q_j}$ but does not preserve $\\hat{p}$ and $\\hat{q}$ .", "Then we will adjust $\\hat{p_j}$ and $\\hat{q_j}$ to force the products $\\hat{p}$ and $\\hat{q}$ back to their initial values.", "Let $q^b$ denote the product of all $\\hat{q_j}$ such that $\\hat{q_j} < 1$ , and $p^b$ denote the product of all $\\hat{p_j}$ such that $\\hat{p_j} < 1$ .", "Without loss of generality, if $q^b \\le p^b$ , set $a=1$ $\\hat{p}^{orig} = \\hat{p}$ , and $\\hat{q}^{orig}=\\hat{q}$ .", "We perform the following updates: Looping over the index $j$ from 1 to $d$ , if $\\hat{q_j} < 1$ then set $a=a\\cdot \\hat{q_j}, \\hat{p_j}=\\hat{p_j}\\hat{q_j}, \\hat{q_j}=1$ ; else if $\\hat{p_j} < 1$ then set $a=a/\\hat{p_j}, \\hat{q_j}=\\hat{p_j}\\hat{q_j}, \\hat{p_j}=1.$ This step preserves all products $\\hat{p_j}\\hat{q_j}$ and enforces $1\\le \\hat{p_j}, 1\\le \\hat{q_j}$ for $1\\le j\\le d$ , but it does not preserve $\\hat{p}, \\hat{q}$ .", "At the end of this step, we have $a = q^b/p^b < 1$ , $\\hat{p} = a\\cdot \\hat{p}^{orig}$ , and $\\hat{q}=\\hat{q}^{orig}/a$ .", "In order to force $\\hat{p}$ and $\\hat{q}$ to match their initial values, we decrease some $\\hat{q_j}$ in such a way that $\\hat{q}$ is decreased by a factor of $a$ .", "This is possible because $1\\le \\hat{q}^{orig} =a\\cdot \\hat{q}$ .", "Looping over the index $j$ from 1 to $d$ , if $\\hat{q_j} > 1$ then set $\\hat{q_j}^{prev}= 
\\hat{q_j}, \\hat{q_j} = \\max (1, a\\cdot \\hat{q_j}), a = a\\left(\\frac{\\hat{q_j}^{prev}}{\\hat{q_j}}\\right), \\hat{p_j} = \\hat{p_j}\\left(\\frac{\\hat{q_j}^{prev}}{\\hat{q_j}}\\right)$ .", "At the end of this step, $\\hat{q}$ has been decreased by a factor of $q^b/p^b$ , $\\hat{p_j}\\hat{q_j}$ were all preserved, and thus, $\\hat{p}$ has been increased by a factor of $p^b/q^b$ , hence $\\hat{q}=\\hat{q}^{orig}$ and $\\hat{p}=\\hat{p}^{orig}$ .", "After the above updates, we have $1\\le \\hat{p_j}, 1\\le \\hat{q_j}$ for $1\\le j \\le d$ , and the products match the initial products thus are valid solutions to the original equations.", "If $p_b < q_b$ we would perform the same process, but changing the actions on the $\\hat{q_j}$ to be performed on the $\\hat{p_j}$ and vice versa.", "We now handle upper bounds of $\\hat{p_j}$ and $\\hat{q_j}$ .", "If $\\exists j,k$ such that $\\hat{p_j} > n_j$ and $\\hat{q_k}>r_k$ , we again seek new solutions such that all $\\hat{p_j}$ or all $\\hat{q_j}$ satisfy upper bounds while respecting lower bounds of all variables.", "Let $p^t$ and $q^t$ denote the products of all $\\hat{p_j}$ and $\\hat{q_j}$ , respectively, such that $\\hat{p_j}\\hat{q_j} \\ne 1$ .", "As $\\forall j, 1\\le \\hat{p_j}\\hat{q_j} \\le n_jr_j$ , therefore $p^t$ is not more than the product of the corresponding $n_j$ and/or $q^t$ is not more than the product of the corresponding $r_j$ .", "If the first constraint is satisfied, then we perform the following updates: Looping over the index $j$ from 1 to $d$ , if $p^t >1$ and $\\hat{p_j} \\hat{q_j} \\ne 1$ then set $a=\\hat{p_j} \\hat{q_j}, \\hat{p_j}= \\min (p^t, n_j), \\hat{q_j}=\\frac{a}{\\hat{p_j}}, p^t = \\frac{p^t}{\\hat{p_j}}$ .", "After the above updates, we have $1\\le \\hat{p_j} \\le n_j, 1\\le \\hat{q_j}$ for $1\\le j \\le d$ , and $\\hat{p_j}\\hat{q_j}$ , $\\hat{p}$ and $\\hat{q}$ are back to their original values.", "If the first constraint is not satisfied, we would perform the same process on $\\hat{q_j}$ instead of $\\hat{p_j}$ .", "Now for all cases of both scenarios, we know that $1\\le \\hat{p}_j$ and $1\\le \\hat{q_j}$ for $1\\le j\\le d$ , and either $\\hat{p}_j\\le n_j$ for $1\\le j\\le d$ or $\\hat{q_j}\\le r_j$ for $1\\le j\\le d$ .", "It remains to adapt $\\hat{p}_j$ and $\\hat{q}_j$ such that both $\\hat{p}_j \\le n_j$ and $\\hat{q}_j\\le r_j$ for $1\\le j\\le d$ .", "We obtain $p_1,\\ldots , p_d, q_1,\\ldots ,q_d$ from $\\hat{p_j}$ and $\\hat{q_j}$ such that $p_1\\cdots p_d=\\hat{p}$ and $q_1\\cdots q_d=\\hat{q}$ .", "The intuition is to maintain the tensor communication terms in the lower bound.", "Initially, we set $p_j = \\hat{p}_j$ and $q_j = \\hat{q}_j$ for $1\\le j \\le d$ .", "If $1\\le p_j\\le n_j$ and $1\\le q_j\\le r_j$ for $1\\le j\\le d$ , then $\\sum _{j \\in [d]}\\frac{n_jr_j}{p_jq_j} = \\sum _{j \\in [d]}\\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}}$ and the communication cost exactly matches the lower bound.", "Otherwise, we adapt the values and determine the effects on the matrix communication costs.", "We recall that due of our particular selections of $\\hat{p_j}$ and $\\hat{q_j}$ , $\\nexists j,\\ell \\in [d]$ such that $\\hat{p_j} > n_j$ and $\\hat{q_\\ell } > r_\\ell $ .", "If $\\hat{p}_j > n_j$ for some $j\\in [d]$ , then we iterate over the index $j$ from $d$ to 1 setting $p_j=\\min \\left\\lbrace n_j, \\frac{\\hat{p}}{\\prod _{\\ell \\in [d]-\\lbrace j\\rbrace }p_\\ell }\\right\\rbrace $ .", "We iterate again from $d$ to 1 with the same expression.", "Iterating twice ensures that all 
updates are visible to all $p_j$ .", "Now we assess how much additional communication is required for the matrices.", "As $\\hat{p_j} > n_j$ for some $j$ , it must be the case that $\\hat{p} \\ge 2$ .", "Thus $\\sum _{j \\in [d]}\\frac{n_jr_j}{p_jq_j} &\\le \\sum _{j \\in [d]}\\max \\left\\lbrace \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}}, \\frac{r_j}{\\hat{q_j}}\\right\\rbrace \\\\& = \\sum _{j \\in [d]}\\Big (\\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} + \\frac{r_j}{\\hat{q_j}} - \\min \\big \\lbrace \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}},\\frac{r_j}{\\hat{q_j}}\\big \\rbrace \\Big )\\\\&< \\sum _{j \\in [d]}\\left(\\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} + \\frac{r_j}{\\hat{q_j}}\\right) -(d-1)\\\\&\\le \\sum _{j \\in [d]} \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} + \\frac{r}{\\hat{q}}\\\\& < \\sum _{j \\in [d]} \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} + 2 \\left(\\frac{r}{\\hat{q}} - \\frac{r}{\\hat{p}\\hat{q}}\\right)\\\\&= \\sum _{j \\in [d]} \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} + 2 \\left(\\frac{r}{\\hat{q}} - \\frac{r}{P}\\right)\\text{.", "}$ Similarly, if $\\exists j\\in [d]$ such that $\\hat{q_j} > r_j$ , the same update can be performed to the $q_j$ , and we obtain $\\sum _{j \\in [d]}\\frac{n_jr_j}{p_jq_j} < \\sum _{j \\in [d]} \\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} + 2 \\left(\\frac{n}{\\hat{p}} - \\frac{n}{P}\\right)$ .", "Therefore, $\\sum _{j \\in [d]}\\frac{n_jr_j}{p_jq_j} + \\frac{r}{q} + \\frac{n}{p} -{\\sc O} \\le 3\\left(\\sum _{j \\in [d]}\\frac{n_jr_j}{\\hat{p_j}\\hat{q_j}} + \\frac{r}{\\hat{q}} + \\frac{n}{\\hat{p}}-{\\sc O} \\right) = 3{\\sc LB} $ ." ], [ "Simulated Evaluation", "Similar to sec:experiments, we compare communication costs of our algorithm and a TTM-in-Sequence approach implemented in the TuckerMPI library.", "We again restrict to cases where all dimensions are powers of 2, and vary the number of processors $P$ from 2 to ${\\sc P_{\\max }} $ in multiples of 2, where ${\\sc P_{\\max }} =\\min \\lbrace n_1r_1, \\cdots , n_dr_d, n, r\\rbrace $ .", "Like sec:experiments, we look at all possible processor grid dimensions and represent the minimum communication costs of our algorithm and TuckerMPI algorithm by alg:genMultittm (best) and TTM-in-Seq, respectively.", "The TTM-in-Sequence approach described in [4] organizes $P$ in a $d$ -dimensional $\\tilde{p_1}\\times \\cdots \\times \\tilde{p_d}$ logical processor grid.", "Assuming TTMs are performed in increasing mode order, the overall communication cost of this algorithm is $\\frac{r_1n_2\\cdots n_d}{\\frac{P}{\\tilde{p_1}}} + \\frac{r_1r_2n_3\\cdots n_d}{\\frac{P}{\\tilde{p_2}}} +\\cdots + \\frac{r_1r_2\\cdots r_d}{\\frac{P}{\\tilde{p_d}}} -\\frac{r_1n_2\\cdots n_d + r_1r_2n_3\\cdots n_d + \\cdots + r_1r_2\\cdots r_d}{P} \\\\ \\qquad \\qquad +\\frac{n_1r_1}{\\tilde{p_1}} +\\cdots +\\frac{n_dr_d}{\\tilde{p_d}} - \\frac{n_1r_1+\\cdots +n_dr_d}{P}.", "$ The first line corresponds to tensor communication and the second line corresponds to matrix communication.", "As mentioned earlier, the TTM-in-Sequence approach forms a tensor after each TTM computation.", "Each positive term of the first line corresponds to the number of entries of such a tensor accessed by a processor in TuckerMPI.", "Figure: Communication cost comparison of alg:genMultittm and the TTM-in-Sequence approach implemented by the TuckerMPI library.", "Note that LB{\\sc LB} is a communication lower bound for atomic Multi-TTM algorithms, not for the TTM-in-Sequence approach.", "Communication cost of our approach (alg:genMultittm (best)) is very close to the lower bound 
(LB{\\sc LB} ).", "Figure: Matrix and Tensor communication costs in alg:genMultittm and the TTM-in-Sequence approach.", "Figure: Communication cost comparison of alg:genMultittm and the TTM-in-Sequence approach for 3/4/6-dimensional Multi-TTM computations.", "We again look at cases where the input tensors are large and the output tensors are small.", "fig:genMultiTTM:experiments:niriconstants shows a comparison of alg:genMultittm (best) and TTM-in-Seq with our communication lower bounds (${\\sc LB} $ ) for 3/4/5-dimensional Multi-TTM computations.", "For $P=2$ , both approaches perform the same amount of communication.", "After that, the total number of accessed elements in both approaches decreases; however, the number of locally owned elements decreases at a faster rate.", "Hence we see a slight increase in both curves.", "This behavior continues roughly up to $2^{n_i-r_i}$ processors for the TTM-in-Seq curve.", "In this region, TuckerMPI selects $\\tilde{p_1}=\\cdots =\\tilde{p_{d-1}} = 1$ and $\\tilde{p_d}=P$ .", "Our algorithm selects $p_1\\approx \\cdots \\approx p_d$ and $q_1=\\cdots =q_d=1$ .", "These processor grid dimensions result in the same tensor communication cost for both approaches.", "However, our approach reduces the matrix communication cost by a factor of roughly $(1-\\frac{1}{d})P^\\frac{1}{d}$ , hence it is better than the TTM-in-Sequence approach.", "fig:genMultiTTM:experiments:niriconstants:distribution shows the distribution of matrix and tensor communication costs in both approaches.", "In general, our approach significantly reduces the matrix communication costs in all the plots and is better when the number of entries in the output tensor is less than the number of entries in the matrices.", "When the communication cost is dominated by the output tensor, our approach is outperformed by the TTM-in-Sequence approach, which is the case in subfig:TTMinSequenceOutPerorms.", "Now we consider a different set of experiments.", "Here the number of entries in the input tensor is $(2^{10})^d$ for the $d$ -dimensional computation.", "We fix the number of entries in the output tensor to $2^{12}$ and present comparisons of the considered approaches in fig:genMultiTTM:experiments:2-12 for 3/4/6-dimensional Multi-TTM computations.", "These dimensions allow both tensors to be cubical.", "As the number of entries in the matrices is greater than the number of entries in the output tensor, our approach is always superior to the TTM-in-Sequence approach." ], [ "Acknowledgments", "This work is supported by the National Science Foundation under Grant No.", "CCF-1942892 and OAC-2106920.", "This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant agreement No.", "810367)." ] ]
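As a complement to the simulated evaluation above, the two cost expressions being compared can be written down directly. The sketch below (ours, not taken from the paper's artifacts) evaluates the per-processor bandwidth cost of alg:genMultittm for a grid $p_1,\dots,p_d,q_1,\dots,q_d$ and the TTM-in-Sequence cost of eq:TuckerMPI-cost for a grid $\tilde{p}_1,\dots,\tilde{p}_d$, assuming the TTMs are performed in increasing mode order; exhaustively minimizing each expression over feasible power-of-two grids reproduces the "(best)" curves.

```python
# Sketch of the two per-processor bandwidth-cost expressions compared above.
from math import prod

def cost_multittm(n, r, p, q, P):
    """Bandwidth cost of alg:genMultittm for grids p and q (length d each)."""
    d = len(n)
    return (prod(n) / prod(p) + prod(r) / prod(q)
            + sum(n[i] * r[i] / (p[i] * q[i]) for i in range(d))
            - (prod(n) + prod(r) + sum(n[i] * r[i] for i in range(d))) / P)

def cost_ttm_in_seq(n, r, ptilde, P):
    """eq:TuckerMPI-cost for a d-dimensional grid ptilde, increasing mode order."""
    d = len(n)
    sizes = [prod(r[:k + 1]) * prod(n[k + 1:]) for k in range(d)]  # intermediate tensors
    tensors = sum(s * ptilde[k] / P for k, s in enumerate(sizes)) - sum(sizes) / P
    matrices = (sum(n[k] * r[k] / ptilde[k] for k in range(d))
                - sum(n[k] * r[k] for k in range(d)) / P)
    return tensors + matrices

# illustrative (made-up) configuration: n_i = 2^12, r_i = 2^4, d = 3, P = 512
# cost_multittm([2**12]*3, [2**4]*3, [8, 8, 8], [1, 1, 1], 512)
# cost_ttm_in_seq([2**12]*3, [2**4]*3, [1, 1, 512], 512)
```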
2207.10437
[ [ "Study of 2021 outburst of the recurrent nova RS Ophiuchi:\n Photoionization and morpho-kinematic modelling" ], [ "Abstract We present the evolution of the optical spectra of the 2021 outburst of RS Ophiuchi (RS Oph) over about a month after the outburst.", "The spectral evolution is similar to the previous outbursts.", "Early spectra show prominent P Cygni profiles of hydrogen Balmer, \\ion{Fe}{ii}, and \\ion{He}{i} lines.", "The emission lines were very broad during the initial days, which later became narrower and sharper as the nova evolved.", "This is interpreted as the expanding shocked material into the winds of the red giant companion.", "We find that the nova ejecta expanded freely for $\\sim 4$ days, and afterward, the shock velocity decreased monotonically with time as $v\\propto t^{-0.6}$.", "The physical and chemical parameters associated with the system are derived using the photoionization code \\textsc{cloudy}.", "The best-fit \\textsc{cloudy} model shows the presence of a hot central white dwarf source with a roughly constant luminosity of $\\sim$1.00 $\\times$ 10$^{37}$ erg s$^{-1}$.", "The best-fit photoionization models yield absolute abundance values by number, relative to solar of He/H $\\sim 1.4 - 1.9$, N/H = $70 - 95$, O/H = $0.60 - 2.60$, and Fe/H $\\sim 1.0 - 1.9$ for the ejecta during the first month after the outburst.", "Nitrogen is found to be heavily overabundant in the ejecta.", "The ejected hydrogen shell mass of the system is estimated to be in the range of $3.54 - 3.83 \\times 10^{-6} M_{\\odot}$.", "The 3D morpho-kinematic modelling shows a bipolar morphology and an inclination angle of $i=30^{\\circ}$ for the RS Oph binary system." ], [ " Reference sheet for natbib usage (Describing version 7.1 from 2003/06/06) For a more detailed description of the natbib package, the source file natbib.dtx.", "The natbib package is a reimplementation of the \\cite command, to work with both author–year and numerical citations.", "It is compatible with the standard bibliographic style files, such as plain.bst, as well as with those for harvard, apalike, chicago, astron, authordate, and of course natbib.", "Load with \\usepackage[options]{natbib}.", "See list of options at the end.", "I provide three new .bst files to replace the standard numerical ones: plainnat.bst       abbrvnat.bst       unsrtnat.bst The natbib package has two basic citation commands, \\citet and \\citep for textual and parenthetical citations, respectively.", "There also exist the starred versions \\citet* and \\citep* that print the full author list, and not just the abbreviated one.", "All of these may take one or two optional arguments to add some text before and after the citation.", "Table: NO_CAPTION Multiple citations may be made by including more than one citation key in the \\cite command argument.", "Table: NO_CAPTION These examples are for author–year citation mode.", "In numerical mode, the results are different.", "Table: NO_CAPTION As an alternative form of citation, \\citealt is the same as \\citet but without parentheses.", "Similarly, \\citealp is \\citep without parentheses.", "Multiple references, notes, and the starred variants also exist.", "Table: NO_CAPTION The \\citetext command allows arbitrary text to be placed in the current citation parentheses.", "This may be used in combination with \\citealp.", "In author–year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa.", "This is provided with the extra commands Table: 
NO_CAPTION If the first author's name contains a von part, such as “della Robbia”, then \\cite{dRob98} produces “della Robbia (1998)”, even at the beginning of a sentence.", "One can force the first letter to be in upper case with the command \\Citet instead.", "Other upper case commands also exist.", "These commands also exist in starred versions for full author names.", "Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e.", "as Paper I, Paper II.", "Such aliases can be defined and used, textual and/or parenthetical with: These citation commands function much like \\citet and \\citep: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks.", "Use the command \\bibpunct with one optional and 6 mandatory arguments: the opening bracket symbol, default = ( the closing bracket symbol, default = ) the punctuation between multiple citations, default = ; the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author–year, default = author–year; the punctuation that comes between the author names and the year the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); The optional argument is the character preceding a post-note, default is a comma plus space.", "In redefining this character, one must include a space if one is wanted.", "Example 1, \\bibpunct{[}{]}{,}{a}{}{;} changes the output of \\cite{jon90,jon91,jam92} into [Jones et al.", "1990; 1991, James et al.", "1992].", "Example 2, \\bibpunct[; ]{(}{)}{,}{a}{}{;} changes the output of \\cite{jon90} into (Jones et al.", "1990; and references therein).", "Redefine \\bibsection to the desired sectioning command for introducing the list of references.", "This is normally \\section* or \\chapter*.", "Define \\bibpreamble to be any text that is to be printed after the heading but before the actual list of references.", "Define \\bibfont to be a font declaration, e.g.", "\\small to apply to the list of references.", "Define \\citenumfont to be a font declaration or command like \\itshape or \\textit.", "Redefine \\bibnumfmt as a command with an argument to format the numbers in the list of references.", "The default definition is [#1].", "The indentation after the first line of each reference is given by \\bibhang; change this with the \\setlength command.", "The vertical spacing between references is set by \\bibsep; change this with the \\setlength command.", "If one wishes to have the citations entered in the .idx indexing file, it is only necessary to issue \\citeindextrue at any point in the document.", "All following \\cite commands, of all variations, then insert the corresponding entry to that file.", "With \\citeindexfalse, these entries will no longer be made.", "The natbib package is compatible with the chapterbib package which makes it possible to have several bibliographies in one document.", "The package makes use of the \\include command, and each \\included file has its own bibliography.", "The order in which the chapterbib and natbib packages are loaded is unimportant.", "The chapterbib package provides an option sectionbib that puts the bibliography in a \\section* instead of \\chapter*, something that makes sense if there is a bibliography in each chapter.", "This option will not work when natbib is also loaded; instead, add the option to natbib.", "Every \\included file must contain its own \\bibliography command where the bibliography is to appear.", "The database files listed as arguments to this command can be different in each file, of course.", "However, what is not so obvious, is that each file must also contain a \\bibliographystyle command, preferably with the same style argument.", "Do not use the cite package with natbib; rather use one of the options sort or sort&compress.", "These also work with author–year citations, making multiple citations appear in their order in the reference list.", "Use option longnamesfirst to have first citation automatically give the full list of authors.", "Suppress this for certain citations with \\shortcites{key-list}, given before the first citation.", "Any local recoding or definitions can be put in natbib.cfg which is read in after the main package file.", "round (default) for round parentheses; square for square brackets; curly for curly braces; angle for angle brackets; colon (default) to separate multiple citations with colons; comma to use commas as separaters; authoryear (default) for author–year citations; numbers for numerical citations; super for superscripted numerical citations, as in Nature; sort orders multiple citations into the sequence in which they appear in the list of references; sort&compress as sort but in addition multiple numerical citations are compressed if possible (as 3–6, 15); longnamesfirst makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); sectionbib redefines \\thebibliography to issue \\section* instead of \\chapter*; valid only for classes with a \\chapter command; to be used with the chapterbib package; nonamebreak keeps all the authors' names in a citation on one line; causes overfull hboxes but helps with some hyperref problems." ] ]
2207.10473
[ [ "Hochschild homology, trace map and $\\zeta$-cycles" ], [ "Abstract In this paper we consider two spectral realizations of the zeros of the Riemann zeta function.", "The first one involves all non-trivial (non-real) zeros and is expressed in terms of a Laplacian intimately related to the prolate wave operator.", "The second spectral realization affects only the critical zeros and it is cast in terms of sheaf cohomology.", "The novelty is that the base space is the Scaling Site playing the role of the parameter space for the $\\zeta$-cycles and encoding their stability by coverings." ], [ "Introduction", "In this paper we give a Hochschild homological interpretation of the zeros of the Riemann zeta function.", "The root of this result is in the recognition that the map $({\\mathcal {E}}f)(u)=u^{1/2}\\sum _{n>0} f(nu)$ which is defined on a suitable subspace of the linear space of complex-valued even Schwartz functions on the real line, is a trace in Hochschild homology, if one brings in the construction the projection $\\pi :{\\mathbb {A}}_{\\mathbb {Q}}\\rightarrow {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}$ from the rational adèles to the adèle classes (see Section ).", "In this paper, we shall consider two spectral realizations of the zeros of the Riemann zeta function.", "The first one involves all non-trivial (i.e.", "non-real) zeros and is expressed in terms of a Laplacian intimately related to the prolate wave operator (see Section ).", "The second spectral realization is sharper inasmuch as it affects only the critical zeros.", "The main players are here the $\\zeta $ -cycles introduced in [7], and the Scaling Site [6] as their parameter space, which encodes their stability by coverings.", "The $\\zeta $ -cycles give the theoretical geometric explanation for the striking coincidence between the low lying spectrum of a perturbed spectral triple therein introduced (see [7]), and the low lying (critical) zeros of the Riemann zeta function.", "The definition of a $\\zeta $ -cycle derives, as a by-product, from scale-invariant Riemann sums for complex-valued functions on the real half-line $[0,\\infty )$ with vanishing integral.", "For any $\\mu \\in {\\mathbb {R}}_{>1}$ , one implements the linear (composite) map $\\Sigma _\\mu {\\mathcal {E}}: {{\\mathcal {S}}^{\\rm ev}_0}\\rightarrow L^2(C_\\mu )$ from the Schwartz space ${{\\mathcal {S}}^{\\rm ev}_0}$ of real valued even functions $f$ on the real line, with $f(0)=0$ , and vanishing integral, to the Hilbert space $L^2(C_\\mu )$ of square integrable functions on the circle $C_\\mu ={\\mathbb {R}}_+^*/\\mu ^{\\mathbb {Z}}$ of length $L=\\log \\mu $ , where $(\\Sigma _\\mu g)(u):=\\sum _{k\\in {\\mathbb {Z}}} g(\\mu ^ku).$ The map $\\Sigma _\\mu $ commutes with the scaling action ${\\mathbb {R}}^*_+\\ni \\lambda \\mapsto f(\\lambda ^{-1}x)$ on functions, while ${\\mathcal {E}}$ is invariant under a normalized scaling action on ${{\\mathcal {S}}^{\\rm ev}_0}$ .", "In this set-up one has Definition A $\\zeta $ -cycle is a circle $C$ of length $L=\\log \\mu $ whose Hilbert space $L^2(C)$ contains $\\Sigma _\\mu {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0})$ as a non dense subspace.", "Next result is known (see [7] Theorem 6.4) Theorem 1.1 The following facts hold The spectrum of the scaling action of ${\\mathbb {R}}_+^*$ on the orthogonal space to $\\Sigma _\\mu {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0})$ in $L^2(C_\\mu )$ is contained in the set of the imaginary parts of the zeros of the Riemann zeta function 
$\\zeta (z)$ on the critical line $\\Re (z)=\\frac{1}{2}$ .", "Let $s>0$ be a real number such that $\\zeta (\\frac{1}{2}+is)=0$ .", "Then any circle $C$ whose length is an integral multiple of $\\frac{2\\pi }{s}$ is a $\\zeta $ -cycle, and the spectrum of the action of ${\\mathbb {R}}_+^*$ on $(\\Sigma _\\mu {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0}))^\\perp $ contains $s$ .", "Theorem REF states that for a countable and dense set of values of $L\\in {\\mathbb {R}}_{>0}$ , the Hilbert spaces ${\\mathcal {H}}(L):=(\\Sigma _\\mu {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0}))^\\perp $ are non-trivial and, more importantly, that as $L$ varies in that set, the spectrum of the scaling action of ${\\mathbb {R}}_+^*$ on the family of the ${\\mathcal {H}}(L)$ 's is the set $Z$ of imaginary parts of critical zeros of the Riemann zeta function.", "In fact, in view of the proven stability of $\\zeta $ -cycles under coverings, the same element of $Z$ occurs infinitely many times in the family of the ${\\mathcal {H}}(L)$ 's.", "This stability under coverings displays the Scaling Site ${S}={[0,\\infty )\\rtimes {{\\mathbb {N}}^{\\times }}}$ as the natural parameter space for the $\\zeta $ -cycles.", "In this paper, we show (see Section ) that after organizing the family ${\\mathcal {H}}(L)$ as a sheaf over ${S}$ and using sheaf cohomology, one obtains a spectral realization of critical zeros of the Riemann zeta function.", "The key operation in the construction of the relevant arithmetic sheaf is given by the action of the multiplicative monoid ${\\mathbb {N}}^\\times $ on the sheaf of smooth sections of the bundle $L^2$ determined by the family of Hilbert spaces $L^2(C_\\mu )$ , $\\mu ={\\exp L}$ , as $L$ varies in $(0,\\infty )$ .", "For each $n\\in {\\mathbb {N}}^\\times $ there is a canonical covering map $C_{\\mu ^n}\\rightarrow C_\\mu $ , where the action of $n$ corresponds to the operation of sum on the preimage of a point in $C_\\mu $ under the covering.", "This action turns the (sub)sheaf of smooth sections vanishing at $L=0$ into a sheaf ${\\mathcal {L}}^2$ over ${S}$ .", "The family of subspaces $\\Sigma _\\mu {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0})\\subset L^2(C_\\mu )$ generates a closed subsheaf $\\overline{\\Sigma {\\mathcal {E}}}\\subset {\\mathcal {L}}^2$ and one then considers the cohomology of the related quotient sheaf ${\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}$ .", "In view of the property of ${\\mathbb {R}}_+^*$ -equivariance under scaling, this construction determines a spectral realization of critical zeros of the Riemann zeta function, also taking care of eventual multiplicities.", "Our main result is the following Theorem 1.2 The cohomology $H^0({S},{\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}})$ endowed with the induced canonical action of ${\\mathbb {R}}_+^*$ is isomorphic to the spectral realization of critical zeros of the Riemann zeta function, given by the action of ${\\mathbb {R}}_+^*$ , via multiplication with $\\lambda ^{is}$ , on the quotient of the Schwartz space ${\\mathcal {S}}({\\mathbb {R}})$ by the closure of the ideal generated by multiples of $\\zeta \\left(\\frac{1}{2} +is\\right)$ .", "This paper is organized as follows.", "Section recalls the main role played by the (image of the) map ${\\mathcal {E}}$ in the study of the spectral realization of the critical zeros of the Riemann zeta function.", "In Section we show the identification of the Hochschild homology HH$_0$ of the noncommutative space ${\\mathbb {Q}}^\\times 
\\backslash {\\mathbb {A}}_{\\mathbb {Q}}$ with the coinvariants for the action of ${\\mathbb {Q}}^\\times $ on the Schwartz algebra, using the (so-called) “wrong way” functoriality map $\\pi _!$ associated to the projection $\\pi :{\\mathbb {A}}_{\\mathbb {Q}}\\rightarrow {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}$ .", "We also stress the relevant fact that the Fourier transform on adèles becomes canonical after passing to $\\text{HH}_0$ of the adèle class space of the rationals.", "The key Proposition REF describes the invariant part of such $\\text{HH}_0$ as the space of even Schwartz functions on the real line and identifies the trace map with the map ${\\mathcal {E}}$ .", "Section takes care of the two vanishing conditions implemented in the definition of ${\\mathcal {E}}$ and introduces the operator $\\Delta =H(1+H)$ ($H$ being the generator of the scaling action of ${\\mathbb {R}}^*_+$ on ${\\mathcal {S}}({\\mathbb {R}})^{\\text{ev}}$ ) playing the role of the Laplacian and intimately related to the prolate operator.", "Finally, Section is the main technical section of this paper since it contains the proof of Theorem REF ." ], [ "The map ${\\mathcal {E}}$ and the zeros of the zeta function", "The adèle class space of the rationals ${\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}$ is the natural geometric framework to understand the Riemann-Weil explicit formulas for $L$ -functions as a trace formula [3].", "The essence of this result lies mainly in the delicate computation of the principal values involved in the distributions appearing in the geometric (right-hand) side of the semi-local trace formula of op.cit.", "(see Theorem 4 for the notations) $\\text{Trace}(R_\\Lambda U(h))=h(1)\\int _{\\lambda \\in C_S\\atop |\\lambda |\\in [\\Lambda ^{-1},\\Lambda ]}d^*\\lambda +\\sum _{\\nu \\in S}\\int ^{\\prime }_{{\\mathbb {Q}}_\\nu ^*}\\frac{h(u^{-1})}{|1-u|}d^*u+o(1)\\qquad \\text{for $\\Lambda \\rightarrow \\infty $}$ (later recast in the softer context of [13]).", "There is a rather simple analogy related to the spectral (left-hand) side of the explicit formulas for a global field $K$ (see [4] Section 2 for the notations) $\\hat{h}(0)+\\hat{h}(1) - \\sum _{\\chi \\in \\widehat{C_{K,1}}}\\sum _{\\rho \\in Z_{\\tilde{\\chi }}}\\hat{h}(\\tilde{\\chi },\\rho ) = \\sum _\\nu \\int ^{\\prime }_{K_v^*}\\frac{h(u^{-1})}{|1-u|}d^*u$ which may help one to realize how the sum over the zeros of the zeta function appears.", "Here this relation is simply explained.", "Given a complex valued polynomial $P(x)\\in {\\mathbb {C}}[x]$ , one may identify the set of its zeros as the spectrum of the endomorphism $T$ of multiplication by the variable $x$ computed in the quotient algebra ${\\mathbb {C}}[x]/(P(x))$ .", "It is well known that the matrix of $T$ , in the basis of powers of $x$ , is the companion matrix of $P(x)$ .", "Furthermore, the trace of its powers, readily computed from the diagonal terms of powers of the companion matrix in terms of the coefficients of $P(x)$ , gives the Newton-Girard formulae.This is an efficient way to find the power sum of roots of $P(x)$ without actually finding the roots explicitly.", "Newton's identities supply the calculation via a recurrence relation with known coefficients.", "If one transposes this result to the case of the Riemann zeta function $\\zeta (s)$ , one sees that the multiplication by $P(x)$ is replaced here with the map ${\\mathcal {E}}(f)(u):=u^{1/2}\\sum _{n=1}^\\infty f(nu),$ while the role of $T$ (the 
multiplication by the variable) is played by the scaling operator $u\\partial _u$ .", "These statements may become more evident if one brings in the Fourier transform.", "Indeed, let $f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ be an even Schwartz function and let $w(f)(u)=u^{1/2}f(u)$ be the unitary identification of $f$ with a function in $L^2({\\mathbb {R}}_+^*,d^*u)$ , where $d^*u:=du/u$ denotes the Haar measure.", "Then, by composing $w$ with the (multiplicative) Fourier transform ${\\mathbb {F}}: L^2({\\mathbb {R}}_+^*,d^*u)\\rightarrow L^2({\\mathbb {R}})$ , ${\\mathbb {F}}(h)(s)=\\int _{{\\mathbb {R}}_+^*}h(u)u^{-is}d^*u$ one obtains ${\\mathbb {F}}(w(f))=\\psi , \\qquad \\psi (z)=\\int _{{\\mathbb {R}}_+^*}f(u)u^{\\frac{1}{2}-iz}d^*u.$ The function $\\psi (z)$ is holomorphic in the complex half-plane $\\mathfrak {H}=\\lbrace z\\in {\\mathbb {C}}~|~\\Im (z)>-\\frac{1}{2}\\rbrace $ since $f(u)=O(u^{-N})$ for $u\\rightarrow \\infty $ .", "Moreover, for $n\\in {\\mathbb {N}}$ , one has $\\int _{{\\mathbb {R}}_+^*}u^{1/2}f(nu)u^{-iz}d^*u=n^{-1/2+iz}\\int _{{\\mathbb {R}}_+^*}v^{1/2}f(v)v^{-iz}d^*v.$ In the region $\\Im (z)>\\frac{1}{2}$ one derives, by applying Fubini theorem, the following equality $\\int _{{\\mathbb {R}}_+^*}{\\sum _{n=1}^\\infty } u^{1/2}f(nu)u^{-iz}d^*u= \\left({\\sum _{n=1}^\\infty } n^{-1/2+iz}\\right)\\int _{{\\mathbb {R}}_+^*}v^{1/2}f(v)v^{-iz}d^*v.$ Thus, for all $z\\in {\\mathbb {C}}$ with $\\Im (z)>\\frac{1}{2}$ one obtains $\\int _{{\\mathbb {R}}_+^*}{\\mathcal {E}}(f)(u)u^{-iz}d^*u=\\zeta (\\frac{1}{2}-iz)\\psi (z).$ If one assumes now that the Schwartz function $f$ fulfills $\\int _{\\mathbb {R}}f(x)dx=0$ , then $\\psi (\\frac{i}{2})=0$ .", "Both sides of (REF ) are holomorphic functions in $\\mathfrak {H}$ : for the integral on the left-hand side, this can be seen by using the estimate ${\\mathcal {E}}(f)(u)=O(u^{1/2})$ that follows from the Poisson formula.", "This proves that (REF ) continues to hold also in the complex half-plane $\\mathfrak {H}$ .", "Thus one sees that the zeros of $\\zeta (\\frac{1}{2}-iz)$ in the strip $\\vert \\Im (z)\\vert <\\frac{1}{2}$ are the common zeros of all functions ${\\mathbb {F}}({\\mathcal {E}}(f))(z), \\qquad f \\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}_1:=\\lbrace f \\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}~|~ \\int _{\\mathbb {R}}f(x)dx=0\\rbrace .$ One may eventually select the even Schwartz function $f(x)=e^{-\\pi x^2} \\left(2 \\pi x^2-1\\right)$ to produce a specific instance where the zeros of ${\\mathbb {F}}({\\mathcal {E}}(f))$ are exactly the non-trivial zeros of $\\zeta (\\frac{1}{2}-iz)$ , since in this case $\\psi (z)=\\frac{1}{4} \\pi ^{-\\frac{1}{4}+\\frac{i z}{2}} (-1-2 i z) \\Gamma \\left(\\frac{1}{4}-\\frac{i z}{2}\\right)$ ." 
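For completeness, the closed form of $\psi$ for this choice of $f$ follows from a two-line computation that is worth recording here (it is not spelled out above): writing $s=\frac{1}{2}-iz$ and using $\int_0^\infty e^{-\pi u^2}u^{s-1}\,du=\frac{1}{2}\pi^{-s/2}\Gamma(s/2)$, one gets

```latex
% closed form of \psi for f(x)=e^{-\pi x^2}(2\pi x^2-1), with s := 1/2 - iz
\psi(z)=\int_0^\infty e^{-\pi u^2}\bigl(2\pi u^2-1\bigr)\,u^{s-1}\,du
       =\pi^{-s/2}\,\Gamma\!\Bigl(\tfrac{s}{2}+1\Bigr)
        -\tfrac12\,\pi^{-s/2}\,\Gamma\!\Bigl(\tfrac{s}{2}\Bigr)
       =\frac{s-1}{2}\,\pi^{-s/2}\,\Gamma\!\Bigl(\tfrac{s}{2}\Bigr),
```

so that $\zeta(\frac{1}{2}-iz)\psi(z)=\frac{s-1}{2}\,\pi^{-s/2}\Gamma(s/2)\zeta(s)$ is, up to the factor $\frac{s-1}{2}$, the completed zeta function. Since neither this factor nor $\pi^{-s/2}\Gamma(s/2)$ vanishes or has a pole for $0<\Re(s)<1$, the zeros of ${\mathbb F}({\mathcal E}(f))$ in the strip $\vert\Im(z)\vert<\frac{1}{2}$ are exactly the non-trivial zeros of $\zeta(\frac{1}{2}-iz)$, as claimed.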
], [ "Geometric interpretation", "In this section we continue the study of the map ${\\mathcal {E}}$ with the goal to achieve a geometric understanding of it.", "This is obtained by bringing in the construction the adèle class space of the rationals, whose role is that to grant for the replacement, in (REF ), of the summation over the monoid ${\\mathbb {N}}^{\\times }$ with the summation over the group ${\\mathbb {Q}}^\\times $ .", "Then, up to the factor $u^{1/2}$ , ${\\mathcal {E}}$ is understood as the composite $\\iota ^*\\circ \\pi _!$ , where the map $\\iota :{\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}^*_{\\mathbb {Q}}/{\\hat{{\\mathbb {Z}}}^\\times }\\rightarrow {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}/{\\hat{{\\mathbb {Z}}}^\\times }$ is the inclusion of idèle classes in adèle classes and $\\pi :{\\mathbb {A}}_{\\mathbb {Q}}/{\\hat{{\\mathbb {Z}}}^\\times }\\rightarrow {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}/{\\hat{{\\mathbb {Z}}}^\\times }$ is induced by the projection ${\\mathbb {A}}_{\\mathbb {Q}}\\rightarrow {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}$ .", "We shall discuss the following diagram ${ Y={\\mathbb {A}}_{\\mathbb {Q}}/{\\hat{{\\mathbb {Z}}}^\\times }[d]^\\pi & \\\\X={\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}/{\\hat{{\\mathbb {Z}}}^\\times }& {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}^\\times _{\\mathbb {Q}}/{\\hat{{\\mathbb {Z}}}^\\times }={\\mathbb {R}}_+^*[l]_{\\iota } }$ The conceptual understanding of the map $\\pi _!$ uses Hochschild homology of noncommutative algebras.", "We recall that the space of adèle classes i.e.", "the quotient ${\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}$ is encoded algebraically by the cross-product algebra ${\\mathcal {A}}:={\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})\\rtimes {\\mathbb {Q}}^\\times .$ The Schwartz space ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ is acted upon by (automorphisms of) ${\\mathbb {Q}}^\\times $ corresponding to the scaling action of ${\\mathbb {Q}}^\\times $ on rational adèles.", "An element of ${\\mathcal {A}}$ is written symbolically as a finite sum $\\sum a(q)U(q), \\qquad a(q)\\in {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}}).$ From the inclusion of algebras ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})\\subset {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})\\rtimes {\\mathbb {Q}}^\\times = {\\mathcal {A}}$ one derives a corresponding morphism of Hochschild homologies $\\pi _!", ": \\text{HH}({\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}}))\\longrightarrow \\text{HH}({\\mathcal {A}}).$ Here, we use the shorthand notation $\\text{HH}(A):=\\text{HH}(A,A)$ for the Hochschild homology of an algebra $A$ with coefficients in the bimodule $A$ .", "In noncommutative geometry, the vector space of differential forms of degree $k$ is replaced by the Hochschild homology $\\text{HH}_k(A)$ .", "If the algebra $A$ is commutative and for $k=0$ , $\\text{HH}_0(A)=A$ , so that 0-forms are identified with functions.", "Indeed, the Hochschild boundary map $b:A^{\\otimes \\, 2}\\rightarrow A, \\qquad b(x\\otimes y)=xy-yx$ is identically zero when the algebra $A$ is commutative.", "This result does not hold when $A={\\mathcal {A}}$ , since ${\\mathcal {A}}={\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})\\rtimes {\\mathbb {Q}}^\\times $ is no longer commutative.", "It is therefore meaningful to bring in the following Proposition 3.1 The kernel of $\\pi _!", ": \\text{HH}_0({\\mathcal 
{S}}({\\mathbb {A}}_{\\mathbb {Q}}))\\rightarrow \\text{HH}_0({\\mathcal {A}})$ is the ${\\mathbb {C}}$ -linear span $E$ of functions $f-f_q$ , with $f\\in {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ , $q\\in {\\mathbb {Q}}^\\times $ , and where we set $f_q(x):=f(qx)$ .", "For any $f,g\\in {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ and $q\\in {\\mathbb {Q}}^\\times $ one has $fg-(fg)_q=fg-U(q)fg U(q^{-1})=xy-yx, \\qquad x:=f U(q^{-1}), \\ y:=U(q)g.$ One knows ([14] Lemma 1) that any function $f\\in {\\mathcal {S}}({\\mathbb {R}})$ is a product of two elements of ${\\mathcal {S}}({\\mathbb {R}})$ .", "Moreover, an element of the Bruhat-Schwartz space ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ is a finite linear combination of functions of the form $e\\otimes f$ , with $e^2=e$ .", "Thus any $f\\in {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ can be written as a finite sum of products of two elements of ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ , so that (REF ) entails $f-f_q\\in \\ker \\pi _!$ .", "Conversely, let $f\\in \\ker \\pi _!$ .", "Then there exists a finite number of pairs $x_i, y_i \\in {\\mathcal {A}}$ such that $f=\\sum [x_i,y_i]$ .", "Let $P: {\\mathcal {A}}\\rightarrow {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ be the projection on the coefficient $a(1)$ of $U(1)=1$ i.e.", "$P\\left( \\sum a(q)U(q)\\right) :=a(1).$ Then $f=\\sum [x_i,y_i]$ implies $f=\\sum P( [x_i,y_i])$ .", "We shall prove that for any pair $x, y \\in {\\mathcal {A}}$ one has $P( [x,y])\\in E$ .", "Indeed, one has $x=\\sum a(q)U(q), \\ y=\\sum b(q^{\\prime }) U(q^{\\prime }),\\ [x,y]=\\sum (a(q)b(q^{\\prime })_q-b(q^{\\prime })a(q)_{q^{\\prime }})U(qq^{\\prime })$ so that $P( [x,y])=\\sum (a(q)b(q^{-1})_q-b(q^{-1})a(q)_{q^{-1}}).$ This projection belongs to $E$ in view of the fact that $a(q)b(q^{-1})_q-b(q^{-1})a(q)_{q^{-1}}=h_q-h, \\qquad h=b(q^{-1})a(q)_{q^{-1}}.$ This completes the proof.", "Proposition REF shows that the image of $\\pi _!", ": \\text{HH}_0({\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}}))\\rightarrow \\text{HH}_0({\\mathcal {A}})$ is the space of coinvariants for the action of ${\\mathbb {Q}}^\\times $ on ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ , i.e.", "the quotient of ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ by the subspace $E$ .", "An important point now to remember is that the Fourier transform becomes canonically defined on the above quotient.", "Indeed, the definition of the Fourier transform on adèles depends on the choice of a non-trivial character $\\alpha $ on the additive, locally compact group ${\\mathbb {A}}_{\\mathbb {Q}}$ , which is trivial on the subgroup ${\\mathbb {Q}}\\subset {\\mathbb {A}}_{\\mathbb {Q}}$ .", "It is defined as follows ${\\mathbb {F}}_\\alpha (f)(y):= \\int _{\\mathbb {R}}f(x)\\alpha (xy) dx.$ The space of characters of the compact group $G={\\mathbb {A}}_{\\mathbb {Q}}/{\\mathbb {Q}}$ is one dimensional as a ${\\mathbb {Q}}$ -vector space, thus any non-trivial character $\\alpha $ as above is of the form $\\beta (x)=\\alpha (qx)$ , so that ${\\mathbb {F}}_\\beta (f)(y):= \\int _{\\mathbb {R}}f(x)\\alpha (qxy) dx={\\mathbb {F}}_\\alpha (f)_q(y).$ Therefore, the difference ${\\mathbb {F}}_\\beta -{\\mathbb {F}}_\\alpha $ vanishes on the quotient of ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ by $E$ and this latter space is preserved by ${\\mathbb {F}}_\\alpha $ since ${\\mathbb {F}}_\\alpha (f_q)={\\mathbb {F}}_\\alpha (f)_{q^{-1}}$ ." 
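The commutator identity at the heart of the proof of Proposition 3.1 can be checked mechanically in a toy model of the crossed product (our illustration, not from the paper: it keeps only the archimedean place, uses two sample functions, and the dictionary representation of elements and all names below are ours).

```python
# Toy model of the crossed product S(R) x Q^* (archimedean place only), checking
#     f*g - (f*g)_q = [ f U(q^{-1}) , U(q) g ]
# as used in the proof of Proposition 3.1.  An element sum_q a(q) U(q) is stored
# as a dict {q: a(q)}, with product (a U(q))(b U(q')) = a * b_q  U(q q'),
# where b_q(x) := b(q x), since conjugation by U(q) sends h to h_q.
import sympy as sp

x = sp.symbols('x', real=True)

def scale(a, q):                         # a_q(x) = a(q x)
    return sp.expand(a.subs(x, q * x))

def mul(A, B):                           # product in the crossed product
    C = {}
    for q, a in A.items():
        for qp, b in B.items():
            C[q * qp] = sp.expand(C.get(q * qp, 0) + a * scale(b, q))
    return C

def commutator(A, B):
    P1, P2 = mul(A, B), mul(B, A)
    return {k: sp.expand(P1.get(k, 0) - P2.get(k, 0)) for k in set(P1) | set(P2)}

# two Schwartz-class sample functions and a rational q
f, g, q = sp.exp(-x**2), x**3 * sp.exp(-2 * x**2), sp.Rational(3, 2)

X = {1 / q: f}                 # x = f U(q^{-1})
Y = {q: scale(g, q)}           # y = U(q) g = g_q U(q)

lhs = commutator(X, Y)[sp.Integer(1)]          # U(1)-component of [x, y]
rhs = sp.expand(f * g - scale(f * g, q))       # f*g - (f*g)_q
print(sp.simplify(lhs - rhs) == 0)             # True
```

Printing `True` confirms that the $U(1)$-component of $[fU(q^{-1}),U(q)g]$ is $fg-(fg)_q$, which is exactly the computation used to show that $f-f_q\in\ker\pi_!$.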
], [ "$\\text{HH}$ , Morita invariance and the trace map", "Let us recall that given an algebra $A$ , the trace map ${\\rm Tr}: M_n(A)\\rightarrow A, \\qquad {\\rm Tr}((a_{ij}):=\\sum a_{jj}$ induces an isomorphism in degree zero Hochschild homology which extends to higher degrees.", "If $A$ is a convolution algebra of the étale groupoid of an equivalence relation ${\\mathcal {R}}$ with countable orbits on a space $Y$ , and $\\pi :Y\\rightarrow Y/{\\mathcal {R}}$ is the quotient map, the trace map takes the following form ${\\rm Tr}(f)(x):=\\sum _{\\pi (j)=x} f(j,j).$ The trace induces a map on $\\text{HH}_0$ of the function algebras, provided one takes care of the convergence issue when the size of equivalence classes is infinite.", "If the relation ${\\mathcal {R}}$ is associated with the orbits of the free action of a discrete group $\\Gamma $ on a locally compact space $Y$ , the convolution algebra is the cross product of the algebra of functions on $Y$ by the discrete group $\\Gamma $ .", "In this case, the étale groupoid is $Y\\rtimes \\Gamma $ , where the source and range maps are given resp.", "by $s(y,g)=y$ and $r(y,g)=gy$ .", "The elements of the convolution algebra are functions $f(y,g)$ on $Y\\rtimes \\Gamma $ .", "The diagonal terms in (REF ) correspond to the elements of $Y\\rtimes \\Gamma $ such that $s(y,g)=r(y,g)$ , meaning that $g=1$ is the neutral element of $\\Gamma $ , since the action of $\\Gamma $ is assumed to be free.", "Then, the trace map is ${\\rm Tr}((f)(x)=\\sum _{\\pi (y)=x} f(y,1).$ This sum is meaningful on the space of the proper orbits of $\\Gamma $ .", "For a lift $\\rho (x)\\in Y$ , with $\\pi (\\rho (x))=x$ the trace reads as ${\\rm Tr}(f)(x)=\\sum _{g\\in \\Gamma } f(g\\rho (x),1).$ In the case of $Y={\\mathbb {A}}_{\\mathbb {Q}}$ acted upon by $\\Gamma ={\\mathbb {Q}}^\\times $ , the proper orbits are parameterized by the idèle classes and this space embeds in the adèle classes by means of the inclusion $\\iota : {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}^\\times _{\\mathbb {Q}}\\rightarrow {\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}_{\\mathbb {Q}}.$ We identify the idèle class group $C_{\\mathbb {Q}}={\\mathbb {Q}}^\\times \\backslash {\\mathbb {A}}^\\times _{\\mathbb {Q}}$ with ${\\hat{{\\mathbb {Z}}}^\\times }\\times {\\mathbb {R}}_+^*$ , using the canonical exact sequence affected by the modulus $1\\rightarrow {\\hat{{\\mathbb {Z}}}^\\times }\\rightarrow C_{\\mathbb {Q}}\\stackrel{{\\rm Mod}}{\\longrightarrow } {\\mathbb {R}}_+^*\\rightarrow 1.$ There is a natural section $\\rho :C_{\\mathbb {Q}}\\rightarrow {\\mathbb {A}}^\\times _{\\mathbb {Q}}$ of the quotient map, given by the canonical inclusion ${\\hat{{\\mathbb {Z}}}^\\times }\\times {\\mathbb {R}}_+^*\\subset {\\mathbb {A}}_{\\mathbb {Q}}^f\\times {\\mathbb {R}}={\\mathbb {A}}_{\\mathbb {Q}}.$ Next, we focus on the ${\\hat{{\\mathbb {Z}}}^\\times }$ -invariant part of ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ .", "Then, with the notations of Proposition REF we have Lemma 3.2 The following facts hold Let $h\\in {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }}$ , then there exists $ f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ with $h-1_{\\hat{{\\mathbb {Z}}}}\\otimes f\\in E^{{\\hat{{\\mathbb {Z}}}^\\times }}$ .", "Let $ f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ and $\\tilde{f}=(1_{\\hat{{\\mathbb {Z}}}}\\otimes f)U(1)\\in S({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }}\\rtimes {\\mathbb {Q}}^\\times $ , 
then one has ${\\rm Tr}(\\tilde{f})(u)=2\\sum _{n\\in {\\mathbb {N}}^{\\times }} f(nu)\\qquad \\forall u\\in {\\mathbb {R}}_+^*.$ Let $ f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ , then $1_{\\hat{{\\mathbb {Z}}}}\\otimes f\\in E^{{\\hat{{\\mathbb {Z}}}^\\times }}\\iff f=0$ .", "(i) By definition, the elements of the Bruhat-Schwartz space ${{\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})}$ are finite linear combinations of functions on ${\\mathbb {A}}_{\\mathbb {Q}}$ of the form ($S\\ni \\infty $ is a finite set of places) $f=\\otimes f_v, \\qquad f_v=1_{{\\mathbb {Z}}_v}~ \\forall v \\notin S, \\quad f_{\\infty }\\in {\\mathcal {S}}({\\mathbb {R}}), \\quad f_p\\in {\\mathcal {S}}({\\mathbb {Q}}_p)\\quad \\forall p\\in S\\setminus \\infty ,$ where ${\\mathcal {S}}({\\mathbb {Q}}_p)$ denotes the space of locally constant functions with compact support.", "An element of ${\\mathcal {S}}({\\mathbb {Q}}_p)$ which is ${\\mathbb {Z}}_p^*$ -invariant is a finite linear combination of characteristic functions $(1_{{\\mathbb {Z}}_p})_{p^n}(x):=1_{{\\mathbb {Z}}_p}(p^n x)$ .", "Thus an element $h\\in {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }}$ is a finite linear combination of functions of the form $f=\\otimes f_v, \\qquad f_v=1_{{\\mathbb {Z}}_v}\\quad \\forall v \\notin S, \\quad f_{\\infty }\\in {\\mathcal {S}}({\\mathbb {R}}), \\quad f_p=(1_{{\\mathbb {Z}}_p})_{p^{n_p}}\\quad \\forall p\\in S\\setminus \\infty $ With $q=\\prod p^{-n_p}$ one has with $\\ell (x):=f(qx)$ , $\\ell =1_{\\hat{{\\mathbb {Z}}}}\\otimes g$ , $\\ell -f\\in E^{{\\hat{{\\mathbb {Z}}}^\\times }}$ and the replacement of $g$ with its even part $\\frac{1}{2} (g(x)+g(-x))$ does not change the class of $f$ modulo $E^{{\\hat{{\\mathbb {Z}}}^\\times }}$ .", "(ii) Let $f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ and $\\tilde{f}=(1_{\\hat{{\\mathbb {Z}}}}\\otimes f)U(1)\\in S({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }}\\rtimes {\\mathbb {Q}}^\\times $ .", "For $u\\in {\\mathbb {R}}_+^*$ the lift $\\rho (u)\\in {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ is $(1, u)\\in {\\hat{{\\mathbb {Z}}}^\\times }\\times {\\mathbb {R}}_+^*\\subset {\\mathbb {A}}_{\\mathbb {Q}}^f\\times {\\mathbb {R}}={\\mathbb {A}}_{\\mathbb {Q}}$ , and by applying (REF ) one has ${\\rm Tr}(\\tilde{f})(u)=\\sum _{q\\in {\\mathbb {Q}}^\\times } \\tilde{f}(q \\rho (x),1)=\\sum _{q\\in {\\mathbb {Q}}^\\times } (1_{\\hat{{\\mathbb {Z}}}}\\otimes f)(q,qu)=2\\sum _{n\\in {\\mathbb {N}}^{\\times }} f(nu)\\quad \\forall u\\in {\\mathbb {R}}_+^*.$ (iii) Let $ f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ such that $1_{\\hat{{\\mathbb {Z}}}}\\otimes f\\in E^{{\\hat{{\\mathbb {Z}}}^\\times }}$ .", "Let $\\tilde{f}=(1_{\\hat{{\\mathbb {Z}}}}\\otimes f)U(1)\\in S({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }}\\rtimes {\\mathbb {Q}}^\\times $ .", "By Proposition REF the Hochschild class in $\\text{HH}_0({\\mathcal {A}})$ of $\\tilde{f}$ is zero, thus ${\\rm Tr}(\\tilde{f})=0$ .", "It follows from (REF ) that ${\\mathcal {E}}(f)(u)=0$ $\\forall u\\in {\\mathbb {R}}_+^*$ .", "Then (REF ) implies that the function $\\psi (z)=\\int _{{\\mathbb {R}}_+^*}f(u)u^{\\frac{1}{2}-iz}d^*u$ is well defined in the half-plane $\\Im (z) >\\frac{1}{2}$ where it vanishes identically, thus $f=0$ .", "The converse of the statement is obvious.", "The next statement complements Proposition REF , with a description of the range of $\\pi _!", ": \\text{HH}_0({\\mathcal {S}}({\\mathbb {A}}_{\\mathbb 
{Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }})\\rightarrow \\text{HH}_0({\\mathcal {A}})^{{\\hat{{\\mathbb {Z}}}^\\times }}$ ; it also shows that the map ${\\mathcal {E}}(f)(u)=u^{1/2}\\sum _{n=1}^\\infty f(nu)$ coincides, up to the factor $\\frac{u^{1/2}}{2}$ , with the trace map (REF ).", "We keep the notations of Lemma  REF Proposition 3.3 The map ${\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}\\rightarrow \\pi _!\\left(\\text{HH}_0({\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }})\\right)$ , $f\\mapsto \\tilde{f}$ is an isomorphism, this means that $\\pi _!\\left(\\text{HH}_0({\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }})\\right)$ is determined by the images of the elements of the subalgebra $1_{\\hat{{\\mathbb {Z}}}}\\otimes {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}\\subset {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }}$ .", "Furthermore, one has the identity ${\\mathcal {E}}(f)(u)=\\frac{u^{1/2}}{2}\\ {\\rm Tr}(\\tilde{f})(u)\\quad f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev},~\\forall u\\in {\\mathbb {R}}_+^*.$ The first statement follows from Lemma REF (i) and (iii).", "The second statement from (ii) of the same lemma." ], [ "The Laplacian $\\Delta =H(1+H)$", "This section describes the spectral interpretation of the squares of non-trivial zeros of the Riemann zeta function in terms of a suitable Laplacian.", "It also shows the relation between this Laplacian and the prolate wave operator." ], [ "The vanishing conditions", "One starts with the exact sequence $0\\rightarrow {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})_1 \\rightarrow {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})\\stackrel{\\epsilon }{\\rightarrow } {\\mathbb {C}}(1)\\rightarrow 0$ associated to the kernel of the ${\\mathbb {Q}}^\\times $ -invariant linear functional $\\epsilon (f)=\\int _{{\\mathbb {A}}_{\\mathbb {Q}}} f(x)dx\\in {\\mathbb {C}}(1)$ .", "By implementing in the above sequence the evaluation $\\delta _0(f):=f(0)$ , one obtains the exact sequence $0\\rightarrow {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})_0 \\rightarrow {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})\\stackrel{(\\delta _0,\\epsilon )}{\\longrightarrow } {\\mathbb {C}}(0)\\oplus {\\mathbb {C}}(1)\\rightarrow 0.$ The next lemma shows that both ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})_0$ and ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})_1$ have a description in terms of the ranges of two related differential operators.", "For simplicity of exposition, we restrict our discussion to the ${\\hat{{\\mathbb {Z}}}^\\times }$ -invariant parts of these function spaces.", "Lemma 4.1 Let $H:{\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})\\rightarrow {\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})$ , $H:=x\\partial _x$ be the generator of the scaling action of $1\\times {\\mathbb {R}}_+^*\\subset {\\rm GL}_1({\\mathbb {A}}_{\\mathbb {Q}})$ .", "Then one has $H$ commutes with the action of ${\\rm GL}_1({\\mathbb {A}}_{\\mathbb {Q}})$ and restricts to ${\\mathcal {S}}({\\mathbb {A}}_{\\mathbb {Q}})^{{\\hat{{\\mathbb {Z}}}^\\times }}$ .", "$(1+H)$ induces an isomorphism ${\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}\\rightarrow {\\mathcal {S}}({\\mathbb {R}})_1^{\\rm ev}$ .", "$H(1+H)$ induces an isomorphism ${\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}\\rightarrow {\\mathcal {S}}({\\mathbb {R}})_0^{\\rm ev}$ .", "(i) follows since ${\\rm GL}_1({\\mathbb {A}}_{\\mathbb {Q}})$ is abelian, thus $H$ commutes with the action of ${\\rm GL}_1({\\mathbb 
{A}}_{\\mathbb {Q}})$ .", "(ii)  The functional $f\\mapsto \\epsilon (f)=\\int _{{\\mathbb {A}}_{\\mathbb {Q}}} f(x)dx$ vanishes on the range of $1+H$ since $\\epsilon ((1+H)f)=\\int _{{\\mathbb {A}}_{\\mathbb {Q}}} \\partial (xf)dx=0,$ thus the range of $(1+H)$ is contained in ${\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}_1$ .", "The equation $Hf+f=0$ implies that $xf(x)$ is constant, hence $f=0$ , for $f\\in {\\mathcal {S}}({\\mathbb {R}})$ .", "Thus $(1+H):{\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}\\rightarrow {\\mathcal {S}}({\\mathbb {R}})_1^{\\rm ev}$ is injective.", "Let now $f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}_1$ and let $\\widehat{f}$ be its Fourier transform.", "Then $\\widehat{f} \\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ and $\\widehat{f}(0)=0$ .", "The function $g(x):=\\widehat{f}(x)/x$ , $g(0):=0$ is smooth, and $g\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm odd}$ , while the function $h(x):=\\int _{-\\infty }^x g(y)dy$ fulfills $h\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ and $\\partial _x h=g$ .", "One has $Hh=\\widehat{f}$ , thus $(-1-H)\\widehat{h}= f$ .", "This shows that $(1+H):{\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}\\rightarrow {\\mathcal {S}}({\\mathbb {R}})_1^{\\rm ev}$ is also surjective.", "(iii) Since the evaluation $f\\mapsto f(0)$ is invariant under scaling, it vanishes on the range of $H$ .", "Similarly, one sees that the functional $f\\mapsto \\int _{{\\mathbb {A}}_{\\mathbb {Q}}} f(x)dx$ vanishes on the range of $1+H$ .", "Thus the range of $H(1+H)$ is contained in ${\\mathcal {S}}({\\mathbb {R}})_0$ .", "The equation $Hf=0$ implies that the function $f$ is constant and hence $f=0$ , for $f\\in {\\mathcal {S}}({\\mathbb {R}})$ .", "Similarly $Hf+f=0$ implies that $xf(x)$ is constant and hence $f=0$ for $f\\in {\\mathcal {S}}({\\mathbb {R}})$ .", "Thus $H(1+H): {\\mathcal {S}}({\\mathbb {R}})\\rightarrow {\\mathcal {S}}({\\mathbb {R}})_0$ is injective.", "Let now $f\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ with $f(0)=0$ .", "Then the function $g(x):=f(x)/x$ , $g(0):=0$ , is smooth, $g\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm odd}$ and there exists a unique $h\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ such that $\\partial _x h=g$ .", "One has $Hh=f$ so that $(-1-H)\\widehat{h}=\\widehat{f}$ .", "Thus if $\\widehat{f}(0)=0$ one has $\\widehat{h}(0)=0$ and there exists $k\\in {\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}$ with $Hk=\\widehat{h}$ .", "Then $-(1+H)\\widehat{k}=h$ and $H(1+H)\\widehat{k}=-f$ .", "This shows that $H(1+H):{\\mathcal {S}}({\\mathbb {R}})^{\\rm ev}\\rightarrow {\\mathcal {S}}({\\mathbb {R}})_0^{\\rm ev}$ is surjective and an isomorphism." 
], [ "The Laplacian $\Delta =H(1+H)$ and its spectrum", "This section is based on the following heuristic dictionary suggesting a parallel between some classical notions in Hodge theory on the left-hand side, and their counterparts in noncommutative geometry, for the adèle class space of the rationals.", "The notations are inclusive of those of Section REF .", "Table: the heuristic dictionary described above (no caption).", "The next Proposition is a variant of the spectral realization in [8], [9].", "Proposition 4.2 The following facts hold: The trace map ${\rm Tr}$ commutes with $\Delta =H(1+H)$ and the range of ${\rm Tr}\circ \Delta $ is contained in the strong Schwartz space ${\bf S\,}({\mathbb {R}}_+^*):=\,\cap _{\beta \in {\mathbb {R}}}\,\mu ^\beta {\mathcal {S}}({\mathbb {R}}_+^*)$ , with $\mu $ denoting the Modulus.", "The spectrum of $\Delta $ on the quotient of ${\bf S\,}({\mathbb {R}}_+^*)$ by the closure of the range of ${\rm Tr}\circ \Delta $ is the set (counted with possible multiplicities) $\left\lbrace \Big (z-\frac{1}{2}\Big )^2-\frac{1}{4}\mid z\in {\mathbb {C}}\setminus {\mathbb {R}}, ~ \zeta (z)=0\right\rbrace .$ (i) The trace map of (REF ) commutes with $\Delta $ .", "By Lemma REF (iii) the range of $\Delta $ is ${\mathcal {S}}({\mathbb {R}})_0^{\rm ev}$ thus the range of ${\mathcal {E}}\circ (H(1+H))$ is contained in ${\bf S\,}({\mathbb {R}}_+^*)$ (see [9], Lemma 2.51).", "(ii) By construction, ${\bf S\,}({\mathbb {R}}_+^*)$ is the intersection, indexed by compact intervals $J\subset {\mathbb {R}}$ , of the spaces $\cap _{\beta \in J}\,\mu ^\beta {\mathcal {S}}({\mathbb {R}}_+^*) $ .", "The Fourier transform ${\mathbb {F}}(f)(z):=\int _{{\mathbb {R}}_+^*}f(u)u^z d^*u $ defines an isomorphism of these function spaces with the Schwartz spaces ${\mathcal {S}}(I)$ , labeled by vertical strips $I:=\lbrace z\in {\mathbb {C}}\mid \Re (z)\in J\rbrace $ , of holomorphic functions $f$ in $I$ with $p_{k,m}(f)<\infty $ for all $k,m\in {\mathbb {N}}$ where $p_{k,m}(f)=\sum _{n=0}^m\,\frac{1}{n!}\,\sup _I (1+|z|)^k\cdot |\partial ^nf(z)|\,.$ These norms define the Fréchet topology on ${\mathcal {S}}(I)$ .", "It then follows from Lemma 4.125 of [9] that, for $I$ sufficiently large, the quotient of ${\mathcal {S}}(I)$ by the closure of the range of ${\rm Tr}\circ \Delta $ decomposes into a direct sum of finite-dimensional spaces associated to the projections $\Pi (N)$ , $N\in {\mathbb {Z}}$ , which fulfill the following properties: Each $\Pi (N)$ is an idempotent.", "The sequence of $\Pi (N)$ 's is of tempered growth.", "The rank of $\Pi (N)$ is $O(\log |N|)$ for $|N|\rightarrow \infty $ .", "For any $f\in {\mathcal {S}}(I)$ the sequence $f\,\Pi (N)$ is of rapid decay and $\sum _{N\in {\mathbb {Z}}}\,f\,\Pi (N)=f \qquad \forall f\in {\mathcal {S}}(I)\,.$ This direct sum decomposition commutes with $\Delta $ since both $\Pi (N)$ and the conjugate of $\Delta $ by the Fourier transform ${\mathbb {F}}$ are given by multiplication operators.", "The conjugate of $H$ by ${\mathbb {F}}$ is the multiplication by $-z$ , so that the conjugate of $\Delta $ is the multiplication by $-z(1-z)$ .", "The spectrum of $\Delta $ is the union of the spectra of the finite-dimensional operators $\Delta _N:=\Pi (N)\Delta =\Delta \Pi (N)$ .", "By [9], Corollary 4.118, and the proof of Theorem 4.116, the finite-dimensional range of $\Pi (N)$ is described by the evaluation of $f\in {\mathcal {S}}(I)$ on the zeros $\rho \in Z(N)$ of the 
Riemann zeta function which are inside the contour $\\gamma _N$ , i.e.", "by the map ${\\mathcal {S}}(I)\\ni f\\mapsto f|_{Z(N)}=(f^{(n_\\rho )}(\\rho ))_{\\rho \\in Z(N)}\\in {\\mathbb {C}}^{(n_\\rho )}$ where ${\\mathbb {C}}^{(n_\\rho )}$ denotes the space of dimension $n_\\rho $ of jets of order equal to the order $n_\\rho $ of the zero $\\rho $ of the zeta function.", "Moreover, the action of $\\Delta _N$ is given by the matrix associated with the multiplication of $f\\in {\\mathcal {S}}(I)$ by $-z(1-z)$ : this gives a triangular matrix whose diagonal is given by $n_\\rho $ terms all equal to $-\\rho (1-\\rho )$ .", "Thus the spectrum of $\\Delta $ on the quotient of ${\\bf S\\,}({\\mathbb {R}}_+^*)$ by the closure of the range of ${\\rm Tr}\\circ \\Delta $ is the set (counted with multiplicities) $\\left\\lbrace \\Big (\\rho -\\frac{1}{2}\\Big )^2-\\frac{1}{4}\\mid \\rho \\in {\\mathbb {C}}\\setminus {\\mathbb {R}}, \\ \\zeta (\\rho )=0\\right\\rbrace .$ Corollary 4.3 The spectrum of $\\Delta $ on the quotient of the strong Schwartz space ${\\bf S\\,}({\\mathbb {R}}_+^*)$ by the closure of the range of ${\\rm Tr}\\circ \\Delta $ is negative if and only if the Riemann Hypothesis holds.", "This follows from Proposition REF and the fact that for $\\rho \\in {\\mathbb {C}}$ $\\Big (\\rho -\\frac{1}{2}\\Big )^2 -\\frac{1}{4}\\le 0 \\iff \\rho \\in [0,1]\\cup \\frac{1}{2} +i {\\mathbb {R}}.$ Remark 4.4 The main interest of the above reformulation of the spectral realization of [8], [9] in terms of the Laplacian $\\Delta $ is that the latter is intimately related to the prolate wave operator $W_\\lambda $ that is shown in [10] to be self-adjoint and have, for $\\lambda =\\sqrt{2}$ the same UV spectrum as the Riemann zeta function.", "The relation between $\\Delta $ and $W_\\lambda $ is that the latter is a perturbation of $\\Delta $ by a multiple of the Harmonic oscillator." 
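The shape of the spectrum in Proposition 4.2, and the reformulation of RH in Corollary 4.3, can be illustrated numerically. The Python sketch below uses mpmath's `zetazero` to fetch the first few non-trivial zeros (which mpmath locates on the critical line, so negativity is automatic for these particular zeros) and evaluates the corresponding points $(\rho -\frac{1}{2})^2-\frac{1}{4}$ of the spectrum; this is an illustration of the statement, not part of its proof.

```python
from mpmath import mp, zetazero

mp.dps = 25
for n in range(1, 6):
    rho = zetazero(n)                          # n-th non-trivial zero, 1/2 + i t_n
    eig = (rho - mp.mpf(1) / 2) ** 2 - mp.mpf(1) / 4
    print(n, complex(eig))                     # real and equal to -(1/4 + t_n^2) < 0
```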
], [ "Sheaves on the Scaling Site and $H^0({S},{\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}})$", "Let $\\mu \\in {\\mathbb {R}}_{>1}$ and $\\Sigma _\\mu $ be the linear map on functions $g: {\\mathbb {R}}^*_+ \\rightarrow {\\mathbb {C}}$ of sufficiently rapid decay at 0 and $\\infty $ defined by $(\\Sigma _\\mu g)(u)=\\sum _{k\\in {\\mathbb {Z}}}g(\\mu ^ku).$ We shall denote with ${{\\mathcal {S}}^{\\rm ev}_0}$ the linear space of real-valued, even Schwartz functions $f \\in {\\mathcal {S}}({\\mathbb {R}})$ fulfilling the two conditions $f(0) = 0 = \\int _{\\mathbb {R}}f(x)dx$ .", "The map ${\\mathcal {E}}: {{\\mathcal {S}}^{\\rm ev}_0} \\rightarrow {\\mathbb {R}},\\qquad ({\\mathcal {E}}f)(u)= u^{1/2}\\sum _{n>0}f(nu)$ is proportional to a Riemann sum for the integral of $f$ .", "The following lemma on scale invariant Riemann sums justifies the pointwise “well-behavior\" of (REF ) (see [7] Lemma 6.1) Lemma 5.1 Let $f$ be a complex-valued function of bounded variation on $(0,\\infty )$ .", "Assume that $f$ is of rapid decay for $u\\rightarrow \\infty $ , $O(u^2)$ when $ u\\rightarrow 0$ , and that $\\int _0^{\\infty }f(t)dt=0$ .", "Then the following properties hold The function $({\\mathcal {E}}f)(u)$ in (REF ) is well-defined pointwise, is $O(u^{1/2})$ when $u\\rightarrow 0$ , and of rapid decay for $u\\rightarrow \\infty $ .", "Let $g={\\mathcal {E}}(f)$ , then the series (REF ) is geometrically convergent, and defines a bounded and measurable function on ${\\mathbb {R}}_+^*/\\mu ^{{\\mathbb {Z}}}$ .", "We recall that a sheaf over the Scaling Site ${S}={[0,\\infty )\\rtimes {{\\mathbb {N}}^{\\times }}}$ is a sheaf of sets on $[0,\\infty )$ (endowed with the euclidean topology) which is equivariant for the action of the multiplicative monoid ${\\mathbb {N}}^{\\times }$ [6].", "Since we work in characteristic zero we select as structure sheaf of ${S}$ the ${\\mathbb {N}}^{\\times }$ -equivariant sheaf ${\\mathcal {O}}$ whose sections on an open set $U\\subset [0,\\infty )$ define the space of smooth, complex-valued functions on $U$ .", "The next proposition introduces two relevant sheaves of ${\\mathcal {O}}$ -modules.", "Proposition 5.2 Let $L\\in (0,\\infty )$ , $\\mu =\\exp L$ , and $C_\\mu ={\\mathbb {R}}^*_+/\\mu ^{\\mathbb {Z}}$ .", "The following facts hold As $L$ varies in $(0,\\infty )$ , the pointwise multiplicative Fourier transform ${\\mathbb {F}}:L^2(C_\\mu )\\rightarrow \\ell ^2({\\mathbb {Z}}), \\qquad {\\mathbb {F}}(\\xi )(n)=L^{-\\frac{1}{2}}\\int _{C_\\mu } \\xi (u)u^{-\\frac{2\\pi i n}{L}}d^*u$ defines an isomorphism between the family of Hilbert spaces $L^2(C_\\mu )$ and the restriction to $(0,\\infty )$ of the trivial vector bundle $L^2=[0,\\infty )\\times \\ell ^2({\\mathbb {Z}})$ .", "The smooth sections vanishing at $L=0$ of the vector bundle $L^2$ together with the linear maps $\\sigma _n:L^2(C_{\\mu ^n})\\rightarrow L^2(C_\\mu ), \\qquad \\sigma _n(\\xi )(u)=\\sum _{j=1}^n \\xi (\\mu ^j u)$ define a sheaf ${\\mathcal {L}}^2$ of ${\\mathcal {O}}$ -modules on ${S}$ .", "For $f\\in {{\\mathcal {S}}^{\\rm ev}_0}$ , the maps $\\Sigma {\\mathcal {E}}(f)(L):= \\Sigma _{\\mu }{\\mathcal {E}}(f)$ as in (REF ) define smooth (global) sections $\\Sigma {\\mathcal {E}}(f)$ of ${\\mathcal {L}}^2$ .", "For any open set $U\\subset [0,\\infty )$ , the submodules closure of $C_0^{\\infty }(U,\\Sigma {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0}))$ in $ C_0^{\\infty }(U,L^2)$ are ${\\mathbb {N}}^{\\times }$ -equivariant and define a subsheaf $\\overline{\\Sigma {\\mathcal 
{E}}}\\subset {\\mathcal {L}}^2$ of closed ${\\mathcal {O}}$ -modules on ${S}$ .", "For $U\\subset [0,\\infty )$ open, a section $\\xi \\in C^\\infty _0(U, L^2)$ belongs to the submodule $C_0^{\\infty }(U,\\overline{\\Sigma {\\mathcal {E}}})$ if and only if each Fourier component ${\\mathbb {F}}(\\xi )(n)$ as in (REF ) belongs to the closure in $C_0^{\\infty }(U,{\\mathbb {C}})$ of the submodule generated by the multiples of the function $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)$ ($\\zeta (z)$ is the Riemann zeta function).", "(i) holds since the Fourier transform is a unitary isomorphism.", "(ii) The sheaf ${\\mathcal {L}}^2$ on $[0,\\infty )$ is defined by associating to an open subset $U\\subset [0,\\infty )$ the space ${\\mathcal {F}}(U)=C_0^{\\infty }(U,L^2)$ of smooth sections vanishing at $L=0$ of the vector bundle $L^2$ .", "The action of ${\\mathbb {N}}^{\\times }$ on ${\\mathcal {L}}^2$ is given, for $n\\in {\\mathbb {N}}^\\times $ and for any pair of opens $U$ and $U^{\\prime }$ of $[0,\\infty )$ , with $n U\\subset U^{\\prime }$ , by ${\\mathcal {F}}(U,n): C_0^{\\infty }(U^{\\prime },L^2)\\rightarrow C_0^{\\infty }(U,L^2), \\qquad {\\mathcal {F}}(U,n)(\\xi )(x)= \\sigma _n(\\xi (nx)).$ Note that with $\\mu =\\exp x$ one has $\\xi (nx)\\in L^2(C_{\\mu ^n})$ and $\\sigma _n(\\xi (nx))\\in L^2(C_\\mu )$ .", "By construction one has: $\\sigma _n\\sigma _m=\\sigma _{nm}$ , thus the above action of ${\\mathbb {N}}^{\\times }$ turns ${\\mathcal {L}}^2$ into a sheaf on ${S}={[0,\\infty )\\rtimes {{\\mathbb {N}}^{\\times }}}$ .", "(iii)  By Lemma REF (i), ${\\mathcal {E}}(f)(u)$ is pointwise well-defined, it is $O(u^{1/2})$ for $u\\rightarrow 0$ , and of rapid decay for $u\\rightarrow \\infty $ .", "By (ii) of the same lemma one has ${\\mathbb {F}}(\\Sigma _\\mu ({\\mathcal {E}}(f)))(n)=L^{-\\frac{1}{2}}\\int _{C_\\mu } \\Sigma _\\mu ({\\mathcal {E}}(f))(u)u^{-\\frac{2\\pi i n}{L}}d^*u =L^{-\\frac{1}{2}}\\int _{{\\mathbb {R}}_+^*} {\\mathcal {E}}(f)(u)u^{-\\frac{2\\pi i n}{L}}d^*u.$ It then follows from [7] (see $(6.4)$ which is valid for $z=\\frac{2\\pi n}{L}\\in {\\mathbb {R}}$ ) that ${\\mathbb {F}}(\\Sigma _\\mu ({\\mathcal {E}}(f)))(n)=L^{-\\frac{1}{2}}\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)\\int _{{\\mathbb {R}}_+^*}u^{\\frac{1}{2}} f(u)u^{-\\frac{2\\pi i n}{L}}d^*u.$ Since $f\\in {{\\mathcal {S}}^{\\rm ev}_0}$ , with $w(f)(u):=u^{1/2}f(u)$ , the multiplicative Fourier transform ${\\mathbb {F}}(w(f))=\\psi $ , $\\psi (z):=\\int _{{\\mathbb {R}}_+^*}f(u)u^{\\frac{1}{2}-iz}d^*u$ is holomorphic in the complex half-plane defined by $\\Im (z)>-5/2$ [7].", "Moreover, by construction ${{\\mathcal {S}}^{\\rm ev}_0}$ is stable under the operation $f\\mapsto u\\partial _u f+\\frac{1}{2} f$ , hence $w({{\\mathcal {S}}^{\\rm ev}_0})$ is stable under $f\\mapsto u\\partial _uf$ .", "This operation multiplies ${\\mathbb {F}}(w(f))(z)=\\psi (z)$ by $iz$ .", "This argument shows that for any integer $m>0$ , $z^m\\psi (z)$ is bounded in a strip around the real axis and hence that the derivative $\\psi ^{(k)}(s)$ is $O(\\vert s\\vert ^{-m})$ on ${\\mathbb {R}}$ , for any $k\\ge 0$ .", "By applying classical estimates due to Lindelof [11], (see [1] inequality (56)), the derivatives $\\zeta ^{(m)}(\\frac{1}{2}+iz)$ are $O(\\vert z\\vert ^ \\alpha )$ for any $\\alpha >1/4$ .", "Thus all derivatives $\\partial _L^m$ of the function (REF ), now re-written as $h(L,n):=L^{-\\frac{1}{2}}\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)\\psi (\\frac{2\\pi n}{L})$ , are 
sequences of rapid decay as functions of $n\\in {\\mathbb {Z}}$ .", "It follows that $\\Sigma {\\mathcal {E}}(f)$ is a smooth (global) section of the vector bundle $L^2$ over $(0,\\infty )$ .", "Moreover, when $n\\ne 0$ the function $h(L,n)$ tends to 0 when $L\\rightarrow 0$ and the same holds for all derivatives $\\partial _L^m h(L,n)$ .", "In fact, for any $m, k\\ge 0$ , one has $\\sum _{n\\ne 0} \\vert \\partial _L^m h(L,n)\\vert ^2=O(L^k) \\ \\ \\rm {when} \\ \\ L\\rightarrow 0.$ This result is a consequence of the rapid decay at $\\infty $ of the derivatives of the function $\\psi $ , and the above estimate of $\\zeta (z)$ and its derivatives.", "For $n=0$ one has $h(L,0)=L^{-\\frac{1}{2}}\\zeta (\\frac{1}{2})\\psi (0)$ .", "(iv) For any open subset $U\\subset [0,\\infty )$ the vector space $C_0^{\\infty }(U,L^2)$ admits a natural Frechet topology with generating seminorms of the form ($K\\subset U$ compact subset) $p_K^{(n,m)}(\\xi ):=\\max _K L^{-n}\\Vert \\partial _L^m \\xi (L)\\Vert _{L^2}.$ One obtains a space of smooth sections $C_0^{\\infty }(U,\\Sigma {\\mathcal {E}})\\subset C_0^{\\infty }(U,L^2)$ defined as sums of products $\\sum h_j\\Sigma {\\mathcal {E}}(f_j)$ , with $f_j\\in {{\\mathcal {S}}^{\\rm ev}_0}$ and $h_j\\in C_0^{\\infty }(U,L^2)$ .", "The map $\\sigma _n:L^2(C_{\\mu ^n})\\rightarrow L^2(C_\\mu )$ in (ii) is continuous, and from the equality $\\sigma _n\\circ \\Sigma _{\\mu ^n}=\\Sigma _\\mu $ it follows (here we use the notations as in the proof of (ii)) that the sections $\\xi \\in C^{\\infty }(U^{\\prime },L^2(C^{\\prime }))$ which belong to $C^{\\infty }(U^{\\prime },\\overline{\\Sigma {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0})})$ are mapped by ${\\mathcal {F}}(U,n)$ inside $C^{\\infty }(U,\\overline{\\Sigma {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0})})$ .", "In this way one obtains a sheaf $\\overline{\\Sigma {\\mathcal {E}}}\\subset {\\mathcal {L}}^2$ of ${\\mathcal {O}}$ -modules over ${S}$ .", "(v) Let $\\xi \\in H^0(U,\\overline{\\Sigma {\\mathcal {E}}})$ .", "By hypothesis, $\\xi $ is in the closure of $C_0^{\\infty }(U,\\Sigma {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0}))\\subset C_0^{\\infty }(U,L^2)$ for the Frechet topology.", "The Fourier components of $\\xi $ define continuous maps in the Frechet topology, thus it follows from (REF ) that the functions $f_n={\\mathbb {F}}(\\xi )(n)$ are in the closure, for the Frechet topology on $C_0^{\\infty }(U,{\\mathbb {C}})$ , of $C_0^{\\infty }(U,{\\mathbb {C}})g_n$ , where $g_n(L):=\\zeta \\left(\\frac{1}{2}-\\frac{ 2\\pi i n }{L}\\right)$ is a multiplier of $C_0^{\\infty }(U,{\\mathbb {C}})$ .", "This conclusion holds thanks to the moderate growth of the Riemann zeta function and its derivatives on the critical line.", "Conversely, let $\\xi \\in C_0^{\\infty }(U,L^2)$ be such that each of its Fourier components ${\\mathbb {F}}(\\xi )(n)$ belongs to the closure for the Frechet topology of $C_0^{\\infty }(U,{\\mathbb {C}})$ , of $C_0^{\\infty }(U,{\\mathbb {C}})g_n$ .", "Let $\\rho \\in C_c^{\\infty }([0,\\infty ),[0,1])$ defined to be identically equal to 1 on $[0,1]$ and with support inside $[0,2]$ .", "The functions $\\alpha _k(x):=\\rho ((kx)^{-1})$ ($k>1$ ) fulfill the following three properties $\\alpha _k(x)=0, \\quad \\forall x<(2k)^{-1}$ ,      $\\alpha _k(x)=1, \\quad \\forall x>k^{-1}$ .", "$\\alpha _k\\in C_0^{\\infty }([0,\\infty ))$ .", "For all $m >0$ there exists $C_m<\\infty $ such that $\\vert x^{2m}\\partial _x^m\\alpha _k(x)\\vert \\le C_m k^{-1}$ $\\forall x\\in [0,1], k>1$ 
.", "To justify (3), note that $x^2\\partial _x f((kx)^{-1})=-k^{-1}f^{\\prime }((kx)^{-1})$ and that the derivatives of $\\rho $ are bounded.", "Thus one has $\\vert (x^2\\partial _x)^m\\alpha _k(x)\\vert \\le \\Vert \\rho ^{(m)}\\Vert _{\\infty } k^{-m}$ $\\forall x\\in [0,\\infty ),~ k>1$ which implies (3) by induction on $m$ .", "Thus, when $k\\rightarrow \\infty $ one has $\\alpha _k\\xi \\rightarrow \\xi $ in the Frechet topology of $C_0^{\\infty }(U,L^2)$ .", "This is clear if $0\\notin U$ since then, on any compact subset $K\\subset U$ , all $\\alpha _k$ are identically equal to 1 for $k>(\\min K)^{-1}$ .", "Assume now that $0\\in U$ and let $K=[0,\\epsilon ]\\subset U$ .", "With the notation of (REF ) let us show that $p_K^{(n,m)}((\\alpha _k-1)\\xi )\\rightarrow 0$ when $k\\rightarrow \\infty $ .", "Since $\\alpha _k(x)=1, \\, \\forall x>k^{-1}$ one has, using the finiteness of $p_K^{(n+1,m)}(\\xi )$ $\\max _K L^{-n} \\Vert ((\\alpha _k-1) \\partial _L^m \\xi )(L)\\Vert _{L^2}\\le \\max _{(0,1/k]} L^{-n} \\Vert (\\partial _L^m \\xi )(L)\\Vert _{L^2}\\stackrel{k\\rightarrow \\infty }{\\rightarrow } 0.$ Then one obtains $L^{-n} \\partial _L^m((\\alpha _k-1) \\xi )(L)=L^{-n}((\\alpha _k-1) \\partial _L^m \\xi )(L)+\\sum _1^m L^{-n} {m\\atopwithdelims ()j}(\\partial _L^j \\alpha _k)(L) (\\partial _L^{m-j} \\xi )(L).$ Thus using (3) above and the finiteness of the norms $p_K^{(n+2j,m-j)}(\\xi )$ one derives: $p_K^{(n,m)}((\\alpha _k-1)\\xi )\\rightarrow 0$ when $k\\rightarrow \\infty $ .", "It remains to show that $\\alpha _k \\xi $ belongs to the submodule $C_0^{\\infty }(U,\\overline{\\Sigma {\\mathcal {E}}})$ .", "It is enough to show that for $K\\subset (0,\\infty )$ a compact subset with $\\min K>0$ , one can approximate $\\xi $ by elements of $C_0^{\\infty }(U,\\Sigma {\\mathcal {E}})$ for the norm $p_K^{(0,m)}$ .", "Let $P_N$ be the orthogonal projection in $L^2(C_\\mu )$ on the finite-dimensional subspace determined by the vanishing of all Fourier components ${\\mathbb {F}}(\\xi )(\\ell )$ for any $\\ell , \\vert \\ell \\vert >N$ .", "Given $L\\in K$ and $\\epsilon >0$ there exists $N(L,\\epsilon )<\\infty $ such that $\\Vert (1-P_N)\\partial _L^j\\xi (L)\\Vert <\\epsilon \\quad \\forall j\\le m, \\ N\\ge N(L,\\epsilon ).$ The smoothness of $\\xi $ implies that there exists an open neighborhood $V(L,\\epsilon )$ of $L$ such that (REF ) holds in $V(L,\\epsilon )$ .", "The compactness of $K$ then shows that there exists a finite $N_K$ such that $\\Vert (1-P_N)\\partial _L^j\\xi (L)\\Vert <\\epsilon \\quad \\forall j\\le m, \\ L\\in K, \\ N\\ge N_K.$ It now suffices to show that one can approximate $P_N\\xi $ , for the norm $p_K^{(0,m)}$ , by elements of $C_0^{\\infty }(U,\\Sigma {\\mathcal {E}})$ .", "To achieve this result, we let $L_0\\in K$ and $\\delta _j\\in C_c^{\\infty }({\\mathbb {R}}_+^*)$ , $\\vert j \\vert \\le N$ be such that ${\\mathbb {F}}(\\delta _j)\\left(\\frac{ 2 \\pi j^{\\prime }}{L_0}\\right)={\\left\\lbrace \\begin{array}{ll}1 & j=j^{\\prime }\\\\0 & j\\ne j^{\\prime }\\end{array}\\right.", "}\\quad \\forall j^{\\prime }, \\vert j^{\\prime } \\vert \\le N, \\ \\ \\int _{{\\mathbb {R}}_+^*}u^{1/2}\\delta _j(u)d^*u=0\\quad \\forall j, \\vert j \\vert \\le N.$ One construct $\\delta _j$ starting with a function $h\\in C_c^{\\infty }({\\mathbb {R}}_+^*)$ such that ${\\mathbb {F}}(h)\\left(\\frac{ 2 \\pi j}{L_0}\\right)\\ne 0$ and acting on $h$ by a differential polynomial whose effect is to multiply ${\\mathbb {F}}(h)$ by a polynomial vanishing on all $\\frac{ 2 
\\pi j^{\\prime }}{L_0}$ , $j^{\\prime }\\ne j$ and at $i/2$ .", "By hypothesis each Fourier component ${\\mathbb {F}}(\\xi )(n)$ belongs to the closure in $C_0^{\\infty }(U,{\\mathbb {C}})$ of the multiples of the function $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)$ .", "Thus, given $\\epsilon >0$ one has functions $f_n\\in C_0^{\\infty }(U,{\\mathbb {C}})$ , $\\vert n\\vert \\le N$ such that $\\max _K \\vert \\partial _L^j\\left({\\mathbb {F}}(\\xi )(n)-\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)f_n(L)\\right)\\vert \\le \\epsilon \\quad \\forall j\\le m, \\quad \\vert n\\vert \\le N.$ We now can find a small open neighborhood $V$ of $L_0$ and functions $\\phi _j\\in C^{\\infty }(V)$ , $\\vert j\\vert \\le N$ such that $\\sum \\phi _j(L){\\mathbb {F}}(\\delta _j)\\left(\\frac{ 2 \\pi n}{L}\\right)=L^{\\frac{1}{2}}f_n(L)\\quad \\forall L\\in V.$ This is possible because the determinant of the matrix $M_{n,j}(L)={\\mathbb {F}}(\\delta _j)\\left(\\frac{ 2 \\pi n}{L}\\right)$ is non-zero in a neighborhood of $L_0$ where $M_{n,j}(L_0)$ is the identity matrix.", "The even functions $d_j(u)$ on ${\\mathbb {R}}$ , which agree with $u^{-1/2}\\delta _j(u)$ for $u>0$ , are all in ${{\\mathcal {S}}^{\\rm ev}_0}$ since $\\int _{\\mathbb {R}}d_j(x)dx=2\\int _{{\\mathbb {R}}_+^*}u^{1/2}\\delta _j(u)d^*u=0$ .", "One then has ${\\mathbb {F}}(\\Sigma _\\mu ({\\mathcal {E}}(d_j)))(n)=L^{-\\frac{1}{2}}\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right){\\mathbb {F}}_\\mu (\\delta _j)\\left(\\frac{ 2 \\pi n}{L}\\right)$ by (REF ), and by (REF ) one gets $\\sum \\phi _j(L){\\mathbb {F}}(\\Sigma _\\mu ({\\mathcal {E}}(d_j)))(n)=\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)f_n(L)\\quad \\forall L\\in V.$ One finally covers $K$ by finitely many such open sets $V$ and use a partition of unity subordinated to this covering to obtain smooth functions $\\varphi _\\ell \\in C_c^{\\infty }(0,\\infty )$ , $g_\\ell \\in {{\\mathcal {S}}^{\\rm ev}_0}$ such that the Fourier component of index $n$ , $\\vert n\\vert \\le N$ , of $\\sum \\varphi _\\ell \\Sigma {\\mathcal {E}}(g_\\ell )$ is equal to $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)f_n(L)$ on $K$ .", "This shows that $\\xi $ belongs to the closure of $C_0^{\\infty }(U,\\Sigma {\\mathcal {E}}({{\\mathcal {S}}^{\\rm ev}_0}))\\subset C_0^{\\infty }(U,L^2)$ .", "We recall that the space of global sections $H^0({\\mathcal {T}},{\\mathcal {F}})$ of a sheaf of sets ${\\mathcal {F}}$ in a Grothendieck topos ${\\mathcal {T}}$ is defined to be the set ${\\rm Hom}_{{\\mathcal {T}}}(1,{\\mathcal {F}})$ , where 1 denotes the terminal object of ${\\mathcal {T}}$ .", "For ${\\mathcal {T}}={S}$ and ${\\mathcal {F}}$ a sheaf of sets on $[0,\\infty )$ , 1 assigns to an open set $U\\subset [0,\\infty )$ the single element $*$ , on which ${\\mathbb {N}}^{\\times }$ acts as the identity.", "Thus, we understand an element of ${\\rm Hom}_{{S}}(1,{\\mathcal {F}})$ as a global section $\\xi $ of ${\\mathcal {F}}$ , where ${\\mathcal {F}}$ is viewed as a sheaf on $[0,\\infty )$ invariant under the action of ${\\mathbb {N}}^{\\times }$ .", "With the notations of Proposition REF and for $\\xi \\in {\\rm Hom}_{{S}}(1,{\\mathcal {L}}^2)$ , we write $\\widehat{\\xi }(L,n):={\\mathbb {F}}(\\xi )(n)$ for the (multiplicative) Fourier components of $\\xi $ .", "Then we have Lemma 5.3 The following facts hold The map $\\gamma :H^0({S},{\\mathcal {L}}^2)\\rightarrow C_0^{\\infty }([0,\\infty ),{\\mathbb {C}})\\times C_0^{\\infty }([0,\\infty 
),{\\mathbb {C}})\\qquad \\gamma (\\xi )=\\widehat{\\xi }(\\pm 1)$ is an isomorphism of ${\\mathbb {C}}$ -vector spaces.", "The subspace $\\gamma (H^0({S},\\overline{\\Sigma {\\mathcal {E}}}))$ is the closed ideal (for the Frechet topology on $C_0^{\\infty }([0,\\infty ),{\\mathbb {C}})$ ), generated by the multiplication with the functions $\\zeta \\left(\\frac{1}{2} \\mp \\frac{ 2\\pi i }{L}\\right)$ .", "(i) Let $\\xi \\in {\\rm Hom}_{{S}}(1,{\\mathcal {L}}^2)$ : this is a global section $\\xi \\in C_0^{\\infty }([0,\\infty ),L^2)$ invariant under the action of ${\\mathbb {N}}^{\\times }$ , i.e.", "such that $\\sigma _n(\\xi (nL))=\\xi (L)$ for all pairs $(L,n)$ .", "The Fourier components $\\widehat{\\xi }(L,n)$ of any such section are smooth functions of $L\\in [0,\\infty )$ vanishing at $L=0$ , for $n\\ne 0$ , as well as all their derivatives.", "The equality $\\sigma _n(\\xi (L))=\\xi (L/n)$ entails, for $n>0$ , $\\widehat{\\xi }(L,n)&=L^{-\\frac{1}{2}}\\int _{C_\\mu } \\xi (L)(u)u^{-\\frac{2\\pi i n}{L}}d^*u=L^{-\\frac{1}{2}}\\int _{C_{\\mu }^{1/n}} \\xi (L/n)(u)u^{-\\frac{2\\pi i }{L}}d^*u=\\\\&=n^{-\\frac{1}{2}}\\widehat{\\xi }(L/n,1).$ This shows that the $\\widehat{\\xi }(L,n)$ are uniquely determined, for $n>0$ by the function $\\widehat{\\xi }(L,1)$ and, for $n<0$ , by the function $\\widehat{\\xi }(L,-1)$ .", "With $g(L)=\\widehat{\\xi }(L,0)$ one has: $g(L)=n^{-\\frac{1}{2}}g(L/n)$ for all $n>0$ .", "This implies, since ${\\mathbb {Q}}_+^*$ is dense in ${\\mathbb {R}}_+^*$ and $g$ is assumed to be smooth, that $g$ is proportional to $L^{-\\frac{1}{2}}$ and hence identically 0, since it corresponds to a global section smooth at $0\\in [0,\\infty )$ .", "This argument proves that $ \\gamma $ is injective.", "Let us show that $\\gamma $ is also surjective.", "Given a pair of functions $f_\\pm \\in C_0^{\\infty }([0,\\infty ),{\\mathbb {C}})$ we construct a global section $\\xi \\in H^0({S},{\\mathcal {L}}^2)$ such that $\\gamma (\\xi )=(f_+,f_-)$ .", "One defines $\\xi (L)\\in L^2(C_\\mu )$ by by means of its Fourier components set to be $ \\widehat{\\xi }(L,0):=0$ , and for $n\\ne 0$ by $\\widehat{\\xi }(L,n):=\\vert n\\vert ^{-\\frac{1}{2}}f_{\\rm sign(n)}(L/n).$ Since $f_\\pm (x)$ are of rapid decay for $x\\rightarrow 0$ , $\\sum \\vert \\widehat{\\xi }(L,n)\\vert ^2<\\infty $ , thus $\\xi (L)\\in L^2(C_\\mu )$ .", "All derivatives of $f_\\pm (x)$ are also of rapid decay for $x\\rightarrow 0$ , thus all derivatives $\\partial _L^k(\\xi (L))$ belong to $L^2(C_\\mu )$ and that the $L^2$ -norms $\\Vert \\partial _L^k(\\xi (L))\\Vert $ are of rapid decay for $L\\rightarrow 0$ .", "By construction $\\sigma _n(\\xi (L))=\\xi (L/n)$ , which entails $\\xi \\in H^0({S},{\\mathcal {L}}^2)$ with $\\gamma (\\xi )=(f_+,f_-)$ .", "(ii) Let $\\xi \\in H^0({S},\\overline{\\Sigma {\\mathcal {E}}})$ .", "By Proposition REF (v), the functions $f_\\pm =\\widehat{\\xi }(L,\\pm 1)$ are in the closure, for the Frechet topology on $C_0^{\\infty }([0,\\infty ),{\\mathbb {C}})$ , of the ideal generated by the functions $\\zeta \\left(\\frac{1}{2} \\mp \\frac{ 2\\pi i }{L}\\right)$ .", "Conversely, let $\\xi \\in H^0({S},{\\mathcal {L}}^2)$ and assume that $\\gamma (\\xi )$ is in the closed submodule generated by multiplication with $\\zeta \\left(\\frac{1}{2} \\mp \\frac{ 2\\pi i}{L}\\right)$ .", "The ${\\mathbb {N}}^{\\times }$ -invariance of $\\xi $ implies $\\widehat{\\xi }(L,n)=\\vert n\\vert ^{-\\frac{1}{2}}\\widehat{\\xi }(L/\\vert n\\vert ,{\\rm sign}(n))$ for $n\\ne 0$ .", "Thus the Fourier 
components $\\widehat{\\xi }(L,n)$ belong to the closure in $C_0^{\\infty }(U,{\\mathbb {C}})$ of the multiples of the function $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i n}{L}\\right)$ , then Proposition REF (v) again implies $\\xi \\in H^0({S},\\overline{\\Sigma {\\mathcal {E}}})$ .", "The action of ${\\mathbb {R}}_+^*$ on the sheaf ${\\mathcal {L}}^2$ is given by the action $\\vartheta $ on the Fourier components of its sections $\\xi $ .", "With $\\mu =\\exp L$ , $L\\in (0,\\infty )$ , $n\\in {\\mathbb {N}}^*$ and $\\lambda \\in {\\mathbb {R}}^*_+$ , this is $\\widehat{\\vartheta (\\lambda )\\xi }(L,n)&=L^{-\\frac{1}{2}}\\int _{C_\\mu } \\xi (\\lambda ^{-1}u)u^{-\\frac{2\\pi i n}{L}}d^*u =\\lambda ^{-\\frac{2\\pi i n}{L}} L^{-\\frac{1}{2}}\\int _{C_\\mu } \\xi (v)v^{-\\frac{2\\pi i n}{L}}d^*v=\\\\&=\\lambda ^{-\\frac{2\\pi i n}{L}}\\widehat{\\xi }(L,n)$ The following result explains in particular how the quotient sheaf ${\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}$ on ${S}$ handles eventual multiplicities of critical zeros of the zeta function.", "Theorem 5.4 The induced action of ${\\mathbb {R}}_+^*$ on the global sections $H^0({S},{\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}})$ is canonically isomorphic to the action of ${\\mathbb {R}}_+^*$ , via multiplication with $\\lambda ^{is}$ , on the quotient of the Schwartz space ${\\mathcal {S}}({\\mathbb {R}})$ by the closure of the ideal generated by multiples of $\\zeta \\left(\\frac{1}{2} +is\\right)$ .", "We first show that the canonical map $q:H^0({S},{\\mathcal {L}}^2)\\rightarrow H^0({S},{\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}))$ is surjective.", "Let $\\xi \\in H^0({S},{\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}))$ : as a section of ${\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}$ on $[0,\\infty )$ , there exists an open neighborhood $V=[0,\\epsilon )$ of $0\\in [0,\\infty )$ and a section $\\eta \\in C_0^{\\infty }(V,L^2)$ such that the class of $\\eta $ in $C_0^{\\infty }(V,L^2/\\overline{\\Sigma {\\mathcal {E}}})$ is the restriction of $\\xi $ to $V$ .", "The Fourier components $\\widehat{\\eta }(L,n)$ are meaningful for $L\\in V$ .", "Since $\\xi $ is ${\\mathbb {N}}^{\\times }$ -invariant, for any $n\\in {\\mathbb {N}}^{\\times }$ the class of ${\\mathcal {F}}(V/n,n)(\\eta )$ , with ${\\mathcal {F}}(V/n,n)(\\eta )(L):= \\sigma _n(\\eta (nL))$ (see (REF )) is equal to the class of the restriction of $\\eta $ in $C_0^{\\infty }(V/n,L^2/\\overline{\\Sigma {\\mathcal {E}}})$ .", "We thus obtain $\\eta (L) -{\\mathcal {F}}(V/n,n)(\\eta )\\in C_0^{\\infty }(V/n,\\overline{\\Sigma {\\mathcal {E}}})$ Furthermore, the Fourier components of $\\alpha ={\\mathcal {F}}(V/n,n)(\\eta )$ are given by $\\widehat{\\alpha }(L,k)=n^{\\frac{1}{2}}\\widehat{\\eta }(nL,nk).$ Thus, the functions $\\widehat{\\eta }(L,k)-\\widehat{\\alpha }(L,k)= \\widehat{\\eta }(L,k)-n^{\\frac{1}{2}}\\widehat{\\eta }(nL,nk)$ are the Fourier components of an element in $C_0^{\\infty }(V/n,\\overline{\\Sigma {\\mathcal {E}}})$ .", "By Proposition REF (v), these components are in the closure in $C_0^{\\infty }(V/n,{\\mathbb {C}})$ of the multiples of the function $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i k}{L}\\right)$ .", "For $k=1$ the function $\\widehat{\\eta }(L,1)-n^{\\frac{1}{2}}\\widehat{\\eta }(nL,n)$ is in the closure in $C_0^{\\infty }(V/n,{\\mathbb {C}})$ of the multiples of the function $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi i }{L}\\right)$ .", "This implies that $\\widehat{\\eta 
}(L,n)-n^{-\\frac{1}{2}}\\widehat{\\eta }(L/n,1)$ is in the closure in $C_0^{\\infty }(V,{\\mathbb {C}})$ of the multiples of the function $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi in }{L}\\right)$ .", "For $k=-1$ one obtains a similar result for the Fourier components $\\widehat{\\eta }(L,n)$ , $n<0$ .", "Thus, again by Proposition REF (v), one knows that the class of $\\eta $ in $C_0^{\\infty }(V,L^2/\\overline{\\Sigma {\\mathcal {E}}}))$ s not altered if one replaces $\\eta $ with $\\eta _1\\in C_0^{\\infty }(V,L^2)$ $\\widehat{\\eta }_1(L,n):=\\vert n\\vert ^{-\\frac{1}{2}}\\eta (L/\\vert n\\vert ,{\\rm sign}(n)).$ Next step is to extend the functions $\\eta (L,\\pm 1)\\in C_0^{\\infty }(V,{\\mathbb {C}})$ to $f_\\pm \\in C_0^{\\infty }([0,\\infty ),{\\mathbb {C}})$ fulfilling the following property.", "For any open set $U\\subset [0,\\infty )$ and a section $\\beta \\in C_0^{\\infty }(U,L^2)$ , with the class of $\\beta $ in $C_0^{\\infty }(U,L^2/\\overline{\\Sigma {\\mathcal {E}}}))$ being the restriction of $\\xi $ to $U$ , the functions $\\widehat{\\beta }(L,\\pm 1)-f_\\pm (L)$ belong to the closure in $C_0^{\\infty }(U,{\\mathbb {C}})$ of the multiples of the function $\\zeta \\left(\\frac{1}{2}\\mp \\frac{2\\pi i }{L}\\right)$ .", "To construct $f_\\pm $ one considers the sheaf ${\\mathcal {G}}_\\pm $ which is the quotient of the sheaf of $C_0^{\\infty }([0,\\infty ),{\\mathbb {C}})$ functions by the closure of the ideal subsheaf generated by the multiples of the function $\\zeta \\left(\\frac{1}{2}\\mp \\frac{2\\pi i }{L}\\right)$ .", "Since the latter is a module over the sheaf of $C^{\\infty }$ functions, it is a fine sheaf, thus a global section of ${\\mathcal {G}}_\\pm $ can be lifted to a function.", "By Proposition REF (v), the Fourier components $\\widehat{\\xi }_j(L,\\pm 1)$ of local sections $\\xi _j$ of $L^2$ representing $\\xi $ define a global section of ${\\mathcal {G}}_\\pm $ .", "The functions $f_\\pm $ are obtained by lifting these sections.", "By appealing to Lemma REF , we let $\\phi \\in H^0({S},L^2)$ to be the unique global section such that $\\gamma (\\phi )=(f_+,f_-)$ .", "Then we show that $q(\\phi )=\\xi $ .", "We have already proven that the restrictions to $V=[0,\\epsilon )$ are the same.", "Thus it is enough to show that given $L_0>0$ and a lift $\\xi _0\\in C_0^{\\infty }(U,L^2)$ of $\\xi $ in a small open interval $U$ containing $L_0$ , the difference $\\delta =\\phi -\\xi _0$ is a section of $\\overline{\\Sigma {\\mathcal {E}}}$ .", "Again by Proposition REF (v), it suffices to show that the Fourier components $\\widehat{\\delta }(L,n)$ are in the closure of the ideal generated by multiples of $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi in }{L}\\right)$ .", "The ${\\mathbb {N}}^{\\times }$ -invariance of $\\xi $ shows that ${\\mathcal {F}}(U/n,n)(\\xi _0)$ (see (REF )) is a lift of $\\xi $ in $U/n$ .", "Thus by the defining properties of the functions $f_\\pm $ one has $\\widehat{{\\mathcal {F}}(U/n,n)(\\xi _0)}(\\pm 1)-f_\\pm \\in \\overline{C^{\\infty }(U,{\\mathbb {C}})\\zeta _\\pm }, \\qquad \\text{for}\\quad \\zeta _\\pm (L)=\\zeta \\left(\\frac{1}{2}\\mp \\frac{2\\pi i }{L}\\right).$ With a similar argument and using the invariance of $\\phi $ under the action of ${\\mathcal {F}}(U/n,n)$ , one obtains that $\\widehat{\\delta }(n)$ is in the closure of the ideal generated by the multiples of $\\zeta \\left(\\frac{1}{2}-\\frac{2\\pi in }{L}\\right)$ .", "We have thus proved that $q:H^0({S},{\\mathcal {L}}^2)\\rightarrow H^0({S},{\\mathcal 
{L}}^2/\\overline{\\Sigma {\\mathcal {E}}}))$ is surjective.", "One then obtains the exact sequence $0\\rightarrow H^0({S},\\overline{\\Sigma {\\mathcal {E}}}))\\rightarrow H^0({S},{\\mathcal {L}}^2)\\rightarrow H^0({S},{\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}))\\rightarrow 0.$ This sequence is equivariant for the action (REF ) of $\\vartheta $ of ${\\mathbb {R}}_+^*$ on the bundle $L^2$ .", "For $h\\in L^1({\\mathbb {R}}_+^*,d^*u)$ one has $\\widehat{(\\vartheta (h)\\xi )}(L,n)={\\mathbb {F}}(h)\\left(\\frac{2\\pi n}{L}\\right) \\widehat{\\xi }(L,n).$ To obtain the required isomorphism between the two spectral realizations, one uses the isomorphism (REF ) of Lemma REF .", "Denote for short $(C_0^{\\infty })^2=C_0^{\\infty }([0,\\infty ))\\times C_0^{\\infty }([0,\\infty ))$ .", "One maps the Schwartz space ${\\mathcal {S}}({\\mathbb {R}})$ to $(C_0^{\\infty })^2$ by $\\Phi (f)=(\\Phi _+(f),\\Phi _-(f)), \\qquad \\Phi _\\pm (f)(L):=f\\left(\\pm \\frac{2\\pi }{L}\\right).$ $ \\Phi $ is well defined since all derivatives of $\\Phi _\\pm (f)(L)$ tend to 0 when $L\\rightarrow 0$ (any function $f\\in {\\mathcal {S}}({\\mathbb {R}})$ is of rapid decay as well as all its derivatives).", "The exact sequence (REF ), together with Lemma REF , then gives an induced isomorphism $\\gamma : H^0({S},{\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}))\\simeq (C_0^{\\infty })^2/ \\left(\\overline{C_0^{\\infty }\\zeta _+}\\times \\overline{C_0^{\\infty }\\zeta _-}\\right).$ In turn, the map $ \\Phi $ induces a morphism $\\tilde{\\Phi }:{\\mathcal {S}}({\\mathbb {R}})/({\\mathcal {S}}({\\mathbb {R}})\\zeta ) \\rightarrow (C_0^{\\infty })^2/ \\left(\\overline{C_0^{\\infty }\\zeta _+}\\times \\overline{C_0^{\\infty }\\zeta _-}\\right).$ By (REF ) this morphism is equivariant for the action of ${\\mathbb {R}}_+^*$ .", "The map $ \\Phi $ is not an isomorphism since elements of its range have finite limits at $\\infty $ .", "However it is injective and its range contains all elements of $(C_0^{\\infty })^2$ which have compact support.", "Since $\\zeta _\\pm (L)=\\zeta \\left(\\frac{1}{2}\\mp \\frac{2\\pi i }{L}\\right)$ tends to a finite non-zero limit when $L\\rightarrow 0$ , $\\tilde{\\Phi }$ is an isomorphism.", "Remark 5.5 By a Theorem of Whitney (see [12], Corollary 1.7), the closure of the ideal of multiples of $\\zeta \\left(\\frac{1}{2} +is\\right)$ in ${\\mathcal {S}}({\\mathbb {R}})$ is the subspace of those $f\\in {\\mathcal {S}}({\\mathbb {R}})$ which vanish of the same order as $\\zeta $ at every (critical) zero $s\\in Z$ .", "Thus if any such zero is a multiple zero of order $m>1$ , one finds that the action of ${\\mathbb {R}}_+^*$ on the global sections of the quotient sheaf ${\\mathcal {L}}^2/\\overline{\\Sigma {\\mathcal {E}}}$ admits a non-trivial Jordan decomposition of the form $\\vartheta (\\lambda )\\xi =\\lambda ^{is}(\\xi +N(\\lambda )\\xi ), \\ \\ $ with $N(\\lambda )^m=0$ and $(1+N(u))(1+N(v))=1+N(uv)$ for all $u,v\\in {\\mathbb {R}}_+^*$ ." ] ]
2207.10419
[ [ "Deep Diffusion Models for Seismic Processing" ], [ "Abstract Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing.", "These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces.", "In recent years, there has been a remarkable increase in machine-learning-based solutions that have addressed the aforementioned issues.", "In particular, deep-learning practitioners have usually relied on heavily fine-tuned, customized discriminative algorithms.", "Although these methods can provide solid results, they seem to lack semantic understanding of the provided data.", "Motivated by this limitation, in this work, we employ a generative solution, as it can explicitly model complex data distributions and hence yield a better decision-making process.", "In particular, we introduce diffusion models for three seismic applications: demultiple, denoising and interpolation.", "To that end, we run experiments on synthetic and on real data, and we compare the diffusion performance with standardized algorithms.", "We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door to future research to integrate generative models in seismic workflows." ], [ "Introduction", "Deep generative learning has become an important research area in the machine learning community, and it is becoming increasingly relevant in many applications.", "Namely, generative models are widely used for image synthesis and various image-processing tasks such as editing, interpolation, colourization, denoising, and super-resolution.", "Recently, diffusion probabilistic models [1], [2] have emerged as a novel, powerful class of generative learning methods.", "In a short period of time, these models have achieved surprisingly high performance [3], [4], [5], [6], and have even surpassed state-of-the-art algorithms like generative adversarial networks [7] (GANs) and variational autoencoders [8] (VAEs).", "At the same time, the geophysics community has been actively adopting deep-learning techniques to boost and automate numerous seismic interpretation tasks including fault picking [9], [10], salt delineation [11], [12], well-to-seismic tie [13], [14], horizon tracking [15], [16], multiple removal [17], [18], etc.", "Nonetheless, to the best of our knowledge, there has not yet been any work exploring the application of diffusion models to seismic data and thus studying their potential advantages over already established deep-learning approaches in this domain.", "Driven by this motivation, in this work, we study the applicability of diffusion models for seismic processing.", "Seismic imaging is essential to discover and characterize economically worthwhile geological reservoirs, such as hydrocarbon accumulations, and to manage the extraction of the resources stored in them.", "Unfortunately, recorded seismic signals at the surface are inevitably contaminated by coherent and incoherent noise of various kinds.", "The process of removing the noise, while retaining the primary signal, is called seismic processing.", "In this paper, we focus on three relevant, well-known seismic processing tasks: demultiple, denoising and interpolation.", "Demultiple and denoising both remove unwanted signals from the seismic section; the first gets rid of coherent noise caused by reverberations of waves between strong 
reflectors, whereas the latter removes incoherent noise of miscellaneous causes.", "The goal of interpolation is to fill-in gaps in the image caused by limitations during acquisition.", "Although at the first glance the nature of these problems might look different or unrelated, it is possible to formulate a common framework, in which they can be solved.", "This is feasible, due to the fact that the diffusion models, like most of generative models, learn the density distribution of the input data.", "In other words, unlike discriminative approaches which draw boundaries in the data space, the generative approaches model how data is placed throughout the space [19].", "As a result, they are powerful algorithms that can be independently applied to a large diversity of problems." ], [ "Background", "Generative models for modelling estimate the marginal distribution, denoted as $p(x)$ , over observable variables $x$ , e.g., images.", "In the literature, we can find different formulations that tackle this problem such as autoregressive generative models, latent variable models, flow-based models, and energy-based models." ], [ "Latent Variable Models", "The main idea of this type of models is to utilize latent variables $z$ to formulate the joint distribution $p(x,z)$ , which describes the marginal distribution as a function of learnable parameters ${\\theta }$ (likelihood).", "Mathematically, it can be written as: $\\begin{split}& z \\sim p_{\\theta }(z) \\\\& x \\sim p_{\\theta }(x|z) \\\\p_{\\theta }(x) = \\int _z p_{\\theta }(x &,z) = \\int _z p_{\\theta }(x|z) p_{\\theta }(z).\\end{split}$ Unfortunately, for most of the problems we do not have access to the true distribution $p(x)$ and hence, we need to fit our model to some empirically observed subset.", "One solution is to use Monte Carlo sampling to approximate the integral over $z$ to try to estimate the model parameters $\\theta $ .", "Nonetheless, this approach does not scale to high dimensions of $z$ and consequently, we will suffer from issues associated with the curse of dimensionality.", "Another solution is to use variational inference, e.g., VAE [8].", "In particular, the lower bound of the log-likelihood function, called the Evidence Lower BOund (ELBO).", "The ELBO provides a joint optimization objective, which simultaneously updates the variational posterior $q_{\\phi }(z|x)$ and likelihood model $p_{\\theta }(x|z)$ .", "The objective is written as: $\\begin{split}\\mathrm {log} \\, p(x) & \\ge \\mathbb {E}_{z\\sim q_{\\phi }(z|x)}[\\mathrm {log} \\, p_{\\theta }(x|z)] \\\\& -\\mathrm {KL}[q_{\\phi }(z|x)||p(z)],\\end{split}$ where KL stands for the Kullback-Leibler divergence.", "Figure: Scheme of the different latent variable models.", "(Top) Single latent variable model.", "(Center) Hierarchical latent variable model.", "(Bottom) Diffusion model." 
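As a concrete illustration of the single-latent objective above, the following Python sketch estimates the ELBO for one data point with a Gaussian variational posterior and a standard normal prior, using the closed-form KL term and a Monte Carlo estimate of the reconstruction term. The toy `encoder` and `decoder` in the usage lines are placeholders of our own, not models used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo_estimate(x, encoder, decoder, n_samples=16):
    """ELBO(x) = E_{z ~ q_phi(z|x)}[log p_theta(x|z)] - KL(q_phi(z|x) || p(z))."""
    mu, logvar = encoder(x)
    std = np.exp(0.5 * logvar)
    recon = 0.0
    for _ in range(n_samples):
        z = mu + std * rng.standard_normal(mu.shape)   # reparameterized sample from q_phi(z|x)
        recon += decoder(x, z)
    return recon / n_samples - gaussian_kl(mu, logvar)

# Toy linear encoder/decoder, for illustration only.
latent_dim, data_dim = 2, 4
W = rng.standard_normal((data_dim, latent_dim))
encoder = lambda x: (x @ W, np.zeros(latent_dim))          # q_phi(z|x) = N(Wx, I)
decoder = lambda x, z: -0.5 * np.sum((x - z @ W.T)**2)     # log p_theta(x|z), Gaussian up to a constant
print(elbo_estimate(rng.standard_normal(data_dim), encoder, decoder))
```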
], [ "Hierarchical Latent Variable Models", "Once a single stochastic layer has been defined, it is straightforward to derive hierarchical extensions.", "For example, let us consider a latent variable model with two latent variables $z_1$ and $z_2$ .", "We can define the joint distribution $p(x,z_1,z_2)$ and marginalize out the latent variables: $\begin{split}p_{\theta }(x) = &\int _{z_1} \int _{z_2} p_{\theta }(x,z_1,z_2) \\= &\int _{z_1} \int _{z_2} p_{\theta }(x|z_1)p_{\theta }(z_1|z_2)p_{\theta }(z_2).\end{split}$ Similar to the single latent model, we can derive the variational approximation (ELBO) to the true posterior as: $\begin{split}\mathrm {log} \, p(x) & \ge \mathbb {E}_{z_1\sim q_{\phi }(z_1|z_2)}[\mathrm {log} \, p_{\theta }(x|z_1)] \\& -\mathrm {KL}[q_{\phi }(z_1|x)||p_{\theta }(z_1|x)] \\& -\mathrm {KL}[q_{\phi }(z_2|z_1)||p(z_2)].\end{split}$" ], [ "Diffusion Models", "Diffusion models belong to the latent variable family as well.", "In fact, we can think of them as a specific realization of a hierarchical latent variable model, where the inference model (recall that the inference model relates a set of observable variables to a set of latent variables, e.g., $q(z|x)$ ) does not have learnable parameters.", "Instead, it is constructed so that the final latent distribution $q(x_T)$ converges to a standard Gaussian (where $T$ is the number of latent variables).", "The objective function of diffusion models is written as: ${\begin{array}{c}\mathrm {log} \, p(x) \ge \\\mathbb {E}_{x_{1:T} \sim q(x_{1:T}|x_0)} [\mathrm {KL} (q(x_{T}|x_0)||p_{\theta }(x_{T})) \\+ \sum _{t=2}^{T} \mathrm {KL} (q(x_{t-1}|x_t,x_0)||p_{\theta }(x_{t-1}|x_t))\\- \mathrm {log} \, p_{\theta }(x_{0}|x_1)].\end{array}}$ Under certain assumptions, this objective can be further simplified, leading to the following approximation: $\begin{split}\mathrm {log} \, p(x) &\gtrsim \sum _{t=2}^{T} \mathrm {KL} (q(x_{t-1}|x_t,x_0)||p_{\theta }(x_{t-1}|x_t)) \\& = \sum _{t=2}^{T} || \epsilon - \epsilon _{\theta }(\sqrt{\bar{\alpha }_t}x_0+\sqrt{1-\bar{\alpha }_t}\epsilon ,t)||^2.\end{split}$ Note that we drop the expectation for clarity.", "The exact derivation can be found in [2]." ], [ "Methodology", "In this section, we provide a brief overview of the diffusion model formulation.", "Note that we do not aim at covering the entire derivation.", "For a more in-depth, detailed mathematical description, we refer the reader to [2]." ], [ "Background", "On a high level, diffusion models consist of two parts: a forward diffusion process and a parametrized reverse process.", "The forward diffusion part can be described as a process where Gaussian noise $\epsilon $ is gradually applied to the input image $x_0$ until the image becomes indistinguishable from a sample of a normal distribution $x_T \sim \mathcal {N}(0,\mathrm {I})$ ($T$ is the number of transformation steps).", "That is to say, at each step of this process, the noise is incrementally added to the data, $x_0 \xrightarrow{} x_1 \xrightarrow{} ... 
\xrightarrow{} x_T$ .", "This procedure, together with the Markov assumption (i.e., the memoryless property of a stochastic process), leads to a simple parameterization of the forward process, expressed as: $\begin{split}q(x_{1:T}|x_0) & = \prod _{t = 1}^{T} q(x_t|x_{t-1}) \\& = \prod _{t = 1}^{T} \mathcal {N}(x_t; \sqrt{1-\beta _t}x_{t-1},\beta _t \mathrm {I}),\end{split}$ where the variable $\beta _t$ defines a fixed variance schedule, chosen such that $q(x_T|x_0) \approx \mathcal {N}(0,\mathrm {I})$ .", "Figure: Denoising diffusion process. While the Markov chain of the forward diffusion gradually adds noise to the input (dashed arrows), the reverse process removes it stepwise (solid arrows).", "The second part, the parametrized reverse process, represents the data synthesis.", "Thus, it undoes the forward diffusion process and performs iterative denoising.", "To that end, the reverse process is trained to generate data by converting random noise into realistic data.", "Formally, this generative process is defined as a stochastic process which iteratively removes noise from the input images using deep neural networks.", "Starting from pure Gaussian noise $p(x_T) = \mathcal {N}(x_T; 0,\mathrm {I})$ , the model learns the joint distribution $p_{\theta }(x_{0:T})$ as: $\begin{split}p_{\theta }(x_{0:T}) & = p(x_T) \prod _{t = 1}^{T} p_{\theta }(x_{t-1}|x_t) \\& = p(x_T) \prod _{t = 1}^{T} \mathcal {N}(x_{t-1};\mu _{\theta }(x_t,t), \Sigma _{\theta }(x_t,t)),\end{split}$ where the time-dependent parameters $\theta $ of the Gaussian transitions are learned.", "Note in particular that the Markov formulation asserts that a given reverse diffusion transition distribution depends only on the previous timestep." ], [ "Training", "A diffusion model is trained by finding the reverse Markov transitions that maximize the likelihood of the training data.", "In practice, this process consists of optimizing the variational lower bound on the log-likelihood.", "Below is the simplified expression derived in [2]: $\begin{split}-\mathrm {log} \, p(x) &\lesssim \sum _{t=2}^{T} || \epsilon - \epsilon _{\theta }(\sqrt{\bar{\alpha }_t}x_0+\sqrt{1-\bar{\alpha }_t}\epsilon ,t)||^2,\end{split}$ where $\alpha _t = 1- \beta _t \; \mathrm {and} \; \bar{\alpha }_t = \prod _{i = 1}^{t} \alpha _i.$ Note that, ultimately, the deep neural network learns to predict the noise component $\epsilon $ at any given timestep."
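The forward process and the simplified training objective above translate almost line by line into code. The following PyTorch sketch is ours (the schedule endpoints, the number of steps, and the `eps_model` interface are assumptions, not specifications from the paper); it samples $x_t \sim q(x_t|x_0)$ in closed form and computes the noise-prediction loss.

```python
import torch

T = 1000                                          # number of diffusion steps (illustrative)
betas = torch.linspace(1e-4, 0.02, T)             # fixed variance schedule beta_t
alphas = 1.0 - betas                              # alpha_t = 1 - beta_t
alpha_bars = torch.cumprod(alphas, dim=0)         # alpha_bar_t = prod_{i<=t} alpha_i

def q_sample(x0, t, eps):
    """Closed-form sample x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps.
    x0 is a (batch, channels, height, width) tensor."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

def training_loss(eps_model, x0):
    """Simplified objective: the network eps_model(x_t, t) predicts the injected noise."""
    t = torch.randint(0, T, (x0.shape[0],))       # random timestep per sample
    eps = torch.randn_like(x0)
    eps_pred = eps_model(q_sample(x0, t, eps), t)
    return ((eps - eps_pred) ** 2).mean()
```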
], [ "Experiments", "In this section, we validate the flexibility of diffusion models for different seismic tasks.", "In particular, we analyse three case studies: demultiple, denoising and interpolation.", "To do that, we present an end-to-end deep-learning approach that can deal (separately) with demultiple, denoising and interpolation scenarios.", "Furthermore, we benchmark the results with alternative paradigms that are currently employed in both academia and industry domains.", "The implementation details are as following: In all our experiment, we train the diffusion model for 200,000 iterations with a batch size of 32; we set $\\beta $ to follow a linear schedule, and we use a depth of 2000 timesteps for both the forward process (see Equation REF ) and the reverse denoising process (see Equation REF ).", "Figure: In each reverse step tt, the model ϵ θ \\epsilon _{\\theta } is fed with the semi-denoised multiple-free image x t x_t and the multiple-infested input.As an output, the network generates the image x t-1 x_{t-1}, which should have less noise and no multiples.Figure: This figure displays two cropped gathers that contain multiples (input), and the results after applying the demultiple algorithms.Moreover, we plot the difference between the input and the output to check the content that has been removed.Note that we apply a scaling factor of 3 in the differences to stress the changes." ], [ "Architecture", "Image diffusion models commonly employ a time-conditional U-net [20], parametrized as $\\epsilon _{\\theta }(\\circ , t)$ , as a neural backbone.", "This architecture was initially introduced in [2], where the main motivation for this topology choice was the requirement for the model to have identical input and output dimensionality.", "The architecture consists of a stack of residual layers and downsampling convolutions, followed by a stack of residual layers with upsampling convolutions; skip connections connect the layers with the same spatial size.", "Furthermore, it uses a global attention layer with a single head to add a projection of the timestep embedding into each residual block." 
], [ "Demultiple", "Primary seismic reflections are events which have reflected only once, and they are employed to describe the subsurface interfaces.", "Multiples, on the contrary, are events which appear when the signal has not taken a direct path from the source to the receiver after reflecting on a subsurface boundary.", "The presence of multiples in a recorded dataset can trigger erroneous interpretations, since they do not only interfere with the analysis in the post-stack domain, e.g., stratigraphic interpretation, but also with the analysis in the pre-stack domain, e.g., amplitude variation with offset inversion.", "Thereby, the demultiple process plays a crucial role in any seismic processing workflow.", "In this first experiment, we follow the approach from [21], [18], and generate synthetic pairs of multiple-infested and multiple-free gathers.", "This data setup allows us to train the model in a supervised manner and therefore, we can frame the demultiple problem as an image-to-image transformation task, where the network learns to remove the multiples without removing primary energy.", "As in [18], the training dataset is designed to include a rich amount of features present in real datasets, to maximize transferability to real case uses.", "To that end, we employ as a baseline a conditional diffusion models proposed by [22].", "More specifically, we condition our model by concatenating the semi-denoised multiple-free image $x_t$ with the multiple-infested input (see Figure REF ).", "Ideally, the network should return an improved semi-denoised multiple-free gather $x_{t-1}$ that after $T$ reverse steps should converge into a noise- and multiple-free gather $x_0$ .", "Once the model is trained, it is crucial to assess the inference capabilities of the network when working on real data, i.e., generalizability.", "Nonetheless, this is not a granted property in deep-learning models due to the distribution gap between different datasets, e.g., the gap between synthetic and real datasets [23].", "In our experiments, we test the diffusion approach on the dataset from the Volve field made available under Equinor Open Data Licence.", "Furthermore, we compare the outcomes with two other multiple-attenuation methodologies: one based on Radon-transform [24] and one based on deep learning [18].", "Figure REF shows an example of such a comparison, where we can observe how the diffusion solution offers competitive results, despite minimal hyperparameter tuning involved.", "For additional results, see Figure REF in the Appendix.", "Figure: Mean and standard deviation of SSIM and SNR metrics calculated on 500 random denoised images.Results from diffusion and FX-Decon scenarios.Figure: This figure shows an example of denoising.The first row contains the original image and the input image (original with noise).The second row presents the diffusion and the FX-Decon results.Finally, the third and four rows display the difference between the results and the original and the input data, respectively.Figure: This figure shows an example of interpolation.From left to right: the original image, the mask, the input image (original with mask), the diffusion result and its difference with respect to the original image." 
], [ "Denoising", "Incoherent noise can be caused by superposition of numerous unwanted signals from various sources such as ocean waves, wind and electrical instrument noise among others.", "Removing such incoherent noise can improve the overall signal-to-noise ratio and, consequently, increase the certainty of interpretation.", "Traditional approaches can be subdivided into two main categories: the prediction filtering methods and domain transform methods.", "The first type assumes linearity and predictability of the signal, and constructs a predictive filter to suppress the noise [25], [26].", "These methods have been widely adopted by the industry due to their efficiency, although they tend to under-suppress noise and occasionally suffer from signal leakage [27].", "The second type of methods uses mathematical transformations, e.g., Fourier transform [28], wavelet transform [29], curvelet transform [30], [31], to steer the seismic data into domains, where seismic signals and noise can be easier separated and then leverage the sparse characteristics of seismic data.", "This approach, however, often requires a time-consuming transform coefficient tuning.", "To cope with this drawback, a new trend based on deep-learning algorithms has emerged, resulting in optimized solutions that remove incoherent noise from seismic data as well as speed up the inference time [32], [33].", "Similar to the demultiple scenario, we create pairs of images to train our diffusion model.", "Nonetheless, this time, the objective is to eliminate undesired uncorrelated noise, while preserving the inherent characteristics of the data.", "To that end, the pairs of training data consist of a real image and their noisy version.", "To create the noisy images, we synthetically add Gaussian noise to the original real images with a variability of the 50% of their energy.", "For this second case of study, we train on 1994 BP [34] dataset, from which we extract random patches (from different shot gathers) that neither overlap among each other, nor have more than 40% of their content equal to 0.", "In this fashion, we try to guarantee certain level of variety in the training data.", "For the testing set, we apply the same conditions as for training.", "Additionally, we employ a second dataset (Model94 [34]) to evaluate the generalization capacity of our system.", "As for comparison, we use a spectral filtering technique based on the Fourier transform, namely a complex Wiener prediction filter called FX-Decon [25], [26], which is dedicated for signal extraction and non-coherent noise suppression in the frequency domain.", "To assess the results, we use structural similarity index (SSIM) and signal-to-noise ratio (SNR) as quantitative metrics.", "Figure REF displays them for each configuration, i.e., different datasets and methods, and we can observe how the diffusion model provides the best scores when we test on data coming from the same dataset as the one used for training.", "However, as expected, it has a drastic drop when we test on a new dataset, e.g., Model94.", "This phenomenon is mainly caused by the distribution gap between different datasets.", "On the other hand, FX-Decon achieves similar performance on both datasets (no drop), as this method does not involve any learning, i.e., data fitting.", "Finally, Figure REF illustrates a denoising example for both algorithms.", "The difference between the outputs and the original data (third row in Figure REF ) allows us to see that diffusion model removes some coherent 
signal, while FX-Decon does not.", "Ideally, this should be corrected, but we leave this improvement for future work.", "Nevertheless, overall, the diffusion approach leads to less noisy outputs, as can be noticed in the output image.", "For additional results, see Figure REF in the Appendix." ], [ "Interpolation", "Seismic data processing algorithms greatly benefit from regularly sampled and reliable data.", "However, the acquired data is rarely flawless, i.e., complete shot gathers without missing traces.", "Frequently, the reason is acquisition constraints such as geophone issues, topography, and economic limitations.", "As a consequence, interpolation techniques are fundamental to most seismic processing systems.", "Figure: Mean and standard deviation of SSIM and SNR metrics calculated on 500 random interpolated images. Results from the diffusion and U-net scenarios.", "In this last case study, we evaluate the capacity of our diffusion model to interpolate missing traces.", "To that end, we follow the evaluation methodology introduced by [35], namely, we consider the scenario with irregular missing traces and with the level of decimation set to 50% (see Figure REF ).", "Regarding the data for this experiment, we repeat the setup presented in the denoising section, using the 1994 BP dataset for training and testing, and Model94 for testing on a new dataset.", "Finally, to have a baseline to compare with, we implement the so-called “standard” topology from [35], which is essentially a U-net-like network.", "Figure REF shows the quantitative evaluation of the diffusion approach and of the U-net baseline.", "Although the results from the latter are superior, the improvement could be considered marginal given the small metric differences.", "Furthermore, both algorithms seem to struggle when inferring on unseen datasets.", "On the other hand, beyond the quantitative results, the potential of diffusion models is arguably higher than that of discriminative models, as the former are generative models and can therefore capture richer data properties.", "For additional results, see Figure REF in the Appendix."
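To illustrate how the irregular 50% trace decimation used in this experiment can be emulated, the following NumPy sketch masks random traces of a 2-D gather; the time-by-trace array layout and the zero-filling convention are our assumptions.

```python
import numpy as np

def decimate_traces(gather, fraction=0.5, seed=0):
    """Zero out a random `fraction` of the traces (columns) of a 2-D shot
    gather (time samples x traces) to emulate irregularly missing traces.
    Returns the masked gather and the binary mask."""
    rng = np.random.default_rng(seed)
    n_traces = gather.shape[1]
    missing = rng.choice(n_traces, size=int(fraction * n_traces), replace=False)
    mask = np.ones_like(gather)
    mask[:, missing] = 0.0
    return gather * mask, mask

# Example: a random 512-sample, 64-trace gather with half of the traces removed.
masked, mask = decimate_traces(np.random.randn(512, 64), fraction=0.5)
```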
], [ "Discussion", "In this work, we propose a generative framework based on diffusion models to address several seismic tasks.", "In particular, our case studies include demultiple, denoising and interpolation.", "To solve them, we define the problem as an image-to-image transformation, where we have an input image that requires certain modifications so that, the output result belongs to the target domain.", "For example, in the demultiple scenario, given a multiple-infested gather (input domain), our diffusion approach has to identify the multiples and cancel them out, leading to a multiple-free output gather (target domain).", "The results of our experimental evaluations are fairly encouraging, as they show competitive performance, when comparing with standardized, customized algorithms.", "As we pointed out before, diffusion models for seismic data is an unexplored field to date and hence, the ultimate goal of this project is not to outperform these current algorithms in their respective areas, but to provide a solid analysis of the applicability and flexibility of this novel framework.", "Therefore, the main success of our implementation can be regarded as proof of concept that can be used to adopt generative models, namely diffusion models, in the geoscience community.", "We believe that our work can help to lay the foundation for future research that can benefit both academia and industry." ], [ "Acknowledgement", "This work was developed in the Fraunhofer Cluster of Excellence Cognitive Internet Technologies.", "The authors would like to acknowledge the members of the Fraunhofer ITWM DLSeis consortium (http://dlseis.org) for their financial support.", "We are also grateful to Equinor and Volve Licence partners for releasing Volve seismic field data under an Equinor Open Data Licence." ], [ "More Results", "We provide additional results, where we can visualize the evolution of the reverse process for all the aforementioned case studies (see Figure REF , Figure REF and Figure REF ).", "Note that the subindexes of the $x$ indicate the output of an intermediate step during the inference process, being $x_{1999}$ random noise and $x_{0}$ the final output of the model.", "Figure: This figure displays demultiple results at different intermediate steps for the reverse process.Note that the first two rows show synthetic data examples, while the last two from the Volve dataset.Figure: This figure displays denoising results at different intermediate steps for the reverse process.Note that the examples belong to the Model94 dataset.Figure: This figure displays interpolation results at different intermediate steps for the reverse process.Note that the examples belong to the Model94 dataset." ] ]
2207.10451
[ [ "Computing the homology functor on semi-algebraic maps and diagrams" ], [ "Abstract Developing an algorithm for computing the Betti numbers of semi-algebraic sets with singly exponential complexity has been a holy grail in algorithmic semi-algebraic geometry and only partial results are known.", "In this paper we consider the more general problem of computing the image under the homology functor of a semi-algebraic map $f:X \\rightarrow Y$ between closed and bounded semi-algebraic sets.", "For every fixed $\\ell \\geq 0$ we give an algorithm with singly exponential complexity that computes bases of the homology groups $\\mathrm{H}_i(X), \\mathrm{H}_i(Y)$ (with rational coefficients) and a matrix with respect to these bases of the induced linear maps $\\mathrm{H}_i(f):\\mathrm{H}_i(X) \\rightarrow \\mathrm{H}_i(Y), 0 \\leq i \\leq \\ell$.", "We generalize this algorithm to more general (zigzag) diagrams of maps between closed and bounded semi-algebraic sets and give a singly exponential algorithm for computing the homology functors on such diagrams.", "This allows us to give an algorithm with singly exponential complexity for computing barcodes of semi-algebraic zigzag persistent homology in small dimensions." ], [ "Introduction", "Let $\\mathrm {R}$ be a real closed field and $ an ordered domain contained in $ R$.$ The problem of effective computation of topological properties of semi-algebraic subsets of $\\mathrm {R}^k$ has a long history.", "Semi-algebraic subsets of $\\mathrm {R}^k$ are subsets defined by first-order formulas in the language of ordered fields (with parameters in $\\mathrm {R}$ ).", "Since the first-order theory of real closed fields admits quantifier-elimination, we can assume that each semi-algebraic subset $S \\subset \\mathrm {R}^k$ is defined by some quantifier-free formula $\\phi $ .", "A quantifier-free formula $\\phi (X_1,\\ldots ,X_k)$ in the language of ordered fields with parameters in $, is a formula with atoms of the form $ P = 0, P > 0, P < 0$,$ P X1,...,Xk]$.$ Semi-algebraic subsets of $\\mathrm {R}^k$ have tame topology.", "In particular, closed and bounded semi-algebraic subsets of $\\mathrm {R}^k$ are semi-algebraically triangulable (see for example [3]).", "This means that there exists a finite simplicial complex $K$ , whose geometric realization, $|K|$ , considered as a subset of $\\mathrm {R}^N$ for some $N >0$ , is semi-algebraically homeomorphic to $S$ .", "The semi-algebraic homeomorphism $|K| \\rightarrow S$ is called a semi-algebraic triangulation of $S$.", "All topological properties of $S$ are then encoded in the finite data of the simplicial complex $K$ .", "There exists a classical algorithm which takes as input a quantifier-free formula defining a semi-algebraic set $S$ , and produces as output a semi-algebraic triangulation of $S$ (see for instance [3]).", "However, this algorithm is based on the technique of cylindrical algebraic decomposition, and hence the complexity of this algorithm is prohibitively expensive, being doubly exponential in $k$ .", "More precisely, given a description by a quantifier-free formula involving $s$ polynomials of degree at most $d$ , of a closed and bounded semi-algebraic subset of $S \\subset \\mathrm {R}^k$ , there exists an algorithm computing a semi-algebraic triangulation of $h: |K| \\rightarrow S$ , whose complexity is bounded by $(s d)^{2^{O(k)}}$ .", "Moreover, the size of the simplicial complex $K$ (measured by the number of simplices) is also bounded by $(s d)^{2^{O(k)}}$ ." 
], [ "Doubly vs singly exponential", "One can ask whether the doubly exponential behavior for the semi-algebraic triangulation problem is intrinsic to the problem.", "One reason to think that it is not so comes from the fact that the ranks of the homology groups of $S$ (following the same notation as in the previous paragraph), and so in particular those of the simplicial complex $K$ , is bounded by $(O(sd))^k$ (see for instance [3]), which is singly exponential in $k$ .", "So it is natural to ask if this singly exponential upper bound on $\\mathrm {rank}(\\mbox{\\rm H}_*(S))$ is “witnessed” by an efficient semi-algebraic triangulation of small (i.e.", "singly exponential) size.", "This is not known.", "In fact, designing an algorithm with a singly exponential complexity for computing a semi-algebraic triangulation of a given semi-algebraic set has remained a holy grail in the field of algorithmic real algebraic geometry and little progress has been made over the last thirty years on this problem (at least for general semi-algebraic sets).", "We note here that designing algorithms with singly exponential complexity has being a leit motif in the research in algorithmic semi-algebraic geometry over the past decades – starting from the so called “critical-point method” which resulted in algorithms for testing emptiness, connectivity, computing the Euler-Poincaré characteristic, as well as for the first few Betti numbers of semi-algebraic sets (see [2] for a history of these developments and contributions of many authors).", "More recently, such algorithms have also been developed in other (more numerical) models of computations [11], [12], [13].", "In [11], [12], [13], the authors take a different approach.", "Working over $\\mathbb {R}$ , and given a “well-conditioned” semi-algebraic subset $S\\subset \\mathbb {R}^k$ , they compute a witness complex whose geometric realization is $k$ -equivalent to $S$ .", "The size of this witness complex as well the complexity of the algorithm is bounded singly exponentially in $k$ , but also depends on a real parameter, namely the condition number of the input (and so this bound is not uniform).", "The algorithm will fail for ill-conditioned input when the condition number becomes infinite.", "This is unlike the kind of algorithms we consider in the current paper, which are supposed to work for all inputs and with uniform complexity upper bounds.", "So these approaches are not comparable." 
], [ "Homology as a functor", "Homology is a functor from the category of topological spaces to $\\mathbb {Z}$ -modules.", "Restricted to the category of semi-algebraic sets and maps and considering homology groups with only rational coefficients, it is a functor from the category of semi-algebraic subsets of $\\mathrm {R}^k, k >0$ to finite dimensional $\\mathbb {Q}$ -vector spaces.", "The algorithms discussed in the previous section aimed only at computing the dimension of the homology groups.", "However, a very natural algorithmic question that arises is the following.", "Problem 1 Given a first-order formula $\\phi _f$ describing the graph of a semi-algebraic map $f:X \\rightarrow Y$ , compute with singly exponential complexity a description of the map $\\mbox{\\rm H}_i(f): \\mbox{\\rm H}_i(X) \\rightarrow \\mbox{\\rm H}_i(Y)$ (i.e.", "compute a basis of of $\\mbox{\\rm H}_i(X),\\mbox{\\rm H}_i(Y)$ and the matrix corresponding to these bases of the linear map $\\mbox{\\rm H}_i(f)$ ).", "More generally, given a diagram of semi-algebraic maps, compute with singly exponential complexity bases of the homology groups of the various semi-algebraic sets and matrices corresponding to the different maps.", "We will say that such an algorithm computes the homology functor for semi-algebraic maps (or more generally diagram of maps) in dimension $i$.", "Remark 1.1 Studying the “functor complexity” of the homology functor was raised in [5] in the setting of categorical complexity.", "In this paper we initiate the study of this functor from the complexity point of view, though the definition of complexity that we use in this paper is the classical notion and not the categorical one introduced in [5].", "Remark 1.2 One important point to note that is that semi-algebraic maps $f:X \\rightarrow Y$ between closed and bounded semi-algebraic sets are not necessarily triangulable (unless $\\dim Y \\le 1$ ).", "An easy example is the so called “blow-down” map.", "Example 1.1 Let $S \\subset \\mathrm {R}^3$ be defined by the formula $(Y - ZX = 0) \\wedge (X^2 + Y^2 - 1 \\le 0) \\wedge (Y - X \\le 0) \\wedge (X \\ge 0),$ $T$ the unit disk in $\\mathrm {R}^2$ , and $f:S \\rightarrow T$ the projection map along the $Z$ -coordinate.", "The map $f$ is easily seen to be not triangulable.", "More precisely, there are no semi-algebraic triangulations $h_S:|K_S| \\rightarrow S, h_T: |K_T| \\rightarrow T,$ and a simplicial map $F:K_S \\rightarrow K_T$ , such that $|F| \\circ h_S = h_T \\circ F.$ Thus, one cannot expect to solve Problem REF by computing semi-algebraic triangulations of $h_X: |K_X| \\rightarrow X$ and $h_Y:|K_Y| \\rightarrow Y$ , such that the induced map $h_Y^{-1} \\circ f \\circ h_X$ is simplicial.", "Remark 1.3 It is not at all clear if the algorithms designed so far for computing Betti numbers of semi-algebraic sets with singly exponential complexity (both exact algorithms such as those in [1], [4] or numerical ones such as those in [11], [12], [13]) can extend to solve Problem REF ." 
], [ "Main contributions", "The main contribution of this paper is a partial solution to Problem REF .", "We prove the following theorem which we state informally here (see Theorem REF in Section  for a precise statement).", "Theorem (Computing homology functor on semi-algebraic maps) For each fixed $\\ell \\ge 0$ , there exists an algorithm with singly exponential complexity that computes the homology functor for semi-algebraic maps between closed and bounded semi-algebraic sets in each dimension $i, 0 \\le i \\le \\ell $ .", "Remark 1.4 Note that up to isomorphism (in the category of vector spaces) a linear map $L: V \\rightarrow W$ between finite dimensional vector spaces $V,W$ is determined by the numbers $\\dim V, \\dim W, \\mathrm {rank}(L)$ .", "Thus, for a semi-algebraic map $f:X \\rightarrow Y$ , computing a description up to isomorphism of the linear map $\\mbox{\\rm H}_i(f):\\mbox{\\rm H}_i(X) \\rightarrow \\mbox{\\rm H}_i(Y)$ , amounts to computing $\\dim \\mbox{\\rm H}_i(X),\\dim \\mbox{\\rm H}_i(Y), \\mathrm {rank}(\\mbox{\\rm H}_i(f))$ .", "However, the isomorphism class of more general diagrams of vector spaces (such as zigzag diagrams in Theorem REF ) is not determined just by the dimensions of the vector spaces and the ranks of the linear maps." ], [ "More general diagrams", "Once we have an algorithm for computing the homology functor on semi-algebraic maps it is natural to try to extend it to more complicated diagrams of maps.", "As an example, in this paper we consider zigzag diagrams (see Notation REF below) of semi-algebraic maps.", "We prove the following theorem (see Theorem REF for a more precise statement).", "Theorem (Computing homology functor on zigzag diagrams) For each fixed $\\ell \\ge 0$ , there exists an algorithm with singly exponential complexity that computes the homology functor for diagrams of semi-algebraic maps of the zigzag type between closed and bounded semi-algebraic sets in each dimension $i, 0 \\le i \\le \\ell $ ." 
], [ "Computing semi-algebraic zigzag persistence", "As an application of the previous theorem we consider the problem of computing zigzag persistence modules for semi-algebraic maps.", "Persistent homology theory is foundational in the area of topological data analysis mainly in the context of finite simplicial complexes (see for example [17], [16] for background and many applications of persistent homology theory).", "Persistence homology is defined for any filtration of topological spaces and its underlying module structure gives rise to barcodes via decomposition into irreducibles.", "The algorithmic study of persistent homology for filtrations of semi-algebraic sets by the sublevel sets of a polynomial function was initiated in [7] and we refer the reader to that paper for the basic definitions including that of barcodes.", "Although persistent homology was originally defined for filtrations of topological spaces, it has since been generalized to arbitrary diagrams $D:P \\rightarrow \\mathbf {Top}$ (here $D$ is a functor from a poset category $P$ to the category of topological spaces) [10].", "One particular class of diagrams that has been studied in the literature are zigzag diagrams.", "Zigzag persistence modules was introduced in [14] and was studied from the algebraic as well as algorithmic point of view.", "In particular, they showed that it is possible to associate barcodes to zigzag persistent homology as well and gave an algorithm to compute them.", "Zigzag persistence is a very active area of current research [9], [15].", "The previous theorem allows to reduce the computation of the barcode of a zigzag diagram of semi-algebraic maps to that of finite dimensional vector spaces, and here we can use the algorithm in [14].", "In this way we obtain for each fixed $\\ell $ , a singly exponential algorithm for computing zigzag persistent module for semi-algebraic zigzag diagrams in dimensions 0 to $\\ell $ .", "We have the following theorem.", "Theorem 1 (Computing semi-algebraic zigzag persistent modules) For each fixed $\\ell \\ge 0$ , there exists an algorithm with singly exponential complexity that computes the barcode of diagrams of semi-algebraic maps of the zigzag type between closed and bounded semi-algebraic sets in each dimension $i, 0 \\le i \\le \\ell $ ." ], [ "Key ideas", "We summarize here the key ideas that go into the proofs of the theorems stated above." ], [ "Replacing semi-algebraic maps by inclusions which are homologically equivalent", "Semi-algebraic maps between closed and bounded semi-algebraic sets which are inclusions can be treated much more easily than in the general case.", "Indeed, the main result proved in [6] implies that for each fixed $\\ell \\ge 0$ , there exists an algorithm that given a semi-algebraic inclusion map $X \\hookrightarrow Y$ between closed and bounded semi-algebraic sets $X,Y$ as input, computes two finite simplicial complexes, $K$ and $L$ with $K \\subset L$ , such that the inclusion of the corresponding geometric realizations $|K| \\hookrightarrow |L|$ is $\\ell $ -equivalent to the inclusion $X \\hookrightarrow Y$ (see Definition REF ).", "The key new idea in this paper is to replace an arbitrary semi-algebraic map between closed and bounded semi-algebraic sets, by an inclusion map which is equivalent to the given map up to homotopy." 
], [ "Realizing the mapping cylinder of a semi-algebraic map up to homotopy as a semi-algebraic set", "The standard tool for achieving this involves taking the mapping cylinder, $\\mathrm {cyl}(f)$ , of the map $f$ .", "The canonical inclusion $X \\hookrightarrow \\mathrm {cyl}(f)$ is then homotopy equivalent to the $f$ .", "However, the definition of mapping cylinder (see Definition REF below) involves identification of certain points of a disjoint union (or equivalently passing to a quotient space).", "While quotients of semi-algebraic sets by proper equivalence relations are semi-algebraic (see for example [20]), no singly exponential algorithm for computing a semi-algebraic description of such a quotient is known.", "We overcome this difficulty by modifying the construction of the mapping cylinder (see Section REF ).", "We associated to the map $f$ , a modified mapping cylinder of $f$ , which we denote $\\widetilde{\\mathrm {{cyl}}}(f)$ (see Definition REF ) which is a semi-algebraic set having similar properties as mapping cylinder of $f$ (see Proposition REF ).", "The definition of $\\widetilde{\\mathrm {{cyl}}}(f)$ does not involve taking quotients.", "However, it does involve quantifier elimination of an existential block of quantifiers (or equivalently taking the image under a linear projection map).", "This leads to a further technical complication.", "It is important for us in order to be able to apply the result of [6] that the semi-algebraic sets that we deal are not only closed, but are described by closed formulas (see Notation REF ).", "While the image under projection of a closed and bounded semi-algebraic set is closed and bounded, the quantifier elimination algorithm that we use to obtain its description by a quantifier-free formula is not guaranteed to produce a closed description.", "Indeed no algorithm with singly exponential complexity is known for obtaining a closed description of a given closed semi-algebraic set and designing such an algorithm is considered to be a difficult open problem in algorithmic semi-algebraic geometry.", "Thus, we need an additional step." ], [ "Replacing closed semi-algebraic set by ones described by closed formulas", "We replace (see Section REF ) a closed semi-algebraic set by another one, which is infinitesimally larger but has the same homotopy type, and moreover is described by a closed formula having size bounded linearly in the size of the original formula (see Notation REF and Proposition REF below).", "For this purpose, as usual in algorithmic semi-algebraic geometry we utilize extensions (obtained by adjoining infinisteimal elements) of the given real closed fields by fields of Puiseux series in these infinitesimals (see Section REF )." 
], [ "Mapping cylinder for diagrams", "Finally, to extend our algorithm to zigzag diagrams we need to further generalize the definition of $\\widetilde{\\mathrm {{cyl}}}(f)$ , so that every map in the diagram simultaneously becomes inclusions without changing the homological type of the diagram (see Section ).", "We generalize the definition of $\\widetilde{\\mathrm {{cyl}}}(f)$ to define $\\widetilde{\\mathrm {{cyl}}}(D)$ , the semi-algebraic mapping cylinder of a zigzag diagram $D$ (Definition REF ).", "We prove that the $\\widetilde{\\mathrm {{cyl}}}(D)$ and $D$ are homologically equivalent (Proposition REF ).", "Using similar techniques as in the case of maps (i.e.", "replacing by a set defined by closed formulas etc.)", "we are then able to extend the algorithm for maps to the case of zigzag diagrams, and ultimately give an algorithm to compute barcodes of semi-algebraic zigzag diagrams.", "The rest of the paper is organized as follows.", "In Section  we fix notation and give precise definitions of complexity and topological equivalences.", "We also give the necessary background in real algebraic geometry to make the rest of the paper self-contained.", "In Section  we give precise statements of the theorems proved in this paper.", "The subsequent sections are devoted to the proofs of these theorems.", "In Section  we state and prove some mathematical results that play an important role in the algorithms described in this paper.", "In Section REF , we give the construction of the semi-algebraic mapping cylinder (i.e.", "of the semi-algebraic set $\\widetilde{\\mathrm {{cyl}}}(f)$ referred to in the previous paragraph) and prove its main properties.", "In Section REF we give the procedure for replacing a given closed semi-algebraic set by one having the same homotopy type and which is described by a closed formula.", "In Section  we complete the proof of Theorem REF .", "Finally, in Section  we apply the ideas developed in the proof of Theorem REF to develop an algorithm for computing semi-algebraic zigzag persistent barcodes.", "In Section  we state some open problems." ], [ "Homological equivalence of semi-algebraic maps", "We begin with the precise definitions of the two kinds of topological equivalence that we are going to use in this paper." 
], [ "Homological equivalences", "Definition 2.1 (Homological $\\ell $ -equivalences) We say that a semi-algebraic map $f:X \\rightarrow Y$ between two semi-algebraic sets $X,Y$ is a homological $\\ell $ -equivalence, if the induced homomorphisms between the homology groups $\\mbox{\\rm H}_i(f): \\mbox{\\rm H}_i(X) \\rightarrow \\mbox{\\rm H}_i(Y)$ are isomorphisms for $0 \\le i \\le \\ell $ .", "Given two semi-algebraic maps $f:X \\rightarrow Y,f^{\\prime }: X^{\\prime } \\rightarrow Y^{\\prime }$ , a homological $\\ell $ -equivalence between $f$ and $f^{\\prime }$ is a pair of semi-algebraic maps $F_X, F_Y$ such that $f^{\\prime } \\circ F_X = F_Y \\circ f$ , and $F_X, F_Y$ are homological $\\ell $ -equivalences.", "The relation of homological $\\ell $ -equivalence as defined above is not an equivalence relation since it is not symmetric.", "In order to make it symmetric one needs to “formally invert” homologically $\\ell $ -equivalences.", "Definition 2.2 (Homologically $\\ell $ -equivalent) We will say that $X$ is homologically $\\ell $ -equivalent to $Y$ (denoted $X \\sim _\\ell Y$ ), if and only if there exists spaces, $X=X_0,X_1,\\ldots ,X_n=Y$ and homological $\\ell $ -equivalences $f_1,\\ldots ,f_{n}$ as shown below: ${&X_1 [ld]_{f_1}[rd]^{f_2} &&X_3[ld]_{f_3} [rd]^{f_4}& \\cdots &\\cdots &X_{n-1}[ld]_{f_{n-1}}[rd]^{f_{n}} & \\\\X_0 &&X_2 && \\cdots &\\cdots && X_n&}.$ Similarly, we say that a semi-algebraic map $f:X \\rightarrow X^{\\prime }$ is homologically $\\ell $ -equivalent to the semi-algebraic map $g:Y \\rightarrow Y^{\\prime }$ , if there exists maps $f = f_0, f_1,\\ldots , f_n = g$ , and homological $\\ell $ -equivalences, $F_1,\\ldots ,F_n$ , as below: ${&f_1 [ld]_{F_1}[rd]^{F_2} &&f_3[ld]_{F_3} [rd]^{F_4}& \\cdots &\\cdots &f_{n-1}[ld]_{F_{n-1}}[rd]^{F_{n}} & \\\\f_0 &&f_2 && \\cdots &\\cdots && f_n&}.$ It is clear that $\\sim _\\ell $ is an equivalence relation.", "Remark 2.1 One main tool that we use is the Vietoris-Begle theorem.", "Since, there are many versions of the Vietoris-Begle theorem in the literature we make precise what we use below.", "If $X \\subset \\mathbb {R}^m, Y \\subset \\mathbb {R}^n$ are compact semi-algebraic subsets (and so are locally contractible), and $f:X \\rightarrow Y$ is a semi-algebraic continuous map such that $f^{-1}(y)$ is homologically $\\ell $ -connected for each $y \\in Y$ , then we can conclude that $f$ is a homological $\\ell $ -equivalence (see for example, the statement of the Vietoris-Begle theorem in [18]).", "This latter theorem is also valid for semi-algebraic maps between closed and bounded semi-algebraic sets over arbitrary real closed fields, once we know it for maps between compact semi-algebraic subsets over $\\mathbb {R}$ .", "This follows from a standard argument using the Tarski-Seidenberg transfer principle and the fact that homology groups of closed bounded semi-algebraic sets can be defined in terms of finite triangulations.", "We will refer to this version of the Vietoris-Begle theorem as the homological version of the Vietoris-Begle theorem." 
], [ "Definition of complexity of algorithms", "We will use the following notion of “complexity of an algorithm” in this paper.", "We follow the same definition as used in the book [3].", "Definition 2.3 (Complexity of algorithms) In our algorithms we will take as input quantifier-free first order formulas whose terms are polynomials with coefficients belonging to an ordered domain $ contained in a real closed field $ R$.By \\emph {complexity of an algorithm} we will mean the number of arithmetic operations and comparisons in the domain $ .", "If $ \\mathbb {R}$ , then the complexity of our algorithm will agree with the Blum-Shub-Smale notion of real number complexity [8].", "In case, $ \\mathbb {Z}$ , then we are able to deduce the bit-complexity of our algorithms in terms of the bit-sizes of the coefficients of the input polynomials, and this will agree with the classical (Turing) notion of complexity." ], [ "Real algebraic preliminaries", "Notation 2.1 (Realizations, $\\mathcal {P}$ -, $\\mathcal {P}$ -closed semi-algebraic sets) For any finite set of polynomials $\\mathcal {P}\\subset \\mathrm {R}[ X_{1} , \\ldots ,X_{k} ]$ , we call any quantifier-free first order formula $\\phi $ with atoms, $P =0, P < 0, P>0, P \\in \\mathcal {P}$ , to be a $\\mathcal {P}$ -formula.", "Given any semi-algebraic subset $Z \\subset \\mathrm {R}^k$ , we call the realization of $\\phi $ in $Z$ , namely the semi-algebraic set ${\\mathcal {R}}(\\phi ,Z) & := & \\lbrace \\mathbf {x} \\in Z \\mid \\phi (\\mathbf {x})\\rbrace $ a $\\mathcal {P}$ -semi-algebraic subset of $Z$.", "If $Z = \\mathrm {R}^k$ , we will denote the realization of $\\phi $ in $\\mathrm {R}^k$ by ${\\mathcal {R}}(\\phi )$ .", "We say that a quantifier-free formula $\\phi $ is closed if it is a formula in disjunctive normal form with no negations, and with atoms of the form $P \\ge 0, P \\le 0$ (resp.", "$P > 0, P < 0$ ), where $P \\in \\mathrm {R}[X_1,\\ldots ,X_k]$ .", "If the set of polynomials appearing in a closed formula is contained in a finite set $\\mathcal {P}$ , we will call such a formula a $\\mathcal {P}$ -closed formula, and we call the realization, ${\\mathcal {R}}\\left(\\phi \\right)$ , a $\\mathcal {P}$ -closed semi-algebraic set.", "We now state precisely the main results proved in this paper." 
], [ "Main Results", "Theorem 2 For each $\\ell \\ge 0$ , there is an algorithm that accepts as input finite sets of polynomials $\\mathcal {P}_S \\subset X_1,\\ldots ,X_k],$ $\\mathcal {P}_T \\subset Y_1,\\ldots ,Y_m],$ $\\mathcal {P}_f \\subset X_1,\\ldots ,X_k,Y_1,\\ldots Y_m];$ a $\\mathcal {P}_S$ -closed formula $\\phi _S$ , a $\\mathcal {P}_T$ -closed formula $\\phi _T$ , and a $\\mathcal {P}_f$ -closed formula $\\phi _f$ , such that ${\\mathcal {R}}(\\phi _S), {\\mathcal {R}}(\\phi _T)$ are bounded and ${\\mathcal {R}}(\\phi _f,\\mathrm {R}^k \\times \\mathrm {R}^m)$ is the graph of a semi-algebraic map $f: S = {\\mathcal {R}}(\\phi _S) \\rightarrow {\\mathcal {R}}(\\phi _T) = T$ ; and produces as output for each $i, 0 \\le i \\le \\ell $ : bases of $\\mbox{\\rm H}_i(S),\\mbox{\\rm H}_i(T)$ ; the matrix corresponding to these bases of the linear map $\\mbox{\\rm H}_i(f): \\mbox{\\rm H}_i(X)\\rightarrow \\mbox{\\rm H}_i(Y)$ .", "The complexity of the algorithm is bounded by $(s d)^{(k+m)^{O(\\ell )}},$ where $s = \\max (\\mathrm {card}(\\mathcal {P}_S), \\mathrm {card}(\\mathcal {P}_T), \\mathrm {card}(\\mathcal {P}_f)),$ and $d = \\max _{P \\in \\mathcal {P}_S \\cup \\mathcal {P}_T \\cup \\mathcal {P}_f} \\deg (P).$ Theorem REF will follow (using standard linear algebra algorithms) from the following theorem.", "Theorem 3 For each $\\ell \\ge 0$ , there is an algorithm that accepts as input finite sets of polynomials $\\mathcal {P}_S \\subset X_1,\\ldots ,X_k],$ $\\mathcal {P}_T \\subset Y_1,\\ldots ,Y_m],$ $\\mathcal {P}_f \\subset X_1,\\ldots ,X_k,Y_1,\\ldots Y_m];$ a $\\mathcal {P}_S$ -closed formula $\\phi _S$ , a $\\mathcal {P}_T$ -closed formula $\\phi _T$ , and a $\\mathcal {P}_f$ -closed formula $\\phi _f$ , such that ${\\mathcal {R}}(\\phi _S), {\\mathcal {R}}(\\phi _T)$ are bounded and ${\\mathcal {R}}(\\phi _f,\\mathrm {R}^k \\times \\mathrm {R}^m)$ is the graph of a semi-algebraic map $f: S = {\\mathcal {R}}(\\phi _S) \\rightarrow {\\mathcal {R}}(\\phi _T) = T$ ; and produces as output simplicial complexes $\\Delta _S, \\Delta _T, \\Delta _S \\subset \\Delta _T$ , such that $|\\Delta _S| \\hookrightarrow |\\Delta _Y|$ is homologically $\\ell $ -equivalent to $f: S \\rightarrow T$ .", "The complexity of the algorithm is bounded by $(s d)^{(k+m)^{O(\\ell )}},$ where $s = \\max (\\mathrm {card}(\\mathcal {P}_S), \\mathrm {card}(\\mathcal {P}_T), \\mathrm {card}(\\mathcal {P}_f)),$ and $d = \\max _{P \\in \\mathcal {P}_S \\cup \\mathcal {P}_T \\cup \\mathcal {P}_f} \\deg (P).$ We extend Theorem REF to zigzag diagrams.", "We prove the following theorem.", "Theorem 4 For each fixed $\\ell \\ge 0$ , there exists an algorithm with the following properties.", "The algorithm takes the following input: $R >0$ ; a tuple of closed formulas $\\Phi = (\\phi _0,\\ldots ,\\phi _n)$ , with $S_i = {\\mathcal {R}}(\\phi _i,B) \\subset \\mathrm {R}^k$ for $0 \\le i \\le n$ , where $B = \\overline{B_k(0,R)}$ ; a tuple of closed formulas $\\Psi = (\\psi _1,\\ldots ,\\psi _n),$ such that ${\\mathcal {R}}(\\Psi _i, B \\times B)$ is the graph of a semi-algebraically continuous map $f_i:S_i \\rightarrow S_{i-1}$ if $i$ is odd, and is the graph of a semi-algebraically continuous map $f_i:S_{i-1} \\rightarrow S_{i}$ if $i$ is even.", "The algorithm produces as output for each $i,0 \\le i \\le \\ell $ : Bases of the homology groups $\\mbox{\\rm H}_i(S_0), 1 \\le j \\le n$ ; Matrices of the maps $\\mbox{\\rm H}_i(f_j): \\mbox{\\rm H}_i(S_{j-1}) \\rightarrow \\mbox{\\rm H}_i(S_j), 1 \\le j \\le n$ .", 
"The complexity of the algorithm is bounded by $(n s d)^{k^{O(\\ell )}}$ , where $s$ is the cardinality of the set of polynomials occurring in all the formulas in the input and $d$ their maximum degree." ], [ "Mathematical Preliminaries", "In this section we state and prove some mathematical results that will play a role in the proofs of the main theorems." ], [ "Replacing semi-algebraic maps by inclusion maps", "The key idea that goes into the proof of Theorem REF below is a semi-algebraic adaptation of the classical topological notion of a mapping cylinder of a map $f:X \\rightarrow Y$ which we recall now.", "Definition 4.1 (Mapping cylinder) Given a map $f: X \\rightarrow Y$ , the mapping cylinder $\\mathrm {cyl}(f)$ of $f$ is the space defined by $\\mathrm {cyl}(f) = \\left((X \\times [0,1]) \\coprod Y\\right)/\\sim ,$ where for each $x \\in X$ , $(x,0) \\sim f(x)$ .", "It is easy to prove that there exists a deformation retraction $p:\\mathrm {cyl}(f) \\rightarrow Y$ .", "Denoting by $i:X \\rightarrow \\mathrm {cyl}(f)$ , the inclusion $i(x) = (x,1)$ , we have a factorization $f = p \\circ i$ .", "Since, $p$ is a homotopy equivalence, one obtains that the inclusion $i:X \\rightarrow \\mathrm {cyl}(f)$ is homologically equivalent to the map $f$ via the commutative diagram ${X [r]^i [d]^{\\mathrm {id}} & \\mathrm {cyl}(f) [d]^p \\\\X [r]^f & Y}.$ Now suppose that $f:S \\rightarrow T$ is a semi-algebraic map between two closed and bounded semi-algebraic sets $S,T$ .", "The mapping cylinder construction suggests a way to construct an inclusion map $i:S \\hookrightarrow \\mathrm {cyl}(f)$ which is homologically equivalent to $f$ .", "This is important for us since once we have replaced the given map $f$ by an inclusion, we can apply the main result in [6] to obtain a pair of simplicial complexes, $\\Delta _1 \\subset \\Delta _2$ such that the inclusion $|\\Delta _1| \\hookrightarrow |\\Delta _1|$ is homologically $\\ell $ -equivalent to $i:S \\hookrightarrow \\mathrm {cyl}(f)$ , and hence to $f$ .", "However, one obstruction to realizing the above goal is the fact that the definition of the mapping cylinder involves taking a quotient.", "It is true that the quotient of a closed and bounded semi-algebraic set by a proper equivalence relation is homeomorphic to a semi-algebraic set [20] – however, there is no algorithm known with a singly exponential complexity for obtaining a semi-algebraic description of this quotient.", "We take a slightly different route.", "We define below a modification of the classical mapping cylinder of a map $f$ , which we denote by $\\widetilde{\\mathrm {{cyl}}}(f)$ (see Definition REF ) which in the case where $f$ is a semi-algebraic map between closed and bounded semi-algebraic sets satisfies the same properties as the classical mapping cylinder i.e.", "$f$ factorizes through an inclusion $i:S \\rightarrow \\widetilde{\\mathrm {{cyl}}}(f)$ and a semi-algebraic homotopy equivalence $p:\\widetilde{\\mathrm {{cyl}}}(f) \\rightarrow T$ , so that finally $f = p \\circ i$ , and the inclusion $i:S \\rightarrow \\widetilde{\\mathrm {{cyl}}}(f)$ is homologically equivalent to $f$ .", "The main advantage of $\\widetilde{\\mathrm {{cyl}}}(f)$ is that, as a semi-algebraic set it is described by a (existentially) quantified formula (see Eqn.", "(REF )) which is determined in a simple way from any first-order formulas defining the semi-algebraic sets $S$ , $T$ and the graph of the map $f$ .", "Using effective quantifier-elimination algorithms we can then obtain a 
quantifier-free formula defining $\widetilde{\mathrm {{cyl}}}(f)$ .", "There is one technical issue that creates complications in the above picture.", "If $S,T$ are closed and bounded semi-algebraic sets, then the semi-algebraic set $\widetilde{\mathrm {{cyl}}}(f)$ is obtained as the image under projection of a closed and bounded semi-algebraic set, and is thus known to be closed and bounded.", "However, even if we start with closed formulas defining $S,T$ and $\mathrm {graph}(f)$ , the effective quantifier-elimination algorithm with singly exponential complexity that we use does not guarantee that the quantifier-free formula that we obtain describing $\widetilde{\mathrm {{cyl}}}(f)$ is closed.", "It is important for the algorithm downstream that we use for simplicial replacement that this description be closed.", "We deal with this technical issue in a subsequent section.", "We now define $\widetilde{\mathrm {{cyl}}}(f)$ .", "Definition 4.2 (Mapping cylinder for semi-algebraic maps) Let $S \subset \mathrm {R}^k, T \subset \mathrm {R}^m$ be semi-algebraic subsets and $f:S \rightarrow T$ a semi-algebraic map.", "We denote $\widetilde{\mathrm {{cyl}}}(f) = \lbrace (\lambda \cdot x, f(x), \lambda ) \ | \ x \in S, \lambda \in [0, 1]\rbrace \cup \lbrace (\mathbf {0}, y, 0) \ | \ y \in T\rbrace .$ With the above notation we have the following proposition.", "Proposition 4.1 Suppose that $S$ is closed and bounded and let $r: \widetilde{\mathrm {{cyl}}}(f) \rightarrow T$ be the map defined by $r(x, y, \lambda ) = y$ .", "Then, $r$ is a homological equivalence.", "It follows from Eqn.", "(REF ) that for $y \in T$ , $r^{-1}(y) = \lbrace (\lambda \cdot x, y , \lambda ) \ | \ x \in S, f(x) = y, \lambda \in [0, 1]\rbrace \cup \lbrace (\mathbf {0}, y, 0)\rbrace .$ There are two cases.", "If $y \in \mathrm {Im}(f)$ , then $r^{-1}(y) = \lbrace (\lambda \cdot x, y , \lambda ) \ | \ x \in S, f(x) = y, \lambda \in [0, 1]\rbrace ,$ which is semi-algebraically homeomorphic to the cone over $f^{-1}(y)$ and hence semi-algebraically contractible.", "If $y \notin \mathrm {Im}(f)$ , then $r^{-1}(y) = \lbrace (\mathbf {0}, y, 0)\rbrace $ and is hence semi-algebraically contractible.", "The proposition now follows from the homological version of the Vietoris-Begle theorem (see Remark REF ).", "Proposition 4.2 Let $i: S \rightarrow \widetilde{\mathrm {{cyl}}}(f)$ be the injective map $x \mapsto (x, f(x), 1)$ .", "Then the following diagram is commutative.", "$\begin{tikzcd}S [r, \"f\"] [d, hook, \"i\"] & T [d, \"id\"] \\\widetilde{\mathrm {{cyl}}}(f)[r, \"r\"]& T\end{tikzcd}$ This is immediate from the definition of $\widetilde{\mathrm {{cyl}}}(f)$ (Eqn.", "(REF )) and the definition of the map $r$ .", "Proposition 4.3 Suppose that $S$ is closed and bounded.", "Then the inclusion map $i(S) \hookrightarrow \widetilde{\mathrm {{cyl}}}(f)$ is homologically equivalent to $f:S \rightarrow T$ .", "Follows directly from Propositions REF and REF .", "Proposition 4.4 Let $\phi _S(X_1,\ldots ,X_k), \phi _T(Y_1,\ldots ,Y_m), \phi _f(X_1,\ldots ,X_k,Y_1,\ldots ,Y_m),$ be first order formulas such that ${\mathcal {R}}(\phi _f,\mathrm {R}^k \times \mathrm {R}^m)$ is the graph of a semi-algebraic map $f: S = {\mathcal {R}}(\phi _S) \rightarrow {\mathcal {R}}(\phi _T) = T$ .", "Let $\Theta _{\phi _S,\psi _T,\phi _f}(\overline{X},\overline{Y},T) = \ \big ( \ (T=0) \wedge \overline{X}=\mathbf {0} \wedge \phi
_{T}(\\overline{Y}) \\ \\big ) \\ \\vee \\\\\\big ( \\ (0 \\le T \\le 1) \\wedge \\exists \\ \\overline{Z} \\ (\\overline{X}=T \\overline{Z} \\wedge \\phi _S(\\overline{Z}) ) \\wedge \\phi _f(\\overline{Z}, \\overline{Y}) \\wedge \\phi _{T}(\\overline{Y}) \\ \\big ).$ Then, ${\\mathcal {R}}(\\Theta _{\\phi _S,\\psi _T,\\phi _f}) = \\widetilde{\\mathrm {{cyl}}}(f).$ Moreover, ${\\mathcal {R}}(\\Theta _{\\phi _S,\\psi _T,\\phi _f} \\wedge (T=1)) = i(S) \\hookrightarrow \\widetilde{\\mathrm {{cyl}}}(f).$ Follows directly from the definition of $\\widetilde{\\mathrm {{cyl}}}(f)$ (Eqn.", "(REF ))." ], [ "Replacing a closed semi-algebraic set by a semi-algebraic set defined by a closed formula", "One basic open problem in algorithmic semi-algebraic geometry is to design an efficient algorithm which takes as input a quantifier-free formula $\\phi $ such that ${\\mathcal {R}}(\\phi )$ is a closed semi-algebraic set $S \\subset \\mathrm {R}^k$ , and produces as output a finite set $\\mathcal {P} \\subset \\mathrm {R}[X_1,\\ldots ,X_k]$ and a $\\mathcal {P}$ -closed formula $\\psi $ such that ${\\mathcal {R}}(\\phi ) = {\\mathcal {R}}(\\psi )$ .", "No algorithm with a singly exponential complexity is known for this problem.", "In the absence of an efficient algorithm for solving the above problem, we consider the following substitute that is often enough for application.", "A fundamental construction due to Gabrielov and Vorobjov [19] gives an efficient procedure to replace an arbitrary semi-algebraic set by a closed and bounded one having the same homotopy type.", "This homotopy equivalence is usually not a deformation retraction.", "We describe below a construction similar to that in [19], when applied to a formula $\\phi $ such that ${\\mathcal {R}}(\\phi ,B)$ is a closed semi-algebraic subset of a closed Euclidean ball $B \\subset \\mathrm {R}^k$ , produces a closed formula $\\psi $ defined over a real closed extension $\\mathrm {R}^{\\prime }$ of $\\mathrm {R}$ , such that the extension of ${\\mathcal {R}}(\\phi ,B)$ to $\\mathrm {R}^{\\prime k}$ is a semi-algebraic deformation retraction of ${\\mathcal {R}}(\\psi ,B)$ .", "But we first need to introduce some preliminary definitions and notation." 
], [ "Real closed extensions and Puiseux series", "We will need some properties of Puiseux series with coefficients in a real closed field.", "We refer the reader to [3] for further details.", "Notation 4.1 For $\\mathrm {R}$ a real closed field we denote by $\\mathrm {R}\\left\\langle {\\varepsilon }\\right\\rangle $ the real closed field of algebraic Puiseux series in ${\\varepsilon }$ with coefficients in $\\mathrm {R}$ .", "We use the notation $\\mathrm {R}\\left\\langle {\\varepsilon }_{1},\\ldots , {\\varepsilon }_{m} \\right\\rangle $ to denote the real closed field $\\mathrm {R}\\left\\langle {\\varepsilon }_{1} \\right\\rangle \\left\\langle {\\varepsilon }_{2} \\right\\rangle \\cdots \\left\\langle {\\varepsilon }_{m} \\right\\rangle .$ Note that in the unique ordering of the field $\\mathrm {R}\\left\\langle {\\varepsilon }_{1}, \\ldots , {\\varepsilon }_{m}\\right\\rangle $ , $0< {\\varepsilon }_{m} \\ll {\\varepsilon }_{m-1} \\ll \\cdots \\ll {\\varepsilon }_{1} \\ll 1$ .", "Notation 4.2 For elements $x \\in \\mathrm {R}\\left\\langle {\\varepsilon }\\right\\rangle $ which are bounded over $\\mathrm {R}$ we denote by $\\lim _{{\\varepsilon }} x$ to be the image in $\\mathrm {R}$ under the usual map that sets ${\\varepsilon }$ to 0 in the Puiseux series $x$ .", "Notation 4.3 If $\\mathrm {R}^{\\prime }$ is a real closed extension of a real closed field $\\mathrm {R}$ , and $S\\subset \\mathrm {R}^{k}$ is a semi-algebraic set defined by a first-order formula with coefficients in $\\mathrm {R}$ , then we will denote by ${\\rm Ext}(S, \\mathrm {R}^{\\prime }) \\subset \\mathrm {R}^{\\prime k}$ the semi-algebraic subset of $\\mathrm {R}^{\\prime k}$ defined by the same formula.", "It is well known that ${\\rm Ext}(S, \\mathrm {R}^{\\prime })$ does not depend on the choice of the formula defining $S$ [3].", "Let $\\mathcal {P} = \\lbrace P_1,\\ldots ,P_s\\rbrace \\subset \\mathrm {R}[X_1,\\ldots ,X_k]$ be a finite set of polynomials, and let $B \\subset \\mathrm {R}^k$ be a closed euclidean ball.", "Notation 4.4 For $\\sigma \\in \\lbrace 0,1,-1\\rbrace ^\\mathcal {P}$ , let $\\mathrm {level}(\\sigma ) = \\mathrm {card}(\\lbrace P \\in \\mathcal {P} \\mid \\sigma (P) = 0 \\rbrace ).$ For $c,d \\in \\mathrm {R}, 0< d < c$ , and $\\sigma \\in \\lbrace 0,1,-1\\rbrace ^\\mathcal {P}$ , let $\\overline{\\sigma }(c,d)$ denote the closed formula $\\bigwedge _{\\sigma (P) = 0} (-d \\le P \\le d) \\wedge \\bigwedge _{\\sigma (P) = 1} (P \\ge c) \\wedge \\bigwedge _{\\sigma (P) = -1} (P \\le -c).$ Notation 4.5 For a $\\mathcal {P}$ -formula $\\phi $ we denote $\\Sigma _{\\phi } =\\left\\lbrace \\sigma \\in \\lbrace 0,1,-1\\rbrace ^\\mathcal {P} \\mid \\left(\\bigwedge _{P \\in \\mathcal {P}} (\\mathrm {sign}(P) = \\sigma (P))\\right) \\Rightarrow \\phi \\right\\rbrace ,$ where “$\\Rightarrow $ ” denotes logical implication.", "Let $\\mathrm {R}^{\\prime } = \\mathrm {R}{\\langle }\\mu _s,\\nu _s, \\cdots , \\mu _0, \\nu _0{\\rangle }= \\mathrm {R}{\\langle }\\bar{\\eta }{\\rangle },$ denoting by $\\bar{\\eta }$ the sequence $\\mu _s,\\nu _s, \\ldots , \\mu _0, \\nu _0$ .", "Notation 4.6 We denote $\\mathcal {P}^*(\\bar{\\mu },\\bar{\\nu }) = \\bigcup _{P \\in \\mathcal {P}} \\bigcup _{j = 0}^{s} \\lbrace P \\pm \\mu _j, P\\pm \\nu _j\\rbrace \\subset \\mathrm {R}^{\\prime }[X_1,\\ldots ,X_k].$ Finally, Notation 4.7 We denote by $\\overline{\\phi (\\bar{\\mu },\\bar{\\nu })}$ the $\\mathcal {P}^*(\\bar{\\mu },\\bar{\\nu })$ -closed formula $\\bigvee _{\\sigma \\in \\Sigma _{\\phi }} 
\\overline{\\sigma }(\\mu _{\\mathrm {level}(\\sigma )},\\nu _{\\mathrm {level}(\\sigma )})$ (see Notation REF ).", "With the notation introduced above, we have the following.", "Proposition 4.5 Let $R > 0, B = \\overline{B_k(0,R)}$ , and suppose that $S = {\\mathcal {R}}(\\phi ,B)$ is closed.", "Then, $S^{\\prime } \\searrow S,$ where $S^{\\prime } = {\\mathcal {R}}(\\overline{\\phi (\\bar{\\mu },\\bar{\\nu })},{\\rm Ext}(B,\\mathrm {R}^{\\prime }))$ .", "In particular, ${\\rm Ext}(S,\\mathrm {R}^{\\prime })$ is a semi-algebraic deformation retract of $S^{\\prime }$ .", "See [6]." ], [ "Algorithmic Preliminaries", "We recall the following definition from [6].", "Notation 5.1 (Diagram of various unions of a finite number of subspaces) Let $J$ be a finite set, $A$ a topological space, and $\\mathcal {A} = (A_j)_{j \\in J}$ a tuple of subspaces of $A$ indexed by $J$ .", "For any subset $J^{\\prime } \\subset J$ , we denote $\\mathcal {A}^{J^{\\prime }} &=& \\bigcup _{j^{\\prime } \\in J^{\\prime }} A_{j^{\\prime }}, \\\\\\mathcal {A}_{J^{\\prime }} &=& \\bigcap _{j^{\\prime } \\in J^{\\prime }} A_{j^{\\prime }}, \\\\$ We consider $2^J$ as a category whose objects are elements of $2^J$ , and whose only morphisms are given by: $2^J(J^{\\prime },J^{\\prime \\prime }) &=& \\emptyset \\mbox{ if } J^{\\prime } \\lnot \\subset J^{\\prime \\prime }, \\\\2^J(J^{\\prime },J^{\\prime \\prime }) &=& \\lbrace \\iota _{J^{\\prime },J^{\\prime \\prime }}\\rbrace \\mbox{ if } J^{\\prime } \\subset J^{\\prime \\prime }.$ We denote by $\\mathbf {Simp}^J(\\mathcal {A}):2^J \\rightarrow \\mathbf {Top}$ the functor (or the diagram) defined by $\\mathbf {Simp}^J(\\mathcal {A})(J^{\\prime }) = \\mathcal {A}^{J^{\\prime }}, J^{\\prime } \\in 2^J,$ and $\\mathbf {Simp}^J(\\mathcal {A})(\\iota _{J^{\\prime },J^{\\prime \\prime }})$ is the inclusion map $\\mathcal {A}^{J^{\\prime }} \\hookrightarrow \\mathcal {A}^{J^{\\prime \\prime }}$ .", "We will use an algorithm whose existence is proved in [6], and which we will refer to as the Algorithm for computing simplicial replacement, which, given a tuple of closed formulas $\\Phi = (\\phi _0,\\ldots ,\\phi _N)$ , $R >0$ , and $\\ell \\ge 0$ , produces as output a simplicial complex $K$ and subcomplexes $K_i, 0\\le i \\le N$ of $K$ , such that the diagram $\\mathbf {Simp}^{[N]}\\left(({\\mathcal {R}}(\\phi _i,\\overline{B_k(0,R)}))_{i \\in [N]}\\right)$ is homologically $\\ell $ -equivalent ([6]) to the diagram $\\mathbf {Simp}^{[N]}\\left((|K_i|)_{i \\in [N]}\\right)$ (where $|K_i| \\subset |K|$ is the geometric realization of $K_i$ and $[N] = \\lbrace 0,\\ldots ,N\\rbrace $ ).", "We refer the reader to [6] for the details.", "The complexity of this algorithm, as well as the size of the output simplicial complex $K$ , are bounded by $(N s d)^{k^{O(\\ell )}},$ where $\\mathcal {P}$ is the set of polynomials appearing in $\\Phi $ , $s = \\mathrm {card}(\\mathcal {P})$ , and $d = \\max _{P \\in \\mathcal {P}} \\deg (P)$ ."
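As a concrete illustration of Notation 5.1, the following Python sketch (ours, purely illustrative and not part of the construction in [6]) builds the object part of the diagram $\mathbf{Simp}^J(\mathcal{A})$ for a tuple of finite sets standing in for the subspaces $A_j$, and checks that the required morphisms, namely the inclusions $\mathcal{A}^{J'} \hookrightarrow \mathcal{A}^{J''}$ for $J' \subset J''$, are indeed available; the ambient points and index set are arbitrary toy data.

```python
from itertools import chain, combinations

def subsets(J):
    """All subsets of the finite index set J, as frozensets."""
    J = list(J)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(J, r) for r in range(len(J) + 1))]

def simp_diagram(A):
    """Object part of Simp^J(A): J' |-> union of A_j over j in J'.

    A is a dict {j: finite set of points standing in for the subspace A_j}.
    """
    return {Jp: frozenset().union(*(A[j] for j in Jp)) for Jp in subsets(A)}

# Toy example: three "subspaces" of a four-point ambient space.
A = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"d"}}
D = simp_diagram(A)

# Morphism part: whenever J' is contained in J'', the union A^{J'} is contained
# in A^{J''}, so the inclusion map Simp^J(A)(iota_{J',J''}) exists.
for Jp in D:
    for Jpp in D:
        if Jp <= Jpp:
            assert D[Jp] <= D[Jpp]
```

In the output of the simplicial replacement algorithm, the geometric realizations $(|K_i|)_{i \in [N]}$ play the role of the sets $A_j$ in exactly this kind of diagram.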
], [ "Proofs of Theorems ", "We first prove Theorem REF by describing an algorithm (Algorithm REF below) and proving its correctness and analyzing its complexity.", "Theorem REF will then follow in a straightforward way.", "Algorithm 1 (H) (Applying homology functor to semi-algebraic maps) [1] $\\ell \\ge 0$ ; a finite sets of polynomials $\\mathcal {P}_S \\subset X_1,\\ldots ,X_k], ,\\mathcal {P}_T \\subset Y_1,\\ldots ,Y_m], \\mathcal {P}_f \\subset X_1,\\ldots ,X_k,Y_1,\\ldots Y_m]$ ; $\\mathcal {P}_S$ -closed formulas $\\phi _S$ , $\\mathcal {P}_T$ -closed formula $\\phi _T$ , and $\\mathcal {P}_f$ -closed formula $\\phi _f$ , such that ${\\mathcal {R}}(\\phi _S), {\\mathcal {R}}(\\phi _T)$ are bounded and ${\\mathcal {R}}(\\phi _f,\\mathrm {R}^k \\times \\mathrm {R}^m)$ is the graph of a semi-algebraic map $f: S = {\\mathcal {R}}(\\phi _S) \\rightarrow {\\mathcal {R}}(\\phi _T) = T$ .", "Simplicial complexes $\\Delta _S, \\Delta _T, \\Delta _S \\subset \\Delta _T$ , such that $|\\Delta _S| \\hookrightarrow |\\Delta _Y|$ is homologically $\\ell $ -equivalent to $f: S \\rightarrow T$ .", "Let ${\\varepsilon }$ be an infinitesimal and $\\mathrm {R}\\leftarrow \\mathrm {R}{\\langle }{\\varepsilon }{\\rangle }$ .", "Call Algorithm 14.5 in [3] (Quantifier Elimination) with input the existentially quantified formula $\\Theta _{\\phi _S,\\psi _T,\\phi _f}$ to obtain a quantifier-free formula $\\Theta ^{\\prime }_{\\phi _S,\\psi _T,\\phi _f}$ equivalent to $\\Theta _{\\phi _S,\\psi _T,\\phi _f}$ .", "$\\Theta ^{\\prime \\prime }_{\\phi _S,\\psi _T,\\phi _f} \\leftarrow \\Theta ^{\\prime }_{\\phi _S,\\psi _T,\\phi _f} \\wedge (T=1)$ .", "$\\mathcal {P} \\leftarrow \\mbox{ the set of polynomials appearing in the formula $\\Theta ^{\\prime \\prime }_{\\phi _S,\\psi _T,\\phi _f}$}.$ Apply Algorithm for computing simplicial replacement, with input the pair of formulas $(\\overline{\\Theta ^{\\prime \\prime }_{\\phi _S,\\psi _T,\\phi _f}}, \\overline{\\Theta ^{\\prime }_{\\phi _S,\\psi _T,\\phi _f}})$ (recall Notation REF ) and $R = \\frac{1}{{\\varepsilon }}$ , and obtain as output a simplicial complex $K$ , and subcomplexes $K_1, K_2$ of $K$ .", "$\\Delta _S \\leftarrow K_1, \\Delta _T \\leftarrow K_1 \\cup K_2$ .", "Output the pair $(\\Delta _S,\\Delta _T)$ .", "The complexity of the algorithm is bounded by $(s d)^{(k+m)^{O(\\ell )}},$ where $s = \\max (\\mathrm {card}(\\mathcal {P}_S), \\mathrm {card}(\\mathcal {P}_T), \\mathrm {card}(\\mathcal {P}_f)),$ and $d = \\max _{P \\in \\mathcal {P}_S \\cup \\mathcal {P}_T \\cup \\mathcal {P}_f} \\deg (P).$ The correctness of the algorithm follows from the correctness of Algorithm 14.5 in [3] (Quantifier Elimination), Propositions REF and REF as well as the correctness of Algorithm for computing simplicial replacements in [6].", "The complexity of the algorithm follows from the complexity of Algorithm 14.5 in [3] (Quantifier Elimination), and the Algorithm for computing simplicial replacement.", "The theorem follows from the correctness and the complexity analysis of Algorithm REF .", "Theorem REF follows from Theorem REF after observing that once we have the finite simplicial complexes $\\Delta _S, \\Delta _T$ with $\\Delta _S \\subset \\Delta _T$ , then using standard algorithms from linear algebra (Gauss-Jordan elimination) one can compute bases of $\\mbox{\\rm H}_i(\\Delta _S)$ and $\\mbox{\\rm H}_i(\\Delta _T), 0 \\le i \\le \\ell $ , and the matrix for the map $\\mbox{\\rm H}(j)_i$ , where $j:\\Delta _S \\hookrightarrow \\Delta _T$ is the 
inclusion map.", "We omit the details but it is clear that the complexity of this step is bounded polynomially in the size of $\\Delta _T$ .", "This proves Theorem REF ." ], [ "Application to semi-algebraic zigzag persistence", "In this section we discuss one application of the main result of the paper.", "In the previous section we designed an algorithm for effectively applying the homology functor, $\\mbox{\\rm H}_i(\\cdot )$ , to semi-algebraic maps between closed and bounded semi-algebraic sets.", "A natural next step is to effectively apply the homology functor to more general diagrams (of semi-algebraic maps).", "One important class of diagrams that has been studied previously from the point of view of effective homology computation consists of diagrams of the form: $S_0 \\rightarrow S_1 \\rightarrow \\cdots \\rightarrow S_n,$ where each $S_i$ is a closed and bounded semi-algebraic set and all maps are inclusions.", "This is the setting of persistent homology.", "The problem of computing the persistent homology groups (and the associated barcode) of a filtration of a given semi-algebraic set by the sub-level sets of a semi-algebraic map was studied in [7].", "It is proved in [7] that for each fixed $\\ell > 0$ , there exists an algorithm with singly exponential complexity that computes the first $\\ell $ -dimensional barcodes of such a filtration.", "The ideas introduced in the previous section allow us now to consider more general diagrams.", "Indeed, the notion of persistent homology has been generalized to arbitrary diagrams $D:P \\rightarrow \\mathbf {Top}$ (here $D$ is a functor from a poset category $P$ to the category of topological spaces) [10].", "One particular class of poset diagrams that has been studied in the literature is that of the so-called “zigzag” diagrams, which we define below (see [14])."
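To illustrate the linear-algebra step used above (computing $\mbox{\rm H}_i$ of finite simplicial complexes by Gauss-Jordan elimination), here is a minimal sketch, under the assumption of rational coefficients, that computes Betti numbers from the ranks of boundary matrices; the complex below is a toy example, not an output of the algorithms of this paper, and extracting explicit cycle bases and the matrix of $\mbox{\rm H}(j)_i$ for an inclusion of subcomplexes proceeds along the same lines but is omitted here.

```python
from itertools import combinations
from fractions import Fraction

def faces(maximal, k):
    """All k-dimensional faces (as sorted tuples) of the complex spanned by the maximal simplices."""
    out = set()
    for s in maximal:
        out.update(combinations(sorted(s), k + 1))
    return sorted(out)

def boundary_matrix(maximal, k):
    """Matrix of the boundary map C_k -> C_{k-1}, with the usual alternating signs."""
    rows, cols = faces(maximal, k - 1), faces(maximal, k)
    idx = {f: i for i, f in enumerate(rows)}
    M = [[Fraction(0)] * len(cols) for _ in rows]
    for j, s in enumerate(cols):
        for i in range(len(s)):
            M[idx[s[:i] + s[i + 1:]]][j] = Fraction((-1) ** i)
    return M

def rank(M):
    """Rank over Q by Gauss-Jordan elimination."""
    M, r = [row[:] for row in M], 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def betti(maximal, k):
    # dim H_k = dim C_k - rank(boundary_k) - rank(boundary_{k+1})
    n_k = len(faces(maximal, k))
    rk_k = rank(boundary_matrix(maximal, k)) if k > 0 else 0
    rk_k1 = rank(boundary_matrix(maximal, k + 1)) if faces(maximal, k + 1) else 0
    return n_k - rk_k - rk_k1

# Hollow triangle (homotopy equivalent to a circle): b_0 = 1, b_1 = 1.
circle = [(0, 1), (1, 2), (0, 2)]
assert betti(circle, 0) == 1 and betti(circle, 1) == 1
```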
], [ "Zigzag diagrams", "We begin by defining precisely zigzag diagrams.", "Notation 6.1 We denote by $\\mathbf {Z}_n$ the poset whose set of elements is $[n]$ , and whose Hasse diagram is indicated in the following figure.", "${&1 [ld] [rd] && 3 [ld] [rd] && 5[ld] [rd] & \\cdots & \\\\0 && 2 && 4 && 6 & \\cdots }$ Definition 6.1 (Zigzag diagrams) We call a functor $D:\\mathbf {Z}_n \\rightarrow \\mathbf {SA}_{\\mathrm {R}}$ from the poset category $\\mathbf {Z}_n$ to the category of semi-algebraic sets and semi-algebraic maps a zigzag diagram.", "Remark 6.1 The zigzag diagrams that we consider in this paper where the arrows (maps) alternate in directions are not the most general possible.", "A general zigzag diagram need not have this alternation.", "Applying the homology functor to general zigzag diagrams gives rise to zigzag persistence modules which are precisely the quiver representations of quivers of type $\\mathbf {A}$ (see [14]).", "We restrict to the case of alternating arrows (also called regular zigzag diagrams for the ease of exposition and simplifying notation.", "We prove below that for each fixed $\\ell \\ge 0$ , there exists an algorithm that given a zigzag functor $D:\\mathbf {Z}_n \\rightarrow \\mathbf {SA}_{\\mathrm {R}}$ (i.e.", "given quantifier-free formulas describing the semi-algebraic sets $D(i)$ and the graphs of the various maps $D(i) \\rightarrow D(i-1), D(i) \\rightarrow D(i+1)$ for every odd $i, 0\\le i \\le n$ ), computes a functor $D^{\\prime }$ from $\\mathbf {Z}_n$ to the category of finite simplicial complexes, such that the composition of $D^{\\prime }$ with the geometric realization functor $|\\cdot |$ is homologically $\\ell $ -equivalent to $D$ , and all the morphisms $D^{\\prime }(i) \\rightarrow D^{\\prime }(i-1), D^{\\prime }(i) \\rightarrow D^{\\prime }(i+1)$ are inclusions.", "Moreover, the complexity of the algorithm is bounded by $(n s d)^{k^{O(\\ell )}}$ , where $s$ is the cardinality of the set of polynomials occurring in all the formulas in the input and $d$ their maximum degree.", "The more precise statement is as follows.", "Theorem 5 For each fixed $\\ell \\ge 0$ , there exists an algorithm with the following properties.", "The algorithm takes the following input: $R >0$ ; a tuple of closed formulas $\\Phi = (\\phi _0,\\ldots ,\\phi _n)$ , with $S_i = {\\mathcal {R}}(\\phi _i,B) \\subset \\mathrm {R}^k$ for $0 \\le i \\le n$ , where $B = \\overline{B_k(0,R)}$ ; a tuple of closed formulas $\\Psi = (\\psi _1,\\ldots ,\\psi _n),$ such that ${\\mathcal {R}}(\\Psi _i, B \\times B)$ is the graph of a semi-algebraically continuous map $f_i:S_i \\rightarrow S_{i-1}$ if $i$ is odd, and is the graph of a semi-algebraically continuous map $f_i:S_{i-1} \\rightarrow S_{i}$ if $i$ is even.", "The algorithm produces as output simplicial complexes $\\Delta _0,\\ldots ,\\Delta _n$ , having the following properties: $\\Delta _i$ is a subcomplex of $\\Delta _{i-1}$ and $\\Delta _{i+1}$ if $i$ is odd, and $\\Delta _{i-1},\\Delta _{i+1}$ are subcomplexes of $\\Delta _i$ if $i$ is even.", "The diagram $|\\Delta _0| \\hookleftarrow |\\Delta _1| \\hookrightarrow |\\Delta _2| \\cdots $ is homologically $\\ell $ -eqivalent to $S_0 \\leftarrow S_1 \\rightarrow S_2 \\cdots $ The complexity of the algorithm is bounded by $(n s d)^{k^{O(\\ell )}}$ , where $s$ is the cardinality of the set of polynomials occurring in all the formulas in the input and $d$ their maximum degree." 
], [ "Proofs of Theorems ", "We first prove Theorem REF which is the main theorem of this section.", "Theorems REF are REF straightforward consequences.", "In order to prove Theorem REF , we first introduce a more general mapping cylinder construction which is adapted to the zigzag diagrams.", "For a zigzag diagram consisting of just one zigzag ${S_{i-1} [rd]^{f_i} && S_{i+1}[ld]_{f_{i+1}} \\\\&S_i&}$ we would like to replace the diagram by the union of two mapping cylinders glued along $S_i$ as shown in Figure REF .", "The maps $f_{i}, f_{i+1}$ are replaced by inclusions of $S_{i-1}$ and $S_{i+1}$ into the mapping cylinders of $f_i$ and $f_{i+1}$ , and $S_i$ is replaced by the union of the mapping cylinders of $f_i$ and $f_{i+1}$ (oriented in opposite directions and glued along $S_i$ ).", "Figure: NO_CAPTIONWe now extend the above idea in two directions.", "We consider zigzag diagrams with a finite number of zigzags instead of just one, and instead of the classical mapping cylinder we use the semi-algebraic version introduced in Definition REF .", "We first define directed versions of the semi-algebraic mapping cylinder construction.", "Definition 6.2 (Directed semi-algebraic mapping cylinder) Let $S \\subset \\mathrm {R}^k, T \\subset \\mathrm {R}^k$ be semi-algebraic subsets and $f:S \\rightarrow T$ a semi-algebraic map, and $a,b \\in \\mathrm {R}, a < b$ .", "We denote: $\\underset{\\rightarrow }{\\widetilde{\\mathrm {{cyl}}}}(f)(a,b) &=& \\lbrace ((\\lambda - a)/(b-a) \\cdot x, f(x), \\lambda ) \\ | \\ x \\in S, \\lambda \\in [a, b]\\rbrace \\cup \\\\&&\\lbrace (\\mathbf {0}, y, a) \\ | \\ y \\in T\\rbrace ,\\\\\\underset{\\leftarrow }{\\widetilde{\\mathrm {{cyl}}}}(f)(a,b) &=& \\lbrace ((b -\\lambda )/(b-a) \\cdot x, f(x), \\lambda ) \\ | \\ x \\in S, \\lambda \\in [a, b]\\rbrace \\cup \\\\&&\\lbrace (\\mathbf {0}, y, b) \\ | \\ y \\in T\\rbrace .$ (Note that $\\widetilde{\\mathrm {{cyl}}}(f) = \\underset{\\rightarrow }{\\widetilde{\\mathrm {{cyl}}}}(f)(0,1)$ .)", "We now define semi-algebraic mapping cylinders of zigzag diagrams.", "Definition 6.3 (Semi-algebraic mapping cylinder of zigzag diagrams) Let $D:\\mathbf {Z}_n \\rightarrow \\mathbf {SA}_{\\mathrm {R}}$ be a zigzag diagram and for $0 \\le i \\le n$ , let $S_i = D(i)$ , and for $1 \\le i \\le n$ , let $f_i$ denote the map $f_i: S_{i-1} \\rightarrow S_i$ .", "For $0 < i \\le n$ , with $i$ odd we define $\\widetilde{S}_i = {\\left\\lbrace \\begin{array}{ll}\\lbrace (x, h_i(x,\\mu ), \\mu ) \\mid x \\in S_i, \\mu \\in [i - \\tfrac{1}{2}, i+ \\tfrac{1}{2}]\\rbrace , &i\\ne n, \\\\\\lbrace (x, (\\mu - n + \\tfrac{3}{2}) \\cdot f_{i}(x), \\mu ) \\mid x \\in S_n, \\mu \\in [n - \\tfrac{1}{2}, n]\\rbrace & i = n,\\end{array}\\right.", "}$ where $h_i(x,\\mu ) = (\\mu -i + \\tfrac{3}{2}) \\cdot f_{i}(x) + (\\mu -i +\\tfrac{1}{2}) \\cdot f_{i+1}(x).$ For $0 \\le i \\le n$ , with $i$ even, we define $\\widetilde{S}_i = {\\left\\lbrace \\begin{array}{ll}\\underset{\\leftarrow }{\\widetilde{\\mathrm {{cyl}}}}(f_{i})(i - \\tfrac{1}{2},i) \\cup \\underset{\\rightarrow }{\\widetilde{\\mathrm {{cyl}}}}(f_{i+1})(i,i+\\tfrac{1}{2}) \\cup \\widetilde{S}_{i-1} \\cup \\widetilde{S}_{i+1}& i \\ne 0,n, \\\\\\underset{\\rightarrow }{\\widetilde{\\mathrm {{cyl}}}}(f_{1})(0,\\tfrac{1}{2}) \\cup \\widetilde{S}_{1} &i=0, \\\\\\underset{\\leftarrow }{\\widetilde{\\mathrm {{cyl}}}}(f_{n})(n - \\tfrac{1}{2},n) \\cup \\widetilde{S}_{n-1}& i=n.\\end{array}\\right.", "}$ We denote the diagram ${&\\widetilde{S}_1 [ld] [rd] && \\widetilde{S}_3 [ld] [rd] && 
\\widetilde{S}_5[ld] [rd] & \\cdots & \\\\\\widetilde{S}_0 && \\widetilde{S}_2 && \\widetilde{S}_4 && \\widetilde{S}_6 & \\cdots }$ where the arrows denote inclusions, by $\\widetilde{\\mathrm {{cyl}}}(D)$ .", "Suppose that in Definition REF each $S_i$ is a closed and bounded semi-algebraic subset of $\\mathrm {R}^k$ .", "Notice the following (see also the schematic Figure REF ).", "For each even $i$ , $\\widetilde{S}_{i-1}, \\widetilde{S}_{i+1} \\subset \\widetilde{S}_i$ (using the convention that $\\widetilde{S}_{-1} = \\widetilde{S}_{n+1} = \\emptyset $ ).", "For each odd $i$ , the map $\\widetilde{S}_i \\rightarrow S_i, (x,y,\\mu ) \\mapsto x$ is a homological equivalence using the homological version of the Vietoris-Begle theorem.", "The union $\\bigcup _{0 \\le i \\le n} \\widetilde{S}_i$ is a closed and bounded semi-algebraic subset of $\\mathrm {R}^k \\times \\mathrm {R}^k \\times [0,n]$ .", "For $T \\subset \\mathrm {R}^k \\times \\mathrm {R}^k \\times \\mathrm {R}$ , and $\\mu \\in \\mathrm {R}$ , let $T_\\mu $ denote the subset of $T$ with last coordinate equal to $\\mu $ .", "Then for each even $i$ , we have $\\left(\\widetilde{S}_{i-1}\\right)_{i-\\tfrac{1}{2}} &=& \\left(\\underset{\\leftarrow }{\\widetilde{\\mathrm {{cyl}}}}(f_i)(i-\\tfrac{1}{2},i)\\right)_{i - \\tfrac{1}{2}}, \\\\\\left(\\widetilde{S}_{i+1}\\right)_{i+\\tfrac{1}{2}} &=& \\left(\\underset{\\rightarrow }{\\widetilde{\\mathrm {{cyl}}}}(f_{i+1})(i,i+\\tfrac{1}{2})\\right)_{i + \\tfrac{1}{2}}, \\\\\\left(\\widetilde{S}_i\\right)_i &=& \\mathbf {0} \\times S_i \\times \\lbrace i\\rbrace \\cong S_i.$ Figure: $\\widetilde{S}_{i-1}, \\widetilde{S}_i, \\widetilde{S}_{i+1}$ for every even $i$ .", "The following proposition captures the key property of $\\widetilde{\\mathrm {{cyl}}}(D)$ .", "We use the same notation introduced in Definition REF .", "Proposition 6.1 Let $D:\\mathbf {Z}_n \\rightarrow \\mathbf {SA}_{\\mathrm {R}}$ be a zigzag diagram such that for $0 \\le i \\le n$ , $S_i = D(i)$ is a closed and bounded semi-algebraic subset of $\\mathrm {R}^k$ .", "Then the diagrams $D$ and $\\widetilde{\\mathrm {{cyl}}}(D)$ are homologically equivalent.", "For $0 \\le j \\le n$ , we define $g_j: \\widetilde{S}_j\\rightarrow {S}_j$ as follows.", "$g_j(x,y,\\mu ) = {\\left\\lbrace \\begin{array}{ll}y & \\text{ if } j \\text{ is even, } j-\\tfrac{1}{2} \\le \\mu \\le j+\\tfrac{1}{2}, \\\\f_{j}(x)& \\text{ if } j \\text{ is even, } j\\ne 0, j-\\tfrac{3}{2} \\le \\mu \\le j-\\tfrac{1}{2}, \\\\f_{j+1}(x)& \\text{ if } j \\text{ is even, } j\\ne n, j+ \\tfrac{1}{2} \\le \\mu \\le j+\\tfrac{3}{2}, \\\\x& \\text{ if } j \\text{ is odd.", "}\\end{array}\\right.", "}$ It is now easy to check that for each (even) $j$ the following diagram commutes: ${\\widetilde{S}_{j-1}[dd]^{g_{j-1}} [rd] && \\widetilde{S}_{j+1}[dd]^{g_{j+1}} [ld] \\\\& \\widetilde{S}_{j}[dd]^{g_j} & \\\\{S}_{j-1} [rd]^{f_{j}}&& {S}_{j+1}[ld]^{f_{j+1}} \\\\& {S}_{j} & \\\\}$ Finally, using the homological version of the Vietoris-Begle theorem, it is an easy exercise to check that each $g_j, 0 \\le j \\le n$ is a homological equivalence.", "This proves the proposition.", "Let $D$ denote the zigzag diagram ${&{S}_1 [ld]^{f_1} [rd]_{f_2} && {S}_3 [ld]^{f_3} [rd]_{f_4} && {S}_5[ld]^{f_5} [rd]_{f_6} & \\cdots & \\\\{S}_0 && {S}_2 && {S}_4 && {S}_6 & \\cdots }.$ Using Algorithm 14.5 in [3] (Quantifier Elimination) and following Definition REF we can compute, using the input tuples of formulas $\\Phi $ and $\\Psi $ , a tuple $\\widetilde{\\Phi } =
(\\widetilde{\\phi }_0,\\ldots ,\\widetilde{\\phi }_n)$ whose realization is $\\widetilde{\\mathrm {{cyl}}}(D)$ .", "Finally, we replace the tuple of formulas $\\widetilde{\\Phi }$ by a tuple of closed formulas $\\overline{\\widetilde{\\Phi }}= (\\overline{\\widetilde{\\phi }_0},\\ldots ,\\overline{\\widetilde{\\phi }_n})$ (recall Notation REF ).", "The number of polynomials and their degrees appearing in $\\overline{\\widetilde{\\Phi }}$ are all bounded singly exponentially.", "We then use the Algorithm for computing simplicial replacement to compute the simplicial complexes $\\Delta , \\Delta _0,\\ldots ,\\Delta _n$ having the required properties.", "Theorem REF follows from Theorem REF using standard algorithms from linear algebra.", "One uses Theorem REF to reduce the computation of the barcode of a zigzag diagram of semi-algebraic maps to that of a zigzag diagram of finite dimensional vector spaces, and then uses the algorithm in [14].", "The complexity remains singly exponentially bounded." ], [ "Conclusion and open problems", "In this paper we have described, for each $\\ell \\ge 0$ , algorithms with singly exponential complexity for computing the homology functor $\\mbox{\\rm H}_i(\\cdot )$ for $0 \\le i \\le \\ell $ on semi-algebraic maps and zigzag diagrams of closed and bounded semi-algebraic sets.", "We end with some open problems.", "Is it possible to compute the homology functor with singly exponential complexity without having to restrict to only the first few dimensions?", "Remove the assumption that all the semi-algebraic sets in the input are closed and bounded.", "One possible approach is to generalize the results of Gabrielov and Vorobjov [19] on replacing an arbitrary semi-algebraic set by a locally closed one without changing its homotopy type, to semi-algebraic maps.", "Is it possible to extend the numerical algorithms mentioned in the Introduction for computing Betti numbers of semi-algebraic sets to the functor setting?", "For this, it will be necessary to study the condition number of semi-algebraic maps or of more general diagrams.", "Study the categorical complexity of the semi-algebraic homology functor, and prove a singly exponential bound on its functor complexity (as defined in [5])." ] ]
2207.10497
[ [ "The structure of interaction vertices in pure gravity in the light-cone\n gauge" ], [ "Abstract The first truly non-MHV interaction vertices in the light-cone formulation of pure gravity appear at order 6.", "From a closed form expression, for gravitation in the light-cone gauge, we extract and present all 6-point interaction vertices.", "We invoke symmetry arguments to explain the structure of these vertices.", "Symmetry considerations also allow us to place constraints on the structure of all even- and odd-point vertices in the theory.", "The origin of MHV vertices within this formalism is also discussed." ], [ "Introduction", "Neither locality nor Poincaré invariance is manifest in theories formulated in the light-cone gauge.", "While this may appear to be a limitation, it serves as a significant advantage when dealing with scattering amplitudes.", "This is because both these properties, when manifest, obscure the inherent simplicity underlying amplitude structures (as does the standard covariant formalism).", "For example, the Parke-Taylor formula for n-gluon scattering [1] emerges naturally from the light-cone formulation of Yang-Mills theory.", "There is a price to pay for rendering the simplicity manifest - light-cone formulations are significantly more involved than their covariant counterparts.", "Gravity in the light-cone gauge was studied in [2], [3].", "The primary result being a closed form expression for the gravitational Lagrangian, written entirely in terms of the two physical degrees of freedom in the theory.", "Subsequently, this framework was further refined and perturbative expansions of the closed form Lagrangian achieved to orders $\\kappa $ , $\\kappa ^2$ and $\\kappa ^3$  [4], [5].", "The close links between the light-cone gauge and spinor helicity variables [6] ensures that the standard tree-level amplitudes emerge naturally from these perturbative results (in momentum space).", "A limitation of existing perturbative studies is their focus on MHV structures [7].", "In the light-cone gauge, all quartic vertices for massless bosonic fields, are inherently MHV.", "Five-point interaction vertices do involve structures having three negative helicity fields and two positive helicity fields, but these vertices are essentially the conjugates of 5-point MHV vertices.", "Thus the first truly non-MHV vertices appear at the six-point level.", "This is the focus of the first part of this paper.", "After reviewing the closed form expression for the light-cone gravity action, we explicitly derive all the 6-point interaction vertices in the theory.", "The second part of the paper focuses on the symmetries in the theory and the constraints they impose on `allowed' interaction vertices.", "The six-point vertices, for example, are all non-MHV in structure and we explain why this is the case from simple symmetry considerations.", "These symmetry arguments extend to all orders and determine the nature of even- and odd-point interaction vertices in the light-cone formulation of pure gravity.", "We also comment on the origin of MHV vertices in the theory." 
], [ "$3-$ , {{formula:7bb81892-2d5d-4fc5-8272-c5c8d6f84562}} and {{formula:bb66b0dc-e2f4-40a0-be0d-2eb8bc114395}} point interaction vertices in light-cone gravity: a review", "With the metric $(-,+,+,+)$ , we define the light-cone coordinates $x^\\pm =\\frac{1}{\\sqrt{2}}(x^0\\pm x^3)\\ ,&& x=\\frac{1}{\\sqrt{2}}\\,(\\,{x_1}\\,+\\,i\\,{x_2}\\,)\\ , \\qquad \\bar{x}= x^*\\ .$ $\\partial _\\pm , \\bar{\\partial }$ and $\\partial $ are the corresponding derivatives.", "$x^+$ is chosen as light-cone time.", "So $-i\\partial ^-$ is the light-cone Hamiltonian.", "The Einstein-Hilbert action on a Minkowski background is $S_{EH}=\\int \\,{d^4}x\\;\\mathcal {L}\\,=\\,\\frac{1}{2\\,\\kappa ^2}\\,\\int \\,{d^4}x\\;{\\sqrt{-g}}\\,\\,{\\mathcal {R}}\\ ,$ where $g=det\\,(\\,{g_{\\mu \\nu }}\\,)$ is the determinant of the metric.", "$\\mathcal {R}$ is the curvature scalar and $\\kappa ^2=8\\pi \\, G$ is the gravitational coupling constant.", "The corresponding field equations are $\\mathcal {R}_{\\mu \\nu } \\,-\\,\\frac{1}{2}\\, g_{\\mu \\nu }\\mathcal {R}\\,=\\,0\\ .$ At this stage, we impose three gauge choices [2], [3] $g_{--}\\,=\\,g_{-i}\\,=\\,0\\quad ,\\; i=1,2\\ ,$ and parameterize the other components as $\\begin{split}g_{+-}\\,&=\\,-\\,e^\\phi \\ , \\\\g_{i\\,j}\\,&=\\,e^\\psi \\,\\gamma _{ij}\\ .\\end{split}$ The matrix $\\gamma ^{ij}$ is real, symmetric and unimodular.", "$\\phi $ and $\\psi $ are real parameters.", "Field equations in which $\\partial _+$ (time derivative) does not appear serve as constraint relations, as opposed to `true' equations of motion.", "The constraint relation $\\mu \\!=\\!\\nu \\!=\\!-\\;$ from (REF ) reads $2\\,\\partial _-\\phi \\,\\partial _-\\psi \\,-\\,2\\,\\partial ^2_-\\psi \\,-\\,(\\partial _-\\psi )^2\\,+\\, \\frac{1}{2}\\partial _-\\gamma ^{ij}\\,\\partial _-\\gamma _{ij}\\,=\\,0\\ .$ This can be solved by making the specific choice $\\phi \\,=\\,\\frac{\\psi }{2}\\ ,$ which is the fourth (and final) gauge choice.", "With this choice $\\psi \\,=\\,\\frac{1}{4}\\,\\frac{1}{\\partial ^2_-}\\,(\\partial _-\\gamma ^{ij}\\,\\partial _-\\gamma _{ij})\\ .$ Unphysical components of the metric are systematically eliminated by constraint relations.", "This leaves us with a `physical' closed-form expression [2], [3] entirely in terms of the $\\gamma ^{ij}$ .", "In the expressions below, we continue to use $\\phi $ & $\\psi $ (even though (REF ), (REF ) relate them to the $\\gamma ^{ij}$ ) because they make terms in the calculation easy to keep track of.", "$S\\,&=&\\frac{1}{2\\kappa ^2}\\int d^{4}x \\; e^{\\psi }\\left(2\\,\\partial _{+}\\partial _{-}\\phi \\, +\\, \\partial _+\\partial _-\\psi - \\frac{1}{2}\\,\\partial _{+}\\gamma ^{ij}\\partial _{-}\\gamma _{ij}\\right) \\nonumber \\\\&&-e^{\\phi }\\gamma ^{ij}\\left(\\partial _{i}\\partial _{j}\\phi + \\frac{1}{2}\\partial _{i}\\phi \\partial _{j}\\phi - \\partial _{i}\\phi \\partial _{j}\\psi - \\frac{1}{4}\\partial _{i}\\gamma ^{kl}\\partial _{j}\\gamma _{kl} + \\frac{1}{2}\\partial _{i}\\gamma ^{kl}\\partial _{k}\\gamma _{jl}\\right) \\nonumber \\\\&&- \\frac{1}{2}e^{\\phi - 2\\psi }\\gamma ^{ij}\\frac{1}{\\partial _{-}}R_{i}\\frac{1}{\\partial _{-}}R_{j}\\ ,$ where $R_{i}\\,\\equiv \\, e^{\\psi }\\left(\\frac{1}{2}\\partial _-\\gamma ^{jk}\\partial _{i}\\gamma _{jk}-\\partial _-\\partial _i\\phi - \\partial _-\\partial _i\\psi + \\partial _i\\phi \\partial _-\\psi \\right)+\\partial _k(e^\\psi \\,\\gamma ^{jk}\\partial _-\\gamma _{ij})\\ .", "\\nonumber $" ], [ "Perturbative expansion", "In [3], a 
perturbative expansion of the Lagrangian in (REF ), in powers of $\\kappa $ , was obtained using $\\gamma _{ij}=\\left(\\mathrm {e}^{ \\sqrt{2}\\;\\kappa \\, h\\, }\\right)_{ij}\\ ,\\qquad \\gamma _{ij}\\,\\gamma ^{jk}=\\delta _i^k \\;\\;,\\qquad h_{ij}=\\begin{pmatrix} h_{11} & h_{12}\\\\h_{12} &-h_{11}\\end{pmatrix}\\ .$ These relate to the physical helicity states of the graviton as $h_{ij}=\\frac{1}{\\sqrt{2}}\\begin{pmatrix} h+\\bar{h}& -i(h-\\bar{h})\\\\-i(h-\\bar{h}) &-h-\\bar{h}\\end{pmatrix}\\ ,$ with $h$ and $\\bar{h}$ representing gravitons of helicity $+2$ and $-2$ respectively.", "Instead of working with the $h_{ij}$ and then rewriting quantities in terms of helicity states, we focus directly on the helicity states by introducing transverse variables $a$ and $b$ (so $x^a\\,, x^b$ represent $x\\,,\\bar{x}$ ) [5].", "The relevant metric being $\\eta _{ab}=\\frac{\\partial x^i}{\\partial x^a}\\,\\frac{\\partial x^j}{\\partial x^b}\\,\\eta _{ij}\\ .$ So $\\eta _{xx}=\\eta _{\\bar{x}\\bar{x}}=0\\ ,\\qquad \\eta _{x{\\bar{x}}}=\\eta _{{\\bar{x}}x}=1\\ .$ The inverse metric ($\\eta ^{ab}\\eta _{bc}=\\delta ^a_{\\;\\;c}$ ) has components $\\eta ^{xx}=\\eta ^{\\bar{x}\\bar{x}}=0\\ ,\\qquad \\eta ^{x{\\bar{x}}}=\\eta ^{{\\bar{x}}x}=1\\ .$ In this new basis, the expressions for $\\gamma $ from (REF ) read $\\gamma _{ab}\\,\\,=\\,\\, \\sum _{n = 0}^{\\infty } \\begin{pmatrix} 2\\kappa ^{2n+1}\\,\\dfrac{(4h\\bar{h})^n}{(2n+1)!", "}\\,\\bar{h}& \\kappa ^{2n}\\dfrac{(4h\\bar{h})^n}{(2n)!", "}\\, \\\\\\\\ \\kappa ^{2n}\\dfrac{(4h\\bar{h})^n}{(2n)!", "}\\, & 2\\kappa ^{2n+1}\\,\\dfrac{(4h\\bar{h})^n}{(2n+1)!", "}\\,h \\end{pmatrix}\\ ,$ $\\gamma ^{ab}\\,\\,=\\,\\,\\sum _{n = 0}^{\\infty }\\begin{pmatrix} -2\\kappa ^{2n+1}\\,\\dfrac{(4h\\bar{h})^n}{(2n+1)!", "}\\,h & \\kappa ^{2n}\\dfrac{(4h\\bar{h})^n}{(2n)!", "}\\, \\\\\\\\ \\kappa ^{2n}\\dfrac{(4h\\bar{h})^n}{(2n)!", "}\\,& -2\\kappa ^{2n+1}\\,\\dfrac{(4h\\bar{h})^n}{(2n+1)!", "}\\,\\bar{h}\\end{pmatrix}\\ .$ For the purposes of this paper, we only need terms up to $n=2$ in the series (REF ) and (REF ).", "From these, we can also calculate $\\psi $ at orders 2 and 4 $\\psi \\!\\!\\!\\!", "&& = \\qquad 2\\kappa ^2 \\psi _2 \\qquad \\;\\;\\;\\;\\;\\;\\;\\;\\;\\,\\;+\\;\\;\\;\\;\\;\\; \\qquad 4\\kappa ^4 \\psi _4 \\\\&&=2\\kappa ^2\\left\\lbrace \\!-\\frac{1}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)\\right\\rbrace + 4\\kappa ^4\\left\\lbrace - \\frac{1}{3\\partial _-^2}(\\partial _-h\\partial _-[\\bar{h}\\bar{h}h])\\!-\\!", "\\frac{1}{3\\partial _-^2}(\\partial _-[\\bar{h}hh]\\partial _-\\bar{h})\\!+\\!\\frac{1}{2\\partial _-^2}(\\partial _-[\\bar{h}h]\\partial _-[\\bar{h}h])\\ \\right\\rbrace \\nonumber \\ .$ We now expand the closed form action in (REF ) to order $\\kappa ^2$ .", "This reads [3], ${\\cal L}\\;\\;=&&\\!\\!\\!\\!\\!\\!\\bar{h}\\, \\square \\, h+2\\kappa \\,\\bar{h}\\,\\partial _-^2\\bigg (\\frac{\\bar{\\partial }}{\\partial _-}h\\frac{\\bar{\\partial }}{\\partial _-}h-h\\frac{\\bar{\\partial }^2}{\\partial _-^2}h\\bigg )+2\\kappa \\,h\\,\\partial _-^2\\bigg (\\frac{\\partial }{\\partial _-}\\bar{h}\\frac{\\partial }{\\partial _-}\\bar{h}-\\bar{h}\\frac{\\partial ^2}{\\partial _-^2}\\bar{h}\\bigg ) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\, \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!+2\\kappa ^2{\\biggl \\lbrace }\\,\\frac{1}{\\partial _-^2}\\big (\\partial _-h\\partial _-\\bar{h}\\big )\\frac{\\partial \\bar{\\partial }}{\\partial _-^2}\\big (\\partial _-h\\partial _-\\bar{h}\\big )+\\frac{1}{\\partial _-^3}\\big 
(\\partial _-h\\partial _-\\bar{h}\\big )\\left(\\partial \\bar{\\partial }h\\,\\partial _-\\bar{h}+\\partial _-h\\partial \\bar{\\partial }\\bar{h}\\right) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _-^2}\\big (\\partial _-h\\partial _-\\bar{h}\\big )\\,\\left(2\\,\\partial \\bar{\\partial }h\\,\\bar{h}+2\\,h\\partial \\bar{\\partial }\\bar{h}+9\\,\\bar{\\partial }h\\partial \\bar{h}+\\partial h\\bar{\\partial }\\bar{h}-\\frac{\\partial \\bar{\\partial }}{\\partial _-}h\\,\\partial _-\\bar{h}-\\partial _-h\\frac{\\partial \\bar{\\partial }}{\\partial _-}\\bar{h}\\right) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!-2\\frac{1}{\\partial _-}\\big (2\\bar{\\partial }h\\,\\partial _-\\bar{h}+h\\partial _-\\bar{\\partial }\\bar{h}-\\partial _-\\bar{\\partial }h\\bar{h}\\big )\\,h\\,\\partial \\bar{h}-2\\frac{1}{\\partial _-}\\big (2\\partial _-h\\,\\partial \\bar{h}+\\partial _-\\partial h\\,\\bar{h}-h\\partial _-\\partial \\bar{h}\\big )\\,\\bar{\\partial }h\\,\\bar{h}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _-}\\big (2\\bar{\\partial }h\\,\\partial _-\\bar{h}+h\\partial _-\\bar{\\partial }\\bar{h}-\\partial _-\\bar{\\partial }h\\bar{h}\\big )\\frac{1}{\\partial _-}\\big (2\\partial _-h\\,\\partial \\bar{h}+\\partial _-\\partial h\\,\\bar{h}-h\\partial _-\\partial \\bar{h}\\big ) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!-h\\,\\bar{h}\\,\\bigg (\\partial \\bar{\\partial }h\\,\\bar{h}+h\\partial \\bar{\\partial }\\bar{h}+2\\,\\bar{\\partial }h\\partial \\bar{h}+3\\frac{\\partial \\bar{\\partial }}{\\partial _-}h\\,\\partial _-\\bar{h}+3\\partial _-h\\frac{\\partial \\bar{\\partial }}{\\partial _-}\\bar{h}\\bigg ){\\biggr \\rbrace }\\ .$ The terms containing a $\\partial _+$ at order $\\kappa ^2$ were eliminated using a field redefinition [3] $h\\rightarrow h - \\kappa ^2\\,\\frac{1}{\\partial _-}{\\biggl \\lbrace }2\\,\\partial _-^2h\\frac{1}{\\partial _-^3}(\\partial _-h\\partial _-\\bar{h})+\\partial _-h\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})+\\frac{1}{3}(h\\bar{h}\\partial _-h-hh\\partial _-\\bar{h})\\,{\\biggr \\rbrace }\\ .$ Moving to five-point interaction vertices, ie.", "at order $\\kappa ^3$  [5], we find ${\\cal L}_{\\kappa ^3}=2\\sqrt{2}\\kappa ^3\\,L_5\\ ,$ where $L_5$ reads $L_5=&&\\!\\!\\!\\!\\!\\!-\\frac{1}{\\sqrt{2}}h\\bar{\\partial }h\\bar{\\partial }\\bar{h}\\frac{1}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)+\\frac{\\sqrt{2}}{3}\\bar{h}hh\\bar{\\partial }h\\bar{\\partial }\\bar{h}+\\frac{\\sqrt{2}}{3}h\\bar{\\partial }h\\bar{\\partial }(\\bar{h}\\bar{h}h)+\\frac{\\sqrt{2}}{3}h\\bar{\\partial }\\bar{h}\\bar{\\partial }(\\bar{h}hh) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!+\\frac{1}{\\sqrt{2}}h\\bar{\\partial }h\\bar{\\partial }\\bar{h}\\frac{1}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)-\\frac{\\sqrt{2}}{3}\\bar{h}hh\\bar{\\partial }h\\bar{\\partial }\\bar{h}-\\frac{\\sqrt{2}}{3}h\\bar{\\partial }h\\bar{\\partial }(\\bar{h}\\bar{h}h)-\\frac{\\sqrt{2}}{3}h\\bar{\\partial }\\bar{h}\\bar{\\partial }(\\bar{h}hh) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-\\frac{3}{4\\sqrt{2}}h\\frac{\\bar{\\partial }}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)\\frac{\\bar{\\partial }}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)+\\frac{1}{2\\sqrt{2}}h\\frac{1}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)\\frac{\\bar{\\partial }\\bar{\\partial }}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-\\frac{1}{3\\sqrt{2}}\\bar{h}hh\\frac{\\bar{\\partial }\\bar{\\partial }}{\\partial 
_-^2}(\\partial _-\\bar{h}\\partial _-h)-\\frac{1}{3\\sqrt{2}}h\\frac{\\bar{\\partial }\\bar{\\partial }}{\\partial _-^2}(\\partial _-h\\partial _-[\\bar{h}\\bar{h}h])-\\frac{1}{3\\sqrt{2}}h\\frac{\\bar{\\partial }\\bar{\\partial }}{\\partial _-^2}(\\partial _-[\\bar{h}hh]\\partial _-\\bar{h})\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-\\frac{1}{2\\sqrt{2}}h\\frac{\\bar{\\partial }\\bar{\\partial }}{\\partial _-^2}(\\partial _-[\\bar{h}h]\\partial _-[\\bar{h}h])-2{\\sqrt{2}}\\,\\bar{h}\\bar{\\partial }h{\\biggl [}\\frac{\\bar{\\partial }}{\\partial _-}\\lbrace h\\partial _-(\\bar{h}h)\\rbrace +\\frac{\\bar{\\partial }}{\\partial _-}\\lbrace \\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\partial _-h\\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-\\frac{\\bar{\\partial }}{\\partial _-}(h\\bar{h}\\partial _-h)-\\frac{1}{3}\\bar{\\partial }(hh\\bar{h})+\\frac{1}{\\partial _-}\\lbrace \\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\bar{\\partial }\\partial _-h\\rbrace \\,{\\biggr ]}+\\frac{1}{\\sqrt{2}}\\,h\\,\\frac{1}{\\partial _-}{\\biggl [}\\partial _-h\\bar{\\partial }\\bar{h}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!+\\partial _-\\bar{h}\\bar{\\partial }h-\\frac{3}{2}\\frac{\\bar{\\partial }}{\\partial _-}(\\partial _-\\bar{h}\\partial _-h)+2\\bar{\\partial }(h\\partial _-\\bar{h})-\\bar{\\partial }\\partial _-(\\bar{h}h){\\biggr ]}\\,\\times \\,\\frac{1}{\\partial _-}{\\biggl [}\\partial _-h\\bar{\\partial }\\bar{h}+\\partial _-\\bar{h}\\bar{\\partial }h \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-\\frac{3}{2}\\frac{\\bar{\\partial }}{\\partial _-}(\\partial _-\\bar{h}\\partial _-h)+2\\bar{\\partial }(h\\partial _-\\bar{h})-\\bar{\\partial }\\partial _-(\\bar{h}h){\\biggr ]}+\\bar{\\partial }h\\bar{\\partial }h\\,{\\biggl [}\\,\\frac{\\sqrt{2}}{3}\\bar{h}\\bar{h}h+\\frac{3}{\\sqrt{2}}\\bar{h}\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\,{\\biggr ]} \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!+{\\sqrt{2}}\\bar{\\partial }h\\,\\frac{1}{\\partial _-}{\\biggl \\lbrace }-\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\partial _-h\\bar{\\partial }\\bar{h}+\\frac{1}{3}\\partial _-h\\bar{\\partial }(\\bar{h}\\bar{h}h)+\\frac{1}{3}\\partial _-(\\bar{h}hh)\\bar{\\partial }\\bar{h}+\\frac{1}{3}\\partial _-\\bar{h}\\bar{\\partial }(\\bar{h}hh) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\partial _-\\bar{h}\\bar{\\partial }h+\\frac{1}{3}\\partial _-(\\bar{h}\\bar{h}h)\\bar{\\partial }h-\\partial _-(\\bar{h}h)\\bar{\\partial }(\\bar{h}h)-\\frac{1}{2}\\frac{\\bar{\\partial }}{\\partial _-}[\\partial _-h\\partial _-(\\bar{h}\\bar{h}h)] \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!+\\frac{3}{2}\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\frac{\\bar{\\partial }}{\\partial _-}(\\partial _-\\bar{h}\\partial _-h)-\\frac{1}{2}\\frac{\\bar{\\partial }}{\\partial _-}[\\partial _-(\\bar{h}hh)\\partial _-\\bar{h}]+\\frac{3}{4}\\frac{\\bar{\\partial }}{\\partial _-}[\\partial _-(\\bar{h}h)\\partial _-(\\bar{h}h)] \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-\\frac{1}{2}\\frac{\\bar{\\partial }}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)\\frac{1}{\\partial _-}(\\partial _-\\bar{h}\\partial _-h)+\\frac{2}{3}\\bar{\\partial }[h\\partial _-(\\bar{h}\\bar{h}h)]+\\frac{2}{3}\\bar{\\partial }[\\bar{h}hh\\partial _-\\bar{h}]-\\frac{1}{6}\\bar{\\partial }\\partial _-(\\bar{h}\\bar{h}hh) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-2\\bar{\\partial }[\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})h\\partial _-\\bar{h}]-\\bar{\\partial 
}[\\bar{h}h\\partial _-(\\bar{h}h)]+\\bar{\\partial }[\\frac{1}{\\partial _-^2}(\\partial _-\\bar{h}\\partial _-h)\\partial _-(\\bar{h}h)]\\,{\\biggr \\rbrace } \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!-{\\sqrt{2}}\\frac{1}{\\partial _-}{\\biggl \\lbrace }\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\bar{\\partial }\\partial _-h-\\frac{1}{3}\\bar{\\partial }\\partial _-(hh\\bar{h})-\\bar{\\partial }(h\\bar{h}\\partial _-h)+\\bar{\\partial }[\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})\\partial _-h] \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!+\\bar{\\partial }(h\\partial _-(\\bar{h}h){\\biggr \\rbrace }\\times \\,\\frac{1}{\\partial _-}{\\biggl \\lbrace }\\partial _-h\\bar{\\partial }\\bar{h}+\\partial _-\\bar{h}\\bar{\\partial }h-\\frac{3}{2}\\frac{\\bar{\\partial }}{\\partial _-}(\\partial _-\\bar{h}\\partial _-h)+2\\bar{\\partial }(h\\partial _-\\bar{h})-\\bar{\\partial }\\partial _-(\\bar{h}h)\\,{\\biggr \\rbrace } \\nonumber \\\\\\, \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+{\\sqrt{2}}[\\bar{h}h+\\frac{3}{2}\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})]\\,\\frac{1}{\\partial _-}{\\biggl \\lbrace }\\partial _-h\\bar{\\partial }\\bar{h}+\\partial _-\\bar{h}\\bar{\\partial }h-\\frac{3}{2}\\frac{\\bar{\\partial }}{\\partial _-}(\\partial _-\\bar{\\partial }\\partial _-h)+2\\bar{\\partial }(h\\partial _-\\bar{h})-\\bar{\\partial }\\partial _-(\\bar{h}h)\\,{\\biggr \\rbrace }\\bar{\\partial }h \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+{\\biggl \\lbrace }2\\,\\partial _-^2\\bar{h}\\frac{1}{\\partial _-^3}(\\partial _-h\\partial _-\\bar{h})+\\partial _-\\bar{h}\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})+\\frac{1}{3}(h\\bar{h}\\partial _-\\bar{h}-\\bar{h}\\bar{h}\\partial _-h)\\,{\\biggr \\rbrace }\\partial _-\\bigg (\\frac{\\bar{\\partial }}{\\partial _-}h\\frac{\\bar{\\partial }}{\\partial _-}h-h\\frac{\\bar{\\partial }^2}{\\partial _-^2}h\\bigg ) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+{\\biggl \\lbrace }2\\,\\partial _-^2h\\frac{1}{\\partial _-^3}(\\partial _-h\\partial _-\\bar{h})+\\partial _-h\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})+\\frac{1}{3}(h\\bar{h}\\partial _-h-hh\\partial _-\\bar{h})\\,{\\biggr \\rbrace }\\times {\\biggl [}2\\frac{\\bar{\\partial }}{\\partial _-^2}(\\partial _-^2\\bar{h}\\frac{\\bar{\\partial }}{\\partial _-}h) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\partial _-(\\frac{\\bar{\\partial }}{\\partial _-}h\\frac{\\bar{\\partial }}{\\partial _-}h-h\\frac{\\bar{\\partial }^2}{\\partial _-^2}h)-\\frac{1}{\\partial _-}(\\partial _-^2\\bar{h}\\frac{\\bar{\\partial }^2}{\\partial _-^2}h)-\\frac{\\bar{\\partial }^2}{\\partial _-^3}(h\\partial _-^2\\bar{h}){\\biggr ]}\\,\\,+\\,{\\mbox{c.c.", "}}$ The last three lines of (REF ) represent quintic interaction vertices produced by applying the field redefinition in (REF ) to the cubic interaction vertices in (REF ).", "Although $L_5$ appears to contain non-MHV structures, these are related by conjugation to MHV vertices." 
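As a small consistency check on the truncated series (REF) and (REF) for $\gamma_{ab}$ and $\gamma^{ab}$ that enter the order $\kappa^4$ computation, the following sympy sketch (ours, with $h$ and $\bar h$ treated as commuting symbols, which suffices for this purely algebraic statement) verifies that the $n \le 2$ truncations are inverse to one another up to and including order $\kappa^4$, as required for the six-point expansion below.

```python
import sympy as sp

kappa, h, hb = sp.symbols('kappa h hb')   # hb stands for the conjugate field hbar

def S_odd(nmax):   # sum_n kappa^(2n+1) (4 h hb)^n / (2n+1)!
    return sum(kappa**(2*n + 1) * (4*h*hb)**n / sp.factorial(2*n + 1)
               for n in range(nmax + 1))

def S_even(nmax):  # sum_n kappa^(2n) (4 h hb)^n / (2n)!
    return sum(kappa**(2*n) * (4*h*hb)**n / sp.factorial(2*n)
               for n in range(nmax + 1))

# gamma_{ab} and gamma^{ab} in the (x, xbar) basis, truncated at n = 2
g_lo = sp.Matrix([[2*hb*S_odd(2), S_even(2)],
                  [S_even(2),     2*h*S_odd(2)]])
g_up = sp.Matrix([[-2*h*S_odd(2), S_even(2)],
                  [S_even(2),     -2*hb*S_odd(2)]])

prod = (g_lo * g_up).applyfunc(sp.expand)
for a in range(2):
    for c in range(2):
        diff = prod[a, c] - (1 if a == c else 0)
        # every coefficient of kappa^0, ..., kappa^4 must vanish
        assert all(sp.expand(diff).coeff(kappa, p) == 0 for p in range(5))
```

The first deviation from the identity appears only at order $\kappa^6$, which is an artifact of truncating the two series at $n=2$ and is irrelevant for the order $\kappa^4$ Lagrangian.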
], [ "The 6-point result", "One new result in this paper is the perturbative expansion to order $\\kappa ^4$ , from the closed form action (REF ).", "This (six) is the lowest order at which truly non-MHV structures first appear.", "We find ${\\cal L}_{\\kappa ^4}=4\\kappa ^4\\,L_6\\ ,$ where $&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!", "L_6=4 \\psi _{4} \\partial \\bar{\\partial } \\psi _{2}+\\psi _{2}^{2} \\partial \\bar{\\partial } \\psi _{2}-\\frac{1}{15}\\left(h {\\partial \\bar{\\partial }}[(h \\bar{h})^2\\bar{h}]+\\bar{h} {\\partial \\bar{\\partial }}[(h \\bar{h})^2h]\\right)+\\frac{1}{3} h \\bar{h}{\\partial \\bar{\\partial }}[(h \\bar{h})^2] \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{2}{9} h \\bar{h} h{\\partial \\bar{\\partial }}[h \\bar{h} \\bar{h}]+\\frac{\\psi _{2}}{3}\\left(\\frac{\\partial \\bar{\\partial }}{\\partial _-} h \\partial _{-}[h \\bar{h} \\bar{h}]+\\partial _{-} \\bar{h} \\frac{\\partial \\bar{\\partial }}{\\partial _-}[h \\bar{h} h]+\\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-}[h \\bar{h} \\bar{h}]+\\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h} \\partial _{-}[h \\bar{h} h]\\right) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\psi _{2} \\frac{\\partial \\bar{\\partial }}{\\partial _-}[h \\bar{h}] \\partial _{-}[h \\bar{h}]+\\left(\\psi _{4}+\\frac{\\psi _{2}^{2}}{2}\\right)\\left(\\frac{\\partial \\bar{\\partial }}{\\partial _-} h \\partial _{-} \\bar{h}+\\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\right)-\\frac{1}{6}(h \\bar{h} h \\bar{h}) \\partial \\bar{\\partial } \\psi _{2}-h \\bar{h} \\partial \\bar{\\partial } \\psi _{4}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{\\psi _{2}}{2} h \\bar{h} \\partial \\bar{\\partial } \\psi _{2}+\\frac{3}{4} h \\bar{h} \\partial \\psi _{2} \\bar{\\partial } \\psi _{2}-\\frac{5}{2} \\psi _{4} \\partial \\bar{\\partial } \\psi _{2}-\\frac{5}{16} \\psi _{2}^{2} \\partial \\bar{\\partial } \\psi _{2}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+(\\partial h \\bar{\\partial } \\bar{h}-\\partial \\bar{h} \\bar{\\partial } h)\\left(\\frac{1}{6} h \\bar{h} h \\bar{h}+\\frac{1}{2} \\psi _{2} h \\bar{h}+\\frac{1}{2} \\psi _{4}+\\frac{1}{8} \\psi _{2}^{2}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\left(\\frac{1}{3} h \\bar{h}+\\frac{1}{6} \\psi _{2}\\right)(\\partial h \\bar{\\partial }[h \\bar{h} \\bar{h}]+\\bar{\\partial } \\bar{h} \\partial [h \\bar{h} h]-\\bar{\\partial } h \\partial [h \\bar{h} \\bar{h}]-\\partial \\bar{h} \\bar{\\partial }[h \\bar{h} h])\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\left(\\frac{h \\bar{h}}{3}+\\frac{\\psi _{2}}{2}\\right)(h \\partial \\bar{h}-\\bar{h} \\partial h) \\bar{\\partial }[h \\bar{h}]+\\left(\\frac{h \\bar{h}}{3}+\\frac{\\psi _{2}}{2}\\right)(\\bar{h} \\bar{\\partial } h-h \\bar{\\partial } \\bar{h}) \\partial [h \\bar{h}]+\\frac{1}{6}(h \\partial \\bar{h}-\\bar{h} \\partial h) \\bar{\\partial }[h \\bar{h} h \\bar{h}]\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{1}{6}(\\bar{h} \\bar{\\partial } h-h \\bar{\\partial } \\bar{h}) \\partial [h \\bar{h} h \\bar{h}]+\\frac{h}{3}(\\bar{\\partial }[h \\bar{h}] \\partial [h \\bar{h} \\bar{h}]-\\bar{\\partial }[h \\bar{h} \\bar{h}] \\partial [h \\bar{h}])+\\frac{\\bar{h}}{3}(\\partial [h \\bar{h}] \\bar{\\partial }[h \\bar{h} h]-\\partial [h \\bar{h} h] \\bar{\\partial }[h \\bar{h}])\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\biggl \\lbrace 2 h \\partial \\bar{h} \\frac{\\bar{\\partial }}{\\partial _{-}}\\left[\\frac{2}{3} h \\bar{h} 
\\bar{h} \\partial _{-} h-\\frac{2}{3} h \\bar{h} h \\partial _{-} \\bar{h}-2 \\psi _{2} h \\partial _{-} \\bar{h}+\\psi _{2} \\partial _{-}[h \\bar{h}]\\right]-2 h \\partial \\bar{h} \\frac{1}{\\partial _{-}}\\biggl [\\frac{1}{3} (\\partial _{-} h \\bar{\\partial }[h \\bar{h} \\bar{h}]\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\bar{\\partial } \\bar{h} \\partial _{-}[h \\bar{h} h]+\\bar{\\partial } h \\partial _{-}[h \\bar{h} \\bar{h}]+\\partial _{-} \\bar{h} \\bar{\\partial }[h \\bar{h} h])-\\partial _{-}[h \\bar{h}] \\bar{\\partial }[h \\bar{h}]+\\psi _{2}\\left(\\partial _{-} h \\bar{\\partial } \\bar{h}+\\bar{\\partial } h \\partial _{-} \\bar{h}\\right) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{3}{2} \\psi _{2} \\partial _{-} \\bar{\\partial } \\psi _{2}-\\frac{1}{2} \\bar{\\partial } \\psi _{2} \\partial _{-} \\psi _{2}+\\frac{3}{2} \\partial _{-} \\bar{\\partial } \\psi _{4}\\biggl ]+ \\left(\\bar{\\partial }\\left[h \\bar{h}-2 \\frac{1}{\\partial _{-}}\\left[h \\partial _{-} \\bar{h}\\right]\\right]-\\frac{1}{\\partial _{-}}\\left[\\partial _{-} h \\bar{\\partial } \\bar{h}+\\bar{\\partial } h \\partial _{-} \\bar{h}\\right]-\\frac{3}{2} \\bar{\\partial } \\psi _{2}\\right) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!", "\\times \\left(2 h \\partial \\left[\\frac{1}{3} h \\bar{h} \\bar{h}+\\frac{1}{\\partial _{-}}\\left[\\psi _{2} \\partial _{-} \\bar{h}-\\bar{h} \\bar{h} \\partial _{-} h\\right]\\right]+2 \\partial \\bar{h}\\left[\\frac{1}{3} h \\bar{h} h-\\frac{3}{2} \\psi _{2} h\\right]\\right) + c.c.", "\\biggl \\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\biggl (\\frac{1}{3}(h \\bar{h})^{2}-3 \\psi _{4}-3 \\psi _{2} h \\bar{h}+\\frac{9}{4} \\psi _{2}^{2}\\biggl ) \\bar{\\partial } h \\partial \\bar{h}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\left(h \\bar{h}-\\frac{3}{2} \\psi _{2}\\right)\\left(2 \\bar{\\partial } h \\partial \\left[\\frac{1}{3} h \\bar{h} \\bar{h}+\\frac{1}{\\partial _{-}}\\left[\\psi _{2} \\partial _{-} \\bar{h}-\\bar{h} \\bar{h} \\partial _{-} h\\right]\\right]+\\text{ c.c.", "}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\left(h \\bar{h}-\\frac{3}{2} \\psi _{2}\\right)\\left\\lbrace \\bar{\\partial }\\left[h \\bar{h}-2 \\frac{1}{\\partial _{-}}\\left[h \\partial _{-} \\bar{h}\\right]\\right]-\\frac{1}{\\partial _{-}}\\left[\\partial _{-} h \\bar{\\partial } \\bar{h}+\\bar{\\partial } h \\partial _{-} \\bar{h}\\right]-\\frac{3}{2} \\bar{\\partial } \\psi _{2}\\right\\rbrace \\left\\lbrace \\partial \\left[h \\bar{h}-2 \\frac{1}{\\partial _{-}}\\left[\\bar{h} \\partial _{-} h\\right]\\right]\\right.", "\\nonumber \\\\ &&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\left.-\\frac{1}{\\partial _{-}}\\left[\\partial _{-} \\bar{h} \\partial h+\\partial \\bar{h} \\partial _{-} h\\right]-\\frac{3}{2} \\partial \\psi _{2}\\right\\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!", "-\\left\\lbrace 2 \\bar{\\partial } h \\frac{\\partial }{\\partial _{-}}\\left(-\\frac{2}{15} \\bar{h} \\partial _{-}[h \\bar{h}]^{2}-\\frac{2}{3} \\psi _{2} \\bar{h} \\partial _{-}[h \\bar{h}]+\\left(\\frac{8}{15}(h \\bar{h})^{2}+\\frac{4}{3} \\psi _{2} h \\bar{h}+\\psi _{4}+\\frac{1}{2} \\psi _{2}^{2}\\right) \\partial _{-} \\bar{h}\\right)+\\text{ c.c.", "}\\right\\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\left\\lbrace \\left( \\partial \\left[ h \\bar{ h } - 2 \\frac{ 1 }{ \\partial _ { - } } [ \\bar{ h } \\partial _ { - } h ] \\right] - \\frac{ 1 }{ \\partial _ { - } } [ \\partial _ { - } \\bar{ h } \\partial h + \\partial 
\\bar{ h } \\partial _ { - } h ] - \\frac{ 3 }{ 2 } \\partial \\psi _ { 2 } \\right)\\right.\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\times \\biggl (\\frac{ \\bar{ \\partial } }{ \\partial _ { - } } \\left[\\frac{2}{3} h \\bar{h} \\bar{h} \\partial _{-} h-\\frac{2}{3} h \\bar{h} h \\partial _{-} \\bar{h} -2 \\psi _{2} h \\partial _{-} \\bar{h}+\\psi _{2} \\partial _{-}[h \\bar{h}]\\right]\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _{-}}\\biggl [\\frac{1}{3}\\left(\\partial _{-} h \\bar{\\partial }[h \\bar{h} \\bar{h}]+\\bar{\\partial } \\bar{h} \\partial _{-}[h \\bar{h} h]+\\bar{\\partial } h \\partial _{-}[h \\bar{h} \\bar{h}]+\\partial _{-} \\bar{h} \\bar{\\partial }[h \\bar{h} h]\\right)-\\partial _{-}[h \\bar{h}] \\bar{\\partial }[h \\bar{h}]+\\psi _{2}\\left(\\partial _{-} h \\bar{\\partial } \\bar{h}+\\bar{\\partial } h \\partial _{-} \\bar{h}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{3}{2} \\psi _{2} \\partial _{-} \\bar{\\partial } \\psi _{2}-\\frac{1}{2} \\bar{\\partial } \\psi _{2} \\partial _{-} \\psi _{2}+\\frac{3}{2} \\partial _{-} \\bar{\\partial } \\psi _{4}\\biggl ]\\biggl )+\\text{ c.c.", "}\\biggl \\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-2 \\partial \\left[\\frac{1}{3} h \\bar{h} \\bar{h}+\\frac{1}{\\partial _{-}}\\left[\\psi _{2} \\partial _{-} \\bar{h}-\\bar{h} \\bar{h} \\partial _{-} h\\right]\\right] \\bar{\\partial }\\left[\\frac{1}{3} h h \\bar{h}+\\frac{1}{\\partial _{-}}\\left[\\psi _{2} \\partial _{-} h-h h \\partial _{-} \\bar{h}\\right]\\right]\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\biggl \\lbrace \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{M}\\left[\\partial _{-}^{2} h \\frac{1}{\\partial _{-}^{3}}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)+\\frac{1}{3}\\left(h \\bar{h} \\partial _{-} h-h h \\partial _{-} \\bar{h}\\right)\\right]-\\frac{1}{\\partial _{-}^2}\\left(\\partial _{-} M \\partial _{-} \\bar{h}\\right)\\biggl [-\\frac{1}{2}\\frac{\\partial \\bar{\\partial }}{\\partial _-^2}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } h \\partial _{-} \\bar{h}\\right)\\biggl ]- M\\bar{h}\\left[ \\bar{\\partial }h \\partial \\bar{h}+\\frac{4}{3} h \\partial \\bar{\\partial } \\bar{h}+4 \\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\right]-2 \\frac{1}{\\partial _{-}^{2}}\\left(\\partial _{-} M \\partial _{-} \\bar{h}\\right)\\biggl [\\frac{1}{2}\\bar{\\partial } h \\partial \\bar{h}+h \\partial \\bar{\\partial } \\bar{h}-\\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } h \\partial _{-} \\bar{h}\\right)-\\frac{1}{4} \\frac{\\partial \\bar{\\partial }}{\\partial _{-}^{2}}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\biggl ]+2 M \\partial \\bar{h} \\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } h \\partial _{-} \\bar{h}+h \\partial _{-} \\bar{\\partial } \\bar{h}-\\partial _{-} \\bar{\\partial } h \\bar{h})+ c.c \\right.", "\\biggl \\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } M \\partial _{-} \\bar{h}+M \\partial _{-} \\bar{\\partial }\\bar{h}-\\partial _{-} \\bar{\\partial }M \\bar{h}\\right) \\frac{1}{\\partial _{-}}\\left(2 \\partial _{-} h \\partial \\bar{h}+\\partial _{-} \\partial h \\bar{h}-h \\partial _{-} 
\\partial \\bar{h}\\right) \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\biggl \\lbrace \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\left[\\partial _{-}^{2} M \\frac{1}{\\partial _{-}^{3}}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)+\\frac{1}{3}\\left(M \\bar{h} \\partial _{-} h-M h \\partial _{-} \\bar{h}\\right)\\right]-\\frac{1}{\\partial _{-}^2}\\left(\\partial _{-} h \\partial _{-} \\bar{M}\\right)\\biggl [-\\frac{1}{2}\\frac{\\partial \\bar{\\partial }}{\\partial _-^2}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } h \\partial _{-} \\bar{h}\\right)\\biggl ]- h\\bar{M}\\left[ \\bar{\\partial }h \\partial \\bar{h}+\\frac{4}{3} h \\partial \\bar{\\partial } \\bar{h}+4 \\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\right]-2 \\frac{1}{\\partial _{-}^{2}}\\left(\\partial _{-} h \\partial _{-} \\bar{M}\\right)\\biggl [\\frac{1}{2}\\bar{\\partial } h \\partial \\bar{h}+h \\partial \\bar{\\partial } \\bar{h}-\\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } h \\partial _{-} \\bar{h}\\right)-\\frac{1}{4} \\frac{\\partial \\bar{\\partial }}{\\partial _{-}^{2}}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\biggl ]+2 h \\partial \\bar{M} \\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } h \\partial _{-} \\bar{h}+h \\partial _{-} \\bar{\\partial } \\bar{h}-\\partial _{-} \\bar{\\partial } h \\bar{h})+ c.c \\right.\\biggl \\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } h \\partial _{-} \\bar{M}+h \\partial _{-} \\bar{\\partial }\\bar{M}-\\partial _{-} \\bar{\\partial }h \\bar{M}\\right) \\frac{1}{\\partial _{-}}\\left(2 \\partial _{-} h \\partial \\bar{h}+\\partial _{-} \\partial h \\bar{h}-h \\partial _{-} \\partial \\bar{h}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\biggl \\lbrace \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\left[\\partial _{-}^{2} h \\frac{1}{\\partial _{-}^{3}}\\left(\\partial _{-} M \\partial _{-} \\bar{h}\\right)+\\frac{1}{3}\\left(h \\bar{M} \\partial _{-} h-h M \\partial _{-} \\bar{h}\\right)\\right]-\\frac{1}{\\partial _{-}^2}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\biggl [-\\frac{1}{2}\\frac{\\partial \\bar{\\partial }}{\\partial _-^2}\\left(\\partial _{-} M \\partial _{-} \\bar{h}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } M \\partial _{-} \\bar{h}\\right)\\biggl ]- h\\bar{h}\\left[ \\bar{\\partial }M \\partial \\bar{h}+\\frac{4}{3} M \\partial \\bar{\\partial } \\bar{h}+4 \\partial _{-} M \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\right]-2 \\frac{1}{\\partial _{-}^{2}}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\biggl [\\frac{1}{2}\\bar{\\partial } M \\partial \\bar{h}+M \\partial \\bar{\\partial } \\bar{h}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\partial _{-} M \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } M \\partial _{-} \\bar{h}\\right)-\\frac{1}{4} \\frac{\\partial \\bar{\\partial }}{\\partial _{-}^{2}}\\left(\\partial _{-} M \\partial _{-} \\bar{h}\\right)\\biggl ]+2 h \\partial \\bar{h} \\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } M \\partial _{-} \\bar{h}+M 
\\partial _{-} \\bar{\\partial } \\bar{h}-\\partial _{-} \\bar{\\partial } M \\bar{h})+ c.c \\right.\\biggl \\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } h \\partial _{-} \\bar{h}+h \\partial _{-} \\bar{\\partial }\\bar{h}-\\partial _{-} \\bar{\\partial }h \\bar{h}\\right) \\frac{1}{\\partial _{-}}\\left(2 \\partial _{-} M \\partial \\bar{h}+\\partial _{-} \\partial M \\bar{h}-M \\partial _{-} \\partial \\bar{h}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\biggl \\lbrace \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{h}\\left[\\partial _{-}^{2} h \\frac{1}{\\partial _{-}^{3}}\\left(\\partial _{-} h \\partial _{-} \\bar{M}\\right)+\\frac{1}{3}\\left(h \\bar{h} \\partial _{-} M-h h \\partial _{-} \\bar{M}\\right)\\right]-\\frac{1}{\\partial _{-}^2}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\biggl [-\\frac{1}{2}\\frac{\\partial \\bar{\\partial }}{\\partial _-^2}\\left(\\partial _{-} h \\partial _{-} \\bar{M}\\right)\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } h \\partial _{-} \\bar{M}\\right)\\biggl ]- h\\bar{h}\\left[ \\bar{\\partial }h \\partial \\bar{M}+\\frac{4}{3} h \\partial \\bar{\\partial } \\bar{M}+4 \\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{M}\\right]-2 \\frac{1}{\\partial _{-}^{2}}\\left(\\partial _{-} h \\partial _{-} \\bar{h}\\right)\\biggl [\\frac{1}{2}\\bar{\\partial } h \\partial \\bar{M}+h \\partial \\bar{\\partial } \\bar{M}\\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\partial _{-} h \\frac{\\partial \\bar{\\partial }}{\\partial _-} \\bar{M}+\\frac{1}{\\partial _{-}}\\left(\\partial \\bar{\\partial } h \\partial _{-} \\bar{M}\\right)-\\frac{1}{4} \\frac{\\partial \\bar{\\partial }}{\\partial _{-}^{2}}\\left(\\partial _{-} h \\partial _{-} \\bar{M}\\right)\\biggl ]+2 h \\partial \\bar{h} \\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } h \\partial _{-} \\bar{M}+h \\partial _{-} \\bar{\\partial } \\bar{M}-\\partial _{-} \\bar{\\partial } h \\bar{M})+ c.c \\right.\\biggl \\rbrace \\nonumber \\\\&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!-\\frac{1}{\\partial _{-}}\\left(2 \\bar{\\partial } h \\partial _{-} \\bar{h}+h \\partial _{-} \\bar{\\partial }\\bar{h}-\\partial _{-} \\bar{\\partial }h \\bar{h}\\right) \\frac{1}{\\partial _{-}}\\left(2 \\partial _{-} h \\partial \\bar{M}+\\partial _{-} \\partial h\\bar{M}-h \\partial _{-} \\partial \\bar{M}\\right)\\,\\ ,$ with $M = -\\frac{1}{2}\\,\\frac{1}{\\partial _-}{\\biggl \\lbrace }2\\,\\partial _-^2h\\frac{1}{\\partial _-^3}(\\partial _-h\\partial _-\\bar{h})+\\partial _-h\\frac{1}{\\partial _-^2}(\\partial _-h\\partial _-\\bar{h})+\\frac{1}{3}(h\\bar{h}\\partial _-h-hh\\partial _-\\bar{h})\\,{\\biggr \\rbrace }\\ .$ The last 16 lines of (REF ) arise as a consequence of applying the field redefinition in (REF ) on the quartic interaction vertices in (REF ).", "As done earlier at order $\\kappa ^2$ , at this order also, interaction vertices containing a $\\partial _+$ (including ones resulting from (REF )), need to be eliminated.", "This is achieved by the following field redefinition $h &&\\!\\!\\!\\!\\!\\!\\!\\!\\rightarrow h + 2\\kappa ^4\\biggl \\lbrace \\frac{1}{\\partial _-}\\biggl [\\frac{2}{3}h\\partial _-\\bar{h}M - \\frac{5}{6}\\partial _-h\\bar{h}M - \\frac{1}{2}h\\bar{h}\\partial _-M + \\frac{1}{2}\\partial _-\\bar{M}h h + \\frac{1}{6}\\bar{M}h\\partial _-h\\nonumber \\\\&&-\\partial _-^2h\\frac{1}{\\partial _-^3}\\left(\\partial 
_-{M}\\partial _-\\bar{h} \\right) -\\partial _-^2M\\frac{1}{\\partial _-^3}\\left(\\partial _-{h}\\partial _-\\bar{h} \\right) -\\partial _-^2h\\frac{1}{\\partial _-^3}\\left(\\partial _-\\bar{M}\\partial _-{h} \\right) - \\partial _-h\\left(\\psi _4 + \\frac{\\psi _2^2}{2}\\right) +\\frac{1}{3}h\\partial _-[h\\bar{h}]^2 \\nonumber \\\\&&+ h\\psi _2\\partial _-[h\\bar{h}] - \\frac{h^2}{3}\\left(\\frac{1}{3}\\partial _-[h\\bar{h}\\bar{h}] + 2\\psi _2\\partial _-\\bar{h}\\right) -\\frac{2h \\bar{h}}{3}\\left(\\frac{1}{3}\\partial _-[h\\bar{h}h] + 2\\psi _2\\partial _-h\\right) \\biggl ]\\nonumber \\\\&&-2 \\partial _-h\\frac{1}{\\partial _-^3}(\\partial _-M\\partial _-\\bar{h}) -2 \\partial _-h\\frac{1}{\\partial _-^3}(\\partial _-h\\partial _-\\bar{M}) +\\partial _-h\\frac{1}{\\partial _-}(4\\psi _4 + \\psi _2^2) - \\frac{1}{15}(h\\bar{h})^2h \\nonumber \\\\&&-\\frac{1}{2}M\\psi _2 + \\frac{1}{2}\\partial _-h\\frac{1}{\\partial _-^2}(M\\partial _-\\bar{h}) + \\frac{1}{2}\\partial _-h\\frac{1}{\\partial _-^2}(\\bar{M}\\partial _-{h}) \\biggl \\rbrace \\ .$ Since the $\\partial _+$ that occurs in (REF ) appears linearly, it can always be shifted to a higher order." ], [ "Spinor helicity and amplitudes", "Now that we have perturbative expansions of the closed form action, we can examine these vertices in momentum space.", "Before doing so, we briefly review the intimate link between light-cone gauge and spinor helicity variables [6].", "A four-vector $p_\\mu $ may be rewritten as a matrix $p_{a\\dot{a}}$ using the Pauli matrices $\\sigma _{\\mu }=(-\\mathbf {1},\\sigma )$ $p_{\\mu }\\left(\\sigma ^{\\mu }\\right)_{a \\dot{a}} \\equiv p_{a \\dot{a}} =\\left(\\begin{array}{cc}-p_{0}+p_{3} & p_{1}-i p_{2} \\\\p_{1}+i p_{2} & -p_{0}-p_{3}\\end{array}\\right)=\\sqrt{2}\\left(\\begin{array}{cc}-p_{-} & \\bar{p} \\\\p & -p_{+}\\end{array}\\right)\\ .$ The determinant of this matrix is the length of the four-vector $p_\\mu $ $\\operatorname{det}\\left(p_{a \\dot{a}}\\right)=-2\\left(p \\bar{p}-p_{+} p_{-}\\right)=-p^{\\mu } p_{\\mu } .$ The on-shell condition for a light-like vector $p_{\\mu }$ is $p_{+}=\\frac{p \\bar{p}}{p_{-}}$ .", "We define holomorphic and anti-holomorphic spinors by $\\lambda _{a}=\\frac{2^{\\frac{1}{4}}}{\\sqrt{p}_{-}}\\left(\\begin{array}{c}p_{-} \\\\-p\\end{array}\\right), \\quad \\tilde{\\lambda }_{\\dot{a}}=-\\left(\\lambda _{a}\\right)^{*}=-\\frac{2^{\\frac{1}{4}}}{\\sqrt{p}_{-}}\\left(\\begin{array}{c}p_{-} \\\\-\\bar{p}\\end{array}\\right),$ such that $\\lambda _{a} \\tilde{\\lambda }_{\\dot{a}}$ reproduces (REF ) on-shell.", "We define the off-shell holomorphic and anti-holomorphic spinor products [8], $\\langle i j\\rangle =\\sqrt{2}\\, \\frac{p^{i}\\, p_{-}^{j}\\,-\\,p^{j}\\, p_{-}^{i}}{\\sqrt{p_{-}^{i}\\, p_{-}^{j}}}, \\quad [i j]=\\sqrt{2}\\, \\frac{\\bar{p}\\,^{i}\\, p_{-}^{j}\\,-\\,\\bar{p}\\,^{j}\\, p_{-}^{i}}{\\sqrt{p_{-}^{i}\\, p_{-}^{j}}} .$" ], [ "MHV Lagrangians", "In the light-cone gauge, the Lagrangian for Yang-Mills theory has the following helicity structure ${\\it L}_{\\mbox{YM}}={\\it L}_{+-}\\;+\\;{\\it L}_{++-}\\;+\\;{\\it L}_{+--}\\;+\\;{\\it L}_{++--}\\ .$ Feynman diagrams primarily constructed using the first cubic vertex vanish since these correspond to amplitudes involving all `plus' or all but one `plus' helicities [9], [10].", "This motivates the need for a canonical field redefinition, one that maps the kinetic term and the ${\\it L}_{++-}$ (non-MHV) cubic vertex to a purely kinetic term.", "In momentum space, we seek a transformation of the 
form ${\\it L}_{+\\,-}\\,+\\,{\\it L}_{+\\,+\\,-}\\,\\rightarrow \\,{\\it L^{\\prime }}_{+\\,-}\\ .$ This is achieved through field redefinitions of the form [11], [12] $A\\,&\\sim \\,&u_1\\, B\\,+\\,u_2\\, BB\\,+\\,u_3\\, BBB\\,+....\\ ,\\nonumber \\\\\\bar{A}\\,&\\sim \\,&v_1\\,\\widetilde{B}\\,+\\,v_2\\,\\widetilde{B}B\\,+\\,v_3\\,\\widetilde{B}BB\\,+....\\ ,$ where $A,\\bar{A}$ are $+$ and $-$ helicity Yang-Mills fields and $B,\\widetilde{B}$ are the shifted fields.", "The relations (REF ) are purely schematic and we refer the reader to [11], [12] for details.", "The coefficients $u_n,v_n$ are functions of momenta and can be worked out, order by order, based on the requirement in (REF ).", "In this process of eliminating the cubic non-MHV vertex from (REF ), the terms ${\\it L}_{+--}$ and ${\\it L}_{++--}$ generate an infinite series of MHV vertices.", "This new `MHV Lagrangian' [13] has the following structure ${\\it L}_{\\mbox{YM}}={\\it L^{\\prime }}_{+-}\\;+\\;{\\it L^{\\prime }}_{+--}\\;+\\;{\\it L^{\\prime }}_{++--}\\;+\\;{\\it L^{\\prime }}_{+++--}\\;+\\;{\\it L^{\\prime }}_{++++--}\\;+\\;\\ldots \\;\\;\\;\\;.$ The key advantage of this Lagrangian is that all MHV amplitudes are trivial to read off (by simply taking the relevant momenta on-shell).", "The light-cone action for pure gravity is similar in helicity structure to Yang-Mills theory [14], [15].", "The same procedure can be followed to eliminate the non-MHV cubic vertex from (REF ).", "The canonical field redefinitions for $h,\\bar{h}$ in momentum space take the form [16] $h\\, &\\sim \\,& y_1\\, C\\,+\\,y_2\\, CC\\,+\\,y_3\\, CCC\\,+....\\ ,\\nonumber \\\\\\bar{h}\\, &\\sim \\,& z_1\\,\\widetilde{C}\\,+\\,z_2\\,\\widetilde{C}C\\,+\\,z_3\\,\\widetilde{C}CC\\,+....\\ ,$ where $C,\\widetilde{C}$ are the shifted fields and the coefficients $y_n,z_n$ are functions of momenta.", "Once these field redefinitions are applied to the gravity Lagrangian, tree-level (MHV) amplitudes are trivial to read off.", "For example, the cubic MHV term in (REF ) (in momentum space) reads $M_3(+,-,-)=\\frac{{\\langle k\\,l\\rangle }^6}{{\\langle l\\,p\\rangle }^2{\\langle p\\,k\\rangle }^2}\\ ,$ where $p$ is the momentum of the positive helicity field and $k,l$ the momenta of the negative helicity fields.", "At quartic order, these steps produce $M_4(+,+,-,-)=\\frac{\\langle k\\,l\\rangle ^{8}[k\\,l]}{\\langle k\\,l\\rangle \\langle k\\,p\\rangle \\langle k\\,q\\rangle \\langle l\\,p\\rangle \\langle l\\,q\\rangle \\langle p\\, q\\rangle ^{2}}\\ ,$ where $p,q$ are the momenta of the positive helicity fields and $k,l$ the momenta of the negative helicity fields.", "These off-shell MHV vertices factorise, as expected from the KLT relations [17], into products of off-shell MHV vertices in Yang-Mills theory [14].", "MHV structures at order $\\kappa ^4$", "We note that (REF ) does not contain structures like $\\bar{h}\\bar{h}hhhh$ , the MHV vertex at order $\\kappa ^4$ .", "In other words, MHV vertices do not arise naturally at this order and beyond.", "Instead, the field redefinition described above will produce such vertices.", "Alternatively, one could arrive at the $(++++--)$ amplitude by summing over the various Feynman diagrams, at order $\\kappa ^4$ , arising from lower-point vertices.", "Building an MHV Lagrangian [13] to order $\\kappa ^4$ would involve working out the coefficients in (REF ) up to $n=4$ ."
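, "As a quick numerical cross-check of these definitions (not part of the original work), the short Python sketch below constructs the light-cone components $p_{\\pm }=(p^0\\pm p^3)/\\sqrt{2}$ , $p=(p^1+ip^2)/\\sqrt{2}$ and $\\bar{p}=(p^1-ip^2)/\\sqrt{2}$ read off from the matrix above for two random massless momenta, and verifies that on-shell the spinor products reproduce the Mandelstam invariant, $\\langle i\\,j\\rangle [j\\,i]=2\\,p_i\\cdot p_j$ , with the mostly-plus metric implied by the determinant relation.", "```python
import numpy as np

rng = np.random.default_rng(0)

def lightcone(p_spatial):
    # light-cone components of a massless momentum with spatial part p_spatial
    p1, p2, p3 = p_spatial
    p0 = np.sqrt(p1**2 + p2**2 + p3**2)           # on-shell energy
    p_minus = (p0 - p3) / np.sqrt(2)
    p_plus = (p0 + p3) / np.sqrt(2)
    p = (p1 + 1j * p2) / np.sqrt(2)
    pbar = (p1 - 1j * p2) / np.sqrt(2)
    return p_minus, p_plus, p, pbar

def angle(i, j):
    # holomorphic spinor product of momenta i and j
    return np.sqrt(2) * (i[2] * j[0] - j[2] * i[0]) / np.sqrt(i[0] * j[0])

def square(i, j):
    # anti-holomorphic spinor product of momenta i and j
    return np.sqrt(2) * (i[3] * j[0] - j[3] * i[0]) / np.sqrt(i[0] * j[0])

ki = lightcone(rng.normal(size=3))
kj = lightcone(rng.normal(size=3))

# 2 p_i . p_j expressed in light-cone variables (mostly-plus metric)
s_ij = 2 * ((ki[2] * kj[3] + kj[2] * ki[3]).real - ki[1] * kj[0] - kj[1] * ki[0])
print(np.isclose(angle(ki, kj) * square(kj, ki), s_ij))   # expect True
```"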
], [ "Higher-order interaction vertices in light-cone gravity", "Poincaré invariance needs to be explicitly verified in light-cone field theories.", "This check can be used as a tool to impose constraints on field theories in the light-cone gauge.", "Essentially, this means that light-cone consistency checks serve as a first-principles approach to deriving Lagrangians for interacting field theories.", "Specifically, the procedure involves starting with a general ansatz for interaction vertices and allowing the Poincaré algebra to constrain the ansatz.", "A massless field of integer spin $\\lambda $ , in light-cone gauge, has two physical degrees of freedom $\\phi $ and $\\bar{\\phi }$ corresponding to the $+$ and $-$ helicity states respectively.", "The generators of the Poincaré algebra are the momenta $P^-=-i\\frac{\\partial \\bar{\\partial }}{\\partial ^{+}}=-P_+\\ , \\qquad P^+=-i\\partial ^+=-P_-\\ , \\qquad P=-i\\partial \\ , \\qquad \\bar{P}=-i\\bar{\\partial }\\ ,$ the rotation generators $J = (x \\bar{\\partial }-\\bar{x} \\partial - \\lambda ) \\;,\\qquad J^+ = i(x\\partial ^{+}-x^+\\partial )\\ , \\nonumber \\\\J^{+-}=i(x^-\\partial ^{+}-x^+\\frac{\\partial \\bar{\\partial }}{\\partial ^{+}}) \\;,\\qquad J^{-}=i(x\\frac{\\partial \\bar{\\partial }}{\\partial ^{+}}-x^{-}\\partial -\\lambda \\frac{\\partial }{\\partial ^{+}}) \\ ,$ and their complex conjugates.", "We work on the surface $x^+=0\\,$ to simplify calculations.", "The generators are of two kinds: kinematical that do not involve time derivatives $P^+\\;,\\;\\;P\\;,\\;\\;\\bar{P}\\;,\\;\\; J \\;,\\;\\;J^+\\;,\\;\\;{\\bar{J}}^+ \\;\\;{\\mbox{and}}\\;\\; J^{+-}\\ ,$ and dynamical that do $P^-\\;,\\;\\;J^-\\;,\\;\\;{\\bar{J}}^-\\ .$ Dynamical generators pick up corrections when interactions are switched on.", "We introduce the Hamiltonian variation $\\delta _{\\mathcal {H}}\\phi \\equiv \\partial ^-\\phi =\\lbrace \\phi ,H\\rbrace =\\frac{\\partial \\bar{\\partial }}{\\partial ^{+}}\\,\\phi \\ ,$ where the last equality only holds for the free theory.", "The appendix lists all non-vanishing commutators satisfied by the light-cone Poincaré generators.", "In the following, we adopt the approach of [18]: start with an ansatz for the operator $\\delta _H\\,\\phi $ , work through the list of Poincaré commutators to refine the ansatz and thus arrive at a structure for the Hamiltonian.", "For related discussions, see [19].", "Let us focus on the Hamiltonian describing $n$ fields.", "This would be at order $\\kappa ^{n-2}$ and have the schematic form $\\kappa ^{n-2}\\;\\phi ^c\\,\\bar{\\phi }^d$ ($c,d\\in \\lbrace 0,1,..,n\\rbrace $ and $c+d =n$ ).", "To arrive at such a Hamiltonian, we begin with the ansatz $\\delta _{\\mathcal {H}}^{\\kappa ^{n-2}} \\phi =\\kappa ^{n-2}\\,{\\partial ^{+}}^{\\mu _0}\\biggl \\lbrace [{\\bar{\\partial }}^{{\\alpha }_1} \\,\\partial ^{{\\beta }_1}\\, {\\partial ^{+}}^{{\\mu }_1} \\phi ]\\,...\\,[{\\bar{\\partial }}^{{\\alpha }_c} \\,\\partial ^{{\\beta }_c}\\, {\\partial ^{+}}^{{\\mu }_c}\\phi ]\\,\\;[{\\bar{\\partial }}^{{\\alpha }_{c+1}} \\,\\partial ^{{\\beta }_{c+1}}\\, {\\partial ^{+}}^{{\\mu }_{c+1}}\\bar{\\phi }]\\,...\\,[{\\bar{\\partial }}^{{\\alpha }_{n-1}} \\,\\partial ^{{\\beta }_{n-1}}\\,{\\partial ^{+}}^{{\\mu }_{n-1}} \\bar{\\phi }] \\biggr \\rbrace \\nonumber $ where ${\\alpha }_i$ , $ {\\beta }_i$ , ${\\mu }_i$ are integers to be fixed by the algebra.", "The next step is to work through some of the commutators in the appendix.", "The commutator $[\\delta _{J},\\delta _{H}]$ yields 
$\\sum _{i=1}^{n-1}({\\alpha }_i - {\\beta }_i) + (d-c)\\lambda = 0\\ .$ The next commutator, $[\\delta _{J^{+-}},\\delta _{H}]$ produces the condition $\\sum _{i=0}^{n-1} {\\mu }_i = -1\\ .$ Using (REF ) and dimensional analysis we find (noting that $[\\kappa ] = [L]^{\\lambda -1}$ ) $\\sum _{i=1}^{n-1} ({\\alpha }_i + {\\beta }_i) = 2 + (\\lambda - 2)(c + d - 2)\\ .$ If we assume that there are no negative powers of the transverse derivatives in the Hamiltonian, using (REF ) and (REF ) we obtain $\\sum _{i=1}^{n-1} {\\alpha }_i = (\\lambda - 1)c - d - \\lambda + 3 \\ge 0\\ ,\\\\\\sum _{i=1}^{n-1} {\\beta }_i = (\\lambda - 1)d - c - \\lambda + 3 \\ge 0\\ .$ Consider the implications of these inequalities in the simplest case: for spin-1 fields.", "$\\lambda =1$ implies $c\\,,d\\,\\le \\,2$ from which it follows that $n\\le 4$ .", "This verifies the known result that the Yang-Mills theory action terminates with four-point interaction vertices.", "Now we consider the case of gravity with $\\lambda =2$ .", "Expressions (REF ) and () imply that $\\sum _{i=1}^{n-1} {\\alpha }_i = c - d + 1 \\ge 0\\ ,\\\\\\sum _{i=1}^{n-1} {\\beta }_i = d - c + 1 \\ge 0\\ .$ For an MHV vertex, $d = 2$ .", "From (REF ) and (), we see that $1 \\le c \\le 3$ .", "As $c \\,+\\, d \\,=\\, n$ , it follows that $3 \\le n \\le 5$ for an MHV vertex.", "Exactly as demonstrated in section 2, MHV vertices arise from the cubic, quartic and quintic interaction vertices.", "Also, as verified explicitly by our calculations, the first entirely non-MHV vertices appear in the 6-point result.", "Using $c+d=n$ in (REF ) and () yields $\\frac{n - 1}{2}\\, \\le \\,c\\,,d\\, \\le \\, \\frac{n + 1}{2}\\ .$ This relation allows us to extract information about the general form of even- and odd-point vertices.", "Consider an even-point vertex, where $n\\, = \\,2m$ for some positive integer $m$ .", "We find $m - \\frac{1}{2}\\, \\le \\,c\\,,d\\, \\le \\,m + \\frac{1}{2}\\ ,$ which implies $c = d = m$ .", "Thus, the only allowed structure for an even-point ($n=2m$ ) vertex is $h^m\\,\\,\\bar{h}^m$ .", "Now consider an odd-point vertex, where $n\\, =\\, 2m\\, +\\, 1$ for some positive integer $m$ .", "We find $m\\, \\le \\, c\\,,d \\,\\le \\, m +1\\ .$ If $c=m$ , then $d = m+1$ and consequently odd-point vertices have the structure $h^{m}\\,\\,\\bar{h}^{m+1}$ .", "The other case ($c=m+1$ ) produces the structure $h^{m+1}\\,\\,\\bar{h}^{m}$ .", "These are the only two allowed possibilities at odd order.", "Further, due to the reality of the Lagrangian, both structures appear together (as complex conjugates of one another).", "This demonstrates that MHV terms do not occur naturally beyond the quintic interaction vertex in light-cone gravity.", "Thus, MHV vertices at any order beyond $\\kappa ^3$ occur only through the canonical field redefinitions described in (REF )." ], [ "Acknowledgments", "The work of SA is partially supported by a MATRICS grant - MTR/2020/000073 - of SERB.", "NB and AR are supported by a CSIR NET fellowship and a DAE-DISHA fellowship respectively."
], [ "Light-cone Poincaré algebra", "We define $J^+\\ =\\ \\frac{J^{+1}+iJ^{+2}}{\\sqrt{2}}\\ ,\\quad \\bar{J}^{+}\\ =\\ \\frac{J^{+1}-iJ^{+2}}{\\sqrt{2}}\\ ,\\quad J\\ =\\ J^{12}\\ .$ The non-vanishing commutators of the Poincaré algebra are $&[P^-, J^{+-}]\\ =\\ -i P^- \\ , \\quad &[P^-, J^+] = -i P\\ , \\quad [P^-, \\bar{J}^+]\\ =\\ -i \\bar{P} \\nonumber \\\\ \\nonumber \\\\&[P^+, J^{+-}]\\ = \\ iP^+\\ ,\\quad &[P^+, J^-]\\ =\\ -iP\\ , \\quad [P^+, \\bar{J}^-]\\ =\\ -i \\bar{P} \\nonumber \\\\ \\nonumber \\\\&[P, \\bar{J}^-]\\ =\\ -i P^-\\ , \\quad &[P, \\bar{J}^+] \\ =\\ -iP^+\\ , \\quad [P, J]\\ =\\ P \\nonumber \\\\ \\nonumber \\\\&[\\bar{P}, J^-]\\ = \\ -i P^-\\ , \\quad &[\\bar{P}, J^+]\\ =\\ -iP^+\\ , \\quad [\\bar{P}, J]\\ =\\ -\\bar{P} \\nonumber \\\\ \\nonumber \\\\&[J^-, J^{+-}]\\ =\\ -i J^- \\ , \\quad &[J^-, \\bar{J}^+]\\ =\\ iJ^{+-} + J \\ , \\quad [J^-, J]\\ =\\ J^- \\nonumber \\\\ \\nonumber \\\\&[\\bar{J}^-, J^{+-}]\\ = \\ -i \\bar{J}^- \\ , \\quad &[\\bar{J}^-, J^+]\\ =\\ iJ^{+-} - J \\ , \\quad [\\bar{J}^-, J]\\ =\\ -\\bar{J}^- \\nonumber \\\\ \\nonumber \\\\&[J^{+-}, J^+]\\ =\\ -i J^+ \\ , \\quad &[J^{+-}, \\bar{J}^+]\\ = \\ -i \\bar{J}^+ \\ , \\nonumber \\\\ \\nonumber \\\\&[J^+, J]\\ =\\ J^+\\ , \\quad &[\\bar{J}^+, J]\\ =\\ -\\bar{J}^+\\ .$" ] ]
2207.10416
[ [ "Steady-state two-phase flow of compressible and incompressible fluids in\n a capillary tube of varying radius" ], [ "Abstract We study immiscible two-phase flow of a compressible and an incompressible fluid inside a capillary tube of varying radius under steady-state conditions.", "The incompressible fluid is Newtonian and the compressible fluid is an inviscid ideal gas.", "The surface tension associated with the interfaces between the two fluids introduces capillary forces that vary along the tube due to the variation in the tube radius.", "The interplay between effects due to the capillary forces and the compressibility results in a set of properties that are different from incompressible two-phase flow.", "As the fluids move towards the outlet, the bubbles of the compressible fluid grow in volume due to the decrease in pressure.", "The volumetric growth of the compressible bubbles makes the volumetric flow rate at the outlet higher than at the inlet.", "The growth is not only a function of the pressure drop across the tube, but also of the ambient pressure.", "Furthermore, the capillary forces create an effective threshold below which there is no flow.", "Above the threshold, the system shows a weak non-linearity between the flow rates and the effective pressure drop, where the non-linearity also depends on the absolute pressures across the tube." ], [ "Introduction", "Hydrodynamic properties of the flow of multiple immiscible and incompressible fluids, otherwise known as two-phase flow [1], [2], [3], [4], are controlled by a number of different factors: fluid properties such as the viscosity contrast and surface tension between the fluids, driving parameters such as the applied pressure drop or the flow rate, and geometrical properties of the medium such as the size and shape of the pore space.", "The combined effects of these factors make two-phase flow different and more complex than single phase flow.", "The dimensionless parameters that play a key role to define the flow properties are the ratio between the viscous and capillary forces, referred to as the capillary number, and the ratio between the viscosities of the two fluids.", "Depending on the values of these parameters, the flow generates different types of fingering patterns [5], [6], [7], [8], [9] or stable displacement fronts [10] during invasion processes where one fluid displaces another in the porous medium.", "Displacement processes are transient.", "If one continues to inject after breakthrough, the flow enters a steady state characterized by a situation where the macroscopic flow properties fluctuate or remain constant around well-defined averages.", "A more general form of steady-state flow can be achieved by continuously injecting both fluids simultaneously.", "In this case, the dynamics at the pore scale might have fluid clusters breaking up and forming, while the macroscopic flow parameters still have well-defined averages.", "Over the last decade, it has become clear that steady-state flow deviates from the linear Darcy relationship [11] between the total flow rate and pressure drop over a range of parameters.", "Rather, one finds a power law relationship between pressure drop and the volumetric flow rate [12], [13] in that range.", "In terms of the capillary number, this range is intermediary, with linearity appearing both for lower and higher values [14], [15], [16].", "Theoretical work to understand the physics behind the non-linearity has appeared in e.g.", "[17], [18], [16], and computational studies 
have been performed using Lattice Boltzmann simulations [19] and dynamic pore network modeling [20], [14].", "It is now believed that a fundamental mechanism behind this non-linearity lies in the capillary barriers at the pore throats, which create an effective yield threshold.", "When the viscous forces increase, they overcome the capillary barriers, creating new flow paths.", "This increases the effective mobility, and thus the non-linear behavior appears [21].", "The disorder in the pore-space properties, such as the pore-size distribution [22] and the wetting angle distribution [23], therefore plays a key role in determining the value of the exponent relating the volumetric flow rate and the pressure drop in the non-linear regime.", "The majority of the analytical and numerical approaches mentioned above consider the two fluids to be incompressible, whereas many of the experiments and applications use air as one of the fluids.", "Air is strongly compressible, which introduces complex pore-scale mechanisms such as trapping and coalescence [24], [25].", "Compressibility is relevant to a wide range of applications involving liquid and gas transport in porous media, for example ${\\rm CO}_2$ transport and storage [26], [27], [28] and transport in fuel cells [29].", "Another class of applications where compressibility plays a key role is that involving phase transitions of the fluids, such as boiling and condensation.", "There are industrial applications where such processes are of high importance, for example aerospace vehicle thermal protection [30], high-power electronics cooling systems [31], [32], [33] and chemical reactors [34].", "These applications utilize the high specific surface area of a porous medium with fluid flowing inside, which enhances the heat and mass transfer rates [35], [36].", "There are also natural processes, such as the drying of soil [37], where a liquid-to-gas transition takes place.", "In this paper we present a study of two-phase flow of a mixture of compressible and incompressible fluids in a capillary tube with varying radius.", "We consider two fluids: one is an incompressible Newtonian fluid obeying Poiseuille flow, whereas the other is a compressible ideal gas whose viscosity is assumed to be negligible.", "The fluids flow as a series of bubbles and droplets under a constant pressure drop along the tube.", "In the case of two-phase flow of two incompressible fluids in a corresponding capillary tube, it has been found that the volumetric flow rate depends on the square root of the pressure drop along the tube minus a threshold pressure [38].", "The primary goal of the present work is to determine how this constitutive equation changes when one of the two fluids is compressible.", "A secondary goal of this work is to provide a basis for dynamic pore network modeling [39], [40], [41], [20] of compressible-incompressible fluid mixtures.", "This opens the possibility of incorporating thermodynamic effects, such as boiling, in such models.", "We note that the other dominating computational model in this context, the Lattice Boltzmann model [42], [43], can only incorporate fluids that are weakly compressible [44], [45].", "We describe in Section REF the equations that govern the flow through the capillary tube.", "In Section REF we introduce the boundary conditions used, i.e., how we inject the compressible and incompressible fluids alternately into the tube.", "Section REF describes how the governing equations are integrated in time.", "Section REF presents the results of our 
investigation.", "Section REF defines what we mean by steady-state flow in the context of expanding bubbles.", "In Section REF we investigate how the compressible bubbles grow as they advance along the tube, thus increasing the overall flow rate of the fluids.", "Section REF presents the relation between volumetric flow rate and pressure drop at both the inlet and outlet.", "We summarize our results in Section REF .", "Section REF contains the description of the videos provided in the electronic supplementary material." ], [ "Methodology", "The capillary tube considered in this work is filled with an incompressible and a compressible fluid, immiscible with each other, which flow through it.", "The radius of the capillary tube varies along the flow direction, $x$ .", "The fluids are separated by menisci, and the surface tension associated with these interfaces gives rise to capillary forces.", "The incompressible fluid is a viscous Newtonian liquid obeying Hagen-Poiseuille flow whereas the compressible fluid is an inviscid ideal gas.", "The flow occurs as a plug flow with a series of alternating bubbles and droplets of the two fluids as illustrated in Figure REF .", "There is no fluid film along the tube walls and therefore no coalescence or snap-off taking place inside the tube during the flow.", "We will refer to the compressible and incompressible fluid segments as bubbles and droplets, respectively.", "Figure: Illustration of the tube geometry and the indexed variables.", "The shaded fluid represents the non-wetting compressible fluid and the white fluid represents the more wetting incompressible liquid.", "There are $N=6$ bubbles here, indicated by the numbers $i=1,\\ldots ,6$ .", "The indexed variables $P_i$ , $V_i$ and $n_i$ respectively correspond to the pressure, volume and moles of the $i$ th bubble, whereas $Q_i$ corresponds to the flow rate of the droplet between the $i$ th and $(i+1)$ th bubbles."
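, "To make the bookkeeping of this plug-flow configuration concrete, a minimal Python sketch of the corresponding state variables is given below; it is only an illustrative data structure, not the code used for this work.", "```python
from dataclasses import dataclass, field

@dataclass
class Bubble:
    x_left: float    # left meniscus position x_i^l
    x_right: float   # right meniscus position x_i^r
    moles: float     # n_i, fixed once the bubble has detached from the inlet

@dataclass
class TubeState:
    bubbles: list = field(default_factory=list)  # ordered from inlet (i=1) to outlet (i=N)

    def bubble_volume(self, i, A):
        # V_i = A (x_i^r - x_i^l), with A the average cross-sectional area
        b = self.bubbles[i]
        return A * (b.x_right - b.x_left)
```"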
], [ "Governing equations", "We assume that at a given time the system contains $N$ compressible bubbles denoted by $i=1,2,\\ldots ,N$ from left to right as shown in Figure REF .", "The volume $V_i$ and the pressure $P_i$ of the $i$ th bubble are connected through the ideal gas law, $\\displaystyle P_iV_i = n_iRT \\;,$ where $n_i$ is the number of moles of gas present inside the bubble, $R$ is the ideal gas constant and $T$ is the temperature.", "The tube is assumed to have a constant average cross-sectional area ($A$ ) in terms of the fluid volume and therefore $V_i=A(x_i^r-x_i^l)$ where $x_i^l$ and $x_i^r$ are the positions of the left and right menisci of the $i$ th bubble respectively.", "The volume of an incompressible droplet on the other hand will remain constant throughout the flow and the flow rate will depend on the pressures of the two compressible bubbles bordering it.", "The volumetric flow rate of the incompressible droplet between $i$ and $i+1$ is denoted by $Q_i$ , and follows the constitutive equation [46], $\\displaystyle Q_i = \\frac{A^2}{8\\pi \\mu (x_{i+1}^l-x_{i}^r)}\\left[P_i - P_{\\rm c}(x_i^r) - P_{i+1} + P_{\\rm c}(x_{i+1}^l)\\right] \\;,$ where $\\mu $ is the viscosity of the incompressible fluid and $P_c(x)$ is the capillary pressure at $x$ .", "Here we consider the incompressible fluid to be more wetting with respect to the pore walls than the compressible fluid, thus determining the sign of $P_c$ in Equation REF .", "We model $P_c$ by using the Young-Laplace equation [2], $\\displaystyle P_{\\rm {c}}(x) = \\frac{2\\gamma }{r(x)} \\;,$ where $r(x)$ is the radius of the tube at $x$ .", "Here $\\gamma =\\sigma \\cos (\\theta )$ where $\\sigma $ is the surface tension between the fluids and $\\theta $ is the wetting angle of the fluid with respect to the tube wall.", "The variation in the radius of the tube shown in Figure REF is modeled by $\\displaystyle r(x) = \\frac{1}{2}\\left[w + 2a\\cos \\left(\\frac{2h\\pi x}{L}\\right)\\right]$ where $L$ is the tube length, $w$ is the average radius, $a$ is the amplitude of oscillation and $h$ is the number of periods." 
], [ "Boundary conditions", "The system is driven by a constant pressure drop $\\Delta P = P_0-P_L$ where $P_0$ and $P_L$ are the pressures at the inlet ($x=0$ ) and outlet ($x=L$ ) respectively.", "The two fluids are injected alternatively at the inlet.", "Depending on which fluid that is being injected and which fluid that is leaving the tube, there will be different configurations as illustrated in Figure REF .", "When a bubble is entering at the inlet [Figure REF (a)] or leaving at the outlet [Figure REF (c)], the pressure in that bubble is given by $P_0$ or $P_L$ respectively.", "This is because the compressible fluid has no viscosity and thus the pressure inside a bubble is uniform.", "The pressures inside all other bubbles are calculated using Equation REF .", "When a droplet is entering at the inlet [Figure REF (b) and (c)] or leaving at the outlet [Figure REF (a) and (b)], the respective flow rates $Q_0$ and $Q_N$ are given by, $\\begin{split}\\displaystyle Q_0 & = \\frac{A^2}{8\\pi \\mu x_1^l}\\left[P_0 - P_1 + P_{\\rm c}(x_1^l)\\right] \\; {\\rm and}\\\\Q_N & = \\frac{A^2}{8\\pi \\mu (L-x_N^r)}\\left[P_N - P_{\\rm c}(x_N^r) - P_L\\right] \\;,\\end{split}$ whereas the flow rates of the remaining droplets are calculated using Equation REF .", "Figure: Illustration of different configurationswhere bubbles and droplets are colored as gray and whiterespectively.", "In (a), a bubble is entering at the inlet andtherefore P 1 =P 0 P_1 = P_0 there.", "In (c), a bubble is leaving at theoutlet, therefore P N =P L P_N=P_L there.", "A droplet is entering at theinlet in (b) and (c), and leaving at the outlet in (a) and(b).", "The flow rates of such droplets are calculated usingEquation .The simulation is started with the tube completely filled with the incompressible fluid.", "The two fluids are then injected alternately through the inlet using small time steps.", "Whenever the injection is switched to a different fluid, a new menisci is created and the injection is continued for that fluid until the bubble or the droplet being injected has reached a given length, $b_{\\rm C}$ or $b_{\\rm I}$ respectively.", "For each new bubble or droplet, a new value for $b_{\\rm C}$ or $b_{\\rm I}$ is determined using the following scheme: $\\displaystyle b_{\\rm C} = b_{\\rm min} + kF_{\\rm C}b_{\\rm max} \\quad \\; {\\rm and} \\quad b_{\\rm I} = b_{\\rm min} + kF_{\\rm I}b_{\\rm max} \\;,$ where $k$ is chosen from a uniform distribution of random numbers between 0 and 1.", "$F_{\\rm C}$ and $F_{\\rm I}$ are the tentative values of the fractional flows for the bubbles and droplets respectively.", "The two parameters $b_{\\rm max}$ and $b_{\\rm min}$ set the smallest and largest allowed sizes of any bubble or droplet.", "We consider here $b_{\\rm min}=L/10^4$ and $b_{\\rm max}=L/50$ .", "The parameters $b_{\\rm C}$ and $b_{\\rm I}$ decide the initial sizes of the bubbles and droplets just after they detach from the inlet.", "For the compressible fluid, this determines the number of moles $n_i$ inside a bubble, $\\displaystyle n_i = \\frac{Ab_{\\rm C}P_0}{RT} \\;,$ which remains constant for that bubble throughout the flow after it gets detached from the inlet." 
], [ "Updating the menisci", "At any time, the two menisci bordering a droplet inside the tube move with the same velocities.", "The velocities of the menisci are calculated from the velocities $v_i$ of the droplets using Equations REF and REF , $\\displaystyle {{x_{i}^r}}{t} = {{x_{i+1}^l}}{t} = v_i = \\frac{Q_i}{A} \\;.$ We solve these ordinary differential equations using an explicit Euler scheme, thus updating positions of all menisci by choosing a small time step $\\Delta t$ .", "Depending on the position of the menisci and the corresponding capillary pressures, the bubbles may compress or expand.", "If a bubble compresses at any time step, it means the left and right interfaces of that bubble approach each other.", "This necessitates the choice of time step $\\Delta t$ to be sufficiently small, as otherwise, the two menisci around that bubble will collapse after the time step.", "We deal with this situation in the following way.", "First we calculate a time $\\Delta t_1$ that is needed to pass one pore-volume of incompressible fluid through the tube, $\\displaystyle \\Delta t_1 = \\frac{8\\pi \\mu L^2}{A(P_0-P_L)} \\;.$ Next, we check for every bubble $i$ if $(v_{i-1}-v_i)>0$ , that is, whether the two menisci bordering the bubble are approaching each other in that time step.", "If this criterion is found to be true for any of the bubbles $j$ , we measure the time it will take for the two menisci to collapse, $\\displaystyle \\Delta t_2^j = \\frac{x_j^r-x_j^l}{v_{j-1}-v_j} \\;.$ After calculating $\\Delta t_1$ and $\\Delta t_2^j$ , we determine a time $\\Delta t$ for that step from, $\\displaystyle \\Delta t = {\\rm min}(a^*\\Delta t_1, b^*\\Delta t_2^j) \\;,$ which means that if there is a possibility for a bubble to collapse during the time step, we chose $\\Delta t$ from the minimum of $a^*\\Delta t_1$ and all of $b^*\\Delta t_2^j$ .", "If there is no possibility of collapse, we use $\\Delta t$ equal to $a^*\\Delta t_1$ .", "For the simulations presented in this paper, we set $a^*=10^{-8}$ and $b^*=10^{-6}$ .", "We perform steady-state simulations considering a tube of length $L=100\\,{\\rm cm}$ with $w=1\\,{\\rm cm}$ , $a=0.25\\,{\\rm cm}$ and $h=30$ (Equation REF ).", "The viscosity of the incompressible fluid is $\\mu =0.001\\,{\\rm Pa.s}$ , the ideal gas constant is $R=8.31\\,{\\rm J/(mol.K)}$ and the temperature is kept fixed throughout the simulation at $T=293\\,{\\rm K}$ .", "We fix $F_c=0.4$ (Equation REF ) which sets the volumetric fractional flow of the compressible fluid at the inlet around that value.", "We perform simulations varying the pressure drops ($\\Delta P = P_0-P_L$ ) as well as the absolute outlet pressure with different values of the surface tension, $\\gamma $ .", "Figure: Total volumetric flow rate q T i q_{\\rm T}^{\\rm i} at the inlet as a function of the injected porevolume V p V_p for the outlet pressure P L =1 kPa P_L=1\\,{\\rm kPa} and thesurface tension γ=0.09N/m\\gamma =0.09\\,{\\rm N/m} and for the pressuredrops ΔP=1,2,4\\Delta P=1,2,4 and 8 kPa 8\\,{\\rm kPa} respectively.", "Thesteady-state values of the flow rates are measured by takingaverages in the range of 20 to 40 pore volumes as indicated bythe dashed lines." 
], [ "Steady-state flow", "The steady state is defined by the volumetric flow rates of the fluids fluctuating around a stable average.", "Due to the expansion of the compressible fluid, which we will discuss in a moment, the volumetric flow rate of the fluids changes as the fluids flow towards the outlet.", "We define the quantities $Q_{\\rm T}^{\\rm i}$ , $Q_{\\rm C}^{\\rm i}$ , $Q_{\\rm I}^{\\rm i}$ as the average steady-state flow rates for the total, compressible and incompressible fluids at the inlet and $Q_{\\rm T}^{\\rm o}$ , $Q_{\\rm C}^{\\rm o}$ , $Q_{\\rm I}^{\\rm o}$ as those at the outlet.", "The inlet and outlet flow rates are measured by tracking the displacement of the first meniscus nearest to the inlet and the last meniscus near the outlet, which are either the left or the right meniscus of the first ($i=1$ ) and the last ($i=N$ ) bubbles.", "The instantaneous flow rates of the bubbles and droplets are measured as $q_{\\rm C}^{\\rm i}=A\\sum \\Delta x_1^r/\\sum \\Delta t$ for $x_1^l=0$ , $q_{\\rm I}^{\\rm i}=A\\sum \\Delta x_1^l/\\sum \\Delta t$ for $x_1^l>0$ and $q_{\\rm C}^{\\rm o}=A\\sum \\Delta x_N^l/\\sum \\Delta t$ for $x_N^l=L$ , $q_{\\rm I}^{\\rm o}=A\\sum \\Delta x_N^r/\\sum \\Delta t$ for $x_N^R<L$ .", "This measurement is performed after every $0.05$ pore-volumes of fluid are injected and the sum is therefore over the time steps in between.", "The total flow rates are therefore given by, $q_{\\rm T}^{\\rm i,o} = q_{\\rm C}^{\\rm i,o}+q_{\\rm I}^{\\rm i,o}$ .", "This provides the measurement of the injected and outlet flow rates as a function of the injected pore volumes or of the time.", "In Figure REF , we plot $q_{\\rm T}^{\\rm i}$ as a function of the pore-volumes ($V_p$ ) injected for $P_L=1\\,{\\rm kPa}$ .", "The pore-volume $V_p$ is defined as the ratio between the volume of the inject fluids and the volume of the total pore space of the tube, which provides an estimate of how many times the pore space was flushed with the fluids.", "The plots show that $q_{\\rm T}^{\\rm i}$ increases with time at the beginning of the flow.", "This increase in $q_{\\rm T}^{\\rm i}$ is due to the decrease in the effective viscosity of the system caused by the injection of inviscid compressible gas into the tube filled with viscous incompressible fluid.", "After the injection of a few pore volumes, $q_{\\rm T}^{\\rm i}$ fluctuates around a constant average ($Q_{\\rm T}^{\\rm i}$ ) shown by the horizontal dashed lines which defines the steady state.", "We run our simulations for 40 pore volumes of fluid where the steady-state averages are taken after 20 pore volumes injected to ensure that a steady state has been reached." 
], [ "Bubble growth", "As a compressible bubble moves along the tube, the volume of the bubble increases due to the decrease in the pressure towards the outlet [47].", "The bubble can also grow due to other mechanisms, such as the increase in temperature or a phase transition between liquid and gas phases [48], [49], but these phenomena are not studied here.", "A simulation with a single bubble inside a short tube is shown in the supplementary material which illustrates that the bubble increases in size as it flows towards the outlet.", "To understand how this growth depends on different flow parameters in the steady state, we define the growth function $G_{\\rm C}(x)$ by, $\\displaystyle G_{\\rm C}(x) = \\frac{V(x) - V_{\\rm 0}}{V_0}\\;,$ where $V_0$ and $V(x)$ are the volume of a given bubble initially after detaching from the inlet and when its center is at $x$ .", "We measure $G_{\\rm C}$ by including all the bubbles that are not attached to the inlet or outlet and calculate the time average value of $(V(x)-V_0)/V_0$ in the investigated time interval, where $x$ is the center of the bubble.", "Figure: Plot of the bubble growth G C (x)G_{\\rm C}(x)in the steady state as a function of the scaled position x/Lx/Linside the tube for zero surface tension, γ=0\\gamma = 0.", "The twoplots show the results for the same set of pressure drops ΔP\\Delta P with different outlet pressures P L P_L.Figure: Variation of the pressure P(x)P(x) [kPa]inside a compressible bubble along the tube during steady stateflow.", "P(x)P(x) shows a linear behavior for different values ofΔP\\Delta P and P L P_L.Figure REF shows the variation of $G_{\\rm C}(x)$ along the tube for two different outlet pressures, $P_L=1\\,{\\rm kPa}$ and $100\\,{\\rm kPa}$ where we plot the results for the same set of pressure drops $\\Delta P$ .", "These results are with zero surface tension, $\\gamma =0$ .", "There are a few details to note here.", "First, the plots show that $G_{\\rm C}(x)$ increases with an increase in $\\Delta P$ .", "In addition, $G_{\\rm C}(x)$ also depends on the absolute pressures at the inlet and outlet, since we can see that the curves are non-linear functions of $x$ for $P_L=1\\,{\\rm kPa}$ , whereas for $P_L=100\\,{\\rm kPa}$ , they show linear behavior.", "Furthermore, $G_{\\rm C}(x)$ approaches $\\Delta P/P_L$ at $x=L$ for all the data sets.", "To explain the dependency of $G_{\\rm C}(x)$ on $\\Delta P$ and $P_L$ , we recall Equation REF and rewrite Equation REF as, $\\displaystyle G_{\\rm C}(x) = \\frac{P_0-P(x)}{P(x)} \\;,$ where $P(x)$ is the pressure inside a bubble at $x$ .", "For $x=L$ , $P(x)= P_L$ and therefore $G_{\\rm C}(L)=\\Delta P/P_L$ as observed.", "In Figure REF , we plot $P(x)$ , averaged over different time steps in the steady state, for the two outlet pressures, $P_L=1\\,{\\rm kPa}$ and $100\\,{\\rm kPa}$ .", "Both of the plots show linear variation along $x$ with the slope $-\\Delta P$ .", "We therefore have $P(x)=-x\\Delta P/L+P_L+\\Delta P$ and thus, $\\displaystyle \\frac{G_{\\rm C}(x)}{n_P} = \\frac{x/L}{1+n_P(1-x/L)} \\;,$ where $n_P=\\Delta P/P_L$ .", "This leads to $\\displaystyle \\frac{G_{\\rm C}(x)}{n_P} \\sim {\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{1}{n_P}\\left(\\frac{1}{1-x/L}-1\\right) & {\\rm for} \\quad n_P \\gg 1 \\;, \\\\x/L & {\\rm for} \\quad n_P \\ll 1 \\;,\\end{array}\\right.", "}$ which explains the concave and linear variation of $G_C$ as function of $x/L$ observed in Figure REF (a) and (b) respectively.", "The growth of the bubbles along the tube 
is therefore a function of $n_P=\\Delta P/P_L$ .", "Figure: Variation of the bubble growth G C G_{\\rm C}, scaled with n P =ΔP/P L n_P=\\Delta P/P_L, with x/Lx/L.", "Results areplotted for the same sets of n P n_P for two different values ofP L P_L.", "The left and right figures correspond to γ=0\\gamma =0 and0.03N/m0.03\\,{\\rm N/m} respectively.", "In each plot, the line correspondsto P L =1 kPa P_L=1\\,{\\rm kPa} and the symbols correspond to P L =100 kPa P_L=100\\,{\\rm kPa}.In Figure REF we plot $G_{\\rm C}/n_P$ for the two outlet pressures $P_L$ with the same sets of values of $n_P$ for (a) $\\gamma = 0$ and (b) $\\gamma = 0.3\\,{\\rm N/m}$ .", "The plots show that the results for the same values of $n_P$ follow the same curves, irrespective of the outlet pressures $P_L$ .", "Furthermore, for the non-zero surface tension case in Figure REF (b), $G_{\\rm C}$ also shows a periodic oscillation along $x$ when both the $n_P$ and $P_L$ are small, that is, for $P_L=1\\,{\\rm kPa}$ and $n_P\\le 1$ .", "In addition, there is no data point for $n_P\\le 0.3$ with $P_L=1\\,{\\rm kPa}$ , as the movement of the bubbles stopped due to high capillary barriers.", "This suggests the existence of an effective threshold pressure, below which there will be no flow through the tube.", "This threshold depends on both $\\gamma $ and $P_L$ , which we will explore more in the following section.", "We show the different characteristics of flow in the videos provided in electronic supplementary material." ], [ "Effective rheology", "Equations REF and REF resist analytical solutions even in the case when there is only a single compressible bubble in the tube.", "This is due to the pressure in the compressible bubble being inversely proportional to the difference in position of the two menisci surrounding it, whereas the motion of the two surrounding incompressible fluids is determined by the cosine of the positions of the same menisci.", "These equations, even in this simplest case, are therefore highly non-linear with an essential singularity lurking in the very neighborhood where we seek solutions.", "We therefore stick to numerical analysis in the following.", "Figure: Plot of the flow rates for the total(Q T i,o Q_{\\rm T}^{\\rm i,o}), compressible (Q C i,o Q_{\\rm C}^{\\rm i,o}) andincompressible (Q I i,o Q_{\\rm I}^{\\rm i,o}) fluids at the inlet (leftcolumn) and at the outlet (right column) for P L =1 kPa P_L=1\\,{\\rm kPa} asa function of ΔP\\Delta P. 
The different sets in each plotcorrespond to different values of the surface tension indicated inthe legends.", "The dashed line in each plot has a slope of 1.Due to the volumetric growth of the compressible bubbles during their flow towards the outlet, the volumetric flow rate varies along the tube.", "In addition, this volumetric growth is a function of the pressures, making the average saturation and the effective viscosity of the two fluids inside the tube pressure dependent.", "These two mechanisms together control the effective rheological behavior of the steady-state flow.", "In Figure REF we show the variation of the volumetric flow rates ($Q_{\\rm T}^{\\rm i,o}$ , $Q_{\\rm C}^{\\rm i,o}$ , $Q_{\\rm I}^{\\rm i,o}$ ) as functions of the pressure drop $\\Delta P$ for the outlet pressure $P_L=1\\,{\\rm kPa}$ and for different values of the surface tension ($\\gamma $ ).", "Note the differences between the inlet and outlet flow rates for the total and for the each component of flow.", "For the incompressible fluid, there is no significant increase in the outlet flow rate compared to its inlet flow rate (third row in Figure REF ) whereas there is a noticeable increase in the outlet flow rate of the compressible fluid (second row in Figure REF ).", "This increase in $Q_{\\rm C}^{\\rm o}$ effectively increases the total flow rate at the outlet (first row in Figure REF ).", "The dashed line in Figure REF has a slope equal to 1.", "The total flow rates show deviations from this dashed line.", "For the inlet, $Q_{\\rm T}^{i}$ shows small deviations from the dashed line for $\\gamma >0$ at small $\\Delta P$ .", "Whereas at the outlet, the deviations are significantly higher due to the increase in the volumetric growth of the compressible fluid.", "Another point to note in Figure REF is that there is a minimum value of $\\Delta P$ , below which there is no data point available.", "This is due to the existence of a threshold pressure below which the flow stops.", "In the supplementary material we show a simulation video in this regime where one can observe that the flow of the bubbles stops at a certain time step.", "The threshold is due to the capillary forces at the menisci between the two fluids that create capillary barriers at the narrowest points along the tube.", "Such threshold was also observed in the case of two-phase flow of two incompressible fluids in a tube with variable radius [38].", "There, it was shown analytically that the average flow rate $Q$ in the steady state varies with the applied pressure drop $\\Delta P$ as, $Q\\sim \\sqrt{\\Delta P^2 - P_{\\rm t}^2}$ where $P_{\\rm t}$ is the effective threshold pressure.", "When ${\\Delta P} -P_{\\rm t} \\ll P_{\\rm t}$ , this relationship translates to $Q\\sim \\sqrt{{\\Delta P} - P_{\\rm t}}$ , that is, the flow rate varies with the excess pressure drop to the power of $0.5$ .", "The threshold pressure depends on the surface tension and on the configuration of the menisci positions inside the tube.", "If the total capillary barrier is higher than the applied pressure drop, the flow stops.", "This is similar here for the two-phase flow with one of the fluids being compressible.", "Figure: Plot of the volumetric inlet flow rateQ T i Q_{\\rm T}^{\\rm i} as a function of the excess pressure drop(ΔP-P t )(\\Delta P-P_{\\rm t}) for P L =1 kPa P_L=1\\,{\\rm kPa} and 100 kPa 100\\,{\\rm kPa}, where the values of P t P_{\\rm t} are obtained from aminimization of the least square fit error ϵ\\epsilon .", "Theminimization is illustrated in 
the insets of (a) and (b) forγ=0.05N/m\\gamma =0.05\\,{\\rm N/m} (green) and 0.09N/m0.09\\,{\\rm N/m}(purple).", "The dashed lines in (a) and (b) have a slope 1 whereasin (c), the lower and upper dashed lines have slopes 1 and 1.31.3respectively.We assume a general relation between the average volumetric flow rates $Q_{\\rm T}^{\\rm i,o}$ and the pressure drop $\\Delta P$ as, $\\displaystyle Q_{\\rm T}^{\\rm i,o} \\sim (\\Delta P - P_{\\rm t})^{\\beta _{\\rm i,o}}$ where $\\beta _{\\rm i,o}$ is the corresponding exponent.", "In order to find both the effective threshold pressure $P_T$ and the exponent $\\beta _{\\rm i,o}$ from the measurements of $Q_{\\rm T}^{\\rm i,o}$ , we adopt an error minimization technique that was used in earlier studies [18], [23].", "There we choose a series of trial values for $P_{\\rm t}$ and calculate the mean square error $\\epsilon $ for the linear least square fit by fitting the data points with $\\log (Q)\\sim \\log (\\Delta P-P_{\\rm t})$ .", "Then we select the value of $P_{\\rm t}$ that corresponds to the minimum value of $\\epsilon $ , implying the best fit of the data points with Equation REF .", "This is illustrated in the insets of Figure REF (a) and (b).", "The slope for the selected threshold $P_{\\rm t}$ provides the exponent $\\beta _{\\rm i,o}$ .", "The variation of the total inlet and outlet flow rates $Q_{\\rm T}^{\\rm i,o}$ with the excess pressure drop $(\\Delta P-P_{\\rm t})$ are plotted in Figure REF for the two outlet pressures $P_L=1$ and $100\\,{\\rm kPa}$ .", "The data sets show agreement with Equation REF with the selected values of $P_{\\rm t}$ and $\\beta $ .", "There is a noticeable difference between the slopes for the inlet and outlet flow rates for $P_L=1\\,{\\rm kPa}$ whereas for $P_L=100\\,{\\rm kPa}$ they are similar.", "For $P_L=100\\,{\\rm kPa}$ the data points for both $Q_{\\rm T}^{\\rm i}$ and $Q_{\\rm T}^{\\rm o}$ follow a slope of $\\approx 1.0$ whereas for $P_L=1\\,{\\rm kPa}$ , the data points for $Q_{\\rm T}^{\\rm i}$ and $Q_{\\rm T}^{\\rm o}$ follow the slopes of $\\approx 1.0$ and $1.3$ respectively.", "These are indicated by the dashed lines in the figures.", "Figure: Variation of the threshold pressureP t P_{\\rm t} and the exponents β i,o \\beta _{\\rm i,o} as functions ofthe effective surface tension γ\\gamma for P L =1P_L=1 and 100 kPa 100\\,{\\rm kPa}.", "P t P_{\\rm t} increases with the increase of γ\\gamma andthe values are much higher for P L =1 kPa P_L=1\\,{\\rm kPa} compared toP L =100 kPa P_L=100\\,{\\rm kPa}.", "The exponent β i \\beta _{\\rm i} for the inletflow rate are close to 1 for both the values of P L P_L whereasfor the outlet flow rate β o ≈1.3\\beta _{\\rm o}\\approx 1.3 forP L =1 kPa P_L=1\\,{\\rm kPa}.", "For P L =100 kPa P_L=100\\,{\\rm kPa}, β o \\beta _{\\rm o}remains close to β i \\beta _{\\rm i}.", "The dashed horizontal linesindicate the value 1.01.0 of the yy axis.The variations of $P_{\\rm t}$ and $\\beta _{\\rm i,o}$ with the surface tension $\\gamma $ are plotted in Figure REF .", "The data points were calculated by considering different ranges of $\\Delta P$ and taking averages over the ranges, and the corresponding standard deviations are plotted as error bars.", "The threshold pressure $P_{\\rm t}$ is zero at $\\gamma =0$ and then increases gradually with $\\gamma $ which shows that the threshold appears due to capillary forces.", "The increase in $P_{\\rm t}$ with $\\gamma $ appears to be linear here which is similar to the case of two incompressible fluids, where the linear 
dependence of $P_{\\rm t}$ on the surface tension was shown analytically [38].", "Additionally for the compressible flow here, the thresholds also depend on the outlet pressure $P_L$ .", "For the lower outlet pressure $P_L=1\\,{\\rm kPa}$ , the thresholds are systematically higher compared to those for $P_L=100\\,{\\rm kPa}$ for the whole range of $\\gamma $ .", "Furthermore, the exponents $\\beta _{\\rm i,o}$ also depend on the outlet pressure as seen from the Figures REF (b) and (c).", "The difference is more visible for the exponents related to the outlet flow rates than the inlet.", "For the inlet flow rate, $\\beta _{\\rm i}$ has values around $\\approx 0.95$ and $1.02$ for $P_L=1\\,{\\rm kPa}$ and $100\\,{\\rm kPa}$ respectively, showing almost linear dependence for both the cases.", "For the outlet flow rates, $\\beta _{\\rm o}$ remains close to $\\beta _{\\rm i}$ for $P_L=100\\,{\\rm kPa}$ whereas for $P_L=1\\,{\\rm kPa}$ , $\\beta _{\\rm o}$ increases to $\\approx 1.3$ .", "This increase in $\\beta _{\\rm o}$ compared to $\\beta _{\\rm i}$ reflects the dependence of the volumetric growth $G_{\\rm C}(x)$ of the bubbles on $P_L$ , indicating an underlying dependence of the rheological behavior on the absolute inlet or outlet pressures.", "However, at this point we are unable to describe how the two parameters $P_{\\rm t}$ and $\\beta $ scale with $P_L$ , which needs further study.", "Existing studies of the power-law volumetric flow rate-pressure drop relation for porous media have shown the existence of different regimes characterized by different exponents.", "These studies involve experiments [12], [17], [13], [14], [15], [16], Lattice Boltzmann simulations [19], pore-network modeling [18], [14] and analytical calculations [17], [18], [50], [16].", "There are three regimes, an intermediate non-linear regime where the flow rate $Q$ increases at a rate much faster than the applied pressure drop $\\Delta P$ with a power law exponent larger than one and up to a value around 2.5.", "There are in addition two linear regimes for either smaller [19], [15], [16] or larger [19], [18], [14] volumetric flow rates than the non-linear regime.", "This allows the definition of a lower and upper crossover pressure drop.", "A simple explanation for these three regimes may be drawn from the study of the conductivity of a disordered network of threshold resistors [21] which is based on the following idea.", "Each resistor has a threshold voltage to start conducting the current and then the current increases linearly.", "In a network with links of such flow properties, there will be a regime when new conducting paths will appear when increasing the global pressure drop.", "The increase in the flow rate through each path together with the increase in the number of paths leads to an effective increase of $Q$ faster than $\\Delta P$ .", "This results in the non-linear exponent being higher than 1, the value of which depends on the distribution of the thresholds in each link [50].", "The linear regime above this non-linear regime appears from all the available paths being conducting whereas the linear regime below appears from the flow being flow in single percolating channels without interfaces.", "With this idea, the experimental [15], [16] and numerical [19] observations of two-phase flow in porous media showing linear variation of flow rate in this low pressure regime therefore indicate that the flow in single channels consisting of many pores are linear, which is similar to what we have found for 
the lower outlet pressure in the present compressible/incompressible flow case." ], [ "Conclusions", "We have studied the flow of alternating compressible bubbles and incompressible droplets through a capillary tube with variable radius.", "The motion of the bubbles was given by the model Equations REF and REF , thus assuming the compressible fluid to be an ideal gas with zero viscosity, whereas the incompressible fluid is Newtonian.", "The incompressible fluid is more wetting than the compressible gas, but not to a degree that films form.", "We switch between injecting the compressible and incompressible fluid at intervals so that the fractional flow rate is essentially constant at the inlet.", "We fix a pressure drop along the tube in addition to an ambient pressure.", "This creates steady-state flow conditions in the tube.", "The compressible bubbles expand as they move from the higher pressure region at the inlet towards the lower pressure at the outlet.", "This expansion accelerates the incompressible fluid, thus making the volumetric flow rate larger at the outlet than at the inlet.", "The lower the ambient pressure is, the stronger this effect is.", "We measure volumetric flow rate at the inlet, finding essentially a linear relationship between the volumetric flow rate and the pressure drop.", "However, there is a threshold pressure that needs to be overcome in order to have flow through the tube.", "At the outlet, we find that the volumetric flow rate is still linear in the excess pressure drop when the ambient pressure is low.", "However, when the ambient pressure is high, the volumetric flow rate at the outlet becomes proportional to the excess pressure to a power of around $1.3$ .", "This behavior is very different from that of two incompressible fluids moving through a corresponding tube: Here the volumetric flow rate, being the same at the inlet and the outlet, is proportional to the square root of the excess pressure.", "We expected the flow rate-pressure drop constitutive relations to be different in this compressible/incompressible case than that of two incompressible fluids.", "However, that we should find linearity was a big surprise.", "A precise explanation as to why this is so, is still lacking.", "Besides these surprising results, this work makes a first step in implementing the modeling of compressible/incompressible fluid mixtures in dynamic network models.", "We may then envision using more sophisticated equations of state for the compressible fluid beyond the ideal gas law.", "This allows the consideration of e.g.", "phase transitions such as boiling and condensation in porous media." 
], [ "Supplementary material", "The electronic supplementary material contains videos showing different flow characteristics.", "The videos can be found in the list of ancillary files in the arXiv abstract page of this article.", "We considered a tube with $L=10\\,{\\rm cm}$ , $w=1\\,{\\rm cm}$ , $a=0.25\\,{\\rm cm}$ and $h=5$ (Equation REF ).", "The simulations were performed for $P_L=1\\,{\\rm kPa}$ and $\\gamma =0.2\\,{\\rm N/m}$ .", "The compressible bubbles are colored with magenta whereas the incompressible droplets are colored with black.", "The videos are not in real time.", "We show four different simulations with different values of $\\Delta P$ : Flow of a single bubble of compressible gas in incompressible fluid.", "Here $\\Delta P = 5\\,{\\rm kPa}$ .", "The video shows the increase in the volume of the bubble as it approaches towards the outlet.", "Injection of multiple compressible bubbles and incompressible droplets at a very low pressure drop, $\\Delta P = 0.3\\,{\\rm kPa}$ .", "The flow stops after a certain time when several interfaces appeared in the tube.", "This shows the existence of a total capillary barrier, which is higher than the applied pressure drop here.", "Two-phase flow of multiple compressible bubbles and incompressible droplets at a low pressure drop, $\\Delta P =0.4\\,{\\rm kPa}$ .", "Here the bubbles speed up and slow down as they flow, showing the combined effect of the surface tension and the shape of the tube.", "The bubbles also grow in volume towards the outlet.", "Two-phase flow of multiple compressible bubbles and incompressible droplets at a higher pressure drop, $\\Delta P =3\\,{\\rm kPa}$ .", "The bubbles do not show any significant slowing down in this case, indicating the capillary forces being negligible compared to the viscous pressure drop.", "The volumetric expansion of the compressible bubbles can also be observed here." ], [ "Acknowledgments", "We thank Federico Lanza, Marcel Moura and Håkon Pedersen for helpful discussions.", "This work is supported by the Research Council of Norway through its Centers of Excellence funding scheme, project number 262644." ] ]
2207.10503
[ [ "Multi-modal Retinal Image Registration Using a Keypoint-Based Vessel\n Structure Aligning Network" ], [ "Abstract In ophthalmological imaging, multiple imaging systems, such as color fundus, infrared, fluorescein angiography, optical coherence tomography (OCT) or OCT angiography, are often involved to make a diagnosis of retinal disease.", "Multi-modal retinal registration techniques can assist ophthalmologists by providing a pixel-based comparison of aligned vessel structures in images from different modalities or acquisition times.", "To this end, we propose an end-to-end trainable deep learning method for multi-modal retinal image registration.", "Our method extracts convolutional features from the vessel structure for keypoint detection and description and uses a graph neural network for feature matching.", "The keypoint detection and description network and graph neural network are jointly trained in a self-supervised manner using synthetic multi-modal image pairs and are guided by synthetically sampled ground truth homographies.", "Our method demonstrates higher registration accuracy as competing methods for our synthetic retinal dataset and generalizes well for our real macula dataset and a public fundus dataset." ], [ "Introduction", "For the diagnosis of retinal disease, such as diabetic retinopathy, glaucoma, or age-related macular degeneration, and for the long-term monitoring of their progression, ophthalmological imaging is essential.", "Images are recorded over varying time periods using different multi-modal imaging systems, such as color fundus (CF), infrared (IR), fluorescein angiography (FA), or the more recent optical coherence tomography (OCT) and OCT angiography (OCTA).", "For the comparison and fusion of the information from different images by the ophthalmologists, multi-modal image registration is required to accurately align the vessel structures in the images.", "Multi-modal retinal registration methods can be summarized into global methods to predict an affine transform or a homography and local methods that estimate a non-rigid displacement field.", "In this work, we concentrate on feature-based methods that apply keypoint detection, description, matching, and computation of the global transform.", "Classical methods estimate e.g.", "the partial intensity invariant feature descriptor (PIIFD) [5] and Harris corner detector.", "This was extended by [24] using speed up robust feature (SURF) detector, PIIFD, and robust point matching, called SURF–PIIFD–RPM.", "With the use of deep learning, some steps or even all steps are replaced by neural networks.", "The retinal method DeepSPA [12] uses a convolutional neural network (CNN) to classify patches of vessel junctions based on a step pattern representation.", "The keypoint detection and description network RetinaCraquelureNet [19] is trained on small multi-modal retinal image patches centered at vessel bifurcations and uses mutual nearest neighbor matching and random sample consensus (RANSAC) [7] for homography estimation.", "In GLAMpoints [22], homography guided self-supervised learning is applied to train a UNet  [17] for keypoint detection combined with RootSIFT [1] descriptors for retinal image data.", "The weakly supervised method by Wang et al.", "[25] sequentially trains a vessel segmentation network using style transfer and the mean phase image as guidance, the self-supervised SuperPoint [6] network, and an outlier network using context normalization [26], which they adapt for the homography estimation 
task.", "End-to-end networks are often designed to directly compute the parameters of the transform.", "To predict affine and non-rigid transforms, there is for instance the image and spatial transformer networks (ISTN) for structure-guided image registration that learns a representation of the segmentation maps during training [13].", "An approach [2] on spatial transformers and CycleGANs [28] for multi-modal image registration uses cross-modality translation between the modalities to employ a mono-modality metric.", "Figure: Our keypoint-based vessel structure aligning network (KPVSA-Net) for multi-modal retinal registration uses a CNN to extract cross-modal features of the vessel structures in both images and a graph neural network for descriptor matching.", "Our method is end-to-end and self-supervisedly trained by using synthetically augmented image pairs.", "During inference, the homography is predicted based on the matches and scores using weighted direct linear transform (DLT).In this paper, we propose an end-to-end deep learning method for multi-modal retinal image registration, named Keypoint-based Vessel Structure Aligning Network (KPVSA-Net).", "We employ prior knowledge by extracting deep features of the vessel structure using the keypoint detection and description network RetinaCraquelureNet [19].", "In contrast to vessel segmentation based methods, we extract the features directly from multi-modal images to learn distinctive cross-modal descriptors.", "We build an end-to-end network for feature extraction and matching by extending RetinaCraquelureNet and combining it with the graph neural network SuperGlue [18].", "We jointly train both networks using a novel self-supervised keypoint and descriptor loss and a self-supervised matching loss guided by sampled homographies.", "We created a synthetically augmented dataset by training an image translation technique to generate synthetic retinal images.", "Our network incorporates and connects knowledge about the local and global position, visual appearance, and context between keypoints showing high registration accuracy.", "Figure: Keypoint confidence heatmap (from low confidence blue to high confidence red) without (middle left) and with (middle right) our novel self-supervised keypoint and descriptor loss in combination with the differentiable keypoint refinement.", "The extracted keypoints (most right) of our multi-modal registration method are color-coded based on their confidence (red is high).Our proposed method is trained end-to-end in a self-supervised manner guided only by synthetically sampled ground truth homographies.", "To apply the self-supervised technique to multi-modal images, we make use of unpaired image-to-image translation using the cycle consistency [28].", "For each modality combination, we train one CycleGAN [28] to augment the training dataset by generating synthetic images of the other modalities.", "To train our registration method, we sample random homographies on the fly to transform the second image.", "Afterwards, we crop both images at the same randomly selected position to a fixed patch size and recalculate the homographies based on the new corner points.", "We augment both patches independently with photometric augmentations such as color jittering, Gaussian blurring, sharpening, Gaussian random noise, and small random crops.", "Prior to warping, we jointly augment the full-size images with geometric transformations such as horizontal and vertical flipping, rotation, and elastic deformation by 
random noise." ], [ "Multi-modal Retinal Keypoint Detection and Description Network", "We employ and extend the fully-convolutional RetinaCraquelureNet [20], [19] for our end-to-end pipeline (see fig-01).", "The network architecture is composed of a ResNet [9] backbone and a keypoint detection and description head.", "The keypoint detection head has two output channels (“vessel”, “background”), which we reduce to only one channel to directly predict the keypoint confidence score.", "We set the feature dimension of the description head to 256-D to reduce the parameters for end-to-end learning.", "We pretrain the network from scratch using multi-modal retinal image patches centered at supervised keypoint positions with a binary cross-entropy loss for keypoint detection and a cross-modal bidirectional quadruplet descriptor loss [20], [19].", "Figure: Qualitative results for one IR-OCTA example.Then, we fit the network into our pipeline.", "In order to directly use the output of the detection head as keypoint confidence scores, we add a batch normalization layer after the last $1 \\times 1$ convolutional layer and add a sigmoid activation after the bicubic upsampling layer.", "With these modifications the predictions of the detector are scaled to the range zero to one.", "Then, we apply non-maximum suppression (NMS) to the keypoint confidence heatmap and extract the top $N_\\text{max}$ keypoints from the NMS heatmap [20], [19].", "This step is non-differentiable, therefore we apply a differentiable subpixel keypoint refinement (DKR) that allows the gradients to flow back to the small regions around the keypoints.", "Inspired by recent works [14], [15], [11], [27], we extract $5 \\times 5$ patches from the confidence heatmap which are centered at the $N_\\text{max}$ keypoint positions and compute for each patch $p$ the spatial softargmax of the normalized patch $(p - s_\\text{NMS})/t$ , where $s_\\text{NMS}$ is the value of the NMS score map and $t$ the temperature for the softmax.", "The refined keypoint positions are the sum of the initial coordinates and the soft subpixel coordinates.", "The corresponding descriptors are linearly interpolated at the refined keypoint coordinates [20].", "Based on the idea of the bidirectional quadruplet descriptor loss ($\\mathcal {L}_{\\text{Desc}}$ ) [20], we design a self-supervised keypoint and descriptor loss ($\\mathcal {L}_{\\text{KD}}$ ) that is guided by ground truth homographies instead of labeled matching keypoint pairs as in [20].", "Within the detected keypoints in both images, positive keypoint pairs are automatically determined based on mutual nearest neighbor matching of the keypoint coordinates whose reprojection error is smaller than a threshold $\\tau $ .", "For the positive descriptor pairs (anchor $\\mathbf {d}_a$ and positive counterpart $\\mathbf {d}_p$ ), the closest non-matching descriptors in both directions are selected analogously to [20]: $\\begin{split}\\mathcal {L}_{\\text{Desc}}(\\mathbf {d}_a,\\mathbf {d}_p,\\mathbf {d}_u,\\mathbf {d}_v) = &\\max [0, m + D(\\mathbf {d}_a,\\mathbf {d}_p) - D(\\mathbf {d}_a,\\mathbf {d}_u)]\\\\+ &\\max [0, m + D(\\mathbf {d}_p,\\mathbf {d}_a) - D(\\mathbf {d}_p,\\mathbf {d}_v)],\\end{split}$ where $m$ is the margin, $D(x,y)$ the Euclidean distance, $\\mathbf {d}_u$ the closest negative to $\\mathbf {d}_a$ , and $\\mathbf {d}_v$ is the closest negative to $\\mathbf {d}_p$ .", "However, since this self-supervised descriptor loss formulation depends on the number of matchable keypoints in the 
images with a coordinate distance smaller than $\\tau $ , it could encourage the reduction of the number of positive pairs $N_p$ to minimize the loss.", "To account for that and to refine the keypoint positions, we include a term in our loss that also minimizes the reprojection error of the coordinates of the positive pairs (anchor $\\mathbf {x}_a$ , and warped coordinates of the positive counterpart $\\hat{\\mathbf {x}}_p$ ), which is weighted by $\\beta $ .", "This leads to our self-supervised keypoint and descriptor loss: $\\begin{split}\\mathcal {L}_{\\text{KD}}(\\mathbf {d}_a,\\mathbf {d}_p,\\mathbf {d}_u,\\mathbf {d}_v, \\mathbf {x}_a, \\hat{\\mathbf {x}}_p) = \\frac{1}{N_p}\\sum _i^{N_p} \\mathcal {L}_{\\text{Desc}}(\\mathbf {d}_{ai},\\mathbf {d}_{pi},\\mathbf {d}_{ui},\\mathbf {d}_{vi})\\\\+ \\frac{\\beta }{N_p^2}\\sum _i^{N_p} D(\\mathbf {x}_{ai},\\hat{\\mathbf {x}}_{pi}).\\end{split}$ Table: Quantitative evaluation for our synthetic retina test dataset.", "Models with * are fine-tuned on our synthetic augmented dataset." ], [ "Keypoint Matching Using a Graph Convolutional Neural Network", "For keypoint matching, we incorporate the graph convolutional neural network SuperGlue [18], which consists of three building blocks, into our method (see fig-01).", "The keypoint coordinates are encoded as high dimensional feature vectors using a multilayer perceptron, and a joint representation is computed for the descriptors and the encoded keypoints [18].", "The attentional graph neural network (GNN) uses alternating self- and cross-attention layers to learn a more distinctive feature representation.", "The nodes of the graph are the keypoints' representations of both images.", "The self-attention layers connect the keypoints within the same image, while the cross-attention layers connect a keypoint to all keypoints in the other image.", "Information is propagated along both the self- and cross-edges via messages.", "At each layer, the keypoints' representations for each image are updated by aggregation of the messages using multi-head attention [23].", "Lastly, a $1 \\times 1$ convolutional layer is used to obtain the final descriptors [18].", "The optimal matching layer is used to compute the partial soft assignment matrix $\\mathbf {P}$ , which assigns to each keypoint at most one keypoint in the other image.", "Based on the score matrix of the similarity of the descriptors, $\\mathbf {P}$ is iteratively solved using the differentiable Sinkhorn algorithm [21].", "To account for unmatchable keypoints, a dustbin is added to the $M \\times N$ score matrix [18].", "The negative log-likelihood of $\\mathbf {P}$ is minimized [18]: $\\mathcal {L}_{\\text{SG}}(\\mathbf {P},\\mathcal {M},\\mathcal {I},\\mathcal {J}) = - \\kappa \\sum _{(i,j) \\in \\mathcal {M}} \\log \\mathbf {P}_{i,j} - \\sum _{i \\in \\mathcal {I}} \\log \\mathbf {P}_{i,N+1} - \\sum _{j \\in \\mathcal {J}} \\log \\mathbf {P}_{M+1,j},$ where $\\kappa $ is the weight for the positive matches $\\mathcal {M}$ , $\\mathcal {I}$ denotes the unmatchable keypoints of image $I_I$ , and $\\mathcal {J}$ the unmatchable keypoints of image $I_J$ , which are all those whose reprojection errors are higher than $\\tau $ .", "We compute the $\\mathcal {L}_{\\text{SG}}$ and the ground truth matches twice, once based on matching from image $I_I$ to $I_J$ and once vice versa, i.e.", "the matching loss is the sum of both.", "Table: Quantitative evaluation for the IR-OCT-OCTA dataset.", "Models with * are fine-tuned on our synthetic augmented
dataset." ], [ "Multi-modal Retinal Datasets", "For our IR-OCT-OCTA retinal dataset, provided by the Department of Ophthalmology, FAU Erlangen-Nürnberg, the maculas of 46 controls were measured by Spectralis OCT II, Heidelberg Engineering up to three times a day resulting in 134 images per modality and 402 images in total.", "The multi-modal image triplets consist of IR images ($768 \\times 768$ ) and en-face OCT and OCTA projections of the SVP layer (Par off) of the macula (both $512 \\times 512$ ).", "We split the images for each modality into training: 89, validation: 15, and test set: 30.", "Secondly, we split the public color fundus (CF: $576 \\times 720\\times 3$ ) and fluorescein angiography (FA: $576 \\times 720$ ) dataset [8], [4] that consists of 29 image pairs of controls and 30 pairs of patients with diabetic retinopathy into training: 35, validation: 10, and test set: 14.", "For our synthetic dataset, we generate 1119 multi-modal pairs of real and synthetic images for training, 205 for validation, and 386 for testing.", "Due to our self-supervised training, we do not need any annotations, hence we only annotated 6 control points per image for the test sets.", "OCT, OCTA, and FA images are inverted for our experiments to depict all vessels in dark." ], [ "Implementation and Experimental Details", "KPVSA-Net is implemented in PyTorch and we use the Kornia framework [16] for data augmentation, homography estimation using weighted direct linear transform (DLT), and image warping.", "To initialize both networks, we pretrain our adapted version of RetinaCraquelureNet (RCN: 256-D) from scratch (backbone + detection head: learning rate of $\\eta =1\\cdot 10^{-3}$ , 100 epochs; complete network: $\\eta =1\\cdot 10^{-4}$ , 25 epochs; for both: with early stopping and linear decay of $\\eta $ to 0 starting at 10) and use the SuperGlue weights of the Outdoor dataset.", "Then, we train KPVSA-Net end-to-end using Adam solver for 100 epochs with $\\eta =1\\cdot 10^{-4}$ for SuperGlue and $\\eta =1\\cdot 10^{-6}$ for the detector and descriptor heads of RCN (frozen ResNet backbone) and then decay $\\eta $ linearly to zero for the next 400 epochs with early stopping and a batch size of 8.", "The keypoint and descriptor loss and matching loss are equally weighted, $m=1$ , $\\beta =300$ , $t=0.1$ , $\\kappa =0.45$ , $\\tau =3$ , training patch size of 384, $N_\\text{max} = 512$ (synthetic dataset) or $N_\\text{max} = 1024$ (real datasets), and matching score threshold of $0.2$ for DLT.", "For the comparison, we used the original configuration of RetinaCraquelureNet (RCN: 512-D) [19] and we fine-tuned SuperPoint [6] (SP*) and GLAMpoints [22] with our synthetic multi-modal dataset by extending the training code of [11], [22].", "Then, we jointly fine-tuned SuperGlue and the descriptors of the pretrained SuperPoint model (SP+SG)* for 100 epochs using our synthetic dataset and training strategy.", "Likewise, we jointly fine-tuned SuperGlue and the SP* model (SP*+SG)*.", "For the feature-based comparison methods, we use $N_\\text{max}$ of 2000 or 4000 (synthetic/real), mutual nearest neighbor matching and RANSAC [7] (reprojection error of 5) for homography estimation using OpenCV.", "For vessel segmentation and Dice score computation, we trained a UNet with synthetically augmented multi-modal images using CycleGANs based on the CF images and manual segmentations of the HRF dataset [3], [10].", "The registration success rate for the real datasets is computed for the mean Euclidean error ($\\text{ME}$ 
) and maximum Euclidean error ($\\text{MAE}$ ) of 6 manual target control points and warped source control points using the predicted homography and an error threshold $\\epsilon $ .", "For the synthetic dataset, we compute the success rate of the mean homography error ($\\text{MHE}$ ) [6] for different $\\epsilon $ based on warping the corner points of the source image using the ground truth and the predicted homography." ], [ "Results", "The quantitative results of the synthetic dataset are summarized in Table REF .", "Our KPVSA-Net shows the highest success rates for homography estimation for low error thresholds and in total the highest Dice score of the registered images.", "For error thresholds larger than 4, the two SuperPoint+SuperGlue variants show comparable results.", "All SuperGlue-based methods achieve higher scores than the feature-based methods that use RANSAC.", "The bottom rows of Table REF show our ablation study.", "First, RCN (256-D) with training only the descriptor head, without keypoint refinement, and without our novel loss variant in combination with SuperGlue (RCN-D*+SG*) already shows an improvement of 7.5 % compared to (SP*+SG)* for $\\epsilon \\le 1$ .", "Enabling the keypoint detector and descriptor to learn (RCN-KD*+SG*) improves this further by 3 %, using the differentiable keypoint refinement (DKR) (RCN-DK*-D*+SG*) gains another 5 %, and finally our full method KPVSA-Net adds a further 19 % for $\\epsilon \\le 1$ .", "The high accuracy for low error thresholds could be seen as the effect of the combination of our novel loss and DKR, which pulls matching keypoints and descriptors closer together.", "The effect of both terms on the keypoint heatmap is visualized in Fig.", "REF .", "The left heatmap of the frozen detector highlights the vessel structures.", "Adding the individual steps described above only marginally changes the visual appearance of the heatmap.", "Our final model has a refining effect on the heatmap (right) by thinning the high response area (red).", "Further, our loss also had a positive effect on SuperGlue by speeding up the convergence of both losses.", "Table: Quantitative evaluation for the public CF-FA dataset.", "Models with * are fine-tuned on our synthetic augmented dataset. The evaluation results for our real IR-OCT-OCTA dataset are shown in Table REF and for the public dataset in Table REF .", "For the single multi-modal domain pairs, the twice fine-tuned (SP*+SG)* model has a slightly higher success rate for IR-OCT, but our method is slightly better for IR-OCTA and OCT-OCTA and for the total dataset.", "Generally, the errors are a bit higher for the real dataset and the best Dice score (ours) is only 54.2 % compared to 78.9 % for the synthetic dataset, but good results are still achieved.", "Since there is no ground truth for the real dataset, some inaccuracies come from the manual control points and also from small deformations in the vessels or motion artifacts.", "The registration task for the CF-FA dataset is less complex, resulting in smaller ME and MAE for all methods and relatively close results for RCN, (SP+SG)*, (SP*+SG)*, and our method.", "We also tested the conventional method SURF+PIIFD+RPM [24] using their Matlab implementation.", "Results are in the supplementary material, as it achieved poor results for CF-FA and failed for the IR-OCT-OCTA dataset.", "A qualitative IR-OCTA registration result is shown in Fig.", "REF .", "RootSIFT applied to the vessel segmentation predicted by the UNet finds the least number of correct
matches and does not predict an acceptable homography.", "GLAMpoints detects more keypoints and matches than SuperPoint, but their registration results are comparable.", "The matches of RetinaCraquelureNet are concentrated on vessel structures, resulting in a more precise registration.", "SuperPoint+SuperGlue filters out most false matches, but only shows a small number of matches in total.", "Our KPVSA-Net, however, detects a larger number of strong matches and results in a slightly more accurate overlay of the segmented vessels." ], [ "Conclusion", "Our method incorporates prior knowledge of the vessel structure into an end-to-end trainable pipeline for retinal image registration.", "By using a graph neural network for image matching, spatial and visual information is combined to form more distinctive descriptors.", "In the evaluation, our method demonstrates high registration accuracy for our synthetic retinal dataset and generalizes well to our real clinical dataset and the public fundus dataset.", "As there are some small deformations of the vessels, which cannot be handled with a perspective transform, we will look into non-rigid approaches as a next step." ] ]
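The differentiable subpixel keypoint refinement (DKR) described in the entry above (5x5 heatmap patches normalized by the NMS score and a temperature, followed by a spatial soft-argmax whose offset is added to the integer keypoint coordinates) can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation; the tensor names and the simple per-keypoint loop are hypothetical.

```python
# Hedged sketch of the DKR step: spatial soft-argmax over 5x5 patches of the
# keypoint confidence heatmap, normalized by the NMS score and temperature t.
import torch
import torch.nn.functional as F

def refine_keypoints(heatmap, nms_scores, keypoints, t=0.1, half=2):
    """heatmap, nms_scores: (H, W) tensors; keypoints: (N, 2) integer (x, y)."""
    # Relative pixel offsets inside the (2*half+1)^2 window, in (x, y) order.
    ys, xs = torch.meshgrid(torch.arange(-half, half + 1),
                            torch.arange(-half, half + 1), indexing="ij")
    offsets = torch.stack([xs, ys], dim=-1).float().reshape(-1, 2)
    refined = []
    for x, y in keypoints.tolist():
        patch = heatmap[y - half:y + half + 1, x - half:x + half + 1]
        s_nms = nms_scores[y, x]
        w = F.softmax(((patch - s_nms) / t).reshape(-1), dim=0)  # spatial softmax
        soft_xy = (w.unsqueeze(-1) * offsets).sum(dim=0)         # soft-argmax offset
        refined.append(torch.tensor([float(x), float(y)]) + soft_xy)
    return torch.stack(refined)

# Toy usage: keypoints must lie at least `half` pixels away from the border.
heatmap = torch.rand(64, 64)
keypoints = torch.tensor([[10, 20], [30, 40]])
print(refine_keypoints(heatmap, heatmap, keypoints))
```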
2207.10506
[ [ "First-principles insights into all-optical spin switching in the\n half-metallic Heusler ferrimagnet Mn$_2$RuGa" ], [ "Abstract All-optical spin switching (AOS) represents a new frontier in magnetic storage technology -- spin manipulation without a magnetic field, -- but its underlying working principle is not well understood.", "Many AOS ferrimagnets such as GdFeCo are amorphous and renders the high-level first-principles study unfeasible.", "The crystalline half-metallic Heusler Mn$_2$RuGa presents an opportunity.", "Here we carry out hitherto the comprehensive density functional investigation into the material properties of Mn$_2$RuGa, and introduce two concepts - the spin anchor site and the optical active site - as two pillars for AOS in ferrimagnets.", "In Mn$_2$RuGa, Mn$(4a)$ serves as the spin anchor site, whose band structure is below the Fermi level and has a strong spin moment, while Mn$(4c)$ is the optical active site whose band crosses the Fermi level.", "Our magneto-optical Kerr spectrum and band structure calculation jointly reveal that the delicate competition between the Ru-$4d$ and Ga-$4p$ states is responsible for the creation of these two sites.", "These two sites found here not only present a unified picture for both Mn$_2$RuGa and GdFeCo, but also open the door for the future applications.", "Specifically, we propose a Mn$_2$Ru$_x$Ga-based magnetic tunnel junction where a single laser pulse can control magnetoresistance." ], [ "Introduction", "Laser-induced ultrafast demagnetization [1] changes the landscape of spin manipulation, where the laser field plays a central role in magnetism.", "All-optical spin switching (AOS) [2] is a prime example, where a single laser pulse can turn spins from one direction to another, free of an external magnetic field.", "As more and more materials are discovered [3], [4], [5], a critical question on the horizon is what properties are essential to AOS.", "Earlier studies have focused on magnetic orderings such as ferrimagnetic versus ferromagnetic [6], sample composition [7], compensation temperature [8], magnetic domains [9] and others [10], but most AOS materials are amorphous and difficult to simulate within state-of-the-art density functional theory.", "This greatly hampers the current effort to decipher the mystery of AOS at a microscopical level that goes beyond the existing phenomenological understanding [11].", "Heusler compounds represent a new opportunity [12], [13].", "Their properties can be systematically tailored, only subject to the structure stability.", "Different from rare-earth-transition metals [2], [14], one has an empirical Slater-Pauling rule to predict spin moments [15], [16], [17], [18], [19], [20], [21], [22].", "Although this rule is simple [23], the actual synthesis of a desired material is a monumental task of decades in making [24], because many materials are unstable experimentally.", "In 2002, Hori et al.", "[25] successfully synthesized various (Mn$_{1-x}$ Ru$_x$ )$_3$ Ga alloys with $x=0.33-0.67$ and determined the spin moment of 1.15 $\\mu _{\\rm B}$ per formula.", "In 2014, Kurt et al.", "[26] demonstrated that Ru can significantly reduce the spin moment in ferrimagnet Mn$_2$ Ru$_x$ Ga. 
Because one can tune composition $x$ , Mn$_2$ Ru$_x$ Ga is likely to be a half-metal and fully-compensated ferrimagnet [27], [28], with no stray field, ideal for spintronics [29], [13], [30].", "Research intensified immediately [31], [21], [32], [33], [34], [35].", "Lenne et al.", "[36] found that the spin-orbit torque reaches $10^{-11}$ Tm$^2$ /A in the low-current limit.", "Banerjee et al.", "[37] reported that a single 200-fs/800-nm laser pulse can toggle the spin from one direction to another in Mn$_2$ RuGa within 2 ps or less.", "Just as found in GdFeCo [2], [11], with every consecutive pulse, the spin direction is switched.", "This discovery [37] demonstrates the extraordinary tunability of Heusler compounds, which now changes the trajectory of AOS research [38], [39], [40], with Mn$_2$ RuGa as a crystalline model system where a first-principles investigation is now possible.", "In this paper, we carry out a comprehensive first-principles density-functional study to pin down the material properties essential to all-optical spin switching in the ferrimagnet Mn$_2$ RuGa.", "We introduce two concepts - the spin anchor site (SAS) and the optical active site (OAS) - as two essential pillars of AOS in ferrimagnets.", "SAS has a strong spin moment, and in Mn$_2$ RuGa it is the Mn$(4a)$ site.", "Our band structure reveals that the Mn$(4a)$ 's band is 0.5 eV below the Fermi level.", "By contrast, OAS has a smaller spin moment, is easier to switch optically [41], and in Mn$_2$ RuGa it is the Mn$(4c)$ site.", "Its band is around the Fermi level and accessible to optical excitation [42].", "The creation of SAS and OAS is the work of Ru and Ga.", "The Ru-$4d$ electrons set up the initial spin configuration with a strong spin moment concentrated on the distant Mn$(4c)$ , but Ga tips this balance and reverses the relative spin magnitude between Mn$(4a)$ and Mn$(4c)$ .", "Although Ru and Ga are weakly magnetic, their energy bands appear in the same energy window as the two Mn atoms, which is manifested in the magneto-optical Kerr spectrum.", "Guided by these two essential sites, we can now unify Mn$_2$ RuGa with GdFeCo, despite their apparent structural differences, and extract three essential properties for AOS.", "(i) A ferrimagnet must have a spin anchor site, i.e.", "Gd in GdFeCo and Mn$(4a)$ in Mn$_2$ RuGa.", "(ii) It must have an optical active site, i.e.", "Fe in GdFeCo and Mn$_2(4c)$ in Mn$_2$ RuGa.", "(iii) Its spin anchor site and optical active site must be antiferromagnetically coupled to minimize the potential energy barrier [43].", "We propose a laser-activated magnetic tunnel junction based on the same material Mn$_2$ Ru$_x$ Ga, but with different compositions $x$ , which form the optical activation, spin filtering, and reference layers.", "This device, if successful, represents an ideal integration of fully-compensated half-metallicity in spintronics into all-optical spin switching in femtomagnetism [44], [45].", "The rest of the paper is arranged as follows.", "In Sec.", "II, we present our theoretical formalism.", "Section III is devoted to the results and discussion, which includes the crystal structure, electronic band structure, ultrafast demagnetization, and Kerr rotation angle.", "Finally, we conclude this paper in Sec.", "IV."
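The Slater-Pauling rule invoked in the introduction above can be made concrete with a one-line electron count. The form used below, the commonly quoted full-Heusler relation $M_t = Z_t - 24$ with valence counts Mn: 7, Ru: 8, Ga: 3, is an assumption on my part; the result is consistent with the total moment of about 1 Bohr magneton reported later in this entry.

```latex
% Hedged worked example, assuming the standard full-Heusler Slater-Pauling
% rule M_t = Z_t - 24 (Z_t = total number of valence electrons per formula unit).
\[
  Z_t(\mathrm{Mn_2RuGa}) = 2\times 7 + 8 + 3 = 25 ,
  \qquad
  M_t = Z_t - 24 = 1\,\mu_{\mathrm{B}} .
\]
```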
], [ "Theoretical formalism and calculation", "Element Mn lies in the middle of $3d$ transition metals, with a half-filled $3d$ shell and zero orbital moment, just as Gd in the middle of $4f$ rare-earth metals.", "Mn is the only $3d$ transition metal element in inverse Heusler compounds, which is similar to a rare-earth element [46].", "Mn$_2$ RuGa crystallizes in an inverse $XA$ Heusler structure [46], [31], [21], [22], [35] (see Fig.", "REF (a)), where two manganese atoms, Mn$_1$ and Mn$_2$ , are situated at two distinct Wyckoff positions $4a(0,0,0)$ and $4c(\\frac{1}{4},\\frac{1}{4},\\frac{1}{4})$ , and are antiferromagnetically coupled.", "Ru and Ga sit at $4d(\\frac{3}{4},\\frac{3}{4},\\frac{3}{4})$ and $4b(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2})$ , respectively.", "The inverse Heusler $XA$ structure has two Mn atoms separated by a vector $(\\frac{1}{4}, \\frac{1}{4}, \\frac{1}{4})$ , while the $L2_1$ structure by $(\\frac{1}{2}, \\frac{1}{2}, \\frac{1}{2})$ [26], [32], [33].", "The experimental lattice constants in this nearly cubic material are $a=b=c=5.97 \\rm Å$ [26].", "Viewing along the diagonal direction, four atoms form chains $\\rm Mn_1$ -$\\rm Mn_2$ -Ga-Ru-$\\rm Mn_1 \\cdots $ .", "Therefore, Mn$_2$ RuGa loses both inversion and time reversal symmetries due to the antiferromagnetic coupling between two Mn atoms.", "We employ the state-of-the art density functional theory and the full-potential linearlized augmented plane wave (FLAPW), as implemented in the Wien2k code [47].", "We first self-consistently solve the Kohn-Sham equation $\\left[-\\frac{\\hbar ^2\\nabla ^2}{2m_e}+V_{Ne}+V_{H}+V_{xc} \\right]\\psi _{n{\\bf k}}(r)=E_{n{\\bf k}} \\psi _{n{\\bf k}} (r), $ where $\\psi _{n{\\bf k}}(r)$ is the wavefunction of band $n$ at the crystal momentum ${\\bf k}$ and $E_{n{\\bf k}}$ is its band energy.", "The terms on the left are the kinetic energy operator, the attraction between the nuclei and electrons, the Hartree term, and the exchange-correlation [48], respectively.", "The spin-orbit coupling is included using a second-variational method in the same self-consistent iteration." 
], [ "Crystal structure", "Ordered Heusler alloys have three distinctive kinds of structures [49]: (1) normal full-Heusler $X_2YZ$ alloys with group symmetry $L2_1$ , (2) half Heusler $XYZ$ compounds with group symmetry $C1_b$ , and (3) inverse Heusler $X_2YZ$ alloys with group symmetry $XA$ .", "(1) has the space group No.", "225.", "(2) and (3) have the same space group No.", "216.", "$L2_1$ has $(8c)$ site, which is split into two different sites in $XA$ [46].", "However, over the years, various Wyckoff positions are adopted in the literature.", "For the $XA$ structure, Wollmann et al.", "[46] used a different set of Wyckoff positions for Mn at $4d(\\frac{1}{4},\\frac{1}{4},\\frac{1}{4})$ , $Y$ at $4c(\\frac{3}{4},\\frac{3}{4},\\frac{3}{4})$ , Mn at $4b(\\frac{1}{2},\\frac{1}{2},\\frac{1}{2})$ , and $Z$ at $4a(0,0,0)$ .", "So in their paper, their $(4a),(4b),(4c),(4d)$ positions have different meanings from those in [31].", "In order to convert Wollmann's notation to the latter notation, one has to shift the entire cell by $4d(\\frac{1}{4},\\frac{1}{4},\\frac{1}{4})$ .", "For the $L2_1$ structure, Wollmann et al.", "[46] also adopted different positions, which were again used in their review paper [13] (see Table REF ).", "In Mn$_2$ RuGa, several versions have also been used.", "It adopts an $XA$ structure.", "Kurt et al.", "[26] correctly assigned the space group symmetry $L2_1$ to the full Heusler compound, but inappropriately assigned the same group to Mn$_2$ RuGa, and so did Zic et al.", "[32].", "Both Zic et al.", "[32] and Fleischer et al.", "[33] had the correct notations for all the atoms, but their figure switched the positions for Ru and Mn$_2$ , where Ru$(4d)$ appears at position $4c(\\frac{1}{4},\\frac{1}{4},\\frac{1}{4})$ and Mn$_2(4c)$ at $4d(\\frac{3}{4},\\frac{3}{4},\\frac{3}{4})$ .", "Since they never used the figure to characterize their experimental data, this change does not affect their results.", "We also notice that Betto et al.", "[50] assigned $C1_b$ group symmetry to Mn$_2$ RuGa, where two Mn atoms are at $4a(0,0,0)$ and $4c(3/4,3/4,3/4)$ while Ru at $4d(1/4,1/4,1/4)$ and Ga at $4b(1/2,1/2,1/2)$ .", "One can see from Table REF that $C1_b$ has no $4c$ site.", "Galanakis et al.", "[31] exchanged the positions for Ru and Ga, so Ru is at site $4b$ and Ga is at site $4d$ .", "Although in general such an exchange is allowed, they do not match the existing experimental results [42].", "For instance, only Mn$_1(4a)$ has Ru as its neighbor.", "If we exchange the positions for Ru and Ga, then Mn$_2(4c)$ would have Ru as its neighbor.", "We summarize those used Wyckoff positions in the same table, so the reader can see the difference.", "We adopt the common convention, as listed in the last line in Table REF .", "This convention matches the experimental results better [33].", "In particular, the magneto-optics signal agrees with the experimental one." 
], [ "Band structure", "We choose a big $k$ mesh of $44\\times 44\\times 44$ , with 11166 irreducible $k$ points in the Brillouin zone.", "The product of the Muffin-tin radius $R_{\\rm MT}$ and the planewave cutoff is 7, where $R_{\\rm MT}(\\rm Mn_1,Mn_2, Ru)=2.42$ bohr, and $R_{\\rm MT}(\\rm Ga)=2.28$ bohr.", "We find that Mn$_1$ has spin moment of $M_{4a}=3.17\\mu _{\\rm B}$ .", "We call Mn$_1(4a)$ the spin anchor site, SAS, as it pins the magnetic configuration, so the magnetic structure can be stabilized and is immune to optical excitation.", "Mn$_2(4c)$ atoms form another spin sublattice with a smaller spin moment of $-M_{4c}=-2.31\\mu _{\\rm B}$ .", "The entire cell has the spin moment of 1.027 $\\mu _{\\rm B}$ , in agreement with prior studies [31], [21].", "Figure REF (a) shows the spatial valence spin density integrated from 2 eV below the Fermi level for each atom, where the red(blue) color refers to the majority(minority) spin.", "One can see the spin density is mainly localized on these two Mn atoms, where Mn$_1$ has a larger spin in the spin up channel and Mn$_2$ has the spin density in the spin down channel, so they are antiferromagnetically coupled.", "Our first finding is that the above spin configuration hinges on the delicate balance between Ru and Ga.", "Figure REF (b) shows their respective atomic energies.", "Ru's $4d^75s^1$ states are close to Mn's unoccupied $5d^0$ states.", "Without Ga, when Mn and Ru form a solid, the spin moment on Ru increases by five times to 0.39$\\mu _B$ and is antiferromagnetically coupled to the Mn$_1(4a)$ 's spin, but its spin is now ferromagnetically coupled to the distant Mn$_2(4c)$ , which is opposite to the native Mn$_2$ RuGa (see Table REF ).", "Adding Ga tips the balance, because Ga's $4p^1$ is higher than Mn's unoccupied $5d^0$ orbitals (see Fig.", "REF (b)), so Ga can transfer electrons to Mn atoms more easily than Ru.", "We integrate the atom-resolved density of states around each sphere, and find that the number of the Ru $4d$ electrons is 5.87, reduced by 1.13 with respect to its atomic $4d^7$ , while the $4p$ electron of Ga is 0.81, reduced by 0.2 from $4p^1$ .", "The total number of electrons within the Mn$_1(4a)$ and Mn$_2(4c)$ spheres are almost exactly the same, 6.04, but the number of $3d$ electrons in each spin channel is very different.", "Table REF shows that Mn$_1(4a)$ has 4.09 $3d$ electrons in the majority channel and 1.01 in the minority channel, in contrast to Mn$_2(4c)$ where 1.44 and 3.72 electrons are present.", "The total number of $3d$ electron is still close to 5.", "Table REF summarizes these results.", "In general, the orbital moment on Mn$_1$ is small, around 0.025$\\mu _{\\rm B}$ , and that on Mn$_2 $ is slightly larger, reaching -0.046$\\mu _{\\rm B}$ , which is beneficial to the spin-orbit torque [43], [38], important for AOS [37].", "Figure REF (a) shows the band structure, superimposed with the Mn$_1$ -$3d$ orbital character from its spin majority channel.", "The orbital characters are highlighted by the circles, whose radius is proportional to the weight of the Mn$_1$ -$3d$ character, and the lines are the actual band dispersion.", "Bands with a clear dominance of a single orbital are highlighted, and in the figure, $d_{z^2}$ and $d_{x^2-y^2}$ are denoted by $z^2$ and $x^2-y^2$ for simplicity; and this is the same for other orbitals.", "The entire set of detailed orbital characterization is presented in [51].", "We see that the Mn$_1$ 's occupied majority band centers around -0.6 eV below the Fermi 
level $E_F$ (horizontal dashed line), with a smaller contribution close to the Fermi level.", "This feature is reflected in the $3d$ -partial density of states (pDOS) in Fig.", "REF (b), where a small peak at the Fermi level is found, consistent with two prior studies [31], [21], indicative of structural instability [46].", "Figure REF (c) shows that the Mn$_1$ spin minority band has a single $d_{xz}/d_{yz}$ band, which crosses the Fermi level from the L to the $\\Gamma $ point, and then to the X point, but this single band crossing does not constitute a major contribution to the density of states (DOS).", "Figure REF (d) shows that the partial $3d$ density of states at the Fermi level is very small but nonzero.", "The other occupied minority $d$ band lies 1.5 eV below the Fermi level.", "Because Mn$_1(4a)$ 's $d$ band is away from $E_F$ and has a small density of states around the Fermi level, optical excitation at Mn$_1$ is weak [42].", "Mn$_2(4c)$ is quite different from Mn$_1(4a)$ .", "Figure REF (f) shows that its majority bands cross the Fermi level at multiple points, have mixed $d$ characters, and are highly dispersive.", "Its $3d$ -pDOS (Fig.", "REF (e)) has a larger peak at the Fermi level than Mn$_1$ , quantitatively 1.80-1.81 states/eV for the former and 0.65-0.69 states/eV for the latter.", "This explains why Mn$_2(4c)$ is more optically active than Mn$_1(4a)$ [42], and we therefore call Mn$_2(4c)$ the optical active site.", "In the minority channel, Mn$_2$ has a strong admixture of orbital characters (see Fig.", "REF (h)), and its overall density of states at the Fermi level is also small (see Fig.", "REF (g)).", "We note that the minority band structure is very similar to that of Mn$_3$ Ga [16], and they both have a flat $d_{z^2}$ band along the $\\Gamma $ -X direction.", "Before we move on to ultrafast demagnetization, we must emphasize that the band structure is not contributed solely by these two Mn atoms.", "Both Ru and Ga significantly affect the magnetic properties of the Mn atoms.", "Thin lines in Figs.", "REF (e) and REF (g) are Ru's $4d$ pDOS for the spin majority and minority channels, respectively.", "One can see that the Ru-$d$ majority density of states follows Mn$_1(4a)$ 's pDOS (compare Figs.", "REF (b) and (e)), but its minority states follow Mn$_2(4c)$ 's pDOS (compare the thin and thick lines in Fig.", "REF (g)).", "This split role of the same atom is remarkable."
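Partial densities of states such as the pDOS values quoted above (in states/eV at the Fermi level) are typically obtained by broadening the band energies with Gaussians weighted by orbital characters. The short sketch below illustrates that generic procedure with made-up eigenvalues and weights; it does not use or reproduce the Wien2k output of this entry.

```python
# Generic Gaussian-broadened (partial) density of states from band energies.
import numpy as np

def broadened_dos(energies, weights, grid, sigma=0.05):
    """DOS(E) = sum_nk w_nk * Gaussian(E - E_nk), in states per eV."""
    diff = grid[:, None] - energies[None, :]
    g = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return g @ weights

rng = np.random.default_rng(0)
e_nk = rng.uniform(-2.0, 2.0, size=2000)        # hypothetical band energies (eV)
w_nk = rng.uniform(0.0, 1.0, size=2000) / 2000  # hypothetical 3d orbital weights
grid = np.linspace(-2.5, 2.5, 501)              # energy window around E_F = 0
dos = broadened_dos(e_nk, w_nk, grid)
print(f"pDOS at E_F: {dos[np.argmin(np.abs(grid))]:.3f} states/eV")
```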
], [ "Ultrafast demagnetization", "The circles in Fig.", "REF (a) are the experimental ultrafast demagnetization [42], and consist of two regions.", "Region I is from 0 to 0.26 ps, highlighted by the red arrow in Fig.", "REF (a), and region II starts from 0.26 ps to 5 ps.", "This time separation of 0.26 ps is consistent with a prior study [52].", "In region I, a sharp decrease in spin moment is observed, but in region II there is a peak.", "We can fit these two regions with the same equation, $\\frac{\\Delta M(t)}{M}=A\\left(\\frac{M_{4a}{\\rm e}^{-\\alpha _{4a}(t-{\\cal T})}-M_{4c}{\\rm e}^{-\\alpha _{4c}(t-{\\cal T})}}{M_{4a}-M_{4c}}\\right)-B, $ where $t$ is the time.", "$A$ is necessary, since without it the laser field amplitude cannot enter the equation.", "$B$ determines the net amount of demagnetization.", "${\\cal T}$ sets the characteristic time for demagnetization or remagnetization.", "Since our spin moments are fixed by our calculation, we only have four fitting parameters for each region, where $\\alpha _{4a(4c)}$ is the demagnetization rate for site $4a(4c)$ .", "Table REF shows that in region I, $\\alpha $ is site-dependent, $\\alpha _{4a}=4.5$ /ps and $\\alpha _{4c}=2.8$ /ps, demonstrating that the larger the spin moment is, the larger $\\alpha $ becomes, $\\alpha =cM, {\\rm or},\\tau _M=\\frac{1}{cM}, $ where $c$ is a constant.", "This equation is consistent with the empirical formula proposed by Koopmans and coworkers [53].", "From $\\alpha $ , we find the demagnetization times $\\tau _M(4a)=222$ fs, and $\\tau _M(4c)=357$ fs.", "These intrinsic demagnetization times, called the Hübner times [10], are well within the times for other transition and rare-earth metals: 58.9 fs (Fe), 176 fs (Ni), 363 fs [Gd($5d$ )], 690 fs [Gd($4f$ )].", "An extreme point will appear if $\\partial \\left(\\frac{\\Delta M(t)}{M}\\right)/ \\partial t =0$ , and the second-order time-derivative determines whether the extreme is a maximum or minimum, $\\frac{\\partial ^2 \\left(\\frac{\\Delta M(t)}{M}\\right)}{\\partial t^2}=A\\alpha _{4c}M_{4c}{\\rm e}^{-\\alpha _{4c}(t-T)}(\\alpha _{4a}-\\alpha _{4c}).", "$ If $\\alpha _{4a}>\\alpha _{4c}$ , we only have a minimum which explains the spin change in region I.", "In region II, both $\\alpha _{4a}$ and $\\alpha _{4c}$ are reduced, but $\\alpha _{4a}$ is reduced much more, so $\\alpha _{4a}<\\alpha _{4c}$ , which corresponds to a peak in region II.", "Table REF shows that region II has $\\alpha _{4a}=0.6$ /ps and $\\alpha _{4a}=1.5$ /ps.", "The demagnetization on the $4a$ site slows down significantly." 
], [ "Kerr rotation angle", "Underlying ultrafast demagnetization and subsequent all-optical spin switching is the magneto-optical property of Mn$_2$ RuGa, which is characterized by the conductivity [54] in units of $\\rm (\\Omega m)^{-1}$ , $\\sigma _{\\alpha \\beta }(\\omega )=\\frac{i\\hbar e^2}{m_e^2V}\\sum _{k;m,n}\\frac{f_{nk}-f_{mk}}{E_{mk}-E_{nk}}\\frac{\\langle nk |p_\\alpha |mk\\rangle \\langle mk |p_\\beta |nk\\rangle }{(\\hbar \\omega +i\\eta )+(E_{nk}-E_{mk})}, $ where $m_e$ is the electron mass, $V$ is the unit cell volume, $f_{nk}$ is the Fermi distribution function, $E_{mk}$ is the band energy of state $|mk\\rangle $ , $\\langle nk |p_\\alpha |mk\\rangle $ is the momentum matrix element between states $|mk\\rangle $ and $|nk\\rangle $ , and $\\eta $ is the damping parameter.", "The summation is over the crystal momentum $k$ and all the band states $|mk\\rangle $ and $|nk\\rangle $ , and $\\omega $ is the incident photon frequency.", "Here $\\alpha $ and $\\beta $ refer to the directions, such as the $x$ and $y$ directions, not to be confused with the above demagnetization rate.", "The anomalous Hall conductivity is just the off-diagonal term.", "In the limit of $\\eta ,\\omega \\rightarrow 0$ , the term behind the summation over $k$ is the Berry curvature $\\Omega _{\\alpha ,\\beta }^{k,n}=\\sum _{m\\ne n}\\hbar ^2\\frac{(f_{mk}-f_{nk}) \\langle nk|v_\\alpha |mk \\rangle \\langle mk|v_\\beta |nk\\rangle }{(E_{mk}-E_{nk})^2}.", "$ The general expression given in Eq.", "REF is better suited for metals with partial occupation than the treatment with a separate sum over occupied and unoccupied states [55], [56], though the latter is faster.", "The intraband transition with $n=m$ is included by replacing $(f_{nk}-f_{mk})/(E_{mk}-E_{nk})$ by its derivative $-\\partial f_{nk}/\\partial E_{nk}$ , which is $-\\frac{\\beta /2}{\\cosh \\beta (E_{nk}-E_F)+1}$ , without resorting to more complicated numerics [57].", "Here $\\beta =1/(k_BT)$ , where $k_B$ is the Boltzmann constant, $T$ is the temperature, and $E_F$ is the Fermi energy.", "The Kerr effect is characterized by the Kerr rotation $\\theta $ and ellipticity $\\epsilon $ , in the small angle limit and with magnetization along the $z$ axis, $\\theta +i\\epsilon =-\\frac{\\sigma _{xy}}{\\sigma _{xx}\\sqrt{1+4\\pi i\\sigma _{xx}/\\omega }},$ where $\\sigma _{xx}$ must be converted to 1/s.", "The SI version of $\\sqrt{1+4\\pi i\\sigma _{xx}/\\omega }$ is $\\sqrt{1+ i\\sigma _{xx}/(\\omega \\epsilon _0)}$ .", "Experimentally, Fleischer et al.", "[33] measured the magneto-optical Kerr effect for a series of Mn$_2$ Ru$_x$ Ga samples with compositions $x=0.61,0.62,0.69,0.83$ and with thickness from 26 to 81 nm.", "Figure REF (b) reproduces two sets of data from their supplementary materials.", "One can see that both the thickness and composition affect the Kerr rotation angle.", "The thicker sample has a larger angle (compare dotted and long-dashed lines with $x=0.61,0.62$ ), and the angle peaks between 1.6-1.9 eV.", "Our theoretical Kerr angles with three different dampings are three solid lines with $\\eta =0.8$ , 0.6 and 0.4 eV from the bottom to top, respectively.", "One notices that the overall shape is similar to the experimental data, and the main peak is also around 2 eV, slightly higher than the experimental one, but a more direct comparison is not possible since there are no experimental data at $x=1$ .", "The best agreement in terms of the Kerr angle is obtained with $\\eta =0.8$ eV.", "The convergence of our spectrum is 
tested against the mesh of $92\\times 92 \\times 92$ , and there is no visual difference between this much bigger mesh and the one used in Fig.", "REF (b).", "We can pinpoint the origin of the main peak by removing some atoms.", "We use $\\eta =0.4$ eV since it gives us more structure in the spectrum.", "First, we remove Mn$_2(4c)$ , without changing the lattice structure or the rest of the atoms, so we have Mn$(4a)$ RuGa.", "The solid line in Fig.", "REF (c) shows that the Kerr rotation angle for the new Mn$(4a)$ RuGa is very different from the one in Fig.", "REF (b), highlighting the fact that Mn$_2(4c)$ , not Mn$_1(4a)$ , contributes significantly to the overall signal.", "To verify this, we remove Mn$_1(4a)$ but keep Mn$_2(4c)$ .", "The red long-dashed line shows clearly that the overall shape is well reproduced, but the Kerr angle is larger.", "This confirms that Mn$_2(4c)$ is optically active and plays a decisive role in the magneto-optical response as OAS, consistent with the experiment [42], but the role of Ru and Ga should not be underestimated.", "Magnetically, they are silent and do not contribute to the spin moment significantly, but when we remove Ru, the spectrum changes completely (see the dotted line in Fig.", "REF (c)).", "The same happens upon removal of the Ga atom.", "This reveals the significant contributions of Ru and Ga to the optical response of Mn$_2$ RuGa.", "The two sites (spin anchor site and optical active site) found here bear some resemblance to the laser-induced intersite spin transfer [58].", "In their system, Dewhurst et al.", "found that the spins of two Mn atoms are aligned and coupled ferromagnetically, not antiferromagnetically as found here.", "Additional calculations are necessary since their materials are not Mn$_2$ RuGa.", "Mentink et al.", "[59] proposed a two-sublattice spin model where AOS is realized through the angular momentum exchange between sublattices.", "But they did not reveal the different roles played by the two spin sublattices.", "Our mechanism makes a clear distinction between the two sublattices, and thus ensures that they do not compete optically and magnetically.", "Through Mn$_2$ RuGa, our state-of-the-art first-principles density functional calculation establishes two concepts, the spin anchor site and the optical active site, as the key to AOS in ferrimagnets.", "The formation of SAS and OAS in Mn$_2$ RuGa is accomplished by the weakly magnetic Ru and Ga atoms.", "In GdFeCo [2], Gd is SAS while Fe is OAS.", "Switching starts with OAS [52]; because the ferrimagnetic coupling between SAS and OAS is frustrated and has a lower potential barrier to overcome if the spin moment is smaller [60], SAS is dragged in the opposite direction by OAS through the spin torque $J{\\bf S}_i\\times {\\bf S}_{j}$ [41], [38], to realize all-optical spin switching.", "Because the Heusler compounds have excellent tunability [17], [29], [13], [19], [22], [33], future research can investigate the effect of the spin moments at SAS and OAS on the spin switchability [8].", "We envision an integrated device based on Mn$_2$ RuGa as illustrated in Fig.", "REF (c).", "All three parts of the device are made of the same material but with different concentrations $x$ .", "In the middle, $x$ is close to 0.6, so we have a fully-compensated half-metal, while at the two ends, $x$ is close to 1, and their spins are designed to be optically switched.", "This forms an ideal magnetic tunnel junction that a light pulse can activate.", "A future experimental test is
necessary.", "We should add that experimentally, concentrations of both Mn and Ru can be already tuned in several different experimental groups.", "Chatterjee et al.", "[35] were able to adopt two concentrations $x=0.2,0.5$ in Mn$_{2-x}$ Ru$_{1+x}$ , while Siewierska et al.", "[34] were able to independently change $x$ and $y$ in $\\rm Mn_yRu_xGa$ films.", "In fact, Banerjee et al.", "[37] already used 13 samples with $x=0.5$ up to 1.0 in $\\rm Mn_2Ru_xGa$ .", "The authors appreciate the numerous communications with Dr. K. Rode (Dublin) and Dr. K. Fleischer (Dublin).", "Dr. Fleischer provided the original experimental data in text form, which is very convenient to plot, with a small correction to the thickness of their samples.", "G.P.Z.", "and Y.H.B.", "were supported by the U.S. Department of Energy under Contract No.", "DE-FG02-06ER46304.", "Part of the work was done on Indiana State University's high performance Quantum and Obsidian clusters.", "The research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No.", "DE-AC02-05CH11231.", "M.S.S.", "was supported by the National Science Foundation of China under grant No.", "11874189.", "$^*$ guo-ping.zhang@outlook.com.", "https://orcid.org/0000-0002-1792-2701 Table: Summary of the space group symmetries for X 2 _2YZ used in theliterature.", "Two X atoms are denoted as X 1 _1 and X 2 _2, and“share” means that they share the same positions.", "For Mn 2 _2RuGa, X 1 _1is Mn 1 _1, X 2 _2 is Mn 2 _2, while Y is Ru and Z is Ga. Thefull-Heusler compound has L2 1 _1 symmetry, the half-Heusler one hasC1 b C1_b symmetry, and the inverse Heusler compound has XAXA symmetry.Table: Spin and orbital moments of Mn 1 _1, Mn 2 _2, Ru and Ga. 
Theelectron populations in their majority and minority 3d3d states arelisted as n 3d,↑ n_{3d,\\uparrow } and n 3d,↓ n_{3d,\\downarrow }.", "Thedemagnetization rate in regions I and II is denoted as α I \\alpha ^Iand α II \\alpha ^{II}, respectively.", "In two artificial structures, Mn 2 Ru \\rm Mn_2Ru and Mn 2 Ga \\rm Mn_2Ga, only the spin moments are given.Figure: (a) Structure of Mn 2 _2RuGa and the spatial spin densities on theMn 1 (4a)_1(4a) (red, positive) and Mn 2 (4c)_2(4c) (blue, negative).", "The Ruand Ga atoms have a small spin density.", "(b) Atomic energy levels ofRu, Mn and Ga.", "The energy splitting is due to the spin-orbitcoupling in atoms.", "(c) Our proposed device has a junction structureand consists of three layers of the same Mn 2 _2Ru x _xGa, but withdifferent composition xx.", "The layer on the left is an opticallyactive layer, the middle is the spin filter, and the right layer isa spin reference layer.", "The magnetoresistance is controlled bylight.Figure: (a) and (c) Orbital-resolved band structure with the 3d3dstate characters for the Mn 1 (4a)_1(4a) spin-majority and spin-minoritychannels, respectively.", "(b) and (d) Partial density of states forthe Mn 1 _1 spin-majority and spin-minority channels, respectively.Bands with clear orbital characters are denoted by their orbitals.The Fermi level is set at 0 eV (horizontal dashed line).", "(f) and(h) Band structure with the 3d3d state characters for theMn 2 (4c)_2(4c) spin-majority and spin-minority channels, respectively.", "(e) and (g) Partial density of states for the Mn 2 _2's 3d3d (thicklines) and Ru's 4d4d (thin lines) spin-majority and spin-minority,respectively.Figure: (a) Experimental demagnetization fitted by Eq.", ",with two sets of fitting parameters given in Table ,provides a crucial insight that demagnetization rates at two Mnspin-sublattices change between region I (between 0 to 0.26 ps)and region II (between 0.26 ps to 5 ps).", "The experimental data areextracted from Ref.", ".", "The thick redcurve is the laser pulse of duration 40 fs.", "(b) Theexperimental (dotted and dashed lines fromRef. )", "and our theoretical Kerr rotationangles.", "The three solid lines are our theoretical results withthree different dampings η=0.4,0.6,0.8\\eta =0.4,0.6,0.8 eV.", "(c)Element-resolved Kerr rotation angles when Mn 2 (4c)_2(4c) (solidline), or Mn 1 (4a)_1(4a) (dashed line), or Ru (dotted line) is removedseparately.", "η=0.4\\eta =0.4 eV is used." ] ]
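As flagged in the Kerr-rotation section of the entry above, here is a hedged numerical sketch of the conversion from diagonal and off-diagonal optical conductivities to the complex Kerr angle in SI units. The conductivity values are invented placeholders of a typical metallic-magnet order of magnitude, not the computed Mn$_2$RuGa spectra.

```python
# Hedged sketch of Eq. (7) in SI form:
# theta + i*epsilon = -sigma_xy / (sigma_xx * sqrt(1 + i*sigma_xx/(omega*eps0))).
import numpy as np

EPS0 = 8.8541878128e-12          # vacuum permittivity, F/m
HBAR = 6.582119569e-16           # hbar in eV*s

def kerr_angle(sigma_xx, sigma_xy, photon_ev):
    """Return (rotation, ellipticity) in radians; sigmas in (Ohm*m)^-1."""
    omega = photon_ev / HBAR     # angular frequency in rad/s
    val = -sigma_xy / (sigma_xx * np.sqrt(1.0 + 1j * sigma_xx / (omega * EPS0)))
    return val.real, val.imag

# Placeholder conductivities at 2 eV (order of magnitude only, not real data).
theta, eps = kerr_angle(sigma_xx=5.0e6 + 1.0e6j, sigma_xy=5.0e4 + 2.0e4j,
                        photon_ev=2.0)
print(f"Kerr rotation ~ {np.degrees(theta)*1000:.1f} mdeg, "
      f"ellipticity ~ {np.degrees(eps)*1000:.1f} mdeg")
```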
2207.10443
[ [ "Noncommutative extensions of parameters in the asymptotic spectrum of\n graphs" ], [ "Abstract The zero-error capacity of a classical channel is a parameter of its confusability graph, and is equal to the minimum of the values of graph parameters that are additive under the disjoint union, multiplicative under the strong product, monotone under homomorphisms between the complements, and normalized.", "We show that any such function either has uncountably many extensions to noncommutative graphs with similar properties, or no such extensions at all.", "More precisely, we find that every extension has an exponent that characterizes its values on the confusability graphs of identity quantum channels, and the set of admissible exponents is either an unbounded subinterval of $[1,\\infty)$ or empty.", "In particular, the set of admissible exponents for the Lov\\'asz number, the projective rank, and the fractional Haemers bound over the complex numbers are maximal, while the fractional clique cover number does not have any extensions." ], [ "Introduction", "The Shannon (zero-error) capacity [25] and several quantum versions thereof have recently been shown to have dual characterizations as the minimum over parameters of graphs (or noncommutative graphs) which are normalized, multiplicative under the strong (or tensor) product, additive under the disjoint union (direct sum), and monotone with respect to appropriately chosen cohomomorphism preorders [33], [22].", "With these operations, isomorphism classes of graphs and noncommutative graphs form the semirings $\\mathcal {G}$ and $\\mathcal {G}_{\\textnormal {nc}}$ , and the inclusion $\\mathcal {G}\\rightarrow \\mathcal {G}_{\\textnormal {nc}}$ is a semiring-homomorphism.", "On $\\mathcal {G}_{\\textnormal {nc}}$ (and, by restriction, on $\\mathcal {G}$ ) one introduces the cohomomorphism preorder $\\le $ and the entanglement-assisted cohomomorphism preorder $\\le _*$ .", "The different notions of one-shot zero-error capacities can be characterized in terms of these relations (e.g.", "the independence number of a graph $G$ is the largest $d$ such that $\\overline{K_d}\\le G$ ), and the corresponding (asymptotic) capacities can be similarly understood in terms of asymptotic preorders associated with $\\le $ and $\\le _*$ .", "The aforementioned dual characterizations are proved using the theory of asymptotic spectra developed by Strassen in the context of computational complexity of bilinear maps [27].", "A central concept here is the asymptotic spectrum of a preordered semiring $(S,\\preccurlyeq )$ , the set of $\\preccurlyeq $ -monotone semiring-homomorphisms from $S$ to $\\mathbb {R}_{\\ge 0}$ (with the usual operations and total order), denoted by $\\Delta (S,\\preccurlyeq )$ .", "Under some conditions on the preorder $\\preccurlyeq $ (see sec:preliminaries for the precise statements), the asymptotic preorder $\\lesssim $ is characterized as $x\\lesssim y$ iff $\\forall f\\in \\Delta (S,\\preccurlyeq ):f(x)\\le f(y)$ .", "The same conditions imply that the asymptotic spectrum is nonempty and the restriction map to the asymptotic spectrum of any subsemiring is surjective.", "In [33] Zuiddam showed that these conditions are satisfied by $(\\mathcal {G},\\le )$ , and in [22] Li and Zuiddam showed that they are also satisfied by $(\\mathcal {G},\\le _*)$ and $(\\mathcal {G}_{\\textnormal {nc}},\\le _*)$ but not by $(\\mathcal {G}_{\\textnormal {nc}},\\le )$ .", "For this reason, the results of Strassen's theory (the duality theorem, nonempty 
spectrum, extension property) are not directly available for the study of unassisted (classical or quantum) capacity of quantum channels, which raises the following questions: Is $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ nonempty?", "Is the restriction map $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )\\rightarrow \\Delta (\\mathcal {G},\\le )$ surjective?", "Does $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ characterize the asymptotic preorder on $\\mathcal {G}_{\\textnormal {nc}}$ (and therefore also the classical and quantum unassisted capacities)?", "The first question is known to have a positive answer since $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le _*)\\subseteq \\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ is nonempty (in fact, an explicit element is known: a quantum version of the Lovász number [9]).", "Focusing on the second question, in this paper, we study the restriction map from the asymptotic spectrum of noncommutative graphs with respect to the (unassisted) cohomomorphism preorder to the asymptotic spectrum of graphs.", "Known elements of $\\Delta (\\mathcal {G},\\le )$ include the fractional clique cover number [25], the Lovász number [20], the projective rank [23], and the fractional Haemers bounds [3], [2], therefore we are studying the existence of noncommutative extensions of these and similar parameters.", "Formulated in the abstract framework of preordered semirings, Strassen's results and their recent generalizations are fundamentally non-constructive, therefore we cannot use them to find explicit extensions of the graph parameters.", "However, they are useful for proving that (multiplicative, additive, and monotone) extensions of certain parameters exist and, as we will see, provide information about the possible extensions." 
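One of the known elements of the asymptotic spectrum of graphs mentioned above, the Lovász number, is computable by semidefinite programming. The sketch below is a minimal illustration using cvxpy and networkx, based on the standard SDP formulation (maximize the sum of all entries of a positive semidefinite matrix with unit trace whose entries vanish on the edges of the graph); for the 5-cycle it returns approximately sqrt(5), the value behind Lovász's bound on the Shannon capacity of C5.

```python
# Hedged sketch: the Lovasz number of a graph via its standard SDP formulation,
# evaluated for the 5-cycle C5 (expected value: sqrt(5) ~ 2.236).
# Requires networkx and cvxpy with an SDP-capable solver (e.g. the bundled SCS).
import cvxpy as cp
import networkx as nx

def lovasz_theta(G):
    nodes = list(G.nodes())
    n = len(nodes)
    X = cp.Variable((n, n), PSD=True)
    constraints = [cp.trace(X) == 1]
    for u, v in G.edges():
        i, j = nodes.index(u), nodes.index(v)
        constraints += [X[i, j] == 0, X[j, i] == 0]
    problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    problem.solve()
    return problem.value

print(lovasz_theta(nx.cycle_graph(5)))   # ~ 2.236
```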
], [ "Results.", "None of the known extension theorems apply to the inclusion of $\\mathcal {G}$ into $\\mathcal {G}_{\\textnormal {nc}}$ with the unassisted cohomomorphism preorder.", "To overcome this difficulty, we introduce an intermediate semiring $\\mathcal {A}$ that contains the confusability graphs of all classical channels and the confusability graphs of noiseless quantum channels.", "It follows from the construction of $\\mathcal {A}$ that every element in the asymptotic spectrum of $\\mathcal {A}$ is the restriction of at least one element of the asymptotic spectrum of noncommutative graphs (with respect to unassisted cohomomorphisms), and that every element in the asymptotic spectrum of $\\mathcal {A}$ is uniquely specified by a pair $(f,\\alpha )$ where $f$ is in the asymptotic spectrum of graphs and $\\alpha \\in [1,\\infty )$ .", "$\\alpha $ is equal to the logarithm of the value of the spectral point on the confusability graph of the noiseless qubit channel, therefore we will call it the exponent of the extension.", "We prove the following about $\\Delta (\\mathcal {A},\\le )$ , the asymptotic spectrum of $\\mathcal {A}$ , considered as a subset of $\\Delta (\\mathcal {G},\\le )\\times [1,\\infty )$ : if $\\alpha $ is an admissible exponent for some $f\\in \\Delta (\\mathcal {G},\\le )$ (i.e.", "$(f,\\alpha )\\in \\Delta (\\mathcal {A},\\le )$ ) and $\\alpha \\le \\beta $ , then $\\beta $ is also admissible (thm:exponentsupperset); $\\Delta (\\mathcal {A},\\le )$ is log-convex (thm:Aspectrumlogconvex).", "For specific graph parameters we find the following conditions concerning the existence of (multiplicative, additive, monotone) extensions: the fractional clique cover number $\\overline{\\chi }_f$ has no such extensions (prop:fccnoextensions); the Lovász number $\\vartheta $ , the fractional Haemers bound over the complex numbers $\\mathcal {H}^\\mathbb {C}_f$ , and the complement of the projective rank $\\overline{\\xi _f}$ each have extensions with every exponent in $[1,\\infty )$ (prop:thetaoneadmissible,prop:fHaemersConeadmissible,prop:complementprojectiverankoneadmissible); for sufficiently large primes $p$ such that there exists a Hadamard matrix of size $4p$ , the set of admissible exponents is a proper subset of $[1,\\infty )$ (prop:fHaemersFpbound)." ], [ "Organization of this paper.", "In sec:preliminaries we recall some basic definitions and introduce notations related to graphs and noncommutative graphs in zero-error information theory, and concepts from Strassen's theory of asymptotic spectra, formulated in terms of preordered semirings.", "In sec:extensions we describe an intermediate semiring between the semiring of graphs and the semiring of noncommutative graphs, such that any element of the spectrum of the intermediate semiring admits at least on extension to noncommutative graphs.", "We study the relation between the asymptotic spectrum of graphs and that of the intermediate semiring and find that the existence of a single extension of any given element of the asymptotic spectrum of graphs implies the existence of an infinite family of extensions.", "This abstract result is followed by a case-by-case study of the extensions of some known elements of the asymptotic spectrum of graphs.", "In sec:convexity we prove that the asymptotic spectrum of the intermediate semiring is log-convex, which is especially useful for studying the extensions of (suitably defined) convex combinations of elements in the asymptotic spectrum of graphs." 
], [ "Graphs and noncommutative graphs.", "By a graph we will mean a finite simple undirected graph.", "The vertex and edge sets of a graph $G$ will be denoted by $V(G)$ and $E(G)$ , and we will write $g\\sim g^{\\prime }$ if two vertices $g,g^{\\prime }$ are adjacent, and $g\\simeq g^{\\prime }$ if they are adjacent or equal.", "For example, the strong product $G\\boxtimes H$ of two graphs has vertex set $V(G)\\times V(H)$ , and $(g,h)\\simeq (g^{\\prime },h^{\\prime })$ iff $g\\simeq g^{\\prime }$ and $h\\simeq h^{\\prime }$ .", "The disjoint union of $G$ and $H$ is the graph $G\\sqcup H$ with $V(G\\sqcup H)=V(G)\\sqcup V(H)$ and $E(G\\sqcup H)=E(G)\\sqcup E(H)$ .", "The complete graph with vertex set $[d]=\\lbrace 1,2,\\ldots ,d\\rbrace $ is denoted by $K_d$ .", "The complement of $G$ will be denoted by $\\overline{G}$ .", "A homomorphism $\\varphi :H\\rightarrow G$ is a map between the vertex sets such that adjacent vertices are mapped to adjacent ones, while a cohomomorphism is a homomorphism between the complements.", "We will write $H\\le G$ if there exists a cohomomorphism from $H$ to $G$ (i.e.", "a homomorphism from $\\overline{H}$ to $\\overline{G}$ ).", "Recall that the confusability graph of a classical (or classical-quantum) channel is defined as follows.", "The vertex set is the set of input symbols, and two distinct vertices form an edge if the corresponding output distributions (or states) do not have disjoint supports.", "When two independent channels are used in parallel, the transition probabilities are given by the tensor product, and the confusability graph of the product of channels is the strong graph product of the confusability graphs of the individual channels.", "This implies that the zero-error classical capacity of a channel is a function of its confusability graph, A noncommutative graph is a subspace $S\\subseteq \\operatorname{\\mathcal {B}}(\\mathcal {H})$ for a Hilbert-space $\\mathcal {H}$ (which we always assume to be finite dimensional) such that $I\\in S$ and $S^*=S$ [9].", "A quantum channel $N:\\operatorname{\\mathcal {B}}(\\mathcal {H})\\rightarrow \\operatorname{\\mathcal {B}}(\\mathcal {K})$ , with Kraus representation $N(\\rho )=\\sum _{i\\in I}E_i\\rho E_i^*$ determines the noncommutative graph $\\operatorname{span}\\left\\lbrace E_i^*E_j|i,j\\in I\\right\\rbrace $ — the confusability graph of $N$ — which, similarly to the classical case, determines its zero-error capacities.", "Every noncommutative graph arises in this way for a suitable channel [10], [5].", "Parallel use of channels with confusability graphs $S\\subseteq \\operatorname{\\mathcal {B}}(\\mathcal {H})$ and $T\\subseteq \\operatorname{\\mathcal {B}}(\\mathcal {K})$ corresponds to the tensor product of the operator systems $S\\otimes T=\\operatorname{span}\\left\\lbrace A\\otimes B|A\\in S,B\\in T\\right\\rbrace \\subseteq \\operatorname{\\mathcal {B}}(\\mathcal {H}\\otimes \\mathcal {K})$ .", "The analogue of the disjoint union of graphs is the direct sum $S\\oplus T=\\left\\lbrace A\\oplus B|A\\in S,B\\in T\\right\\rbrace \\subseteq \\operatorname{\\mathcal {B}}(\\mathcal {H}\\oplus \\mathcal {K})$ .", "The noncommutative graphs $S$ and $T$ are isomorphic if there is a unitary $U:\\mathcal {H}\\rightarrow \\mathcal {K}$ such that $USU^*=T$ .", "A cohomomorphism from the noncommutative graph $T$ to $S$ is a collection of linear maps $E_i:\\mathcal {K}\\rightarrow \\mathcal {H}$ ($i\\in I$ for some finite index set $I$ ) such that $\\sum _{i\\in I}E_i^*E_i=I$ and $\\forall i,j\\in 
I:E_i^*SE_j\\subseteq T$ .", "We will write $T\\le S$ when a cohomomorphism exists.", "Any classical channel $W$ can be viewed as a quantum channel with Kraus operators of the form $\\sqrt{W(y|x)}\\left|y\\rangle \\!\\langle x\\right|$ for input and output symbols $x$ and $y$ .", "The corresponding noncommutative graph depends only on the confusability graph of the classical channel, therefore any graph $G$ can be viewed as a noncommutative graph, namely $S_{G}=\\operatorname{span}\\left\\lbrace \\left|x\\rangle \\!\\langle x^{\\prime }\\right||x,x^{\\prime }\\in V(G),x\\simeq x^{\\prime }\\right\\rbrace \\subseteq \\operatorname{\\mathcal {B}}(\\mathbb {C}^{V(G)}).$ This embedding of graphs into noncommutative graphs is compatible with the notions of isomorphism and cohomomorphism, and turns disjoint unions into direct sums and strong products into tensor products.", "The noiseless $d$ -dimensional quantum channel is the identity map on $\\operatorname{\\mathcal {B}}(\\mathbb {C}^d)$ .", "We will denote its confusability graph $\\mathbb {C}I\\subseteq \\operatorname{\\mathcal {B}}(\\mathbb {C}^d)$ by $\\mathcal {I}_{d}$ .", "Likewise, the noiseless classical channel with $d$ inputs has confusability graph $\\overline{K_d}$ .", "We will use the simplified notation $\\overline{\\mathcal {K}_{d}}$ for the corresponding noncommutative graph on $\\mathbb {C}^d$ (which is the set of diagonal operators in the standard basis).", "We note that $\\overline{\\mathcal {K}_{1}}=\\mathcal {I}_{1}$ and that, up to isomorphisms, $\\overline{\\mathcal {K}_{d_1}}\\oplus \\overline{\\mathcal {K}_{d_2}}=\\overline{\\mathcal {K}_{d_1+d_2}}$ , $\\overline{\\mathcal {K}_{d_1}}\\otimes \\overline{\\mathcal {K}_{d_2}}=\\overline{\\mathcal {K}_{d_1d_2}}$ , and $\\mathcal {I}_{d_1}\\otimes \\mathcal {I}_{d_2}=\\mathcal {I}_{d_1d_2}$ .", "A zero-error code for a classical channel is the same as an independent set of its confusability graph $G$ , i.e.", "a subset $S\\subseteq V(G)$ such that no two vertices in $S$ are adjacent.", "The independence number $\\alpha (G)$ is the maximum cardinality of an independent set.", "The Shannon capacity of $G$ is the limit $\\Theta (G)=\\lim _{n\\rightarrow \\infty }\\sqrt[n]{\\alpha (G^{\\boxtimes n})}$ .", "Note that $C_0(G)=\\log \\Theta (G)$ is often also called the Shannon (or zero-error) capacity.", "The independence number can be characterized in terms of cohomomorphisms as $\\alpha (G)=\\max \\left\\lbrace d\\in \\mathbb {N}|\\overline{K}_d\\le G\\right\\rbrace $ .", "The notion of an independent set extends to noncommutative graphs, with the same interpretation (transmitting classical information via a quantum channel without errors), and a similar characterization in terms of cohomomorphisms from $\\overline{\\mathcal {K}_{d}}$ .", "In addition, one can consider the quantity $\\max \\left\\lbrace d\\in \\mathbb {N}|\\mathcal {I}_{d}\\le S\\right\\rbrace $ and its growth rate on tensor powers, which characterizes the ability of the channel to transmit quantum information without errors."
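As a quick numerical illustration of the classical notions just recalled (a minimal sketch of our own, not taken from the cited works), the following Python snippet uses the networkx library to compute $\\alpha (C_5)=2$ and $\\alpha (C_5\\boxtimes C_5)=5$ for the 5-cycle $C_5$ ; this already gives $\\Theta (C_5)\\ge \\sqrt{5}$ , which by Lovász's bound is in fact an equality. The exact independence number is obtained as the clique number of the complement graph.

```python
# Illustration (not from the paper): independence numbers of C_5 and C_5 boxtimes C_5.
import networkx as nx

C5 = nx.cycle_graph(5)
product = nx.strong_product(C5, C5)

def independence_number(G):
    # alpha(G) equals the maximum clique size of the complement graph.
    clique, _ = nx.max_weight_clique(nx.complement(G), weight=None)
    return len(clique)

print(independence_number(C5))       # 2
print(independence_number(product))  # 5, hence Theta(C_5) >= sqrt(5)
```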
], [ "Preordered semirings.", "We will use the following facts from the theory of asymptotic spectra due to Strassen.", "For more details we refer the reader to [27], [32].", "A (commutative) preordered semiring is a set $S$ equipped with binary operations $+$ and $\\cdot $ that are commutative and associative, with neutral elements 0 and 1, and satisfying the distributive law, together with a reflexive and transitive relation $\\preccurlyeq $ that is compatible with the operations in the sense that $a\\preccurlyeq b$ implies $a+c\\preccurlyeq b+c$ and $ac\\preccurlyeq bc$ for all $a,b,c\\in S$ .", "There is a unique semiring-homomorphism $i:\\mathbb {N}\\rightarrow S$ , which we will assume to be an order embedding with respect to the usual total order $\\le $ on $\\mathbb {N}$ , i.e.", "for $n,m\\in \\mathbb {N}$ we have $i(n)\\preccurlyeq i(m)$ iff $n\\le m$ .", "Let $S$ and $T$ be preordered semirings with preorders $\\preccurlyeq _S$ , $\\preccurlyeq _T$ (for simplicity, we will use the same notation for the operations and neutral elements).", "A monotone semiring homomorphism from $S$ to $T$ is a map $f:S\\rightarrow T$ satisfying $f(0)=0$ , $f(1)=1$ , $f(a+b)=f(a)+f(b)$ , $f(a\\cdot b)=f(a)f(b)$ and $a\\preccurlyeq _S b\\Rightarrow f(a)\\preccurlyeq _T f(b)$ .", "The asymptotic spectrum $\\Delta (S,\\preccurlyeq )$ is the set of monotone semiring homomorphisms $f:S\\rightarrow \\mathbb {R}_{\\ge 0}$ , and its elements are also called spectral points.", "An element $u\\in S$ is power universal [14] if for all $s\\in S\\setminus \\lbrace 0\\rbrace $ there exists $k\\in \\mathbb {N}$ such that $1\\preccurlyeq u^ks$ and $s\\preccurlyeq u^k$ .", "If such an element exists, we say that $S$ is of polynomial growth.", "If $u=2$ is power universal, then $\\preccurlyeq $ is called a Strassen preorder [27], [32].", "A preordered semiring of polynomial growth can be equipped with a relaxed preorder, the asymptotic preorder [27], [29], defined as $a\\lesssim b$ if there is a sublinear sequence $(k_n)_{n\\in \\mathbb {N}}$ of natural numbers such that for all $n$ the inequality $a^n\\preccurlyeq u^{k_n}b^n$ holds.", "The asymptotic spectrum provides the following dual characterization of the asymptotic preorder of a semiring with a Strassen preorder.", "Theorem 2.1 ([27], [32]) Let $S$ be a semiring with a Strassen preorder $\\preccurlyeq $ , and let $a,b\\in S$ .", "Then $a\\lesssim b$ iff $\\forall f\\in \\Delta (S,\\preccurlyeq ):f(a)\\le f(b)$ .", "Moreover, $\\Delta (S,\\preccurlyeq )$ is nonempty.", "It is possible to weaken the conditions on the preorder, but it should be noted that the polynomial growth condition in itself is not sufficient to ensure a similar characterization of the asymptotic preorder.", "When $S$ is a preordered semiring and $T$ is a subsemiring with inclusion $i:T\\rightarrow S$ , any element $f$ of $\\Delta (S,\\preccurlyeq )$ restricts to an element of $\\Delta (T,\\preccurlyeq )$ , which gives rise to a map $\\Delta (i):f\\mapsto f\\circ i$ .", "It is known that when $\\preccurlyeq $ is a Strassen preorder on $S$ , then $\\Delta (i)$ is surjective (see [27] and [32]), i.e.", "every monotone homomorphism $T\\rightarrow \\mathbb {R}_{\\ge 0}$ has at least one extension to $S$ that is also a monotone homomorphism.", "The same is true if $S$ is only assumed to be of polynomial growth and $T$ contains a power universal element.", "Proposition 2.2 ([29] and [13]) Let $S$ be a preordered semiring of polynomial growth, $u\\in S$ power universal.", "Let $T\\subseteq S$ be a 
subsemiring containing $u$ , with the restricted preorder, and let $i:T\\rightarrow S$ be the inclusion map.", "Then $\\Delta (i)$ is surjective.", "We will consider two particular semirings.", "The first one, denoted by $\\mathcal {G}$ , is the set of isomorphism classes of graphs, with operations given by the disjoint union and strong product, and the cohomomorphism preorder: $H\\le G$ iff there exists a homomorphism $\\overline{H}\\rightarrow \\overline{G}$ .", "This preordered semiring was introduced by Zuiddam in [33], where it was proved that $\\le $ is a Strassen preorder on $\\mathcal {G}$ and that the zero-error capacity of a graph $G$ is characterized as $\\Theta (G)=\\min _{f\\in \\Delta (\\mathcal {G},\\le )}f(G).$ $\\Delta (\\mathcal {G},\\le )$ has uncountably many elements [30], including several well-known upper bounds on the Shannon capacity: the fractional clique cover number $\\overline{\\chi _f}$ , the Lovász number $\\vartheta $ , the fractional Haemers bounds $\\mathcal {H}^{\\mathbb {F}}_f$ and the projective rank of the complement $\\overline{\\xi _f}$ .", "The second semiring, $\\mathcal {G}_{\\textnormal {nc}}$ , consists of isomorphism classes of noncommutative graphs, equipped with the operations induced by the direct sum and the tensor product, and where $S\\le T$ if there is a cohomomorphism $S\\rightarrow T$ .", "We have seen that any graph $G$ gives rise to a noncommutative graph $S_{G}$ in a way that is compatible with the operations and the preorder, which gives rise to a monotone semiring homomorphism $\\mathcal {G}\\rightarrow \\mathcal {G}_{\\textnormal {nc}}$ (in fact, an order embedding [22]).", "Further quantum variants have been considered in [22], in particular with the preorder replaced with the entanglement-assisted cohomomorphism preorder $\\le _*$ [7], [26].", "In that paper it was shown that $\\le _*$ is a Strassen preroder both on $\\mathcal {G}$ and $\\mathcal {G}_{\\textnormal {nc}}$ , but $\\le $ is not a Strassen preorder on $\\mathcal {G}_{\\textnormal {nc}}$ .", "Strassen's theory therefore provides dual characterizations of entanglement-assisted classical capacities of classical and quantum channels (and, more generally, the asymptotic preorder), and relates the asymptotic spectra $\\Delta (\\mathcal {G},\\le _*)$ and $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le _*)$ via the extension property.", "In contrast, little is known about $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ and its relation to the unassisted capacity, apart from the simple fact that $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le _*)\\subseteq \\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ , which is a consequence of $\\le _*\\supseteq \\le $ .", "The asymptotic preorder of $\\mathcal {G}_{\\textnormal {nc}}$ characterizes the trade-off for simultaneously transmitting classical and quantum information: a rate pair $(R_c,R_q)$ is achievable via a channel with confusability graph $S$ iff $\\overline{\\mathcal {K}_{2}}^{\\lfloor R_cn\\rfloor }\\otimes \\mathcal {I}_{2}^{\\lfloor R_qn\\rfloor }\\lesssim S^{\\otimes n}$ for all $n$ .", "A necessary condition for this is that $\\forall f\\in \\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ the inequality $R_c+R_q\\log f(\\mathcal {I}_{2})\\le \\log f(S)$ , and if the dual characterization property analogous to thm:Strassen was true for $(\\mathcal {G}_{\\textnormal {nc}},\\le )$ , then this condition would be sufficient as well." 
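As a side illustration (a sketch added here, not part of the cited results): the Lovász number listed above among the known elements of $\\Delta (\\mathcal {G},\\le )$ can be evaluated numerically from its well-known semidefinite programming formulation, namely maximizing the sum of the entries of a positive semidefinite matrix $X$ with $\\operatorname{tr}X=1$ and $X_{ij}=0$ whenever $ij\\in E(G)$ . The snippet below uses the cvxpy library and reproduces the classical value $\\vartheta (C_5)=\\sqrt{5}$ for the 5-cycle.

```python
# Illustration (not from the paper): Lovasz number of C_5 via its standard SDP.
import cvxpy as cp
import numpy as np

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]  # edge set of the 5-cycle

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
constraints += [X[i, j] == 0 for i, j in edges]

# solved with cvxpy's default SDP-capable solver (e.g. SCS)
theta = cp.Problem(cp.Maximize(cp.sum(X)), constraints).solve()
print(theta, np.sqrt(5))  # both approximately 2.2360
```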
], [ "Extensions of graph parameters", "The inclusion $\\mathcal {G}\\rightarrow \\mathcal {G}_{\\textnormal {nc}}$ does not satisfy the conditions of prop:extension with the unassisted cohomomorphism preorder, therefore the restriction map may not be surjective.", "In this section we will introduce an intermediate semiring $\\mathcal {A}$ such that prop:extension applies to the inclusion $\\mathcal {A}\\rightarrow \\mathcal {G}_{\\textnormal {nc}}$ and at the same time the extensions of monotone semiring homomorphisms from $\\mathcal {G}$ to $\\mathcal {A}$ can be given fairly explicitly in terms of $\\Delta (\\mathcal {G},\\le )$ and an additional parameter with a range that depends on the classical parameter.", "It will turn out that this range is always a closed interval, bounded from below and unbounded from above (unless it is empty), therefore the problem of understanding of $\\Delta (\\mathcal {A},\\le )$ is reduced to determining the endpoint of the interval as a function on $\\Delta (\\mathcal {G},\\le )$ .", "We let $\\mathcal {A}$ be the subsemiring of $\\mathcal {G}_{\\textnormal {nc}}$ generated by $\\mathcal {G}$ and $\\left\\lbrace \\mathcal {I}_{d}|d\\in \\mathbb {N}\\right\\rbrace $ .", "By construction, $\\mathcal {A}$ contains a power universal element of $\\mathcal {G}_{\\textnormal {nc}}$ , therefore the restriction map $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )\\rightarrow \\Delta (\\mathcal {A},\\le )$ is surjective.", "Clearly, any element of $\\Delta (\\mathcal {A},\\le )$ is uniquely specified by its restriction to $\\mathcal {G}$ (which is an element of $\\Delta (\\mathcal {G},\\le )$ ) together with its values on $\\mathcal {I}_{d}$ for $d\\in \\mathbb {N}$ .", "Since $\\mathcal {I}_{d_1}\\otimes \\mathcal {I}_{d_2}=\\mathcal {I}_{d_1d_2}$ and $d_1\\le d_2$ implies $\\mathcal {I}_{d_1}\\le \\mathcal {I}_{d_2}$ , the composition with the map $d\\mapsto \\mathcal {I}_{d}$ is a monotone increasing multiplicative map, which necessarily has the form $d\\mapsto d^\\alpha $ for some exponent $\\alpha \\ge 0$ .", "To see this, apply the monotone homomorphism to the chain of inequalities $\\mathcal {I}_{2}^{\\otimes \\lfloor n\\log d\\rfloor }\\le \\mathcal {I}_{d}^{\\otimes n}\\le \\mathcal {I}_{2}^{\\otimes \\lceil n\\log d\\rceil }$ and let $n\\rightarrow \\infty $ after raising to the power $1/n$ .", "Moreover, $\\overline{\\mathcal {K}_{d}}\\le \\mathcal {I}_{d}$ implies that the exponent $\\alpha $ is at least 1.", "This gives an outer approximation of $\\Delta (\\mathcal {A},\\le )$ parametrized by $\\Delta (\\mathcal {G},\\le )\\times [1,\\infty )$ .", "We now introduce a notation for the homomorphism (not necessarily monotone) corresponding to any such pair.", "Definition 3.1 Let $f\\in \\Delta (\\mathcal {G},\\le )$ and $\\alpha \\in [1,\\infty )$ .", "We define the homomorphism $f_\\alpha :\\mathcal {A}\\rightarrow \\mathbb {R}_{\\ge 0}$ as $f_\\alpha \\left(\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}\\right)=\\sum _{d=1}^r f(G_d)d^\\alpha .$ By the preceding discussion, every element of $\\Delta (\\mathcal {A},\\le )$ is of the form $f_\\alpha $ for a unique $f\\in \\Delta (\\mathcal {G},\\le )$ and $\\alpha \\ge 1$ , and the question is therefore to decide which pairs $(f,\\alpha )$ give rise to a monotone function.", "We will say that the exponent $\\alpha $ is admissible (for $f$ ) if $f_\\alpha \\in \\Delta (\\mathcal {A},\\le )$ .", "Our next aim is to show that increasing the exponent $\\alpha $ does not lead out of $\\Delta (\\mathcal 
{A},\\le )$ , i.e.", "the set of admissible exponents for a given element in the asymptotic spectrum of graphs is upwards closed.", "First we argue that monotonicity continues to hold for the increased exponent when the left hand side of the inequality is a single term of the form $S_{H}\\otimes \\mathcal {I}_{q}$ , to which the general case will be reduced via a type decomposition.", "Lemma 3.2 Suppose that $S_{H}\\otimes \\mathcal {I}_{q}\\le \\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ .", "Then $S_{H}\\otimes \\mathcal {I}_{q}\\le \\bigoplus _{d=q}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ .", "The assumption means that there exist linear maps $E_i:\\mathbb {C}^{V(H)}\\otimes \\mathbb {C}^q\\rightarrow \\bigoplus _{d=1}^r\\mathbb {C}^{V(G_d)}\\otimes \\mathbb {C}^d$ ($i\\in I$ ) such that $\\sum _{i\\in I}E_i^*E_i=I$ and for all $i,j\\in I$ the containment $E_i^*\\left(\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}\\right)E_j\\subseteq S_{H}\\otimes \\mathcal {I}_{q}$ holds.", "Every operator in $S_{H}\\otimes \\mathcal {I}_{q}$ is of the form $X\\otimes I_q$ , therefore its rank is a multiple of $q$ .", "If $g\\in V(G_d)$ , then $\\left|g\\rangle \\!\\langle g\\right|\\otimes I_d$ is an element of the right hand side and has rank $d$ , therefore $E_i^*(\\left|g\\rangle \\!\\langle g\\right|\\otimes I_d)E_i\\in S_{H}\\otimes \\mathcal {I}_{q}$ has rank at most $d$ , and at the same time divisible by $q$ .", "It follows that if $d<q$ then $E_i^*(\\left|g\\rangle \\!\\langle g\\right|\\otimes I_d)E_i=0$ .", "In other words, the range of each $E_i$ must be orthogonal to the term $\\mathbb {C}^{V(G_d)}\\otimes \\mathbb {C}^d$ for all $d<q$ .", "Let $P$ be the orthogonal projection $\\bigoplus _{d=1}^r\\mathbb {C}^{V(G_d)}\\otimes \\mathbb {C}^d\\rightarrow \\bigoplus _{d=q}^r\\mathbb {C}^{V(G_d)}\\otimes \\mathbb {C}^d$ .", "Then the linear maps $(PE_i)_{i\\in I}$ form a cohomomorphism from $S_{H}\\otimes \\mathcal {I}_{q}$ to $\\bigoplus _{d=q}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ .", "Corollary 3.3 If $f_\\alpha \\in \\Delta (\\mathcal {A},\\le )$ , $\\alpha \\le \\beta $ , and $S_{H}\\otimes \\mathcal {I}_{q}\\le \\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ then $f_\\beta \\left(S_{H}\\otimes \\mathcal {I}_{q}\\right)\\le f_\\beta \\left(\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}\\right).$ By lem:nosmallterms, the assumption implies $S_{H}\\otimes \\mathcal {I}_{q}\\le \\bigoplus _{d=q}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ .", "Using the monotonicty of $f_\\alpha $ for this inequality, we have $\\begin{split}f_\\beta \\left(S_{H}\\otimes \\mathcal {I}_{q}\\right)& = q^{\\beta -\\alpha }f_\\alpha \\left(S_{H}\\otimes \\mathcal {I}_{q}\\right) \\\\& \\le q^{\\beta -\\alpha }f_\\alpha \\left(\\bigoplus _{d=q}^rS_{G_d}\\otimes \\mathcal {I}_{d}\\right) \\\\& = \\sum _{d=q}^r f(G_d)q^{\\beta -\\alpha }d^\\alpha \\\\& \\le \\sum _{d=q}^r f(G_d)d^{\\beta -\\alpha }d^\\alpha \\\\& \\le \\sum _{d=1}^r f(G_d)d^{\\beta } \\\\& = f_\\beta \\left(\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}\\right).\\end{split}$ Theorem 3.4 If $f_\\alpha \\in \\Delta (\\mathcal {A},\\le )$ and $\\alpha \\le \\beta $ , then $f_\\beta \\in \\Delta (\\mathcal {A},\\le )$ .", "Let $T=\\bigoplus _{d=1}^rS_{H_d}\\otimes \\mathcal {I}_{d}$ and $S=\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ and suppose that $T\\le S$ .", "Then $T^{\\otimes n}\\le S^{\\otimes n}$ for all $n\\in \\mathbb {N}$ .", "For $Q\\in \\mathcal {P}_{n}([r])$ , let us introduce the noncommutative graph 
$T_Q=S_{\\overline{K_{\\vert T^{n}_{Q}\\vert }}\\boxtimes \\prod _{d=1}^r H_d^{\\boxtimes nQ(d)}}\\otimes \\mathcal {I}_{\\prod _{d=1}^rd^{nQ(d)}}.$ Note that $T_Q$ is the product of a classical graph and the confusability graph of a noiseless quantum channel, therefore cor:singletermlhsmonotone applies to any inequality with $T_Q$ on the left hand side.", "Note also that $T^{\\otimes n}$ is the direct sum of the $T_Q$ for all $Q\\in \\mathcal {P}_{n}([r])$ , therefore $T_Q\\le S^{\\otimes n}$ .", "Using cor:singletermlhsmonotone and that $\\vert \\mathcal {P}_{n}([r])\\vert \\le (n+1)^r$ [6], we obtain $\\begin{split}f_\\beta (T)^n& = f_\\beta (T^{\\otimes n}) \\\\& = \\sum _{Q\\in \\mathcal {P}_{n}([r])}f_\\beta (T_Q) \\\\& \\le \\vert \\mathcal {P}_{n}([r])\\vert f_\\beta (S^{\\otimes n}) \\\\& \\le (n+1)^r f_\\beta (S)^n,\\end{split}$ which implies $f_\\beta (T)\\le f_\\beta (S)$ by taking $n$ th roots and $n\\rightarrow \\infty $ .", "thm:exponentsupperset implies that, for any $f\\in \\Delta (\\mathcal {G},\\le )$ , the set of admissible exponents $\\alpha $ is either empty or an interval that is unbounded from above.", "For example, it follows from the result of [9] that for the Lovász number $f=\\vartheta $ [20] the exponent $\\alpha =2$ gives a monotone extension, therefore every exponent in $[2,\\infty )$ is admissible.", "Note that the extension $\\tilde{\\vartheta }$ constructed in [9], by virtue of also providing an upper bound on the entanglement-assisted capacity, can be much larger than the Shannon capacity: e.g.", "$\\Theta (\\mathcal {I}_{d})=d<d^2=\\tilde{\\vartheta }(\\mathcal {I}_{d})$ precisely because the exponent is 2.", "What we apparently learn from thm:exponentsupperset is that there exist even worse upper bounds on the Shannon capacity that extend $\\vartheta $ .", "A more optimistic viewpoint is that extensions with large exponents may imply better upper bounds on the zero-error quantum capacity.", "In addition, we will see below by an explicit calculation that the set of admissible exponents for $\\vartheta $ is in fact $[1,\\infty )$ ."
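As a small worked illustration of def:falpha (our own example, not taken from the text), consider the element $S_{K_3}\\oplus S_{C_5}\\otimes \\mathcal {I}_{2}\\in \\mathcal {A}$ , where $C_5$ denotes the 5-cycle, and the spectral point $f=\\vartheta $ : then $f_\\alpha \\left(S_{K_3}\\oplus S_{C_5}\\otimes \\mathcal {I}_{2}\\right)=\\vartheta (K_3)\\cdot 1^\\alpha +\\vartheta (C_5)\\cdot 2^\\alpha =1+\\sqrt{5}\\cdot 2^\\alpha $ , using the classical values $\\vartheta (K_3)=1$ and $\\vartheta (C_5)=\\sqrt{5}$ . The exponent $\\alpha =2$ guaranteed by the quantum Lovász number gives the value $1+4\\sqrt{5}$ , while prop:thetaoneadmissible below shows that already $\\alpha =1$ , with the smaller value $1+2\\sqrt{5}$ , yields a monotone extension.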
], [ "Concrete parameters", "The aim of this section is to find or estimate the set of admissible exponents for specific elements of $\\Delta (\\mathcal {G},\\le )$ .", "To simplify notation, we will use a slight extension of the “adjacent or equal” relation $\\simeq $ : considering the noncommutative graph $\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ and elements $g\\in G_d$ , $g^{\\prime }\\in G_{d^{\\prime }}$ , we will write $g\\simeq g^{\\prime }$ if $d=d^{\\prime }$ and $g$ is adjacent or equal to $g^{\\prime }$ in $G_d$ (effectively regarding $\\bigsqcup _{d=1}^r V(G_d)$ as the vertex set of the disjoint union $\\bigsqcup _{d=1}^r G_d$ , although this graph will not enter the discussion otherwise).", "In addition, we will introduce the map $\\pi :\\bigsqcup _{d=1}^r V(G_d)\\rightarrow [r]$ , sending $g\\in G_d$ to $d$ , for each such direct sum (the map $\\pi $ depends on the noncommutative graph, but hopefully there is no risk of confusion arising from not including it in the notation).", "We start with a lemma that allows us to bring a generic cohomomorphism between the elements of $\\mathcal {A}$ to a simpler form, which can be described via a function between the vertex sets together with a suitable family of isometries.", "For the purposes of this section, it would be sufficient to prove the statement in the special case when $T$ is a classical graph.", "The general case will be used in sec:convexity.", "Lemma 3.5 Let $S=\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ and $T=\\bigoplus _{d=1}^rS_{H_d}\\otimes \\mathcal {I}_{d}$ be two arbitrary elements of $\\mathcal {A}$ .", "Then the following are equivalent: $T\\le S$ , there is a function $\\varphi :\\bigsqcup _{d=1}^r V(H_d)\\rightarrow \\bigsqcup _{d=1}^r V(G_d)$ and a family of isometries $U_h:\\mathbb {C}^{\\pi (h)}\\rightarrow \\mathbb {C}^{\\pi (\\varphi (h))}$ ($h\\in \\bigsqcup _{d=1}^r V(H_d)$ ) such that $h\\lnot \\simeq h^{\\prime }\\Rightarrow (\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })\\text{ or }U_h^*U_{h^{\\prime }}=0)$ and $(h\\simeq h^{\\prime }\\text{ and }\\varphi (h)\\simeq \\varphi (h^{\\prime }))\\Rightarrow U_h^*U_{h^{\\prime }}=cI_{\\pi (h)}$ for some $c\\in \\mathbb {C}$ .", "REF $\\Rightarrow $REF : The inequality $T\\le S$ means that there exist linear maps $E_i:\\bigoplus _{d=1}^r\\mathbb {C}^{V(H_d)}\\otimes \\mathbb {C}^d\\rightarrow \\bigoplus _{d=1}^r\\mathbb {C}^{V(G_d)}\\otimes \\mathbb {C}^d$ ($i\\in I$ ) such that $\\sum _{i\\in I}E_i^*E_i=I$ and for all $i,j\\in I$ we have $E_i^*SE_j\\subseteq T$ .", "Let $h\\in \\bigsqcup _{d=1}^r V(H_d)$ .", "Then $I_{\\pi (h)}=\\left\\langle h|h\\right\\rangle \\otimes I_{\\pi (h)}=\\sum _{g\\in \\bigsqcup _{d=1}^r V(G_d)}\\sum _{i\\in I}(\\left\\langle h\\right|\\otimes I_{\\pi (h)})E_i^*(\\left|g\\rangle \\!\\langle g\\right|\\otimes I_{\\pi (g)})E_i(\\left|h\\right\\rangle \\otimes I_{\\pi (h)}),$ therefore there exists $g,i$ such that the corresponding term does not vanish.", "Choose one such $g$ and $i$ for each $h$ , and let $\\varphi (h)=g$ and $U_h=\\frac{(\\left\\langle g\\right|\\otimes I_{\\pi (g)})E_i(\\left|h\\right\\rangle \\otimes I_{\\pi (h)})}{\\left\\Vert (\\left\\langle g\\right|\\otimes I_{\\pi (g)})E_i(\\left|h\\right\\rangle \\otimes I_{\\pi (h)})\\right\\Vert _{}}$ Since $\\left|g\\rangle \\!\\langle g\\right|\\otimes I_{\\pi (g)}\\in S$ , the condition $E_i^*SE_i\\subseteq T$ implies that $U_h^*U_h\\in (\\left\\langle h\\right|\\otimes I_{\\pi (h)})T(\\left|h\\right\\rangle \\otimes I_{\\pi 
(h)})=\\mathbb {C}I_{\\pi (h)}$ , i.e.", "$U_h$ is proportional to an isometry.", "The normalization ensures that $U_h$ itself is an isometry.", "Let $h,h^{\\prime }\\in \\bigsqcup _{d=1}^r V(H_d)$ and $i,i^{\\prime }\\in I$ be the corresponding indices.", "If $h\\lnot \\simeq h^{\\prime }$ and $\\varphi (h)\\simeq \\varphi (h^{\\prime })$ , then using $\\left|\\varphi (h)\\rangle \\!\\langle \\varphi (h^{\\prime })\\right|\\otimes I_{\\pi (h)}\\in S$ and $E_i^*SE_{i^{\\prime }}\\subseteq T$ we get $U_h^*U_{h^{\\prime }}\\in (\\left\\langle h\\right|\\otimes I_{\\pi (h)})T(\\left|h^{\\prime }\\right\\rangle \\otimes I_{\\pi (h^{\\prime })})=0$ .", "On the other hand, if $h\\simeq h^{\\prime }$ and $\\varphi (h)\\simeq \\varphi (h^{\\prime })$ , then by the same reasoning we have $U_h^*U_{h^{\\prime }}\\in (\\left\\langle h\\right|\\otimes I_{\\pi (h)})T(\\left|h^{\\prime }\\right\\rangle \\otimes I_{\\pi (h^{\\prime })})=\\mathbb {C}I_{\\pi (h)}$ .", "REF $\\Rightarrow $REF : Given $\\varphi $ and $U_h$ as in the statement, we can choose $I=\\bigsqcup _{d=1}^r V(H_d)$ as the index set and the Kraus operators $E_h=\\left|\\varphi (h)\\rangle \\!\\langle h\\right|\\otimes U_h$ .", "Then $\\sum _{h\\in \\bigsqcup _{d=1}^r V(H_d)}E_h^*E_h=\\sum _{h\\in \\bigsqcup _{d=1}^r V(H_d)}\\left|h\\rangle \\!\\langle h\\right|\\otimes U_h^*U_h=\\sum _{h\\in \\bigsqcup _{d=1}^r V(H_d)}\\left|h\\rangle \\!\\langle h\\right|\\otimes I_{\\pi (h)}=I$ and $E_h^*SE_{h^{\\prime }}={\\left\\lbrace \\begin{array}{ll}\\mathbb {C}\\left|h\\rangle \\!\\langle h^{\\prime }\\right|\\otimes U_h^*U_{h^{\\prime }} & \\text{if $\\varphi (h)\\simeq \\varphi (h^{\\prime })$} \\\\0 & \\text{if $\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })$}.\\end{array}\\right.", "}$ In the first case, either $h\\simeq h^{\\prime }$ , in which case $\\mathbb {C}\\left|h\\rangle \\!\\langle h^{\\prime }\\right|\\otimes U_h^*U_{h^{\\prime }}=\\mathbb {C}\\left|h\\rangle \\!\\langle h^{\\prime }\\right|\\otimes I_{\\pi (h)}\\in T$ , or $h\\lnot \\simeq h^{\\prime }$ , and then the condition ensures $U_h^*U_{h^{\\prime }}=0$ .", "In the special case when $T$ is a graph (i.e.", "$H_d$ is empty unless $d=1$ ), the isometries $U_h$ are essentially unit vectors $u_h\\in \\mathbb {C}^{\\pi (\\varphi (h))}$ , and the condition is that $h\\lnot \\simeq h^{\\prime }\\Rightarrow (\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })\\text{ or }\\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle =0)$ ." 
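To see the Kraus-operator form $E_h=\\left|\\varphi (h)\\rangle \\!\\langle h\\right|\\otimes U_h$ in action, here is a toy numerical check (an ad hoc illustration of our own, with a hand-picked graph and vectors): take $T=S_{\\overline{K_4}}$ , the diagonal operators on $\\mathbb {C}^4$ , and $S=S_{G}\\otimes \\mathcal {I}_{2}$ for $G=K_2\\sqcup K_2$ , let $\\varphi $ be the identity on the four vertices, and choose the unit vectors $u_h$ so that vertices adjacent in $G$ receive orthogonal vectors. The script verifies $\\sum _hE_h^*E_h=I$ and the containment $E_h^*SE_{h^{\\prime }}\\subseteq T$ by checking that $E_h^*XE_{h^{\\prime }}$ is diagonal for a spanning set of $S$ .

```python
# Toy check (not from the paper) of the Kraus operators E_h = |phi(h)><h| (x) u_h.
import itertools
import numpy as np

# G = K_2 disjoint union K_2 on vertices 0,1,2,3; edges {0,1} and {2,3}.
def adjacent_or_equal(x, y):
    return x == y or {x, y} in ({0, 1}, {2, 3})

e = np.eye(4)                                   # standard basis of C^{V(G)} = C^4
u = [np.array([1, 0]), np.array([0, 1])] * 2    # u_0=e0, u_1=e1, u_2=e0, u_3=e1
phi = list(range(4))                            # phi = identity map on the vertices

# Kraus operators E_h : C^4 -> C^4 (x) C^2, represented as 8x4 matrices.
E = [np.outer(np.kron(e[phi[h]], u[h]), e[h]) for h in range(4)]

# Completeness: sum_h E_h^* E_h = I_4.
assert np.allclose(sum(Eh.conj().T @ Eh for Eh in E), np.eye(4))

# Spanning set of S = S_G (x) I_2: the operators |x><x'| (x) I_2 with x ~= x'.
span_S = [np.kron(np.outer(e[x], e[y]), np.eye(2))
          for x, y in itertools.product(range(4), repeat=2) if adjacent_or_equal(x, y)]

# Containment E_h^* S E_h' subseteq T: every E_h^* X E_h' must be diagonal.
for h, hp in itertools.product(range(4), repeat=2):
    for X in span_S:
        Y = E[h].conj().T @ X @ E[hp]
        assert np.allclose(Y, np.diag(np.diag(Y)))

print("cohomomorphism conditions verified")
```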
], [ "Fractional clique cover number.", "Shannon introduced the fractional clique cover number $\\overline{\\chi _f}$ as an upper bound on the zero-error capacity [25].", "This parameter is the largest element of the asymptotic spectrum of graphs [33] (in abstract terms, this amounts to saying that it is equal to the asymptotic rank in the semiring of graphs, see [27] and [32]).", "We now show that, interestingly, this fundamental upper bound does not extend to noncommutative graphs (in a compatible way with the preordered semiring structure).", "Recall that the orthogonal rank $\\xi (G)$ of a graph $G$ is the smallest dimension $d$ such that it is possible to map $V(G)$ to the set of unit vectors of $\\mathbb {C}^d$ in such a way that adjacent vertices get mapped to orthogonal ones.", "Proposition 3.6 $\\overline{\\chi _f}$ does not extend to an element of $\\Delta (\\mathcal {A},\\le )$ .", "Suppose that $\\alpha \\ge 1$ is an admissible exponent for $\\overline{\\chi _f}$ .", "Then for any graph $G$ with at least two distinct nonadjacent vertices (so that $\\overline{\\xi }(G)>1$ ), applying the extension to the inequality $S_{G}\\le \\mathcal {I}_{\\overline{\\xi }(G)}$ [26] gives $\\frac{\\log \\overline{\\chi _f}(G)}{\\log \\overline{\\xi }(G)}\\le \\alpha .$ For $n\\in \\mathbb {N}$ a multiple of 4, consider the Hadamard graph $\\Omega _n$ [17], [18] whose vertex set is $\\lbrace -1,1\\rbrace ^n$ , and two such vectors form an edge iff they are orthogonal.", "Then clearly $\\xi (\\Omega _n)\\le n$ .", "On the other hand, Frankl and Rödl [11] showed that $\\alpha (\\Omega _n)\\le (2-\\epsilon )^n$ for some $\\epsilon >0$ and all sufficiently large $n$ (see also [12] for the precise value when $n/4$ is a power of an odd prime), which implies $\\chi _f(\\Omega _n)\\ge \\frac{\\vert V(\\Omega _n)\\vert }{\\alpha (\\Omega _n)}\\ge \\frac{2^n}{(2-\\epsilon )^n}$ (see [28] for the first inequality).", "Choose $G=\\overline{\\Omega _{4k}}$ ($k\\in \\mathbb {N}_{>0}$ ) in the estimate (REF ) and take the limit $k\\rightarrow \\infty $ to get $\\begin{split}\\alpha & \\ge \\limsup _{k\\rightarrow \\infty }\\frac{\\log \\chi _f(\\Omega _{4k})}{\\log \\xi (\\Omega _{4k})} \\\\& \\ge \\limsup _{k\\rightarrow \\infty }\\frac{4k\\log \\frac{2}{2-\\epsilon }}{\\log (4k)}=\\infty ,\\end{split}$ a contradiction." 
], [ "Lovász number.", "The vectors $(u_g)_{g\\in V(G)}$ form an orthonormal representation of the graph $G$ if each $u_g$ is a unit vector in a complex inner product space $\\mathcal {H}$ , and $g\\lnot \\simeq g^{\\prime }$ implies $\\left\\langle u_g|u_{g^{\\prime }}\\right\\rangle =0$ .", "The Lovász $\\vartheta $ number of the graph is defined as [20] $\\vartheta (G)=\\min _{(u_g)_{g\\in V(G)},c}\\max \\frac{1}{\\vert \\left\\langle c|u_g\\right\\rangle \\vert ^2},$ where the minimization is over orthonormal representations of $g$ and unit vectors $c$ in $\\mathcal {H}$ (the minimum is attained since we may assume $\\dim \\mathcal {H}\\le \\vert V(G)\\vert $ by restricting to the span of the vectors).", "$\\vartheta $ is an element of $\\Delta (\\mathcal {G},\\le )$ [33] (see e.g.", "the review [19] for the proofs of the required properties).", "Duan, Severini, and Winter defined a quantum version $\\tilde{\\vartheta }$ of the Lovász $\\vartheta $ number [9], which satisfies $\\tilde{\\vartheta }(\\mathcal {I}_{d})=d^2$ , and is a monotone semiring-homomorphism $\\mathcal {G}_{\\textnormal {nc}}\\rightarrow \\mathbb {R}_{\\ge 0}$ , therefore any exponent $\\alpha \\ge 2$ is admissible for $\\vartheta $ by thm:exponentsupperset.", "We improve this bound to 1, i.e.", "show that the set of admissible exponents is $[1,\\infty )$ .", "Proposition 3.7 The set of admissible exponents for $\\vartheta $ is $[1,\\infty )$ .", "By thm:exponentsupperset it suffices to prove that the exponent 1 is admissible.", "Suppose that $\\bigoplus _{d=1}^rS_{H_d}\\otimes \\mathcal {I}_{d}\\le \\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ .", "Let $H=\\bigsqcup _{d=1}^r H_d\\boxtimes \\overline{K_d}$ .", "Since $\\overline{\\mathcal {K}_{d}}\\le \\mathcal {I}_{d}$ , we also have $S_{H}\\le \\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ .", "By lem:specialcohomomorphism there exists a function $\\varphi :V(H)\\rightarrow \\bigsqcup _{d=1}^r V(G_d)$ and unit vectors $u_h\\in \\mathbb {C}^{\\pi (\\varphi (h))}$ such that $h\\lnot \\simeq h^{\\prime }\\Rightarrow (\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })\\text{ or }\\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle =0)$ .", "Let $(v_g)_{g\\in G_d}$ be orthonormal representations of $G_d$ in $\\mathbb {C}^{n_d}$ and $c_d\\in \\mathbb {C}^{n_d}$ a unit vectors ($d\\in [r]$ ) such that $\\vartheta (G_d)=\\max _{g\\in V(G_d)}\\frac{1}{\\vert \\left\\langle c_d|v_g\\right\\rangle \\vert ^2}.$ Combining ideas from the proof that $\\vartheta \\le \\xi $ and that $\\vartheta $ is additive under the disjoint union (see [20] and [19]), we form an orthonormal representation of $H$ in $\\bigoplus _{d=1}^r\\mathbb {C}^{n_d}\\otimes \\mathbb {C}^d\\otimes \\mathbb {C}^d$ as $w_h=v_{\\varphi (h)}\\otimes u_h\\otimes \\overline{u_h}\\in \\mathbb {C}^{n_{\\pi (\\varphi (h))}}\\otimes \\mathbb {C}^{\\pi (\\varphi (h))}\\otimes \\mathbb {C}^{\\pi (\\varphi (h))}\\subseteq \\bigoplus _{d=1}^r\\mathbb {C}^{n_d}\\otimes \\mathbb {C}^d\\otimes \\mathbb {C}^d,$ and consider the unit vector $c=\\bigoplus _{d=1}^r\\sqrt{\\lambda _d}c_d\\otimes \\frac{1}{\\sqrt{d}}\\sum _{i=1}^d\\left|i\\right\\rangle \\otimes \\left|i\\right\\rangle $ with convex weights $\\lambda _d=\\frac{d\\vartheta (G_d)}{\\sum _{j=1}^r j\\vartheta (G_j)}.$ To see that $(w_h)_{h\\in V(H)}$ is indeed an orthonormal representation, we use that $h\\lnot \\simeq h^{\\prime }$ implies either $\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })$ and therefore $\\left\\langle v_{\\varphi 
(h)}|v_{\\varphi (h^{\\prime })}\\right\\rangle =0$ , or $\\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle =0$ .", "In both cases $\\left\\langle w_h|w_h^{\\prime }\\right\\rangle =\\left\\langle v_{\\varphi (h)}|v_{\\varphi (h^{\\prime })}\\right\\rangle \\vert \\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle \\vert ^2=0.$ For this orthonormal representation and unit vector we have $\\begin{split}\\vert \\left\\langle c|w_h\\right\\rangle \\vert ^2& = \\lambda _{\\pi (\\varphi (h))}\\left|\\left\\langle c_{\\pi (\\varphi (h))}|v_{\\varphi (h)}\\right\\rangle \\right|^2\\frac{1}{\\pi (\\varphi (h))}\\left(\\sum _{i=1}^{\\pi (\\varphi (h))}\\vert \\left\\langle i|u_h\\right\\rangle \\vert ^2\\right)^2 \\\\& \\ge \\frac{\\lambda _{\\pi (\\varphi (h))}}{\\vartheta (G_{\\pi (\\varphi (h))})\\pi (\\varphi (h))} \\\\& = \\frac{1}{\\sum _{j=1}^r j\\vartheta (G_j)},\\end{split}$ therefore $\\sum _{d=1}^r\\vartheta (H_d)d=\\vartheta (H)\\le \\sum _{d=1}^r d\\vartheta (G_d).$ This implies that the extension with exponent 1 is monotone.", "There are no explicit elements of $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ known that extend $\\vartheta $ with exponent 1.", "A possible such extension is the parameter $\\hat{\\theta }$ introduced by Boreland, Todorov, and Winter [4], which is known to be submultiplicative and monotone [4]." ], [ "Fractional Haemers bounds.", "In [16] Haemers found a new upper bound on the Shannon capacity of graphs, which is incomparable with the Lovász number.", "A fractional version of Haemers' bound [3], [2] over any field belongs to the asymptotic spectrum of graphs [33].", "Of the several alternative formulations we will use the following one, which [2] attributes to Schrijver.", "An $a/b$ -subspace representation of a graph $G$ over a field $\\mathbb {F}$ is a collection of subspaces $S_g\\subseteq \\mathbb {F}^a$ ($g\\in V(G)$ ) such that for all $g\\in V(G)$ we have $\\dim S_g=b$ and $S_g\\cap \\Big (\\sum _{\\begin{array}{c}g^{\\prime }\\in V(G) \\\\ g^{\\prime }\\lnot \\simeq g\\end{array}}S_{g^{\\prime }}\\Big )=\\lbrace 0\\rbrace .$ The value of an $a/b$ -subspace representation is the number $\\frac{a}{b}$ , and the fractional Haemers bound $\\mathcal {H}^{\\mathbb {F}}_f(G)$ is the infimum of the values of all subspace representations of $G$ over $\\mathbb {F}$ .", "Note that if $(S_g)_{g\\in V(G)}$ is a subspace representation and $m\\in \\mathbb {N}_{>0}$ , then $(S_g\\otimes \\mathbb {F}^m)_{g\\in V(G)}$ is also a subspace representation with the same value.", "In particular, any finite collection of subspace representations can be modified to have equal denominators without affecting their values.", "It is known that fields with different nonzero characteristic give rise to distinct parameters [2].", "On the other hand, no separation is known for fields of equal characteristic.", "In fact, [22] shows that $\\mathcal {H}^\\mathbb {R}_f=\\mathcal {H}^\\mathbb {C}_f$ .", "We do not know the set of admissible exponents for the fractional Haemers bounds over all fields, but we will show that over $\\mathbb {C}$ every exponent in $[1,\\infty )$ is admissible, while over certain fields we can exclude small exponents.", "Proposition 3.8 The set of admissible exponents for $\\mathcal {H}^\\mathbb {C}_f$ is $[1,\\infty )$ .", "In the same way as in the proof of prop:thetaoneadmissible, suppose $\\bigoplus _{d=1}^rS_{H_d}\\otimes \\mathcal {I}_{d}\\le \\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ and let $H=\\bigsqcup _{d=1}^r H_d\\boxtimes 
\\overline{K_d}$ , and choose a function $\\varphi :V(H)\\rightarrow \\bigsqcup _{d=1}^r V(G_d)$ and unit vectors $u_h\\in \\mathbb {C}^{\\pi (\\varphi (h))}$ such that $h\\lnot \\simeq h^{\\prime }\\Rightarrow (\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })\\text{ or }\\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle =0)$ .", "For $\\epsilon >0$ , let $(S_g)_{g\\in V(G_d)}$ be $a_d/b$ -subspace representations of the graphs $G_d$ with values satisfying $\\frac{a_d}{b}\\le \\mathcal {H}^\\mathbb {C}_f(G_d)+\\epsilon $ .", "We claim that $h\\mapsto T_h:=S_{\\varphi (h)}\\otimes \\mathbb {C}u_h\\subseteq \\operatorname{\\mathcal {B}}\\left(\\bigoplus _{d=1}^r\\mathbb {C}^{a_d}\\otimes \\mathbb {C}^d\\right)$ is a $(\\sum _{d=1}^r a_dd)/b$ -subspace representation of $H$ .", "Indeed, any element of $T_h$ is of the form $x\\otimes u_h$ for some $x\\in S_{\\varphi (h)}$ , therefore a vector in the intersection $T_h\\cap \\sum _{\\begin{array}{c}h^{\\prime }\\in V(H) \\\\ h^{\\prime }\\lnot \\simeq h\\end{array}}T_{h^{\\prime }}$ can be written in two ways as $x\\otimes u_h=\\sum _{\\begin{array}{c}h^{\\prime }\\in V(H) \\\\ h^{\\prime }\\lnot \\simeq h\\end{array}}y_{h^{\\prime }}\\otimes u_{h^{\\prime }}.$ Apply the map $I_{a_{\\pi (\\varphi (h))}}\\otimes \\left\\langle u_h\\right|$ to get $x=\\sum _{\\begin{array}{c}h^{\\prime }\\in V(H) \\\\ \\pi (\\varphi (h^{\\prime }))=\\pi (\\varphi (h)) \\\\ h^{\\prime }\\lnot \\simeq h\\end{array}}y_{h^{\\prime }}\\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle .$ The conditions $h\\lnot \\simeq h^{\\prime }$ and $\\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle \\ne 0$ imply $\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })$ , and since $y_{h^{\\prime }}\\in S_{\\varphi (h^{\\prime })}$ and these subspaces form a subspace-representation of $G_{\\pi (\\varphi (h))}$ , the vector $x$ must be 0.", "It follows that $\\begin{split}\\mathcal {H}^\\mathbb {C}_f(H)& \\le \\sum _{d=1}^r \\frac{a_d}{b}d \\\\& \\le \\sum _{d=1}^r (\\mathcal {H}^\\mathbb {C}_f(G_d)+\\epsilon )d \\\\& = \\sum _{d=1}^r \\mathcal {H}^\\mathbb {C}_f(G_d)+\\frac{r(r+1)}{2}\\epsilon .\\end{split}$ Since $\\epsilon >0$ was arbitrary, we also have $\\sum _{d=1}^r\\mathcal {H}^\\mathbb {C}_f(H_d)d=\\mathcal {H}^\\mathbb {C}_f(H)\\le \\sum _{d=1}^r\\mathcal {H}^\\mathbb {C}_f(G_d)d$ which expresses the monotonicity of the extension of $\\mathcal {H}^\\mathbb {C}_f$ to $\\mathcal {A}$ with exponent 1.", "There are no known (multiplicative, additive, monotone) extensions of the $\\mathcal {H}^\\mathbb {C}_f$ to the semiring of all noncommutative graphs.", "It might be possible to “fractionalize” the extension of the Haemers bound introduced in [15] and obtain such an extension with exponent 1.", "We turn to fields of positive characteristic.", "In [1] it was proved that if $p$ is a sufficiently large prime such that there exists a Hadamard matrix of size $4p$ , then there exists a graph $G$ such that $\\mathcal {H}^{\\mathbb {F}_{p}}_f(G)<\\Theta _*(G)$ , the entanglement-assisted zero-error capacity.", "Li and Zuiddam noted that a strict inequality also holds for the quantum Shannon capacity and, consequently, $\\mathcal {H}^{\\mathbb {F}_{p}}_f$ is not in the asymptotic spectrum of graphs with respect to the quantum cohomomorphism preorder $\\le _q$ [22].", "The proof relied on the following observation: if a graph $G$ satisfies $\\xi (G)\\le d$ and has $M$ disjoint $d$ -cliques, then $\\alpha _q(G)\\ge M$ [1], i.e.", "$\\overline{K_M}\\le _q G$ .", "Interestingly, the same 
assumption has another implication that we will find useful for bounding the admissible exponents for $\\mathcal {H}^{\\mathbb {F}_{p}}_f$ .", "Lemma 3.9 Suppose that the graph $G$ satisfies $\\xi (G)\\le d$ and has $M$ disjoint $d$ -cliques.", "Then $\\overline{\\mathcal {K}_{Md}}\\le S_{G}\\otimes \\mathcal {I}_{d}$ .", "Let $(u_g)_{g\\in V(G)}$ be unit vectors in $\\mathbb {C}^d$ such that $g\\sim g^{\\prime }\\Rightarrow \\left\\langle u_g|u_{g^{\\prime }}\\right\\rangle =0$ , and let $\\varphi :[Md]\\rightarrow V(G)$ be an enumeration of the vertices in $M$ disjoint $d$ -cliques.", "Then the map $\\varphi $ and the vectors $i\\mapsto u_{\\varphi (i)}$ give rise to a cohomomorphism from $\\overline{\\mathcal {K}_{Md}}$ to $S_{G}\\otimes \\mathcal {I}_{d}$ via lem:specialcohomomorphism (with Kraus operators $E_i=(\\left|\\varphi (i)\\right\\rangle \\otimes u_{\\varphi (i)})\\left\\langle i\\right|$ ).", "Proposition 3.10 Let $p$ be an odd prime such that there exists a Hadamard matrix of size $4p$ , and suppose that the exponent $\\alpha $ is admissible for $\\mathcal {H}^{\\mathbb {F}_{p}}_f$ .", "Then $\\begin{split}\\alpha & \\ge \\frac{\\log \\binom{4p-1}{2p}-\\log \\sum _{i=0}^{p-1}\\binom{4p-1}{i}-\\log (4p-1)}{\\log (4p-1)} \\\\& \\ge \\frac{(4p-1)\\left[h(\\frac{1}{2}+\\frac{1}{8p-2})-h(\\frac{1}{4}+\\frac{1}{16p-4})\\right]-\\log (16p^2-4p)}{\\log (4p-1)}.\\end{split}$ Let $G$ be the graph with vertex set the set of binary strings of length $n=4p-1$ and Hamming weight $(n+1)/2$ , and with edge set the set of pairs of vertices with Hamming distance $(n+1)/2$ .", "In [1] it was shown that (see [8] for the second inequality) $\\mathcal {H}^{\\mathbb {F}_{p}}(G)\\le \\sum _{i=0}^{p-1}\\binom{n}{i}\\le 2^{nh(p/n)}.$ The graph $G$ satisfies $\\xi (G)\\le n$ [1] and has at least $\\vert V(G)\\vert /n^2$ disjoint $n$ -cliques [1], where $\\vert V(G)\\vert =\\binom{n}{(n+1)/2}\\ge \\frac{1}{n+1}2^{nh(\\frac{1}{2}+\\frac{1}{2n})}.$ It follows by lem:disjointcliques that $\\overline{\\mathcal {K}_{\\lceil \\vert V(G)\\vert /n\\rceil }}\\le S_{G}\\otimes \\mathcal {I}_{n}$ .", "We apply the extension of $\\mathcal {H}^{\\mathbb {F}_{p}}_f$ with exponent $\\alpha $ to this inequality to get $\\vert V(G)\\vert /n\\le \\mathcal {H}^{\\mathbb {F}_{p}}_f(G)n^\\alpha $ , and rearrange as $\\begin{split}\\alpha & \\ge \\frac{\\log (\\vert V(G)\\vert )}{\\log n}-\\frac{\\log \\mathcal {H}^{\\mathbb {F}_{p}}_f(G)}{\\log n}-1 \\\\& \\ge \\frac{\\log \\binom{4p-1}{2p}-\\log \\sum _{i=0}^{p-1}\\binom{4p-1}{i}-\\log (4p-1)}{\\log (4p-1)} \\\\& \\ge \\frac{(4p-1)\\left[h(\\frac{1}{2}+\\frac{1}{8p-2})-h(\\frac{1}{4}+\\frac{1}{16p-4})\\right]-\\log (16p^2-4p)}{\\log (4p-1)}.\\end{split}$ The bound grows as $\\Omega (p/\\ln p)$ , therefore it becomes nontrivial for any suitable sufficiently large prime $p$ .", "Numerically evaluating the bound for the first few odd primes, we find the first nontrivial lower bound at $p=17$ (a Hadamard matrix of size 68 exists by the Paley construction [24]): if $\\alpha $ is an admissible exponent for $\\mathcal {H}^{\\mathbb {F}_{17}}_f$ , then $\\alpha >1.16249\\ldots $ ."
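The first (exact) lower bound in prop:fHaemersFpbound is elementary to evaluate; the following few lines (a convenience script of our own, not part of the original text) reproduce the numerical value quoted above for $p=17$ . Since the bound is a ratio of logarithms, any base of the logarithm gives the same number.

```python
# Evaluate the exact lower bound of prop:fHaemersFpbound (illustration, not from the paper).
from math import comb, log

def exponent_lower_bound(p):
    n = 4 * p - 1
    numerator = log(comb(n, 2 * p)) - log(sum(comb(n, i) for i in range(p))) - log(n)
    return numerator / log(n)

print(exponent_lower_bound(17))  # roughly 1.1625, matching the value quoted for p = 17
```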
], [ "Projective rank.", "The projective rank $\\xi _f$ is a graph parameter introduced in [23], and can be viewed as a fractional version of the orthogonal rank $\\xi $ .", "An $a/b$ -projective representation (with $a\\in \\mathbb {N}$ , $b\\in \\mathbb {N}_{>0}$ ) of a graph $G$ is a collection $(P_g)_{g\\in V(G)}$ of rank-$b$ projections on $\\mathbb {C}^a$ satisfying $g\\sim g^{\\prime }\\Rightarrow P_gP_{g^{\\prime }}=0$ .", "The value of an $a/b$ -projective representation is the number $\\frac{a}{b}$ , and the projective rank $\\xi _f(G)$ is the infimum of the values of all projective representations of $G$ .", "The complement $\\overline{\\xi _f}$ is an element of $\\Delta (\\mathcal {G},\\le )$ [33].", "If $(P_g)_{g\\in V(G)}$ is a projective representation and $m\\in \\mathbb {N}_{>0}$ , then $(P_g\\otimes I_m)_{g\\in V(G)}$ is also a projective representation with the same value.", "This means that, similarly to subspace representations, a finite collection of projective representations can be modified to have equal denominators without affecting their values.", "Proposition 3.11 The set of admissible exponents for $\\overline{\\xi _f}$ is $[1,\\infty )$ .", "Once again we proceed as in the proof of prop:thetaoneadmissible: suppose $\\bigoplus _{d=1}^rS_{H_d}\\otimes \\mathcal {I}_{d}\\le \\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ and let $H=\\bigsqcup _{d=1}^r H_d\\boxtimes \\overline{K_d}$ , and choose a function $\\varphi :V(H)\\rightarrow \\bigsqcup _{d=1}^r V(G_d)$ and unit vectors $u_h\\in \\mathbb {C}^{\\pi (\\varphi (h))}$ such that $h\\lnot \\simeq h^{\\prime }\\Rightarrow (\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })\\text{ or }\\left\\langle u_h|u_{h^{\\prime }}\\right\\rangle =0)$ .", "For $\\epsilon >0$ , let $(P_g)_{g\\in V(G_d)}$ be $a_d/b$ -projective representations of the graphs $\\overline{G_d}$ with values satisfying $\\frac{a_d}{b}\\le \\overline{\\xi _f}(G_d)+\\epsilon $ .", "Then $h\\mapsto P_{\\varphi (h)}\\otimes \\left|u_h\\rangle \\!\\langle u_h\\right|\\in \\operatorname{\\mathcal {B}}\\left(\\bigoplus _{d=1}^r\\mathbb {C}^{a_d}\\otimes \\mathbb {C}^d\\right)$ is a projective representation of $\\overline{H}$ with rank-$b$ projections, therefore $\\begin{split}\\overline{\\xi _f}(H)& \\le \\sum _{d=1}^r\\frac{a_d}{b}d \\\\& \\le \\sum _{d=1}^r(\\overline{\\xi _f}(G_d)+\\epsilon )d \\\\& =\\sum _{d=1}^r\\overline{\\xi _f}(G_d)d+\\frac{r(r+1)}{2}\\epsilon .\\end{split}$ Since $\\epsilon >0$ was arbitrary, we also have $\\sum _{d=1}^r\\overline{\\xi _f}(H_d)d=\\overline{\\xi _f}(H)\\le \\sum _{d=1}^r\\overline{\\xi _f}(G_d)d,$ i.e.", "the exponent 1 is admissible for $\\overline{\\xi _f}$ .", "To the best of our knowledge, no explicit element of $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ is known which extends $\\overline{\\xi _f}$ .", "An extension of the orthogonal rank to noncommutative graphs has been considered in [26], [21]." 
], [ "Logarithmic convexity", "Beyond the specific elements considered in the preceding section, the asymptotic spectrum of graphs contains uncountably many points due to the convexity result proved in [30].", "In this section we show that, similarly, $\\Delta (\\mathcal {A},\\le )$ is log-convex.", "The main tool in [30] was the probabilistic refinement of the elements of $\\Delta (\\mathcal {G},\\le )$ , which we briefly recall before introducing a variant that is suitable for elements of $\\Delta (\\mathcal {A},\\le )$ .", "For more information on convexity properties of asymptotic spectra we refer the reader to [31].", "Let $f\\in \\Delta (\\mathcal {G},\\le )$ , $G$ a nonempty graph and $Q\\in \\mathcal {P}_{}(V(G))$ .", "Let $(Q_n)_{n\\in \\mathbb {N}}$ be any sequence such that $Q_n\\in \\mathcal {P}_{n}(V(G))$ and $Q_n\\rightarrow Q$ .", "Then it can be shown that the limit $\\lim _{n\\rightarrow \\infty }\\@root n \\of {f(G^{\\boxtimes n}[T^{n}_{Q_n}])}$ exists and is independent of the particular sequence.", "We denote the value of the limit by $f(G,Q)$ , and call the resulting functional on probabilistic graphs the probabilistic refinement of $f$ .", "The parameter $f$ may be reconstructed from the probabilistic refinement as $f(G)=\\max _{Q\\in \\mathcal {P}_{}(V(G))}f(G,Q)$ (for every nonempty $G$ ).", "The meaning of the log-convexity of $\\Delta (\\mathcal {G},\\le )$ is that functions $(G,Q)\\mapsto \\log f(G,Q)$ arising in this way form a convex set with respect to the pointwise operations.", "This property in turn follows from the fact that the set of logarithmic probabilistic refinements is characterized by a family of affine inequalities.", "We define the analogous functionals for elements of $\\mathcal {A}$ by reducing it to the probabilistic refinement of the asymptotic spectrum of graphs.", "Definition 4.1 Let $S=\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ and $Q\\in \\mathcal {P}_{}(\\bigsqcup _{d=1}^r V(G_d))$ .", "We define the probabilistic refinement of $f_\\alpha $ (as defined in def:falpha) as $\\log f_\\alpha (S,Q):=H(\\pi _*(Q))+\\sum _{d=1}^r\\pi _*(Q)(d)\\left[\\log f(G_d,Q_d)+\\alpha \\log d\\right],$ where $H$ is the Shannon entropy and $Q_d$ is the normalization of $Q$ restricted to $V(G_d)$ (if $Q(V(G_d))=0$ then $f(G_d,Q_d)$ is not well-defined, but the corresponding term vanishes anyway).", "A routine calculation shows that $f_\\alpha (S)=\\max _{Q\\in \\mathcal {P}_{}(\\bigsqcup _{d=1}^r V(G_d))}f_\\alpha (S,Q)$ .", "Note also that forming a convex combinations of the functionals of the form $(S,Q)\\mapsto f_\\alpha (S,Q)$ amounts to combining the corresponding functionals $(G,Q)\\mapsto \\log f(G,Q)$ as well as the exponents $\\alpha $ with the same weights, since $\\log f_\\alpha (S,Q)$ is built from these in an affine way.", "Recall that every map $f_\\alpha :\\mathcal {A}\\rightarrow \\mathbb {R}_{\\ge 0}$ is a homomorphism, therefore we only need to prove that monotonicity is preserved under convex combinations.", "A key point concerning the monotonicity property in [30] is the fact that if the inequality $H\\le G$ holds, then for every $Q\\in \\mathcal {P}_{}(V(H))$ there exists a $P\\in \\mathcal {P}_{}(V(G))$ such that for all spectral points $f$ the inequality $f(H,Q)\\le f(G,P)$ holds (i.e.", "$P$ can be constructed independently of $f$ ).", "Concretely, if $\\varphi :\\overline{H}\\rightarrow \\overline{G}$ is a homomorphism, then one can take $P=\\varphi _*(Q)$ .", "We use a similar argument for the log-convexity of $\\Delta 
(\\mathcal {A},\\le )$ .", "Theorem 4.2 The set of functionals of the form $(S,Q)\\mapsto \\log f_\\alpha (S,Q)$ with $f_\\alpha \\in \\Delta (\\mathcal {A},\\le )$ is convex.", "Let $f_\\alpha ,g_\\beta \\in \\Delta (\\mathcal {A},\\le )$ and $\\lambda \\in [0,1]$ , and define $h\\in \\Delta (\\mathcal {G},\\le )$ and $\\gamma \\in [1,\\infty )$ by $\\log h(G,P)=\\lambda \\log f(G,P)+(1-\\lambda )\\log g(G,P)$ and $\\gamma =\\lambda \\alpha +(1-\\lambda )\\beta $ .", "We need to show that $h_\\gamma $ is monotone.", "Let $T=\\bigoplus _{d=1}^rS_{H_d}\\otimes \\mathcal {I}_{d}$ and $S=\\bigoplus _{d=1}^rS_{G_d}\\otimes \\mathcal {I}_{d}$ and suppose that $T\\le S$ .", "By lem:specialcohomomorphism, there exist a function $\\varphi :\\bigsqcup _{d=1}^r V(H_d)\\rightarrow \\bigsqcup _{d=1}^r V(G_d)$ and a family of isometries $U_h:\\mathbb {C}^{\\pi (h)}\\rightarrow \\mathbb {C}^{\\pi (\\varphi (h))}$ ($h\\in \\bigsqcup _{d=1}^r V(H_d)$ ) such that $h\\lnot \\simeq h^{\\prime }\\Rightarrow (\\varphi (h)\\lnot \\simeq \\varphi (h^{\\prime })\\text{ or }U_h^*U_{h^{\\prime }}=0)$ and $(h\\simeq h^{\\prime }\\text{ and }\\varphi (h)\\simeq \\varphi (h^{\\prime }))\\Rightarrow U_h^*U_{h^{\\prime }}=cI_{\\pi (h)}$ for some $c\\in \\mathbb {C}$ .", "Let $P\\in \\mathcal {P}_{}(\\bigsqcup _{d=1}^r V(H_d))$ and consider a sequence $(P_n)_{n\\in \\mathbb {N}_{>0}}$ such that $P_n\\in \\mathcal {P}_{n}(\\bigsqcup _{d=1}^r V(H_d))$ and $P_n\\rightarrow P$ .", "Note that if $(h_1,\\ldots ,h_n)\\in T^{n}_{P_n}$ then $(\\varphi (h_1),\\ldots ,\\varphi (h_n))\\in T^{n}_{\\varphi _*(P_n)}$ and $U_{h_1}\\otimes \\cdots \\otimes U_{h_n}:\\mathbb {C}^{\\pi (h_1)}\\otimes \\cdots \\otimes \\mathbb {C}^{\\pi (h_n)}\\rightarrow \\mathbb {C}^{\\pi (\\varphi (h_1))}\\otimes \\cdots \\otimes \\mathbb {C}^{\\pi (\\varphi (h_n))},$ therefore the restrictions $\\left.\\varphi ^{\\times n}\\right|_{T^{n}_{P_n}}$ and $(U_{h_1}\\otimes \\cdots \\otimes U_{h_n})_{(h_1,\\ldots ,h_n)\\in T^{n}_{P_n}}$ determine (via lem:specialcohomomorphism) a cohomomorphism corresponding to the inequality $\\overline{\\mathcal {K}_{\\vert T^{n}_{\\pi _*(P_n)}\\vert }}\\otimes \\bigotimes _{d=1}^rS_{H_d^{\\boxtimes n\\pi _*(P_n)(d)}[T^{n\\pi _*(P_n)(d)}_{(P_n)_d}]}\\otimes \\mathcal {I}_{\\prod _{d=1}^rd^{n\\pi _*(P_n)(d)}} \\\\\\le \\overline{\\mathcal {K}_{\\vert T^{n}_{\\pi _*(\\varphi _*(P_n))}\\vert }}\\otimes \\bigotimes _{d=1}^rS_{G_d^{\\boxtimes n\\pi _*(\\varphi _*(P_n))(d)}[T^{n\\pi _*(\\varphi _*(P_n))(d)}_{(\\varphi _*(P_n))_d}]}\\otimes \\mathcal {I}_{\\prod _{d=1}^rd^{n\\pi _*(\\varphi _*(P_n))(d)}}.$ More precisely, for every element $(d_1,\\ldots ,d_n)\\in T^{n}_{\\pi _*(P_n)}$ (corresponding to the vertices of the first factor on the left hand side) one needs to choose an isomorphism between $\\mathbb {C}^{\\pi (h_1)}\\otimes \\cdots \\otimes \\mathbb {C}^{\\pi (h_n)}$ and $\\mathbb {C}^{\\prod _{d=1}^rd^{n\\pi _*(P_n)(d)}}$ and similarly on the right hand side.", "Note that the isometries only need to be compared when $\\varphi (h_i)\\simeq \\varphi (h^{\\prime }_i)$ for all $i$ (and, in the second condition, assuming further $h_i\\simeq h^{\\prime }_i$ for all $i$ ), and these imply $\\pi (\\varphi (h_i))=\\pi (\\varphi (h^{\\prime }_i))$ (respectively, $\\pi (h_i)=\\pi (h^{\\prime }_i)$ ), therefore the same isomorphism is used for both and thus the choice of the isomorphisms does not matter.", "We apply monotonicity of $f_\\alpha $ and $g_\\beta $ to this inequality, take the logarithm, divide by $n$ and let $n\\rightarrow \\infty $ to get
$\\log f_\\alpha (T,P)\\le \\log f_\\alpha (S,\\varphi _*(P))$ and $\\log g_\\beta (T,P)\\le \\log g_\\beta (S,\\varphi _*(P))$ , which implies $\\log h_\\gamma (T,P)\\le \\log h_\\gamma (S,\\varphi _*(P))$ by taking the convex combination of the two inequalities.", "The right hand side is at most $\\log h_\\gamma (S)$ , and maximizing the left hand side over $P$ gives $\\log h_\\gamma (T)$ , therefore $h_\\gamma (T)\\le h_\\gamma (S)$ as required." ], [ "Comments", " $\\Delta (\\mathcal {A},\\le )$ can be identified with the set of pairs $(f,\\alpha )\\in \\Delta (\\mathcal {G},\\le )\\times \\mathbb {R}$ where $\\alpha $ is an admissible exponent for $f$ , which is the epigraph of an extended real-valued function by thm:exponentsupperset.", "This function is convex by thm:Aspectrumlogconvex and lower semicontinuous because $\\Delta (\\mathcal {A},\\le )$ is locally compact [29].", "It is not known if $\\Delta (\\mathcal {G}_{\\textnormal {nc}},\\le )$ characterizes the asymptotic preorder between noncommutative graphs similarly to thm:Strassen, but it can be shown that $\\Delta (\\mathcal {A},\\le )$ does characterize the asymptotic preorder between elements of $\\mathcal {A}$ .", "To see this, we verify the conditions of [29]: if $S,T\\in \\mathcal {A}$ and $\\frac{\\operatorname{ev}_T}{\\operatorname{ev}_S}$ is bounded, then (specializing to some $f_\\alpha $ , $\\alpha \\rightarrow \\infty $ ) one can see that the largest $d$ such that $\\mathcal {I}_{d}$ appears in $S$ is at least as large as those appearing in $T$ ; therefore $T\\le S_{H}\\otimes \\mathcal {I}_{d}$ and $S_{G}\\otimes \\mathcal {I}_{d}\\le S$ for some graphs $H$ and $G$ ; since the preorder on $\\mathcal {G}$ is Strassen, we have $H\\le \\overline{K_r}\\boxtimes G$ for some $r\\in \\mathbb {N}$ , which implies $T\\le \\overline{\\mathcal {K}_{r}}\\otimes S$ .", "We note that the asymptotic preorder of $\\mathcal {G}_{\\textnormal {nc}}$ (or $\\mathcal {A}$ ), restricted to the subsemiring $\\mathcal {G}$ , may not be the same as the asymptotic preorder of $\\mathcal {G}$ .", "The latter allows a sublinear number of uses of a noiseless classical channel, while the former allows a sublinear number of uses of a noiseless quantum channel when comparing large powers.", "It is a curious fact that we were able to show that 1 is an admissible exponent precisely for those $f\\in \\Delta (\\mathcal {G},\\le )$ which are known to be elements of $\\Delta (\\mathcal {G},\\le _q)$ and, conversely, we were able to show that 1 is not admissible for those $f$ which are known to be outside $\\Delta (\\mathcal {G},\\le _q)$ [22] (and the violated inequalities involve the same graphs).", "We do not know if this is merely a coincidence or the two questions are related.", "The statement that no exponent is admissible for $\\overline{\\chi _f}$ can be considerably generalized in connection with the log-convex structure.", "Note that the graphs $\\overline{\\Omega _n}$ are vertex-transitive, therefore their logarithmic evaluation maps are affine on $\\Delta (\\mathcal {G},\\le )$ .", "Let $f\\in \\Delta (\\mathcal {G},\\le )$ be arbitrary and $\\lambda \\in (0,1)$ , and let $g$ be the convex combination of $f$ (with weight $\\lambda $ ) and $\\overline{\\chi _f}$ (with weight $1-\\lambda $ ).", "Since $f(\\overline{\\Omega _{4k}})\\ge 2$ (there exist distinct non-adjacent vertices), we have $\\log g(\\overline{\\Omega _{4k}})\\ge (1-\\lambda )4k\\log \\frac{2}{2-\\epsilon }$ , which implies (using an estimate similar
to (REF )) that no exponent for $g$ is admissible.", "The following problem arises from the results of prop:thetaoneadmissible,prop:fHaemersConeadmissible,prop:complementprojectiverankoneadmissible: construct explicit extensions of $\\vartheta ,\\mathcal {H}^\\mathbb {C}_f$ and $\\overline{\\xi _f}$ with exponent 1 or, even better, families of extensions for the entire parameter range $[1,\\infty )$ ." ], [ "Acknowledgement", "This work was partially funded by the National Research, Development and Innovation Office of Hungary via the research grants K124152, KH129601, by the ÚNKP-21-5 New National Excellence Program of the Ministry for Innovation and Technology, the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the Ministry of Innovation and Technology and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary." ] ]
2207.10483
[ [ "Efficiency of the Moscow Stock Exchange before 2022" ], [ "Abstract This paper investigates the degree of efficiency for the Moscow Stock Exchange.", "A market is called efficient if prices of its assets fully reflect all available information.", "We show that the degree of market efficiency is significantly low for most of the months from 2012 to 2021.", "We calculate the degree of market efficiency by (i) filtering out regularities in financial data and (ii) computing the Shannon entropy of the filtered return time series.", "We have developed a simple method for estimating volatility and price staleness in empirical data, in order to filter out such regularity patterns from return time series.", "The resulting financial time series of stocks' returns are then clustered into different groups according to some entropy measures.", "In particular, we use the Kullback-Leibler distance and a novel entropy metric capturing the co-movements between pairs of stocks.", "By using Monte Carlo simulations, we are then able to identify the time periods of market inefficiency for a group of 18 stocks.", "The inefficiency of the Moscow Stock Exchange that we have detected is a signal of the possibility of devising profitable strategies, net of transaction costs.", "The deviation from the efficient behavior for a stock strongly depends on the industrial sector it belongs." ], [ "Introduction", "When prices reflect all available information, the market is called efficient [1].", "One way to claim the efficiency of a market is by testing the Efficient Market Hypothesis (EMH).", "In the weak form, the EMH considers that the last price incorporates all the past information about market prices [2].", "If the weak form of EMH is rejected, previous prices help to predict future prices.", "For traders, market efficiency means that analyzing the history of previous prices does not help to design a strategy that gives an abnormal profit.", "For a company issuing shares, market efficiency means that the cost of its share already reflects all information about the valuation and decisions of the company.", "The EMH is of great interest also in research.", "Mathematical models of an asset price are usually based on the assumption that the price follows a martingale: the expected value of a future price is the current value of the price.", "If the EMH is rejected, there should be an estimation of the future price better than its current value.", "In such a case, new models should be thought.", "The review of works confirming the EMH was presented by Fama in 1970 [2] and then in 1991 [3].", "The martingale hypothesis was also tested later.", "It was shown that the efficiency of a market depends on the development of the country [4].", "Also, the martingale hypothesis was confirmed on short time intervals, but may be violated on longer intervals [5].", "In addition, there is a range of strategies designed to increase an expected profit.", "High-frequency and algorithmic trading strategies are discussed in [6].", "Statistical and machine learning methods for high frequency trading are reviewed in [7].", "The existence of such profitable strategies contradicts the Efficient Market Hypothesis.", "According to Grossman and Stiglitz [8], the degree of market inefficiency determines the effort investors are willing to expend to gather and trade on information.", "A goal of this paper is to investigate the degree of stock market efficiency using the Shannon entropy.", "Before estimating the degree of market efficiency, we 
need to get rid of regularities that make prices more predictable, but do not imply any profitable strategies.", "The methodology of filtering regularities was introduced in [9].", "However, such a filtering has not usually been applied in other research works (see e.g.", "[10], [11], [12]).", "In fact, deviations of price behavior from perfect randomness may be the result of some known regularity pattern, such as volatility clustering or daily seasonality, but not a signal of market inefficiency.", "We process data by filtering regularities of financial time series including volatility clustering and price staleness.", "Price staleness is defined as a lack of price adjustments yielding 0-returns.", "Traders may trade less because of high transaction costs and so the price does not update.", "See [13] for more details.", "The price staleness produces an extra amount of 0-returns called excess 0-returns.", "The other source of 0-returns in the time series is price rounding.", "Estimations of volatility and degree of price staleness are mutually connected: excess 0-returns appearing due to price staleness tend to underestimate volatility.", "At the same time, volatility estimation is needed to calculate the expected amount of 0-returns due to rounding.", "One way to estimate volatility in the presence of excess 0-returns was presented in [14].", "It uses expectation-maximization algorithm [15] to estimate returns in the places of all 0-returns and uses GARCH(1,1) model to estimate volatility [16].", "The maximization of the likelihood function appearing at each step of the considered algorithm requires several parameters for numerical optimization.", "If the estimation of volatility is sensitive to these parameters, that are user-defined, then they may affect the entropy of returns standardized by volatility and the amount of 0-returns in the time series.", "In this article we suggest a modification of moving average volatility estimation that requires adjusting of the only parameter that can be defined using out-of-sample testing.", "The idea is to adopt a simple method for volatility estimation, so that price staleness is taken into consideration.", "Moreover, while estimating volatility, we filter out excess 0-returns.", "The degree of market efficiency has been measured for many countries.", "Stock indices for 20 countries including Taiwan, Mexico, and Singapore, were considered in [17].", "The efficiency of 11 emerging markets, the US and Japan markets was measured in [18] using the Hurst exponent and R/S statistics.", "The review of articles about Baltic countries was presented in [19].", "A degree of uncertainty of Chinese [20], Tunisian [21], and Portuguese [22] stock markets was also considered using entropy measures.", "However, the efficiency of the Russian stock market has not yet been analyzed.", "In this paper, we present an analysis of market efficiency based on the estimation of Shannon entropy for a group of 18 stocks of Russian companies from five industries.", "Our paper introduces four original contributions in the field.", "First, we construct the method of filtering out heteroskedasticity and price staleness.", "This filtering process helps to identify a true degree of market inefficiency.", "Second, we calculate the degree of market inefficiency for the previous decade using monthly intervals.", "We conclude that the degree of market inefficiency for the Moscow Stock Exchange was greater than $80\\%$ .", "Third, we determine which pair of stocks exhibits the largest 
amount of inefficiency, as measured by estimating Shannon's entropy on their high frequency price time series.", "We show that the months where the predictability of stock prices attains its maximum cluster together.", "We find out which behavior of stocks repeats most often in inefficient time periods.", "Finally, we estimate the closeness of price movements using two measures of entropy.", "Based on these results, we cluster together groups of stocks for which the efficient market hypothesis is rejected, thus pointing out how market inefficiency displays some dependence on the financial sector the stocks belong to.", "The article is organized as follows.", "Section  describes the dataset and the methodology of filtering data regularities and calculating the Shannon entropy.", "Section  presents the results on simulated and real data.", "Section  concludes the paper." ], [ "Dataset", "We study the Moscow Stock Exchange.", "We consider close prices aggregated at the one-minute time scale.", "In particular, we select only minutes of the main trading session from 10:00 to 18:40.", "The time interval covers ten years from 2012 to 2021.", "The time period is divided into monthly time intervals.", "We take 18 companies; 16 of them are from five sectors: oil industry, metallurgy, banks, telecommunications, electricity.", "All stocks are listed in Table REF .", "There are 2520 trading days.", "Assuming that there are 520 minutes in each trading day, there are 1310400 trading minutes in total.", "We use Brownlees and Gallo's outlier detection algorithm [23].", "See details in Appendix  REF.", "All data are provided by Finam Holdings (https://www.finam.ru/).", "Table: *" ], [ "Apparent inefficiencies", "To estimate a degree of market efficiency, we should first eliminate the known patterns of predictability, such as daily seasonality.", "Financial agents operating in the market tend to trade less in the middle of a day.", "This is reflected in prices, but such a trading-volume pattern should be filtered out in order to detect genuine patterns of inefficiency.", "Other known regularities are volatility clustering, price staleness, and microstructure noise.", "See Appendix  for a guide to filtering out apparent inefficiencies.", "The contribution of this article is devising a simple method for filtering out volatility clustering and price staleness.", "One of the methods to estimate volatility is the exponentially weighted moving average (EWMA).", "It is described in the next section."
], [ "EWMA", "We define price returns as $r_t=\\ln {\\left(\\frac{P_t}{P_{t-1}}\\right)}$ , where $P_t$ is the last price available at time $t$ and $\\ln ()$ is the natural logarithm.", "In order to estimate volatility $\\sigma _n$ , we apply exponentially weighted moving average [24] of the values $\\mu _1^{-1}|r_i|$ , $i<n$ , where $\\mu _1=\\sqrt{\\frac{2}{\\pi {}}}$ .", "$\\bar{\\sigma }_n=Sig_1(\\alpha , r_{n-1},\\bar{\\sigma }_{n-1} )=\\alpha \\mu _1^{-1}|r_{n-1}|+(1-\\alpha )\\bar{\\sigma }_{n-1}$ Here the fact that $E[|r_n|]=\\mu _1\\sigma _n$ is used and more weights are given for the more recent data.", "An alternative formula based on the fact that $E[r_n^2]=\\sigma _n^2$ is $\\bar{\\sigma }^2_n=Sig_2(\\alpha , r_{n-1},\\bar{\\sigma }_{n-1} )=\\alpha r_{n-1}^2+(1-\\alpha )\\bar{\\sigma }_{n-1}^2.$ A large value of return increases the value of volatility.", "The current value of volatility reflects all available values of returns and changes slowly if the value of $\\alpha $ is small.", "Usually, the smoothing parameter $\\alpha $ is taken close to 0.", "For instance, $\\alpha =0.05$ is taken in the article [9].", "The value of the parameter $\\alpha $ is set to be equal $0.12$ for in-sample testing and $0.22$ for out-of-sample testing in [25].", "Instead, Hunter [24] suggests to use $\\alpha =0.2\\pm 0.1$ .", "Using the principle of the best one-step forecasting, the smoothing parameter is set to $0.06$ for the daily data and to $0.03$ for the monthly data [26].", "We follow the approach suggested by [26] (p. 97) to find the optimal value of $\\alpha $ .", "The goal is to select the parameter $\\alpha $ , such that it minimizes the value of $Er_{\\sigma }=\\sum _i(\\bar{\\sigma }_i^2-r_i^2)^2$ .", "In order to minimize $Er_{\\sigma }$ as a function of the only parameter $0<\\alpha <1$ , we apply Brent's algorithm [27]The method is available in Python by using the function scipy.optimize.minimize_scalar.", "Alternatively, we could use the golden-section search [28] that requires the boundary of search and the only parameter for the stopping criteria.." 
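To make the estimator concrete, the following is a minimal Python sketch (ours, not the authors' code) of the two EWMA recursions $Sig_1$ and $Sig_2$ and of the selection of the smoothing parameter $\alpha$ by minimizing $Er_{\sigma}$ with the bounded variant of Brent's method in scipy.optimize.minimize_scalar, the function mentioned in the footnote; the variable names and the synthetic example are our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

MU1 = np.sqrt(2.0 / np.pi)  # E[|r|] = mu_1 * sigma for Gaussian returns

def ewma_sigma_abs(r, alpha):
    """Sig_1: EWMA of |r| / mu_1 (first recursion above)."""
    sigma = np.empty(len(r))
    sigma[0] = abs(r[0]) / MU1
    for n in range(1, len(r)):
        sigma[n] = alpha * abs(r[n - 1]) / MU1 + (1 - alpha) * sigma[n - 1]
    return sigma

def ewma_sigma_sq(r, alpha):
    """Sig_2: EWMA of squared returns (second recursion above)."""
    var = np.empty(len(r))
    var[0] = r[0] ** 2
    for n in range(1, len(r)):
        var[n] = alpha * r[n - 1] ** 2 + (1 - alpha) * var[n - 1]
    return np.sqrt(var)

def optimal_alpha(r, estimator=ewma_sigma_abs):
    """Pick alpha in (0, 1) minimizing Er_sigma = sum_i (sigma_i^2 - r_i^2)^2."""
    def err(alpha):
        sigma = estimator(r, alpha)
        return np.sum((sigma ** 2 - r ** 2) ** 2)
    res = minimize_scalar(err, bounds=(1e-4, 1 - 1e-4), method="bounded")
    return res.x

# Example on synthetic i.i.d. Gaussian returns (the initialization is arbitrary).
rng = np.random.default_rng(0)
r = rng.normal(0.0, 5e-4, size=10_000)
print("optimal alpha:", optimal_alpha(r))
```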
], [ "Estimation of price staleness", "Let's define an efficient price, $P^e$ , as a continuous process following a Geometric Brownian Motion.", "$P^e_t=P^e_0+\\int _0^t{\\sigma _s P^e_s d W_s}$ An observed price moves along a discrete grid.", "Possible price values are multiples of the tick size, $d$ .", "$P_t=d\\cdot \\left[\\frac{P^e_t}{d}\\right]$ If the efficient price changes insignificantly, the return of the rounded price will be equal to 0.", "Analogically, if the return of rounded price is 0, the return of efficient price has a value close to 0.", "We use the Equation REF to determine the probability that a 0-return is appearing due to rounding: $p_i=erf(R_{i-1})+\\frac{1}{R_{i-1}\\sqrt{\\pi }}(\\exp {(-R_{i-1}^2)}-1),$ where $R_i=\\frac{d}{\\bar{P}_i\\bar{\\sigma }_i\\sqrt{2\\Delta }}$ and $erf(x)$ is the Gaussian error function; $d$ is a tick size We estimate the tick size using 2-steps procedure for each month.", "First, we find the amount of significant digits in price.", "Then, we determine the most frequent increment in ordered prices., $\\Delta $ is a time stepThe time step between the end and start of the main trading session is set as 1 minute.", "Also, we consider any time gap without trading more than 2 hours as the closure of the market.", "We set the time step equal to 1 minute for these gaps., $\\bar{P}$ is a rounded price, and $\\bar{\\sigma }$ is an estimation of volatility [29].", "It is obtained by considering the probability that a price following a Geometric Brownian Motion moves less than one tick size, assuming that price increments are normally distributed.", "There is another source of getting 0-returns, namely price staleness.", "Price staleness is a regularity that means that a fundamental (efficient) price of an asset is not updated because of economic reasons.", "Such a reason is, for instance, a high transaction cost, which makes transactions unprofitable for traders.", "See [13] for more details.", "The presence of price staleness in the data implies the existence of excess 0-returns instead of values of returns of efficient price.", "The excess 0-returns tend to reduce the estimation of volatility.", "Therefore, we need to filter out 0-returns due to price staleness and keep 0-returns due to rounding.", "According to our methodology of filtering out excess 0-returns presented in [29], we save 0-returns proportionally to the probability in Eq.", "REF and set other 0-returns as missing values.", "We adopt this methodology to estimate the degree of price staleness together with volatility in the next section." 
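For illustration, the rounding probability of Eq. (REF ) can be evaluated directly; the sketch below is ours and uses scipy.special.erf, with the tick size, the previous rounded price, the current volatility estimate, and the time step as inputs.

```python
import numpy as np
from scipy.special import erf

def prob_zero_from_rounding(tick, price, sigma, dt=1.0):
    """p_i of Eq. (REF): probability that a 0-return is due to rounding only.

    tick  -- tick size d
    price -- rounded price P_bar at the previous step
    sigma -- volatility estimate sigma_bar at the previous step
    dt    -- time step Delta (here 1 minute)
    """
    R = tick / (price * sigma * np.sqrt(2.0 * dt))
    return erf(R) + (np.exp(-R ** 2) - 1.0) / (R * np.sqrt(np.pi))

# Example: one-kopeck tick, price 100, per-minute volatility 5e-4
print(prob_zero_from_rounding(0.01, 100.0, 5e-4, dt=1.0))
```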
], [ "Modification of EWMA", "In this Section, we present a modification of the EWMA that takes into consideration the effect of price staleness.", "Our modification of the EWMA is based on the suggestion to estimate volatility $\\sigma _n$ as $\\bar{\\sigma }_{n-1}$ (so setting $\\alpha =0$ ), if the value of $r_{n-1}$ is missing because of price staleness.", "That is, there is no new information from returns to update the value of volatility.", "Initially, the expected amount of 0-returns due to rounding is $N_{save}=0$ .", "Thus, each appearance of 0-returns does not affect the value of volatility.", "A 0-return is defined as a value due to rounding and is saved in the sequence if the sum of all $p_i$ (Eq.", "REF ) moves to a new integer value.", "Other details and the algorithm of volatility estimation can be found in Appendix .", "We update the estimation of volatility and price staleness minute by minute.", "This method has the clear advantage of making possible the online inference by processing data in real time." ], [ "The Shannon entropy", "The degree of market efficiency is assessed by computing the Shannon entropy.", "The entropy of a source is an average measure of the randomness of its outputs [30].", "Definition 1 Let $X = \\lbrace X_1 , X_2 , .\\dots \\rbrace $ be a stationary random process with a finite alphabet $A$ and a measure $\\mu $ .", "An $n$ -th order entropy of $X$ is $H_n(\\mu )=-\\sum _{x_1^n \\in A^n}\\mu (x_1^n)\\log {\\mu (x_1^n)}$ with the convention $0\\log {0}=0$ .", "The process entropy (entropy rate) of $X$ is $h(\\mu )=\\lim _{n\\rightarrow \\infty } \\frac{H_n(\\mu )}{n}.$" ], [ "Discretization", "The Shannon entropy is computed over a finite alphabet.", "To measure the Shannon entropy, we need to keep the length of blocks of symbols, $k$ , sufficiently large.", "The predictable behavior of returns can be seen on blocks of greater length and may not be noticeable on blocks of smaller length.", "For this reason, we consider 3-symbols and 4-symbols discretizations using empirical quantiles.", "$\\begin{split}s^{(3)}_t={\\left\\lbrace \\begin{array}{ll}1, r_t\\le \\theta _1, \\\\0, \\theta _1<r_t\\le \\theta _2,\\\\2,\\theta _2<r_t,\\end{array}\\right.", "}\\end{split}\\quad \\begin{split}s^{(4)}_t={\\left\\lbrace \\begin{array}{ll}0, r_t\\le Q_1, \\\\1, Q_1<r_t\\le Q_2,\\\\2, Q_2<r_t\\le Q_3,\\\\3, Q_3<r_t,\\end{array}\\right.", "}\\end{split}$ where $\\theta _1$ and $\\theta _2$ are tertiles and $Q_1$ , $Q_2$ , $Q_3$ are quartiles.", "The tertiles divide data into three equal parts.", "The quartiles divide data into four equal parts.", "$Q_2$ is also the median of the empirical distribution of returns.", "For the later analysis, we will need a discretization describing the behavior of a pair of stocks.", "$\\begin{split}s^{(p)}_t={\\left\\lbrace \\begin{array}{ll}0, r^{(1)}_t\\le m_1 \\& r^{(2)}_t \\le m_2,\\\\1, r^{(1)}_t\\le m_1 \\& r^{(2)}_t > m_2,\\\\2, r^{(1)}_t > m_1 \\& r^{(2)}_t\\le m_2,\\\\3, r^{(1)}_t > m_1 \\& r^{(2)}_t > m_2,\\\\\\end{array}\\right.", "}\\end{split}$ where $r^{(1)}_t$ and $r^{(2)}_t$ are two time series of price returns and $m_1$ and $m_2$ are their medians." 
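A minimal Python sketch of the three discretizations above, assuming the return series are NumPy arrays; the boundary handling at the quantiles follows the inequalities in Eq. (REF ).

```python
import numpy as np

def discretize_3(r):
    """Map returns to {1, 0, 2} using empirical tertiles (Eq. REF)."""
    t1, t2 = np.quantile(r, [1 / 3, 2 / 3])
    return np.where(r <= t1, 1, np.where(r <= t2, 0, 2))

def discretize_4(r):
    """Map returns to {0, 1, 2, 3} using empirical quartiles (Eq. REF)."""
    q1, q2, q3 = np.quantile(r, [0.25, 0.5, 0.75])
    return np.digitize(r, [q1, q2, q3], right=True)  # 0..3

def discretize_pair(r1, r2):
    """Joint {0, 1, 2, 3} symbol for the co-movement of two return series (Eq. REF)."""
    m1, m2 = np.median(r1), np.median(r2)
    return 2 * (r1 > m1).astype(int) + (r2 > m2).astype(int)
```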
], [ "The estimation of entropy", "Let $x_1^n\\in A^n$ be the sequence of length $n$ generated by an ergodic source $\\mu $ from the finite alphabet $A$ , where $x_i^{i+k-1}=x_i\\dots x_{i+k-1}$ .", "There are possible missing values in the sequence generated independently from $x_1^n$ .", "We consider all blocks of length $k$ that do not contain missing values.", "We take $k=max(K: K<\\lfloor \\log (n_b(K))\\rfloor ,$ where $n_b(k)$ is the number of blocks of length $k$ .", "The base of the logarithm is the size of the alphabet $A$ (3 or 4).", "For each $a_1^k \\in A^k$ empirical frequencies are defined as $f(a_1^k|x_1^n)=\\#\\lbrace i \\in [1, n-k+1]: x_i^{i+k-1}=a_1^k \\rbrace .$ Empirical frequencies are the actual amount of each block from $A^k$ in the data.", "By considering an empirical k-block distribution as $\\hat{\\mu }_k(a_1^k|x_1^n)=\\frac{f(a_1^k|x_1^n)}{n_b},$ an empirical $k$ -entropy is defined by $\\hat{H}_k(x_1^n)=-\\sum _{a_1^k}\\hat{\\mu }_k(a_1^k|x_1^n)\\log {(\\hat{\\mu }_k(a_1^k|x_1^n))}=\\log (n_b)-\\frac{1}{n_b}\\sum _{i=1}^{M}f_i\\log {f_i}.$ The estimation of the process entropy is $\\hat{h}_k=\\frac{\\hat{H}_k}{k}.$ See [31] for the proof of the consistency of this estimator and [29] for the case of missing values.", "Since the sequence is finite, the estimation of entropy is underestimated.", "To remove this bias, we use the correction for the entropy estimation introduced in [32], [33].", "$\\hat{H}_k^G=\\log (n_b)-\\frac{1}{n_b}\\sum _{i=1}^{M}f_i \\log {\\left(\\exp {G(f_i)}\\right)},$ where the sequence $G(i)$ is defined recursively as $\\begin{split}G(1)&=-\\gamma -\\ln (2)\\\\G(2)&=2-\\gamma -\\ln (2)\\\\G(2n+1)&=G(2n)\\\\G(2n+2)&=G(2n)+\\frac{2}{2n+1}\\text{, }n\\ge 1\\end{split}$ with the Euler’s constant $\\gamma =0.577215\\dots $ ." ], [ "Detection of inefficiency", "We need to do three steps to determine if the time interval is efficient or not.", "First, we filter out apparent inefficiencies (see Appendix ).", "Then, we estimate the entropy of the filtered return time series using Eq.", "REF .", "Finally, we determine if the value of entropy is significantly low relative to the case of perfect randomness.", "We detect inefficiency in the time interval using Monte Carlo simulations.", "We regard a Brownian motion as absolutely unpredictable.", "First, we define the length of sequences as $l=n_{b}(k)+k-1$ .", "Then, we simulate $10^4$ realizations of Brownian motions with Gaussian increments and the length $l$ .", "For each realization, we calculate entropy using 3- and 4-symbols discretizations.", "Then, we find the first percentile of the obtained entropies for each discretization.", "These percentiles are the bounds of $99\\%$ of the Confidence Interval (CI) for testing market efficiency.", "Finally, we define an efficiency rate as the ratio of the entropy of the time interval and the bound of CI.", "If the efficiency rate is less than 1 for at least one type of discretization, we define the time interval as inefficient.", "We provide testing for inefficiency twice using different discretizations because the unique testing may not be robust.", "See an example in Appendix ." 
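As an illustration of the corrected entropy estimator and of the Monte Carlo test, a rough Python sketch (ours; the handling of missing values follows the description above, while the block length $k$ is taken as an input) might look as follows. Here `discretizer` can be, for instance, the `discretize_4` function from the earlier sketch.

```python
import numpy as np
from collections import Counter

def grassberger_G(max_count):
    """G(i) sequence of the bias-corrected estimator (Eq. REF)."""
    gamma = 0.5772156649015329
    G = np.zeros(max(max_count, 2) + 2)
    G[1] = -gamma - np.log(2.0)
    G[2] = 2.0 - gamma - np.log(2.0)
    for n in range(1, (len(G) - 2) // 2 + 1):
        if 2 * n + 1 < len(G):
            G[2 * n + 1] = G[2 * n]
        if 2 * n + 2 < len(G):
            G[2 * n + 2] = G[2 * n] + 2.0 / (2 * n + 1)
    return G

def entropy_rate(symbols, k, alphabet_size):
    """Corrected k-block entropy rate H_k^G / k, in log base |A|.

    `symbols` is cast to float; np.nan marks missing values, and blocks
    containing a missing value are discarded, as in the text.
    """
    s = np.asarray(symbols, dtype=float)
    blocks = [tuple(s[i:i + k]) for i in range(len(s) - k + 1)
              if not np.any(np.isnan(s[i:i + k]))]
    nb = len(blocks)
    counts = np.array(list(Counter(blocks).values()), dtype=int)
    G = grassberger_G(counts.max())
    H = np.log(nb) - np.sum(counts * G[counts]) / nb
    return H / (k * np.log(alphabet_size))

def mc_lower_bound(length, k, alphabet_size, discretizer, n_sim=10_000, seed=0):
    """First percentile of the entropy rate under the i.i.d. Gaussian null."""
    rng = np.random.default_rng(seed)
    ent = [entropy_rate(discretizer(rng.normal(size=length)), k, alphabet_size)
           for _ in range(n_sim)]
    return np.percentile(ent, 1)
```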
], [ "Kullback–Leibler divergence", "In addition to estimating the entropy of one time series, we can also consider the difference between two time series.", "The Kullback–Leibler divergence [34] is used to measure similarity between two sequences.", "For two discrete probability distributions $P$ and $Q$ .", "$KL(P|Q)=\\sum _{i}p_i\\log {\\frac{p_i}{q_i}}$ We use $p_i$ and $q_i$ as empirical probabilities obtained in Eq REF .", "Since the Kullback–Leibler divergence is asymmetric, we consider the distance between two time series proposed in [35].", "$D(P,Q)=\\frac{KL(P|Q)}{H^G(P)}+\\frac{KL(Q|P)}{H^G(Q)}$ The greater the distance $D(P,Q)$ , the more probability distributions $P$ and $Q$ differ." ], [ "Simulations", "The aim of this section is to assess the accuracy of the estimation of volatility and the degree of price staleness.", "We will choose the method that gives the least error of the estimation for further analysis on real data.", "We take the following model of an observed price $\\tilde{P}_t$ , $t=1\\dots 2N$ .", "$\\begin{split}P_t&=\\int _0^t{\\sigma _s P_s d W^1_s}\\\\\\tilde{P}_i&=P_i(1-B_i)+\\tilde{P}_{i-1}B_i\\\\pr_t&=pr_0+\\int _0^t\\mu _s ds+\\int _0^t\\nu dW^2_s\\end{split}\\quad \\begin{split}B_i={\\left\\lbrace \\begin{array}{ll}1\\text{ with probability }pr_i\\\\0\\text{ with probability }1-pr_i\\end{array}\\right.", "}\\end{split}$ where $W^1$ and $W^2$ are two independent Brownian motions with the length of $2N$ , $N=10^5$ , price $P_0=100$ , and $\\nu =10^{-4}$ .", "$B=1$ stands for the case when price is not updated due to price staleness (see [13], [36]).", "Prices are rounded to two digits, thus the tick size is $d=0.01$ .", "We consider 4 choices for $pr_t$ and $\\sigma _t$ listed below.", "$\\begin{split}pr_t^1&=0\\\\pr_t^2&=0.1+\\int _0^t\\nu dW^2_s\\\\pr_t^3&=0.2+\\int _0^t\\nu dW^2_s\\\\pr_t&=0.2+\\int _0^t\\mu ^4_s ds+\\int _0^t\\nu dW^2_s\\\\\\mu ^4_t&=0.8\\pi /N\\cos (8t\\pi /N)\\\\\\sigma _t^1&=5\\times 10^{-4}\\\\\\sigma _t^2&\\sim ARCH(1.75\\times 10^{-7}, 0.2, 0.1)\\\\\\sigma _t^3&\\sim GARCH(1.25\\times 10^{-8}, 0.1, 0.85)\\\\\\sigma _t^4&\\sim GARCH(1.25\\times 10^{-8}, 0.15, 0.8)\\\\\\end{split}$ We divide data into two equal parts with the size $N$ .", "The first part is a training set for finding optimal values of $\\alpha $ from Equations REF and REF .", "The second part is a testing set for calculating errors represented in Tables REF and REF below.", "We compare two methods that use $Sig_1$ and $Sig_2$ for volatility estimation.", "We set a fixed value of alpha, $\\alpha =0.05$ , as a benchmark for the comparison.", "We also apply non-modified EMWA estimation from Section REF with selected optimal value of $\\alpha $ to show the contribution of 0-filtering to the accuracy of volatility estimation.", "We simulate $10^3$ prices for each model.", "Table REF represents a mean absolute percentage error (MAPE) that is $\\frac{1}{N}\\sum _i|\\frac{\\bar{\\sigma }_i-\\sigma _i}{\\sigma _i}|$ for six different approaches.", "These approaches differ in the choice of a function for volatility, the value of $\\alpha $ , and the presence of missing values.", "Table REF represents three values for each of two methods using $Sig_1$ and $Sig_2$ for the volatility estimation.", "The first value is the optimal value of $\\alpha $ .", "The second is $Er_N=|\\frac{N_{round}N_{A}}{N_{0}N}-1|$ where $N_{A}$ is the amount of remaining non-missing returns, $N_{round}$ is the amount 0-returns that would appear due to rounding (before adding the effect of staleness 
in the simulated data); and $N_{0}$ is the amount of 0-returns.", "$Er_N$ represents the absolute error of the proportion of 0-returns that remain in the data and are defined as 0-returns due to rounding.", "The third value is the proportion of data set as missing values, that is $1-\frac{N_{A}}{N}$ .", "It can be seen from Table REF that the method that most often gives the lowest value of MAPE is the one with fixed $\alpha =0.05$ and $Sig_1$ used for volatility estimation.", "Moreover, in almost all cases, 0-filtering makes the volatility estimate more accurate.", "The error on the amount of 0-returns due to rounding is smaller for the function $Sig_1$ than for the function $Sig_2$ in all 16 cases.", "After comparing the two volatility estimation functions, we choose $Sig_1$ , which uses absolute values of returns.", "For the rest of the paper, we fix the value of $\alpha $ at $0.05$ for the simplicity of further analysis." ], [ "Moscow Stock Exchange", "We define the degree of inefficiency as the fraction of months which are defined as inefficient according to Section REF .", "The degree of inefficiency for the chosen group of stocks traded at the Moscow Exchange is 0.823.", "In our previous work [29], we found that the degree of inefficiency for the U.S. ETF market is about 0.11 for monthly time intervals and the 3-symbols discretization only.", "The degree of inefficiency for each stock and discretization is presented in Table REF .", "We notice that the 4-symbols discretization identifies a larger number of inefficient months than the 3-symbols discretization.", "That is, returns appear more predictable under the 4-symbols discretization than under the 3-symbols one.", "Figure REF shows the minimum value of the efficiency rate among all months for each stock.", "The two most notable deviations from 1 are for the stocks MLTR (Mechel, a mining and metals company) and RSTI (Rosseti, a power company).", "We investigate them in the next section.", "For the other 16 stocks, the minimum value of the efficiency rate is attained for the stock AFLT and is equal to 0.933 (0.964) for 3 (4) symbols.", "Table: *Table: *Figure: Minimum of efficiency rates for 18 stocks using 3- and 4-symbols discretizations."
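For reference, the evaluation metrics of the simulation study and the efficiency-rate bookkeeping of this section can be written down compactly; the sketch below is ours, and the reading of $N_{0}$ as the amount of 0-returns kept in the filtered series is our interpretation of the text.

```python
import numpy as np

def mape(sigma_hat, sigma_true):
    """Mean absolute percentage error of the volatility estimate (Table REF)."""
    return np.mean(np.abs((np.asarray(sigma_hat) - sigma_true) / sigma_true))

def er_n(n_round, n_nonmissing, n_zero, n_total):
    """Er_N = |N_round * N_A / (N_0 * N) - 1| (Table REF), with N_round the true
    number of rounding 0-returns, N_A the remaining non-missing returns, N_0 the
    0-returns kept as rounding (assumed), and N the total sample size."""
    return abs(n_round * n_nonmissing / (n_zero * n_total) - 1.0)

def efficiency_rate(entropy_value, ci_lower_bound):
    """Efficiency rate = estimated entropy / first-percentile Monte Carlo bound."""
    return entropy_value / ci_lower_bound

def degree_of_inefficiency(rates_3sym, rates_4sym):
    """Fraction of months with an efficiency rate below 1 for either discretization."""
    flagged = (np.asarray(rates_3sym) < 1) | (np.asarray(rates_4sym) < 1)
    return flagged.mean()
```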
], [ "Analysis of MLTR and RSTI", "We plot the values of the efficiency rates for monthly intervals for the MLTR and RSTI stocks; see Fig. REF and Fig. REF .", "Figure: Efficiency rate for the MLTR stock using 3- and 4-symbols discretizations.", "Figure: Efficiency rate for the RSTI stock using 3- and 4-symbols discretizations.", "Both types of discretization show coherent results.", "For MLTR, there are two notable decreases in the efficiency rates, at the beginning of 2014 and in the middle of 2016.", "For both types of discretization, the eight months with the lowest efficiency rates (in ascending order of time) are Jan-Feb and May-Oct of 2014.", "For each month we write down the most frequent block of symbols in Table REF .", "Note that block 1111 of the 4-symbols discretization appears as the most frequent one for 6 months out of 8 for MLTR.", "This block denotes a slight decrease in the price for 4 minutes in a row.", "The meaning of the last two columns is discussed later.", "For RSTI, there are two sharp decreases in 2014 and 2015.", "There are 11 months that have the lowest efficiency rates in common for both discretizations.", "These months are Apr-Sep of 2014 and Jun-Oct of 2015.", "Note that these inefficient months cluster together and are not distributed uniformly over the entire time period of 10 years.", "This is the signal of a market condition that affects the inefficiency of the stocks for more than one month.", "We construct a simple trading strategy on discretized returns to test the predictability of future returns.", "We consider blocks of length 4 obtained by the 4-symbols discretization.", "For each month, we divide its blocks into two equal parts.", "The discretization is made using only the first part of the month.", "We consider the sequences of the first 3 symbols of each block.", "If the empirical probability of getting 0 or 1 after a sequence of 3 symbols in the first part of the month is greater than 0.5, this sequence is assigned to group D (decreasing).", "If the empirical probability of getting 2 or 3 after the sequence is greater than 0.5, this sequence is assigned to group I (increasing).", "Then, for the second part of the month, we record a success if symbols 0 or 1 follow a sequence from group D, or if symbols 2 or 3 follow a sequence from group I.", "Then, we calculate the fraction of successes.", "This fraction is the probability of making a profit: sell after a sequence from group D, or buy after a sequence from group I.", "In the case of market efficiency, this probability is equal to 0.5.", "For example, according to Table REF we expect that after 111 the next symbol will be 1.", "That is, after this block, a trader can sell the stock.", "The fourth column of Table REF shows the results for the filtered return time series.", "The fifth column stands for the original return time series.", "In all cases the probability is greater than 0.5.", "As expected, the probabilities for the original return time series are greater than those for the filtered return time series.", "The reason is that the predictability of the original return time series also stems from the sources of apparent inefficiencies.", "The same analysis is done for the RSTI stock.", "The eleven months with the lowest efficiency rates are presented in Table REF .", "For the RSTI stock, the simple trading strategy gives a fraction of successes (of predicting increases and decreases of the price) greater than $0.5$ for all 11 months.", "The frequent behavior of the price of RSTI during the chosen months is a slight increase in price for several minutes in a row, denoted by symbol 
2.", "Table: *Table: *Table: *The simple trading strategy is an illustrative example of market inefficiency.", "In fact, such a strategy could result in no profit when used in practice because it does not take into account the costs of transaction and other trading frictions.", "Moreover, the filtering of daily seasonality pattern is made by using the whole period of analysis.", "That is, this method cannot be applied in real time.", "Finally, we consider blocks containing only observed returns, by neglecting the missing values from the analysis.", "Thus, the application of such a strategy in practice should be integrated with the case when a missing value follows a sequence of 3 symbols." ], [ "Stock market clustering", "Most of the month-long time intervals are identified as inefficient.", "But is there some dependence between two stocks that are inefficient at the same time?" ], [ "Kullback–Leibler distance", "We measure the similarity of discretized filtered returns by using the Kullback–Leibler (KL) distance (Eq.", "REF ).", "We use $k$ , the length of blocks, as the maximum value suitable for both sequences according to Eq.", "REF .", "The 4-symbols discretization is used.", "The Kullback–Leibler divergence $DL(P|Q)$ is calculated using empirical frequencies.", "The entropy rates are calculated using Eq.", "REF .", "Using the Kullback–Leibler distance for all pairs of stocks, we cluster them in three groups using hierarchical clustering with UPGMA algorithm [37]This algorithm is implemented using the python function cluster.hierarchy.dendrogram with the argument distance=average..", "The result is in Fig.", "REF .", "Combining companies into one cluster means that their stocks have a common behavior that is not related to the value of volatility, the degree of price staleness and the structure of microstructure noise.", "It can be seen that banks and oil companies are clustered together (right).", "There is a group of four stocks RTKM, HYDR, AFLT, MGNT, that have nothing in common at first glance.", "The remaining group (left) mainly consists of metallurgy companies.", "However, there is no visible distinction between the stocks of banks and oil companies.", "According to the clustering tree, two telecommunications companies differ significantly, as well as electricity companies.", "Finally, two stocks with the lowest efficiency rates, RSTI and MLTR, are the furthest (in the sense of KL distance) from any other stock.", "That is, there are no stocks that behave similarly to these two stocks.", "Figure: Hierarchical clustering tree using KL distance.", "The threshold for clustering into groups is 0.035." 
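A possible implementation of the distance of Eq. (REF ) and of the UPGMA clustering (the paper uses scipy.cluster.hierarchy with average linkage) is sketched below; the clipping of zero probabilities is our own pragmatic choice, not part of the original definition.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def kl_divergence(p, q, eps=1e-12):
    """KL(P|Q) for two empirical block distributions (Eq. REF).

    Blocks with q_i = 0 are clipped to eps to keep the divergence finite when a
    block is unobserved in one of the two series (our choice).
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], eps))))

def kl_distance(p, q, h_p, h_q):
    """D(P,Q) = KL(P|Q)/H^G(P) + KL(Q|P)/H^G(Q) (Eq. REF)."""
    return kl_divergence(p, q) / h_p + kl_divergence(q, p) / h_q

def upgma_clusters(dist_matrix, labels):
    """Average-linkage (UPGMA) tree from a symmetric matrix of D(P,Q) values."""
    Z = linkage(squareform(dist_matrix, checks=False), method="average")
    return dendrogram(Z, labels=labels, no_plot=True)
```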
], [ "Entropy of co-movement", "Now, we consider another measure of difference between two stocks, the entropy of co-movement.", "We calculate the Shannon entropy of the discretization describing the movement of a pair of prices presented in Eq.", "REF .", "We consider only minutes that are in common for both stocks.", "For these minutes we consider values of residuals obtained after ARMA fitting.", "The result is in Fig.", "REF .", "Two companies related to telecommunications are a separate cluster.", "Three metallurgy companies MAGN, CHMF, NLMK also cluster together.", "Stocks relating to oil and bank companies form the other cluster.", "The same cluster, with the exception of the TATN (oil industry), was also formed in the previous section.", "The \"closeness\" of stocks GAZP and SBER is detected either in this and in the previous section.", "The three stocks on the left that join other stock clusters last are the stocks with the lowest efficiency rates.", "Some clusters may form on the basis that companies belong to the same industry.", "The division of companies into industries is noticeable from the dendrogram in Figure REF .", "However, this criterion does not explain all clusters.", "For instance, GMKN from metallurgy is in the cluster of oil companies and banks.", "Figure: Hierarchical clustering tree using the entropy of co-movement.", "The threshold for clustering into groups is 0.989." ], [ "Conclusions and discussion", "We have investigated the predictability of the Moscow Stock Exchange.", "We are interested in a measure of market inefficiency that is not related to known sources of regularity in financial time series.", "Usually, these sources are not filtered out and, accordingly, their impact is taken into account in the degree of price predictability (see e.g.", "[10], [11], [12]).", "We have focused on two sources of regularity, namely volatility clustering and price staleness [13].", "Filtering of volatility clustering was made in [9] by estimating volatility using the exponentially weighted moving average.", "We have developed a modification of the volatility estimation by taking into consideration the effect of price staleness.", "Price staleness produces excess 0-returns that affect the estimation of volatility.", "Another approach of estimating volatility in the case of presence 0-returns was proposed in [14] where all 0-returns are reevaluated during an expectation-maximization algorithm.", "In our approach, we separate 0-returns that may have resulted from rounding and from price staleness.", "Thus, we also filter out apparent inefficiency due to price staleness.", "The advantage of our approach is simplicity: there is only one parameter in the method which can be optimized using historical data.", "Our approach combining the estimates of volatility and the degree of staleness can be used for real-time analysis since only past observations of time series are used.", "We used the Shannon entropy as a measure of randomness to calculate a degree of inefficiency of the Moscow Stock Exchange.", "We used two types of the discretization of return time series to test efficiency more reliably for each month.", "The 4-symbols discretization helps to find more price movements that lead to market inefficiency than the 3-symbols discretization.", "There are $80\\%$ of months over the period from 2012 to 2021 that are defined as inefficient.", "Even after filtering out all sources of apparent inefficiency, most of the months contain signals of market inefficiency.", "We 
selected two stocks that exhibit the lowest efficiency rates.", "We have shown that the most inefficient months are grouped together.", "We have shown that, for such months, discretized price returns before and after filtering out apparent inefficiencies are predictable.", "Finally, we used two methods to cluster stocks using filtered return time series.", "Inspired by [35], we computed the Kullback–Leibler distance between stocks and grouped stocks into three clusters.", "We also introduced the entropy of co-movement of two stocks.", "In this case, stock prices display common patterns that have an interpretation in terms of the sector the stocks belong to.", "We also noticed that the stocks of banks and oil companies were linked to each other.", "One possible improvement to stock clustering is to modify the entropy of co-movement such that it is possible to define a proper distance function.", "This is left for future research.", "The proposed method for measuring market efficiency using the Shannon entropy can be applied in other markets of different countries.", "In this work, we use monthly time intervals for entropy calculation.", "Our future work will be related to the optimization of the length of return time series.", "One of the problems is to find a significant decrease in entropy without using Monte Carlo simulations.", "We also plan to switch to a higher frequency (less than one minute) to analyze the predictability of financial time series." ], [ "Outliers", "We use the method of an outlier detection introduced in [23].", "The algorithm finds price values that are too far from the mean in relation to the standard deviation.", "The algorithm deletes a price $P_i$ if $|P_i-\\bar{P}_i(k)|\\ge c s_i(k)+\\gamma ,$ where $\\bar{P}_i(k)$ and $s_i(k)$ are respectively a $\\delta $ -trimmed sample mean and the standard deviation of the $k$ price records closest to time $i$ .", "The $\\delta \\%$ of the lowest and the $\\delta \\%$ of the highest observations are discarded when the mean and the standard deviation are calculated from the sample.", "The parameters are $k = 20, \\delta =5, c = 5, \\gamma = 0.05$ ." ], [ "Stock Splits", "We check the condition $|r| > 0.2$ in the return series to detect unadjusted splitsA split is a change in the number of company's shares and in the price of the single share, such that a market capitalization does not change..", "There are no unadjusted splits found." ], [ "Intraday Volatility Pattern", "The volatility of intraday returns has periodic behavior.", "The volatility is higher near the opening and the closing of the market.", "It shows an U-shaped profile every day.", "The intraday volatility pattern from the return series is filtered by using the following model.", "We define deseasonalized returns as $\\tilde{R}_{d,t}=\\frac{R_{d,t}}{\\xi _t},$ where $\\xi _t=\\frac{1}{N_{days}}\\sum _{d^{\\prime }}\\frac{|R_{d^{\\prime },t}|}{s_{d^{\\prime }}},$ $R_{d,t}$ is the raw return of day $d$ and intraday time $t$ , $s_d$ is the standard deviation of absolute returns of day $d$ , $N_{days}$ is the number of days in the sample." 
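The two filters of this appendix can be sketched as follows (our code; the exact construction of the window of $k$ price records closest to time $i$ in the outlier rule is an assumption, since the reference implementation of [23] is not reproduced here).

```python
import numpy as np

def deseasonalize(R):
    """Filter the intraday volatility pattern: R is a (n_days, n_minutes) array
    of raw returns; returns the deseasonalized returns and the profile xi_t."""
    s_d = np.std(np.abs(R), axis=1, keepdims=True)   # std of |returns| per day
    xi = np.mean(np.abs(R) / s_d, axis=0)            # U-shaped intraday profile
    return R / xi, xi

def is_outlier(prices, i, k=20, delta=5.0, c=5.0, gamma=0.05):
    """Brownlees-Gallo rule: flag P_i if |P_i - mean| >= c * std + gamma, where
    the delta%-trimmed mean and std are taken over the k price records closest
    to time i (implemented here as a symmetric window excluding P_i itself)."""
    lo, hi = max(0, i - k // 2), min(len(prices), i + k // 2 + 1)
    window = np.delete(np.asarray(prices[lo:hi], float), i - lo)
    lo_cut, hi_cut = np.percentile(window, [delta, 100.0 - delta])
    trimmed = window[(window >= lo_cut) & (window <= hi_cut)]
    return abs(prices[i] - trimmed.mean()) >= c * trimmed.std() + gamma
```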
], [ "Heteroskedasticity", "Different days have different levels of the deviation of the deseasonalized returns $\\tilde{R}$ .", "In order to remove this heteroskedasticity, we estimate the volatility $\\bar{\\sigma }_t$ in Appendix .", "We define the standardized returns by $r_t=\\frac{\\tilde{R}_t}{\\bar{\\sigma }_t}.$" ], [ "Price staleness", "If a transaction cost is high, the price is updated less frequently, even if trading volume is not zero.", "This effect is called price staleness and is discussed in Section REF .", "We identify 0-returns appearing due to rounding (and not due to price staleness) using the Equation REF .", "Other 0-returns are set as missing values as shown in Appendix ." ], [ "Microstructure noise", "The last step in filtering apparent inefficiencies is filtering out microstructure noise.", "The microstructure effects are caused by transaction costs and price rounding.", "We consider the residuals of an ARMA(P,Q) model of the standardized returns after filtering out 0-returns.", "We apply the methodology introduced in [38] to find the residuals of an ARMA(P,Q) model by using the Kalman filter.", "We select the values of $P$ and $Q$ that minimize the value of BIC [39], so that $P+Q<6$ .", "The values of $P$ and $Q$ are chosen for each calendar year and are used for the next year.", "For the year 2012 we select $P=0$ and $Q=1$ corresponding to an MA(1) model." ], [ "Algorithm", "The aim of the algorithm is to estimate volatility and filter out excess 0-returns due to price staleness.", "Some 0-returns appear due to price rounding.", "These 0-returns will be saved in the data.", "First, we set the number of 0-returns \"to save\" $N_{save}=0$ and the first value of a cumulative function $Z_1=0$ .", "The cumulative function is updated $Z_t=Z_{t-1}+p_t$ , if $r_{t-1}$ is not defined as missing due to staleness.", "Each time when $\\lfloor Z(t) \\rfloor - \\lfloor Z(t-1)\\rfloor =1$ , $N_{save}$ is increased by 1.", "We notice that the first non-zero return after a row of 0-returns due to staleness is the sum of all missing returns generated by a hidden efficient price.", "This return is also set as missing.", "However, the value of return used for estimating volatility is calculated as its expected value: $\\hat{r}_{n-1}=\\frac{r_{n-1}}{\\sqrt{N_0+1}}$ , where $N_0$ is the amount of missing values strictly before the non-zero return $r_{n-1}$ .", "The same is also referred to initially missing values, e.g., due to no-trading or errors in collecting the data.", "Another assumption is that a 0-return appears due to staleness if the previous return had the 0-value and was defined to appear due to staleness.", "We include this rule, since we assume that it is more likely that two consecutive 0-returns appear due to high transaction costs than due to rounding (that is, simply speaking, two outcomes of generating Gaussian random variables are less than a tick size).", "Generally, for the estimation of volatility at time $t$ we should consider three cases: $P_{t-1}$ was missing (or minute $t-1$ is non-trading), $r_{t-1}=0$ , $r_{t-1}\\ne 0$ .", "Thus, the algorithm is the following.", "We give the algorithm for the case of $Sig_1$ which is used in the application for the real data.", "We remove all 0-returns that start the sequence." 
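The pseudocode in the next subsection formalizes these rules; as an illustration, a rough Python transcription (ours, simplified: the final staleness significance check is omitted and some border cases are handled loosely) might look like this.

```python
import numpy as np
from scipy.special import erf

MU1 = np.sqrt(2.0 / np.pi)

def p_round(tick, price, sigma, dt=1.0):
    # Same rounding probability as in the earlier sketch (Eq. REF).
    R = tick / (price * sigma * np.sqrt(2.0 * dt))
    return erf(R) + (np.exp(-R ** 2) - 1.0) / (R * np.sqrt(np.pi))

def ewma_with_staleness_filter(returns, prices, tick, alpha=0.05, dt=1.0):
    """Joint Sig_1 volatility estimation and filtering of excess 0-returns.

    `returns` may already contain np.nan for non-trading minutes; additional
    0-returns attributed to staleness are set to np.nan in the returned copy.
    """
    r = np.array(returns, dtype=float)
    n = len(r)
    sigma = np.empty(n)
    sigma[0] = abs(r[0]) / MU1
    Z, n_save, n0 = 0.0, 0, 0
    for t in range(1, n):
        prev = r[t - 1]
        if np.isnan(prev):                       # missing price / non-trading minute
            sigma[t] = sigma[t - 1]
            n0 += 1
        elif prev == 0.0:
            if n_save > 0 and n0 == 0:           # 0-return attributed to rounding
                n_save -= 1
                sigma[t] = (1 - alpha) * sigma[t - 1]   # Sig_1 update with |r| = 0
            else:                                # 0-return attributed to staleness
                sigma[t] = sigma[t - 1]
                n0 += 1
                r[t - 1] = np.nan
        else:                                    # non-zero return after n0 gaps;
            # the text also marks this return as missing in the output series,
            # which we omit here for brevity
            sigma[t] = alpha * abs(prev) / (MU1 * np.sqrt(n0 + 1)) \
                       + (1 - alpha) * sigma[t - 1]
            n0 = 0
        # Step 2: accumulate the expected number of rounding 0-returns to keep,
        # using the previous-step price and volatility as in Eq. (REF).
        if not np.isnan(r[t - 1]):
            Z_new = Z + p_round(tick, prices[t - 1], sigma[t - 1], dt)
            if int(Z_new) > int(Z):
                n_save += 1
            Z = Z_new
    return r, sigma
```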
], [ "Pseudocode", "Step 0: $\\bar{\\sigma }_1=|r_1|/\\mu _1$ ; $Z_1=0$ , $N_{save}=0$ ; $N_0=0$ .", "For $t$ from 2 to $N$ , where $N$ is the length of time series: Step 1: If $r_{t-1}$ is missing: $\\bar{\\sigma }_t=\\bar{\\sigma }_{t-1}$ ; Increase $N_0$ by the amount of consecutive missing prices Else if $r_{t-1}=0$ : If $N_{save}>0$ and $N_0=0$ : $N_{save}=N_{save}-1$ , $\\bar{\\sigma }_t=Sig_1(\\alpha , 0,\\bar{\\sigma }_{t-1})$ Else: $\\bar{\\sigma }_t=\\bar{\\sigma }_{t-1}$ , $N_0=N_0+1$ , $r_{t-1}=\\text{missing}$ Else: $\\bar{\\sigma }_t=Sig_1(\\alpha , \\frac{r_{n-1}}{\\sqrt{N_0+1}},\\bar{\\sigma }_{t-1})$ , $N_0=0$ Step 2: Calculate $p_t$ (Eq.", "REF ) If $r_{t-1}$ is not missing, $Z_t=Z_{t-1}+p_i$ If $\\lfloor Z(t) \\rfloor - \\lfloor Z(t-1)\\rfloor =1$ , $N_{save}=N_{save}+1$ Finally, we check if the effect of staleness really exists in the price time series: $\\begin{split}\\hat{p}&=\\frac{\\sum _i p_i}{N}\\\\\\hat{q}&=1-\\hat{p}\\\\Var&=\\hat{p}\\hat{q}N\\end{split}$ If $N_{real}\\le \\sum _i p_i+1.96\\sqrt{Var}$ , we leave time series without putting any missing values, where $N_{real}$ is the initial amount of 0-returns." ], [ "A predictable time series with entropy at maximum", "The goal of this section is to construct a price model where entropy is high because of discretization.", "This model shows that a high entropy value may be caused by discretization, but not because of the randomness of a return time series.", "There are equal probabilities of having symbols 0, 1 and 2.", "1 corresponds to log-returns, $r$ , equal to $-0.4$ , 2 corresponds to log-returns equal to $0.4$ .", "The structure of symbol 0 is more complicated.", "It covers three other symbols $3,4,5$ .", "They correspond to log-returns $-0.3,0.1,0.2$ , respectively.", "One of the symbols $3,4$ or 5 appears with probabilities depending on the previous value of these symbols.", "The probabilities are presented in the Table REF .", "Having a symbol presented in a column, there are probabilities of getting a symbol presented in a row.", "Table: *The model implies an average zero return.", "However, a trading strategy that increases a profit exists.", "After 3 a trader should buy, and after 4 and 5 the trader should sell.", "However, the entropy of 3-symbols series is at maximum, that should imply absence of profitable strategies.", "Considering the same example with 4-symbols discretization we get that $Q_1=-0.4$ , $Q_2=0.1$ , and $Q_3=0.4$ .", "Therefore, we have the following discretization of returns: $s={\\left\\lbrace \\begin{array}{ll}0, r=-0.4, \\\\1, r=-0.3\\text{ or }r=0.1,\\\\2, r=0.2,\\\\3, r=0.4.\\end{array}\\right.", "}$ Thus, we can distinguish returns $r=0.2$ from others the using 4-symbols discretization.", "Table REF gives the following probabilities for the blocks of two symbols from the 4-symbols discretization: $p(11)=\\frac{7}{162}$ , $p(12)=p(21)=\\frac{5}{162}$ , $p(22)=\\frac{1}{162}$ .", "Noting that $p(0)=p(3)=\\frac{1}{3}$ , $p(1)=\\frac{2}{9}$ , and $p(2)=\\frac{1}{9}$ , we calculate that $H_1=-\\frac{2}{3}\\log \\left(\\frac{1}{3}\\right)-\\frac{2}{9}\\log \\left(\\frac{2}{9}\\right)-\\frac{1}{9}\\log \\left(\\frac{1}{9}\\right)\\approx 0.946<1$ and $\\begin{split}H_2&=-\\frac{1}{2}(\\frac{7}{162}\\log {\\left(\\frac{7}{162}\\right)}+\\frac{5}{81}\\log {\\left(\\frac{5}{162}\\right)}+\\frac{1}{162}\\log {\\left(\\frac{1}{162}\\right)}+\\\\&+\\frac{4}{9}\\log {\\left(\\frac{1}{9}\\right)}+\\frac{8}{27}\\log {\\left(\\frac{2}{27}\\right)}+\\frac{4}{27}\\log 
{\\left(\\frac{1}{27}\\right)})\\approx 0.944<H_1\\end{split}$" ] ]
2207.10476
[ [ "High-Level Approaches to Hardware Security: A Tutorial" ], [ "Abstract Designers use third-party intellectual property (IP) cores and outsource various steps in the integrated circuit (IC) design and manufacturing flow.", "As a result, security vulnerabilities have been rising.", "This is forcing IC designers and end users to re-evaluate their trust in ICs.", "If attackers get hold of an unprotected IC, they can reverse engineer the IC and pirate the IP.", "Similarly, if attackers get hold of a design, they can insert malicious circuits or take advantage of \"backdoors\" in a design.", "Unintended design bugs can also result in security weaknesses.", "This tutorial paper provides an introduction to the domain of hardware security through two pedagogical examples of hardware security problems.", "The first is a walk-through of the scan chain-based side channel attack.", "The second is a walk-through of logic locking of digital designs.", "The tutorial material is accompanied by open access digital resources that are linked in this article." ], [ "Introduction", "Cybersecurity is a system-level challenge with various assets and threats at all levels of abstraction.", "Throughout the entire lifecycle of embedded computer systems, designers and end-users need to be aware of risks arising from untrusted parties [25].", "In digital system design, we are usually concerned with meeting power, performance, and area objectives, where we might also try to improve testability, reliability, and other non-functional properties.", "In this tutorial, we present an introduction to basic concepts in the domain of hardware security.", "So that non-experts may gain insights into the mindset and challenges when working in this domain, we will also take you through a hands-on journey using two case studiesThis tutorial is accompanied by a set of online materials, available at https://github.com/learn-hardware-security, which you can use to follow the case studies.", "from domains which currently have active research communities.", "The tutorial is laid out as follows.", "In sec:principles, we will talk about hardware security in the broadest sense.", "The first case study is presented in sec:scan, where we will take you through an end-to-end attack on a commonly utilized technique for post-fabrication testing of digital circuits: i.e.", "an attack on scan chains.", "We will show how an embedded cryptographic key is vulnerable to leaks, even when the key is not itself exposed to the scan chain.", "Then, in sec:locking, we will take you through the basics of logic locking from the ground-up.", "Logic locking is a set of evolving techniques focused on the protection of hardware intellectual property from reverse engineering and piracy, particularly in the context of an untrusted foundry.", "This case study will work through the motivations and foundations of logic locking and then provide a step-by-step walk-through of one of the pivotal attacks in the literature: the SAT attack [39].", "We assume that you, the reader, have a basic understanding of the fundamental topics related to digital system design (e.g., digital logic gates, Boolean algebra) but no prior knowledge about scan chain security or logic locking; thus we will start by making you more familiar with security generally before a hands-on dive into those topics.", "Readers with a bit more background will find the material a good refresher and we encourage you to sample some of the more recent work mentioned throughout the tutorial and at the end 
of each case study.", "After working through this tutorial, you should: be able to apply a security mindset in thinking about a system; understand the role of the scan chain, its benefits and risks to the system; be able to work through a scan-based exploit of a cryptographic design; understand the motivations for logic locking, its overall objectives, and the basic principles of the area; and be able to analyze and think critically about simple locking.", "All trustworthy software functionality fundamentally relies on the correct operation of trustworthy underlying hardware.", "For instance, when we load our banking data via our web browsers, we assume that the computer powering the web browser does not surreptitiously save our details or leak them to third parties.", "When we use our bank card to make a purchase, we assume that the hardware doesn't carelessly leave our details in memory.", "Yet, hardware is often designed with only the desired functionality in mind, not essential security properties, and there are numerous examples in the literature of hardware being broken (e.g.", "smart cards [17], microcontroller memory protection fuses [38], and on DRAM memory systems such that they leak sensitive data [20]).", "While functionality is of course important, as are other design metrics (cost, power consumption, performance, size, and reliability among them), leaving security as an afterthought is dangerous, and the increasing number of hardware-based attacks is highlighting this [25], [3].", "That said, industry has recently started acknowledging these risks, and initiatives such as the MITRE Hardware CWEs [18], [9] have begun to classify known design weaknesses.", "One major driver of industry adoption comes from the risks inherent in integrated circuit (IC) and printed circuit board (PCB) counterfeiting and reverse engineering.", "Electronic supply chains are distributed worldwide [25], introducing many possible points of attack.", "Designing an IC or PCB involves creation of intellectual property (IP) that may come from third-party organizations or in-house or both, then integrating those components, and generating an IC/PCB layout; a blueprint of the design will then be sent to the manufacturer.", "Post manufacturing, the designs will be tested, which may be at yet another organization, before being packaged, distributed, and sold, again, with other parties in-the-loop.", "Given the high value represented by IP, there is considerable motivation to prevent reverse-engineering or piracy; for example, reverse-engineered ARM microcontrollers available on the grey market with reduced security [23] represent supply-chain risks.", "Further compounding the security challenge are the risks of faults entering into the design, either accidentally or maliciously added.", "Hardware Trojans [44] refer to the category of maliciously introduced fault-inducing artifacts, and they can be caused by adding, altering, or removing components or connections.", "Unfortunately, designing a product such that it can be easily tested can also provide attack vectors for malicious third parties.", "So how do we begin to get a handle on hardware security?", "In this tutorial, we will use as pedagogical examples two case studies to introduce you to hardware security concepts.", "The first is on attacking systems via exploiting scan chains (sec:scan), which illustrates how “helpful” hardware components can be manipulated for nefarious purposes.", "This example serves to demonstrate hardware-based exploitation, thus 
motivating research work on secure scan and other hardware-assisted security approaches.", "The second provides an intuitive introduction to logic locking (sec:locking), culminating in a walk-through example application of the pivotal “SAT attack” [39].", "This example serves as an on-boarding into the domain of intellectual property protection, providing insights into the motivations that drive the “cat-and-mouse” of protecting hardware itself from reverse-engineering analysis, theft, and so on.", "Both case studies focus on the confidentiality property of security, namely the protection of cryptographic keys (in the scan chain example) and of the functionality of a design (in the logic locking example).", "Readers who are already familiar with security properties should feel welcome to proceed to sec:scan; otherwise, the next section provides a quick introduction to security as framed by “the CIA triad.”" ], [ "Confidentiality, Integrity, and Availability - the CIA Triad", "The CIA Triad of confidentiality, integrity, and availability defines the central tenets of all cybersecurity [27], [8].", "All attacks and defenses can be viewed within the context of the Triad, and for any device or product to be considered secure it must address all three facets.", "These three properties can apply at various levels of abstraction, from the hardware design level through to the system level (potentially involving distributed systems).", "Confidentiality refers to the protection of data and resources from unauthorized access.", "For example, your credit card stores important banking details in its embedded ICs.", "These details need to be well protected to prevent third parties from copying them and using them fraudulently to make purchases of their own.", "There are many countermeasures to protect confidentiality, including password access and other access controls, encryption, and physical defenses.", "Integrity covers the defense of information from unauthorized alteration.", "For instance, it should not be possible to alter your transaction (e.g. change the dollar value) once it has been confirmed by the electronics in your bank card.", "Integrity-based measures provide assurance that data is both accurate and complete.", "In electronic systems, it is necessary to control access to the components not only in the digital realm but also in the physical realm, and to ensure that authorized users can only alter the information that they are legitimately authorized to alter.", "Countermeasures in this space can also be based on encryption, such as taking digital hashes or signatures of files, or on duplicating data and storing it in multiple ways.", "Availability refers to the need for a given system to be accessible to authorized users.", "Here, malicious attacks seek to prevent this access and so are often referred to as Denial of Service (DoS) attacks.", "Examples include cryptolockers and ransomware - for instance, the 2021 Colonial Pipeline attack, which impacted the availability of gas on the east coast of the United States [28].", "Within the context of a hardware-based system, an availability-based attack could be as simple as stealing the aforementioned banking card or damaging the reader.", "Availability countermeasures can involve hardware and software duplication and redundancy (i.e. backup systems), as well as special types of hardware and software to detect if an attack is occurring.", "Insight: Given your own experience, reflect on things you consider valuable in a 
given system.", "These things are assets.", "Imagine what could go wrong if any of the three properties of Confidentiality, Integrity, or Availability is violated." ], [ "Overview", "Scan chains, a Design for Test (DFT) technique, are implemented in integrated circuits (ICs) in order to test their correct functionality [7], [13].", "They provide high fault coverage and do not need complex hardware for test pattern generation or signature analysis.", "Fundamentally, a scan chain is a sequential combination of internal registers / flip-flops.", "It is constructed during synthesis by modifying normal D flip-flops to also include scan logic.", "This scan logic consists of multiplexers which are daisy-chained so that the D flip-flops can be disconnected from the main combinational circuit to instead sequentially feed data between themselves.", "The generic architecture of a scan chain is depicted in fig:scan-chain-arch.", "As shown, when the circuit is operating normally, the D flip-flops function normally, taking state from and returning state to the combinational circuit.", "However, when the circuit is put into test mode, the D flip-flops are joined using the multiplexers such that their contents may be serially `shifted' (as if they were a shift register) between one another and out of the circuit.", "In this context the shifting is referred to as scanning.", "Figure: Generalized scan chain architecture", "In typical implementations of scan chains, the hardware is connected to a five-pin serial JTAG [14] boundary scan interface: TCK is the test clock signal, TMS selects either normal mode or scan (test) mode, TRST is the reset control signal, TDI is connected to the input of the scan chain and is used to scan in new values, and TDO is connected to the output of the scan chain and is where the internal register values will appear.", "However, scan chains represent a significant source of information about the internal functionality and implementation of a given device.", "As such, designers will typically seek to protect access to the scan architecture.", "The simplest method for this is to leave the scan chain access pins unbonded when physically packaging the IC.", "However, as packages can be broken open for reverse engineering purposes, this is surmountable.", "Alternatively, or in addition, access to the scan chain or JTAG interface can be controlled via other methods, including setting protection bits, blowing fuses, or using access control passwords.", "However, given enough time with physical access to the component, protections such as these can still be compromised.", "In this tutorial case study, we will examine how a compromised scan chain can be utilized to leak secrets from a given electronic circuit.", "We consider the case where a cryptographic algorithm is implemented as an application-specific integrated circuit (ASIC).", "Specifically, we consider the symmetric-key Data Encryption Standard (DES) [21].", "We choose DES in this tutorial for three reasons: (1) it is a relatively straightforward algorithm that we can comprehensively describe within this tutorial, ensuring a self-contained case study; (2) although retired (no longer recommended for new applications), DES was a widely-adopted algorithm used for decades to protect digital secrets (and still sees use within legacy systems); and (3) the process illustrated in this tutorial is also suitable for attacking other encryption standards, including the more advanced and current best-practice Advanced Encryption 
Standard (AES) [22] algorithm.", "Scan chain attacks on both DES [45] and AES [2], [46] have both been previously demonstrated in the literature.", "Insight: How does this example fit into the CIA Triad (sec:cia-triad)?", "A digital circuit with an embedded encryption key would be expected to keep this key confidential.", "For example, consider how an encrypted digital media stream (e.g.", "satellite TV) should only be broadcast to authorized users (e.g.", "those with the decoding hardware).", "In order to maintain this ideal, the decoding hardware should ensure that the embedded cryptographic keys are well-protected." ], [ "Data Encryption Standard (DES)", "The Data Encryption Standard (DES) is a symmetric-key block cipher published by the National Institute of Standards and Technology (NIST) in the year 1977, and most recently updated in 1999 [21].", "It encrypts 64 bits of data at a time, using a 56-bit key (usually stored as a 64-bit value with checksum bits).", "DES is a Feistel Cipher implementation, meaning it utilizes a repeating block structure and both encryption and decryption utilize the same algorithm.", "DES is based on 16 rounds of 64-bit blocks.", "The high-level structure of the algorithm is presented in fig:des-high-level.", "It is separated into two parts, round key derivation and encryption/decryption.", "Figure: High level structure of DES.Round Key Derivation: This is depicted in the right-hand side of fig:des-high-level.", "Here, we input the 64-bit key and perform a permutation using table PC1 which re-orders the bits (see tbl:des-IP in sec:appendix:des-tables).", "PC1 both removes the superfluous 8 parity bits and splits and reorders the remaining 56-bits of key value into two 28-bit halves.", "Within each round, these two halves are left-rotated independently, either 1 or 2 bits, depending on the specific round of operation (see tbl:des-SHIFTS in sec:appendix:des-tables).", "These two halves are then concatenated and compressed to 48-bits by using the PC2 permutation table (see tbl:des-PC2 in sec:appendix:des-tables).", "This output is termed as a round key.", "Because the PC2 inputs are rotated between each round, the output of PC2 will change each round—each round will have a different round key.", "Given that round key derivation is entirely separate from data encryption and decryption, it is possible to precompute and store the round keys if the intended encryption/decryption key is static.", "Insight: There are 16 round keys, each 48 bits in size.", "However, we do not need to break $2^{(16*48)}$ bits of encryption.", "The round keys were originally derived from the same key, and although this key was 64 bits originally, 8 of the bits are parity bits, and so there is effectively only 56 bits of entropy protecting the encryption.", "The 56-bit key length is what makes DES unsuitable for protecting secrets in the modern era—it is short enough that it can be brute forced, and has been vulnerable to this for many years (for example, see the EFF DES Cracker from 1998 [43]).", "Encryption/decryption: This is depicted in the left-hand side of fig:des-high-level.", "Here, we input the 64-bit data.", "This is first permuted using the Initial Permutation (IP) table (see tbl:des-IP in sec:appendix:des-tables) before the left-hand and right-hand 32-bits are separated into $L_0$ and $R_0$ .", "Then, in each round $n$ of the encryption / decryption process, the 32-bit value $R_n$ is taken and combined in function $F$ with the appropriate round key $k_{n+1}$ .", "When 
encrypting, the round keys are used in the forward order (as is shown in fig:des-high-level).", "When decrypting, the round key order is reversed, but no other changes need to be made to the process.", "Function $F$ in fig:des-high-level is detailed as follows.", "Firstly, the input from the $R$ register is expanded using expansion permutation function $E$ (see tbl:des-E in sec:appendix:des-tables) to 48-bits, making value $a$ .", "This is combined with the appropriate round key $k_{n+1}$ using an exclusive-or operation to make value $b$ .", "Each consecutive six-bit block of $b$ is then passed through the appropriate substitution box $S$ (see tbl:des-sbox-all in sec:appendix:des-tables) which makes 4-bit blocks then concatenated to make value $c$ .", "Value $c$ is then passed through permutation $P$ (see tbl:des-P in sec:appendix:des-tables) to make value $d$ , which is finally then combined using exclusive-or with the the $L_n$ to make value $e$ .", "In the next round, $R_{n+1}$ takes the value $e$ , and $L_{n+1}$ takes the previous $R_n$ , unless it is the last round.", "In this case ($n=16$ ), the swap does not occur; and instead, the resultant value is concatenated, before being passed into the Final Permutation (FP) (see tbl:des-FP in sec:appendix:des-tables).", "The final permutation is derived by using the inverse of the IP.", "This results in the algorithm's final encrypted output." ], [ "Attacking a practical DES implementation", "The DES implementation can be realized in hardware in a number of different ways.", "One resource-efficient approach is to utilize a single set of L and R registers, and use these iteratively to perform each round of an encryption or decryption.", "These can be combined with pre-computed round-keys stored in an on-chip read-only memory (ROM).", "This approach is presented in fig:des-hardware.", "Internally, the initial permutation, final permutation, expansion E and permutation P functions are implemented as fixed one-to-one mappings.", "S-boxes are implemented either as gates or as ROMs.", "The DES controller primarily consists of a 4-bit counter which will index the appropriate round key for each round, as well as decode logic that will control the enable and select lines.", "An implementation of this algorithm featuring a scan chain is what we focus on attacking, although the scan chain will not include the round keys ROM (otherwise the attack becomes trivial) or any registers from the control logic (we discuss this potential later, in sec:design-explore).", "Figure: DES hardware block diagramWhat does an attacker know?", "Firstly, we assume the attacker knows that the circuit they wish to attack is implementing DES.", "In addition, as it is public, the attacker knows the details of the DES algorithm.", "Secondly, the attacker will know certain details of the implementation.", "For instance, the attacker would likely be able to obtain vendor-provided timing diagrams of the system.", "This is important, as from these, the attacker can infer the structure of the given DES circuit - for instance, whether or not it is iterative, pipelined, or purely combinational.", "In this case study, our architecture is iterative, meaning it operates one round per clock cycle.", "In the first clock cycle, the permuted input will be loaded to the $L$ and $R$ registers.", "In the next 15 clock cycles, the $L$ and $R$ registers will be iteratively updated by the result of the round key computations.", "The output register will be loaded after the final $L$ and $R$ values 
are permuted.", "Likewise, the attacker can determine that the round keys are stored internally in the design (in a ROM).", "It is reasonable to assume that the attacker will gain access to the scan chains, either via a JTAG port or via breaking open the IC package and directly probing the scan chain ports.", "However, the attacker will not know the structure of the scan chain.", "The ordering of bits in the scan chain is typically determined by the position of each register in the physical layout of the circuit, and would not be known unless the digital design files of the IC (a more protected form of IP than the typically vendor-provided architecture and timing diagrams) were obtained.", "Based on the above, we thus need to break our attack into two phases.", "The first will determine the structure of the scan chain.", "The second will retrieve a DES round key, and from this, obtain the original DES key.", "Note that while we follow steps similar to those presented in Yang et al.'s work [45], we use an alternative, more consistent nomenclature (where the `first bit', bit 1, always refers to the left-most or most significant bit of a value or register) and alternative constants, present the attack steps in more detail (including accompanying code and code discussion), and use an alternative methodology for determining the final key.", "Insight: In this subsection we detailed what the hypothetical attacker does and does not know.", "This process is more formally known as attacker or threat modeling [35], and it is an important step when considering attacks and defenses for given cybersecurity threats.", "Note: In the rest of this section, we proceed with examples and code written in the Python programming language.", "We use Python version 3.8.10.", "The listings are designed to be run sequentially and in order.", "For example, copy Listing 1 into your program and run it before copying and running Listing 2, and so on.", "Most listings terminate with an exit() command (and possibly some print statements before these).", "You can safely comment these out; they are present only for pedagogical purposes.", "The complete code for this case study and the accompanying DES implementation are available in our open repository at https://github.com/learn-hardware-security/py-des-scan." 
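, "To build intuition for what the emulator's scan output represents, the following minimal sketch (our own illustration built around a hypothetical ToyScanChain class; it is not part of the py-des-scan repository) models a scan chain as a simple shift register: in normal mode the flip-flops capture state from the combinational logic, and in test mode their contents are shifted out serially, one bit per test clock.", "#A toy scan chain (illustration only; this hypothetical class is not DESWithScanChain)
class ToyScanChain:
    def __init__(self, num_flops):
        self.flops = [0] * num_flops  #internal flip-flop contents

    def capture(self, values):
        #Normal mode: the flip-flops capture state from the combinational logic
        assert len(values) == len(self.flops)
        self.flops = list(values)

    def scan_out(self):
        #Test mode: contents are shifted out serially, one bit per test clock
        bits = \"\"
        for _ in range(len(self.flops)):
            bits += str(self.flops[-1])         #this bit appears on TDO
            self.flops = [0] + self.flops[:-1]  #shift the chain towards the output
        return bits

chain = ToyScanChain(8)
chain.capture([1, 0, 1, 1, 0, 0, 1, 0])
print(chain.scan_out())  #the internal state, now visible externally", "The DESWithScanChain emulator used in the next subsection exposes its scan output in an analogous (if much larger and randomly ordered) form, and it is exactly this visibility that the attack exploits."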
], [ "DES implementation (Python)", "We can emulate the DES implementation from fig:des-hardware according to the Python class illustrated in fig:python-des-diagram.", "We define it as having only three public methods (assume all attributes are private).", "__init__(seed=None, force_key=None), which is used to instantiate the DES implementation.", "A random seed should be provided to derive the random structure of the scan chain, and if no key is provided in force_key, will also randomly generate a key.", "If no seed is provided, the program will set it to numeric 1.", "In this tutorial, we will focus on determining the value of the key when one was not provided.", "ReInit() is used to reset the hardware between runs.", "It does not recompute keys.", "RunEncryptOrDecrypt(input, do_encrypt, num_rounds) -> Tuple[str, list] is the all-in-one function used to run the emulated hardware.", "It takes a 16-character hexadecimal string as input, as well as a boolean do_encrypt that determines if the hardware should encrypt (True) or decrypt (False).", "It also takes a number of execution rounds num_rounds, which refers to the number of clock cycles the hardware should be provided when executing.", "The function returns two values as a Tuple.", "The first of these is the value of the output register, encoded as a 16-character hexadecimal string.", "The second of these is a list of scan chain outputs, as observed between every round of execution (i.e.", "every tick of the encryption hardware).", "The length of this list will thus be the same as the provided num_rounds argument.", "We now instantiate the design according to the code presented in lst:des-dut-instantiate.", "Executing this code provides the output in lines 22-26.", "#Useful modules for this tutorial from typing import * import itertools   #Import the DESWithScanChain module # assuming the py-des-scan GitHub repository is downloaded to a folder called 'py_des_scan' import py_des_scan.des_scan as des   #Define a random seed for the emulated hardware's key and scan chain seed = 6   #Instantiate the DES module that we will test/attack dut = des.DESWithScanChain(seed)   #Do a test run of the DES with a given input test_code = \"0BADC0DEDEADC0DE\" print(\"Input: \" + test_code) (check_ciphertext, _) = dut.RunEncryptOrDecrypt(test_code) print(\"Ciphertext: \" + check_ciphertext) (plaintext, _) = dut.RunEncryptOrDecrypt(check_ciphertext, do_encrypt=False) print(\"Plaintext: \" + plaintext)   \"\"\" Sample Output:     Input: 0BADC0DEDEADC0DE     Ciphertext: 5FB5CD14D3136003     Plaintext: 0BADC0DEDEADC0DE \"\"\"" ], [ "Attack Phase 1: Determining the Scan Chain Structure", "The first part of the attack deals with reverse-engineering the locations of the flip-flops for the Input, L, and R registers in the scan chain.", "This is necessary as the scan chain output does not intrinsically reveal the correspondence between the data values it provides and the registers internal to the design.", "The general process for this is follows.", "Reset the DES hardware to clear all registers.", "Present DES hardware with input $(8000000000000000)_{16}$ (i.e.", "64-bits with the left-most bit (i.e.", "MSB) set).", "Run it for one clock cycle such that the input register is loaded.", "Scan out the bit stream pattern (Pattern 1).", "Pattern 1 will have one bit active.", "This position corresponds with the position of the input's currently active bit (i.e., the MSB).", "Run it for one additional clock cycle so that the input is loaded into the L and R registers 
(after they pass through the initial permutation step).", "Scan out the bit stream pattern (Pattern 2).", "Pattern 2 will have two bits active.", "As the input is not cleared after loading L and R registers, one of these will match the bit in Pattern 1 and can be disregarded.", "The other bit represents the bit position in the L or R register after passing through the initial permutation (for example, the first bit of the input will pass through to the 8th bit of the R register).", "Repeat steps 1-8, shifting the input by 1 bit each time, 63 more times to determine the position of the remaining input and L/R register bits.", "Python code to complete this is presented in lst:scan-chain-reverse.", "#Define arrays for storing the determined indices in the random scan chain input_scan_indices = [None] * 64 left_r_scan_indices = [None] * 32 right_r_scan_indices = [None] * 32   #Input a single bit in each of the 64 possible positions and run two rounds, # capturing the scan chains of each cycle.", "# In the first cycle, we can determine that bit of the input register # In the second cycle, we can determine the bit of the L/R register for i in range(64):     dut.ReInit() #Reset the hardware       #Determine the input hex string     input_num = 1 << (63 - i)     input_hexstr = '       #Get scans for the input (we run in Encrypt mode)     (_, scans) = dut.RunEncryptOrDecrypt(input_hexstr, True, 2)       #The scans are 192 bits (represented as ASCII 0/1 characters) long.", "# In scans[0], only one bit will be True;     # this represents the i-th bit in the Input register.", "for j in range(192):         if scans[0][j] == \"1\":             input_index = j             break     #We can store this immediately, using \"i\" as the position     input_scan_indices[i] = input_index;       # In scans[1], two bits will be True;     # the one not present in the first scan represents the i-th bit in the L/R registers     # after Initial Permutation.", "for j in range(192):         if scans[1][j] == \"1\" and j != input_index:             lr_index = j             break     #We need to invert the initial permuatation before we can store this     # For this we can use the Final Permuation table as this is the pre-computed inverse     lr_pos = des.FINAL_PERM[i] - 1; #table values are 1-indexed     #The low 32 lr_pos values refer to the L register, high values to R register     if lr_pos < 32:         left_r_scan_indices[lr_pos] = lr_index     else:         right_r_scan_indices[lr_pos - 32] = lr_index   \"\"\" Example:   input_scan_indices: [20, 152, 61, 26, 71, 78, 145, 110, 60, 36, 11, 2, 176, 140, 85, 130, 55, 32, 111, 74, 179, 106, 27, 17, 83, 129, 79, 92, 182, 43, 125, 180, 141, 42, 49, 103, 167, 191, 40, 95, 12, 155, 146, 21, 188, 168, 153, 7, 166, 147, 3, 75, 173, 161, 33, 119, 70, 0, 171, 35, 144, 73, 25, 139]   left_r_scan_indices: [156, 80, 118, 142, 186, 151, 131, 160, 162, 123, 45, 58, 124, 23, 184, 127, 10, 117, 143, 63, 189, 190, 159, 38, 99, 102, 51, 132, 4, 47, 112, 15]   right_r_scan_indices: [13, 116, 62, 177, 66, 72, 157, 54, 90, 50, 137, 31, 128, 28, 57, 170, 59, 1, 29, 91, 67, 172, 136, 134, 165, 185, 121, 120, 133, 37, 164, 34] \"\"\"" ], [ "Attack Phase 2: Determining Round Key 1", "Now that we know the location of the L and R values in the scan chain, we can load out their contents.", "A function to do this process is presented in lst:scan-chain-read-l-r. 
#Given the set of indices for the L and R registers and the given scan chain output, # return the contents of the L and R registers.", "def read_scan_l_r(left_scan_indices, right_scan_indices, scan) -> Tuple[list, list]:     l_reg = [None]*32     r_reg = [None]*32     for i in range(32):         l_reg[i] = scan[left_scan_indices[i]]         r_reg[i] = scan[right_scan_indices[i]]     return (l_reg, r_reg)   We wish to observe the intermediate (less protected) encryption data.", "We are specifically interested in the result after the first round of encryption.", "Here, only the first round key $K_1$ has been applied to the data.", "Consider fig:des-high-level.", "Here, the first round of DES can be described using Equations REF -REF .", "$a = E(R_0)$ $b = a \\oplus K_1$ $c = SBoxes(b)$ $d = P(c)$ $R_1 = e = L_0 \\oplus d$ $L_1 = R_0$ Consider the scenario where a known input is loaded into $L_0$ and $R_0$ .", "An encryption round is run, generating $L_1$ and $R_1$ , which are then scanned out using the scan chain.", "Given that we know $R_0$ , and $E$ is a simple permutation, we can derive $a$ using the inverse permutation $E^{-1}$ (eqn:a).", "As we know $R_1$ , we know $e$ ; and given that we also know $L_0$ we can compute $d$ (eqn:r1).", "From this we can compute $c$ by taking the inverse permutation $P^{-1}$ (eqn:d).", "All that remains is $b$ , which we can then use to calculate $K_1$ .", "Let us explore the s-boxes further.", "In each round of DES, eight different s-boxes $\\lbrace S_1, S_2, \\ldots , S_8\\rbrace $ are used.", "The first, $S_1$ , is presented in tbl:des-sbox1.", "Six bits are used to determine which 4-bit value will be taken from an s-box (in other words, each s-box compresses 6 bits to 4 bits).", "For $S_1$ , it is the six left-most bits (most significant bits) $\\lbrace b_{1}, b_{2}, b_{3}, b_{4}, b_{5}, b_{6}\\rbrace $ .", "These are separated into row and column indices as indicated in the table.", "The first and last bits $b_{1}$ and $b_{6}$ are used to determine the s-box row.", "The middle four bits, $b_{2\\ldots 5}$ , will be used to determine the s-box column.", "For example, input $b_{1..6} = (100100)_2$ uniquely identifies value 14 from the table (row 3, column 3).", "Table: DES s-box S 1 S_1Each s-box outputs each possible value exactly four times.", "For instance, in s-box 1, the value 1 is emitted for input $(000110)_2$ , $(001111)_2$ , $(100010)_2$ , and $(101101)_2$ .", "Because of this, it is not possible to determine the input to an s-box by observing just one output.", "However, each input to the s-boxes is the exclusive-or combination of the round-key bits as well as value $a$ , which is computed from the expansion $E$ of the register $R$ .", "This means if we provide multiple values to be combined with the constant round key $K_1$ we can observe multiple different outputs based on the same key.", "Given four possible outputs for each value, we thus need to use three different inputs.", "Given we can set value $a$ arbitrarily, let us see how to reverse the key bits for the first s-box $S_1$ .", "Firstly, we apply our first input $a^1_{1..6} (000000)_2$ .", "Given eqn:b and eqn:c, that means we can derive eqn:s1-c-1.", "$c^1_{1..6} = S_1(K_{1,1},K_{1,2},K_{1,3},K_{1,4},K_{1,5},K_{1,6})$ Given that each output of each s-box appears only once in each row, we shall switch one-bit in the column space (i.e.", "in the middle four bits of $b$ ).", "Thus, we apply our second input $a^2_{1..6} = (001000)_2$ to give our second output in eqn:s1-c-2.", 
"$c^2_{1..6} = S_1(K_{1,1},K_{1,2},\\overline{K_{1,3}},K_{1,4},K_{1,5},K_{1,6})$ We then switch two-bits in the input of the s-box $S_1$ , changing both a row and a column.", "This is done through applying the third input $a^3_{1..6} = (010001)_2$ , giving our third value eqn:s1-c-3.", "$c^3_{1..6} = S_1(K_{1,1},\\overline{K_{1,2}},K_{1,3},K_{1,4},K_{1,5},\\overline{K_{1,6}})$ For each of Equations REF -REF we will get four possible values for the key bits input to the s-box.", "However, only one of these values will be common to all three equations.", "This will be the final value of $K_{1,1...6}$ .", "Figure: Reversing Round Key bits K 1,1..6 K_{1,1..6} when input a 1..6 =(000000) 2 a_{1..6} = (000000)_2 returns c 1..4 =1c_{1..4} = 1Let us consider an example.", "We provide input $(000000)_2$ to point $a_{1..6}$ , and observe that the s-box returns output 1.", "Given that the input was all zeros, this means that the round key bits $K_{1,1..6}$ are one of $(000110)_2$ , $(001111)_2$ , $(100010)_2$ , or $(101101)_2$ .", "We then provide input $(001000)_2$ .", "If this returns value 8, then the only matching key bits that could emit this are $(000110)_2$ or $(101101)_2$ .", "If it returns value 4, then the only key bits that could do this are $(001111)_2$ .", "Likewise, if it returns 6, the only possibility is that the key bits are $(100010)_2$ .", "If it did return 8, then we need to provide the third input, $(010001)_2$ .", "If this returns 14, then the only possibility is that the key bits are $(000110)_2$ .", "Alternatively, if it returns 1, then the key bits must be $(101101)_2$ .", "This example is depicted pictorially in fig:s1-reverse-when-1, and we present the code for performing the s-box reversal in lst:reverse-sbox.", "#Given the concatenated output of the s-boxes (i.e. point 'c' in Fig.", "2), # (as a list of bits) # and the concatenated input to the xor function (i.e. point 'a' in Fig.", "2), # (also as a list of bits) # return the list of possible values for each s-box (i.e. 
list of lists).", "def sboxes_output_to_possible_inputs(sboxes_output, sboxes_xor_input) -> List[list]:     sboxes = []     for i in range(8): #for each s-box         #Get the output of _this_ s-box         sbox_output = sboxes_output[i*4:(i+1)*4]         #Get the input to the xor for _this_ s-box         sbox_xor_input = sboxes_xor_input[i*6:(i+1)*6]           #Convert the output of the s-box to an integer         # (the des library stores s-box outputs as integers)         sbox_value = 0         for j in range(4):             sbox_value |= (int(sbox_output[j]) << (3 - j))           #Find the 4 s-box inputs that produce the given output         possible_sbox_inputs = []         for row in range(4): #Every s-box value appears at least once in every row             col = des.SBOXES[i][row].index(sbox_value)             #print(\"Value possible_input = [                 (row & 0b10) >> 1,                 (col & 0b1000) >> 3,                 (col & 0b100) >> 2,                 (col & 0b10) >> 1,                 col & 0b1,                 row & 0b1             ]             #for each bit, undo the XOR operation             for k in range(len(possible_input)):                 possible_input[k] = possible_input[k] ^ sbox_xor_input[k]               possible_sbox_inputs.append(possible_input)         sboxes.append(possible_sbox_inputs)       return sboxes If $c^1_{1..6}$ is a value other than 1, we can still use this method to identify the bits of $K_{1,1..6}$ .", "Likewise, we can construct similar patterns for the other s-boxes $S_2 \\ldots S_8$ .", "These patterns can then be combined to discover the round key $K_1$ .", "We derive these patterns here.", "We first discuss how to prepare the input $a$ .", "Recall from fig:des-high-level and eqn:a that it is constructed using expansion $E$ from the $R$ register.", "The formula for this is presented as eqn:afromer.", "Note the constraints within the construction of $a$ , that is, some bits need to match (e.g.", "$a_1$ and $a_{47}$ are both taken from $r_{32}$ , so they must be the same—both either `1' or `0').", "$\\begin{split}a_{1..48}=\\lbrace ~~~~& \\\\& r_{32},r_{1},r_{2},r_{3},r_{4},r_{5}, \\\\& r_{4},r_{5},r_{6},r_{7},r_{8},r_{9}, \\\\& r_{8},r_{9},r_{10},r_{11},r_{12},r_{13}, \\\\& r_{12},r_{13},r_{14},r_{15},r_{16},r_{17}, \\\\& r_{16},r_{17},r_{18},r_{19},r_{20},r_{21},\\\\& r_{20},r_{21},r_{22},r_{23},r_{24},r_{25},\\\\& r_{24},r_{25},r_{26},r_{27},r_{28},r_{29}, \\\\& r_{28},r_{29},r_{30},r_{31},r_{32},r_{1} \\\\\\rbrace \\end{split}$ Given the constraint from eqn:afromer, we thus present the desired trio of inputs for reverse engineering all bits of round key $K_1$ as Equations REF -REF .", "$\\begin{split}a^1_{1..48} &= (000000\\,000000\\,000000\\,000000\\,000000\\,000000\\,000000\\, 0000000)_2\\end{split}$ $\\begin{split}a^2_{1..48} &= (001000\\,001000\\,001000\\,001000\\,001000\\,001000\\,001000\\, 001000)_2\\end{split}$ $\\begin{split}a^3_{1..48} &= (100010\\,100010\\,101000\\,000101\\,010001\\,010001\\,010101\\,010010)_2\\end{split}$ However, we cannot just present the desired inputs at location $a$ .", "One option is to scan in the value to the $R$ register using the scan chain and eqn:afromer to determine the $R$ bits.", "However, depending on the circuit, it may have additional barriers to scanning in values as opposed to scanning them out.", "Another option is to determine the value from the perspective of the initial input.", "Using the values within the Initial Permutation table, we can derive equations for $L_0$ and $R_0$ , 
as presented in eqn:des-l and eqn:des-r. $\\begin{split}L_0 = l_{1..32}=\\lbrace ~~~~& \\\\& i_{58}, i_{50}, i_{42}, i_{34}, i_{26}, i_{18}, i_{10}, i_{2}, \\\\& i_{60}, i_{52}, i_{44}, i_{36}, i_{28}, i_{20}, i_{12}, i_{4}, \\\\& i_{62}, i_{54}, i_{46}, i_{38}, i_{30}, i_{22}, i_{14}, i_{6}, \\\\& i_{64}, i_{56}, i_{48}, i_{40}, i_{32}, i_{24}, i_{16}, i_{8}, \\\\\\rbrace \\end{split}$ $\\begin{split}R_0 = r_{1..32}=\\lbrace ~~~~& \\\\& i_{57}, i_{49}, i_{41}, i_{33}, i_{25}, i_{17}, i_{9}, i_{1}, \\\\& i_{59}, i_{51}, i_{43}, i_{35}, i_{27}, i_{19}, i_{11}, i_{3}, \\\\& i_{61}, i_{53}, i_{45}, i_{37}, i_{29}, i_{21}, i_{13}, i_{5}, \\\\& i_{63}, i_{55}, i_{47}, i_{39}, i_{31}, i_{23}, i_{15}, i_{7} \\\\\\rbrace \\end{split}$ Given this, we can substitute eqn:des-r into eqn:afromer to produce an equation for $a$ based only upon original input $i$ , eqn:afromer: $\\begin{split}a_{1..48}=\\lbrace ~~& \\\\& i_{7},i_{57},i_{49},i_{41},i_{33},i_{25}, \\\\& i_{33},i_{25},i_{17},i_{9},i_{1},i_{59}, \\\\& i_{1},i_{59},i_{51},i_{43},i_{35},i_{27}, \\\\& i_{35},i_{27},i_{19},i_{11},i_{3},i_{61}, \\\\& i_{3},i_{61},i_{53},i_{45},i_{37},i_{29},\\\\& i_{37},i_{29},i_{21},i_{13},i_{5},i_{63},\\\\& i_{5},i_{63},i_{55},i_{47},i_{39},i_{31}, \\\\& i_{39},i_{31},i_{23},i_{15},i_{7},i_{57} \\\\\\rbrace \\end{split}$ Thus, from eqn:des-l and eqn:des-a-from-r we can produce the desired input $i$ for each of our inputs $a^{1..3}$ (Equations REF -REF ).", "These are presented in eqn:special-inputs.", "$\\begin{split}i^1_{1..64} &=(0000000000000000)_{16} \\\\i^2_{1..64} &=(0000AA000000AA00)_{16} \\\\i^3_{1..64} &=(8220000A8002200A)_{16}\\end{split}$ We can verify this answer in Python by presenting these inputs and then computing the value at point $a$ and checking that it matches the desired $a$ -values from Equations REF -REF .", "#Use three specially crafted inputs to determine the unique round key R1 # they ensure L1 is 0, and R1 has a special value in it special_inputs = [\"0000000000000000\",                   \"0000AA000000AA00\",                   \"8220000A8002200A\"]   #Permute the special_inputs to compute the values that will be XORed with the # round-key R1 before the s-boxes (i.e. 
compute the value 'a' in Fig.", "2) special_inputs_at_pt_a = [] for i in range(len(special_inputs)):     after_ip = des.permute(des.hex2bin(special_inputs[i]), des.INITIAL_PERM, 64)     l0 = after_ip[:32]     r0 = after_ip[32:]     r0_expanded = des.permute(r0, des.EXPANSION_FUNC, 48)     r0_expanded_list = []     for i in range(48):         r0_expanded_list.append(int(r0_expanded[i],2))     special_inputs_at_pt_a.append(r0_expanded_list)   #(For testing only) #For each value in special_inputs_at_pt_a, # print it in blocks of 6 bits for i in range(len(special_inputs_at_pt_a)):     print(\"Special input print(\"Value at pt.", "a: \", end=\"\")     for j in range(len(special_inputs_at_pt_a[i])):         if(j print(\" \", end=\"\")         print(\"print() exit(1)   \"\"\" Example output:   Special input 0 (0000000000000000): Value at pt.", "a:  000000 000000 000000 000000 000000 000000 000000 000000 Special input 1 (0000AA000000AA00): Value at pt.", "a:  001000 001000 001000 001000 001000 001000 001000 001000 Special input 2 (8220000A8002200A): Value at pt.", "a:  100010 100010 101000 000101 010001 010001 010101 010010 \"\"\" In practice, we cannot observe the values at point $c$ (i.e.", "immediately after the scan-chains) directly.", "Instead, we must scan out value $e$ from the $R$ register after the first round is completed (i.e.", "$R$ is $R_1$ ) and perform the inverse operations to return it to value $c$ .", "Firstly, we must undo the exclusive-or operation against $L_0$ to compute $d$ (eqn:d).", "To simplify this, we set $L_0$ to be all zeros using eqn:des-l when setting the input $i$ .", "This makes $d = e$ .", "We then need to perform the inverse of permutation $P$ .", "This is a straight-through permutation (i.e.", "every bit in input appears in different location in output), and so inverting it is straightforward.", "We present this as eqn:des-d. $\\begin{split}d_{1..32}=\\lbrace ~~& \\\\& c_{16}, c_{7}, c_{20}, c_{21}, \\\\& c_{29}, c_{12}, c_{28}, c_{17}, \\\\& c_{1}, c_{15}, c_{23}, c_{26}, \\\\& c_{5}, c_{18}, c_{31}, c_{10}, \\\\& c_{2}, c_{8}, c_{24}, c_{14}, \\\\& c_{32}, c_{27}, c_{3}, c_{9}, \\\\& c_{19}, c_{13}, c_{30}, c_{6}, \\\\& c_{22}, c_{11}, c_{4}, c_{25} \\\\\\rbrace \\end{split}$ This is presented in code in lst:special-inputs-pt-c. special_results_after_sbox_pt_c = [] #For each of the 3 special inputs for special_input in special_inputs:     #Run 3 rounds of the encryption over the special input (i.e.", "determine L1, R1)     (_, scans) = dut.RunEncryptOrDecrypt(special_input, True, 3)       #Using the scan chain layout we computed earlier, extract the values of L1 and R1 registers     (l_reg, r_reg) = read_scan_l_r(left_r_scan_indices, right_r_scan_indices, scans[2])       #Undo the P permutation to get the values directly emitted from the SBox     # (i.e. 
the values at point 'c' in Fig.", "2)     special_result = [None]*32     for i in range(32):         special_result[des.P_PERM[i]-1] = r_reg[i]     special_results_after_sbox_pt_c.append(special_result)   #(For testing only) #For each value in special_results_after_sbox_pt_c, # print it in blocks of 4 bits for i in range(len(special_results_after_sbox_pt_c)):     print(\"Special input print(\"Value at pt.", "c: \", end=\"\")     for j in range(len(special_results_after_sbox_pt_c[i])):         if(j print(\" \", end=\"\")         print(\"print() exit(1)   \"\"\" Example output:   Special input 0 (0000000000000000): Value at pt.", "c:  1111 1000 1110 0011 0001 1011 1010 0000 Special input 1 (0000AA000000AA00): Value at pt.", "c:  0011 0011 0101 1010 0110 0100 1100 1100 Special input 2 (8220000A8002200A): Value at pt.", "c:  1010 1001 0000 1001 1010 0101 1010 0011 \"\"\" We can now use the values at point $c$ along with the values at point $a$ to determine the possible s-box values according to the code in lst:reverse-sbox.", "This is presented in lst:possible-s-box-round-keys.", "#For each of the s-box special results at point c, # use the function sboxes_output_to_possible_inputs() # to determine the possible key inputs given the input # to the xor at point 'a' in Fig. 2.", "sbox_possible_key_values = [] for i in range(len(special_results_after_sbox_pt_c)):     sbox_possible_key_values.append(         sboxes_output_to_possible_inputs(             special_results_after_sbox_pt_c[i], special_inputs_at_pt_a[i]         )     )   #(Testing purposes only) #Print the possible key values for each of the s-boxes for each of the 3 special inputs for i in range(len(sbox_possible_key_values)):     print(\"Special input print(\"Possible key values...\")     for j in range(len(sbox_possible_key_values[i])):         print(\" for s-box for k in range(len(sbox_possible_key_values[i][j])):             for l in range(len(sbox_possible_key_values[i][j][k])):                 print(\"print(' ', end=\"\")         print() exit()   \"\"\" Example output: Special input 1 (0000000000000000): Possible key values...  for s-box 1: 001010 000011 110000 100001  for s-box 2: 000100 001101 110010 100011  for s-box 3: 000110 010111 111100 110101  for s-box 4: 000110 001111 110100 100001  for s-box 5: 000110 001111 100100 101001  for s-box 6: 011110 011011 111100 110001  for s-box 7: 011010 001111 110000 101101  for s-box 8: 011010 011001 110000 110111 Special input 2 (0000AA000000AA00): Possible key values...  for s-box 1: 011000 010101 110000 111101  for s-box 2: 000100 001001 110010 100001  for s-box 3: 000110 011101 110000 110011  for s-box 4: 000110 010011 101000 100001  for s-box 5: 000110 010111 110000 111001  for s-box 6: 011110 001101 111100 101001  for s-box 7: 011010 011111 100000 110111  for s-box 8: 010100 011001 100010 111011 Special input 3 (8220000A8002200A): Possible key values...  
for s-box 1: 110000 110011 011000 011011  for s-box 2: 110010 111001 011010 011101  for s-box 3: 101010 101101 000110 001111  for s-box 4: 001001 011010 100001 110100  for s-box 5: 011011 000110 111001 101000  for s-box 6: 001101 011110 110111 111010  for s-box 7: 001111 011010 100101 111000  for s-box 8: 000110 011001 101000 101011 \"\"\" For each s-box, there is only one possible key which will be present in the outputs for each of the three special inputs.", "For example, for s-box $S_1$ , the only possible key value which is present for all inputs 1-3 is value $(110000)_2$ (see Lines 31, 41, 51).", "That means that the round key $K_1$ must begin with these six bits.", "We can determine each of these using the code in lst:sboxkeyreduction.", "#Each of the sbox_possible_key_values is a list of lists of possible # key inputs for that sbox.", "# Starting from the first input, remove any possibility that is not present # in the other inputs.", "# (i.e.", "find the only input that is in all three sets of sbox possibilities)   possible_roundkey_bits_after_expansion = [] for sbox_index in range(8):     possible_values = sbox_possible_key_values[0][sbox_index]     for i in range(1,len(sbox_possible_key_values),1): #start at 1         other_values = sbox_possible_key_values[i][sbox_index]           #remove any elements from possible_values that is not present in other_values         possible_values = [x for x in possible_values if x in other_values]       possible_roundkey_bits_after_expansion.append(possible_values)   #(Testing purposes only) #Print the possible key values for each of the s-boxes after this removal step # There should only be 1 possible set of bits per section of the key print(\"Possible roundkeys\") for i in range(len(possible_roundkey_bits_after_expansion)):     print(\"Bits for j in range(len(possible_roundkey_bits_after_expansion[i])):         for k in range(len(possible_roundkey_bits_after_expansion[i][j])):             print(\"print(\" \", end=\"\")     print() exit(1)   \"\"\" Example output: Possible roundkeys Bits 1-8 have 1 possible value: 110000 Bits 9-16 have 1 possible value: 110010 Bits 17-24 have 1 possible value: 000110 Bits 25-32 have 1 possible value: 100001 Bits 33-40 have 1 possible value: 000110 Bits 41-48 have 1 possible value: 011110 Bits 49-56 have 1 possible value: 011010 Bits 57-64 have 1 possible value: 011001 \"\"\" At this point, there should be only one possible round key.", "However, to enable experimental modifications to these `special values', we will proceed as if we have not identified the round-key uniquely.", "If there are more than one possible value for a given section of the round-key, we would need to take the cartesian product of the possible key bits in order to produce a set of possible keys.", "This is presented in lst:list-roundkey-1.", "As noted on line 23, the round key $K_1$ is uniquely determined.", "#Convert the set of possible bits per round-key section # to a set of possible round keys for the round1 key # by taking the cartesian product of the possibilities # Note: there should only be one possible sbox input per sbox at this point, # so the cartesian product should only return one element.", "possible_roundkeys_round1 = [] for components in itertools.product(*possible_roundkey_bits_after_expansion):     possible_roundkey_round1 = []     for component in components:         possible_roundkey_round1.extend(component)     possible_roundkeys_round1.append(possible_roundkey_round1)   #(Testing purposes only) #Print the possible 
roundkeys for round1 for possible_roundkey_round1 in possible_roundkeys_round1:     possible_roundkey_val = 0     for i in range(48):         possible_roundkey_val |= (possible_roundkey_round1[i] << (47-i))     print(\"Possible roundkey 1: exit(1) \"\"\" Example output: Possible roundkey 1: C321A119E699 \"\"\" Insight: The three special inputs were well-designed to reverse the s-boxes uniquely and produce only one possible round key $K_1$ .", "However, what happens if the special inputs are not well-designed, or if we used only one or two special inputs?", "Consider experimenting with this." ], [ "Attack Phase 3: Determine the original key", "Each round key contains 48-bits of the original 56-bit key (8 of the 64-bits in the 64-bit key are parity bits not used for encryption purposes).", "After determining round key $K_1$ , we now have 48 bits of the key (positions derivable by undoing the PC2, shifts, and PC1 permutations).", "The attack can now proceed in two different ways.", "The first method is to perform a similar attack on $R_2$ and $R_3$ as depicted above, by using the scan chain to scan in replacement values to the $L$ and $R$ registers after each round of encryption is completed.", "However, this requires additional test infrastructure.", "The second method notes that there are only 8-bits remaining of the key, giving only $2^8=256$ different key bit possibilities.", "As such, we should be able to brute-force the values relatively easily.", "In this section we will proceed with the second method, and brute force the remaining bits of the key.", "We present this in code in lst:listing-possible-keys.", "Note the possible-key list structure printed by Line 40 (example output on Line 70).", "This has [0] or [1] in the position of `known' bits, [0, 1] in position of `unknown' bits, and [None] in position of the parity bits (every 8th bit).", "As there are 8 `unknown' bits, there should be 256 possible keys generated from the cartesian product, which is confirmed for the example on Line 71. 
possible_keys = [] #For each possible round1 key, derive every possible key that could have generated it for possible_roundkey_round1 in possible_roundkeys_round1:     #First, undo the PC2 permutation     key1 = [None]*56     for i in range(48):         key1[des.KEY_PC2[i]-1] = possible_roundkey_round1[i]       #Now undo the two half-key rotations by     # right rotating each half of the key     key1_left = key1[:28]     key1_right = key1[28:]     key1_left = key1_left[-1:] + key1_left[:-1]     key1_right = key1_right[-1:] + key1_right[:-1]     key1 = key1_left + key1_right       #Now undo the PC1 permutation     key = [None]*64     for i in range(56):         key[des.KEY_PC1[i]-1] = key1[i]       #The format of the key is such that it has 64 bits,     # but only 48 of them are currently filled with values (the others are 'None')     # We will create all possible keys by taking the cartesian product     # of all possible values for the 8 unfilled key bits (ignoring parity bits).", "#Prepare the cartesian product by creating a list of lists of     # known and possible values for the unfilled key bits.", "combined_key_possibilities = []     for i in range(64):         if key[i] == None:             if (i+1) combined_key_possibilities.append([0,1]) #Unknown key bit             else:                 combined_key_possibilities.append([None]) #Parity bit         else:             combined_key_possibilities.append([key[i]]) #Known key bit       print(\"Key possibilities that would generate round key 1:\")     print(combined_key_possibilities)       for components in itertools.product(*combined_key_possibilities):         #Combine all key bits into a single key         possible_key_bits = []         for component in components:             possible_key_bits.append(component)           #Calculate the parity bits         for i in range(8):             val_bits = possible_key_bits[i*8:(i+1)*8-1]             parity_bit = 0             for bit in val_bits:                 parity_bit ^= bit             possible_key_bits[i*8+7] = parity_bit           #Convert the binary list into a hex string         key_val = 0         for i in range(64):             key_val |= (possible_key_bits[i] << (63-i))         possible_key_val = '           #Store the possible key         possible_keys.append(possible_key_val)   #(Testing purposes only) print(\"There are   \"\"\" Example output: Key possibilities that would generate round key 1: [[0], [0], [0], [0], [1], [0, 1], [0, 1], [None], [0], [1], [0, 1], [0, 1], [1], [1], [1], [None], [0], [0], [1], [0], [1], [0], [1], [None], [1], [0], [0], [0], [0], [1], [1], [None], [1], [0], [0], [0], [1], [1], [0], [None], [1], [0], [0, 1], [1], [0], [0, 1], [0], [None], [0], [0, 1], [1], [0, 1], [1], [1], [0], [None], [1], [0], [1], [0], [0], [0], [0], [None]] There are 256 possible keys. 
\"\"\"", "The remaining step is to now perform the brute-force operation to check all the possible keys until the correct key is found.", "This is straightforward by creating new DES instances with specified keys (refer to fig:python-des-diagram).", "We will compare the ciphertexts that we generate to that generated in lst:des-dut-instantiate.", "The brute-forcing code is presented in lst:key-brute-force.", "print(\"Brute-force checking for possible_key in possible_keys:     pos_des = des.DESWithScanChain(force_key=possible_key)     (test_ciphertext, _) = pos_des.RunEncryptOrDecrypt(test_code)     if(test_ciphertext == check_ciphertext):         print(\"Found the key.", "It is break   print(\"Checking the answer.", "The embedded secret key was \" + dut.key_hex) if(possible_key == dut.key_hex):     print(\"The two keys match, the attack is successful.\")", "\"\"\" Example output: Brute-force checking 256 possible keys.", "Found the key.", "It is 096F2B878D906CA0 Checking the answer.", "The embedded secret key was 096F2B878D906CA0 The two keys match, the attack is successful. \"\"\"", "The attack successfully recovered the key used to generate the round keys in the hardware." ], [ "Design exploration", "In this case study we explored how to perform a scan chain attack on (emulated) hardware.", "We presented relatively simple iterative hardware, but this attack would also work on more complex implementations.", "For instance, in a fully pipelined architecture, each of the 16 DES rounds would be instantiated separately, meaning there would be 17 pairs of $L$ and $R$ registers (from $\\lbrace L_0, R_0\\rbrace $ to $\\lbrace L_{16}, R_{16}\\rbrace $ ).", "While this would significantly increase the size of the scan chain, identifying the correspondence between each bit and its position in the scan chain is still possible, especially for the low registers [45].", "$L_0$ and $R_0$ can be located first, by using the same methodology as presented earlier.", "As $L_1=R_0$ , identifying $L_1$ is also straightforward.", "Given $R_1=L_0\\oplus f(R_0,K_1)$ we can determine $R_1$ as follows.", "We set $R_0$ to be a constant value; $K_1$ is already constant.", "We then iterate through $L_0$ , setting each bit to be 1 in turn.", "As $f(R_0,K_1)$ is constant, there will only be a 1-bit difference each time we stream out $R_0$ , and that difference will be based on the position of the 1 in $L_0$ .", "Now we have the positions of $L_0$ , $R_0$ , $L_1$ , and $R_1$ and the key extraction can proceed as outlined in the previous subsection (using the brute force method).", "Our emulated hardware assumed that it took one clock cycle to load values into the input register.", "For some DES implementations this might take a different number of cycles, or no cycles at all if there is no input register present.", "However, using the scan chain it also possible to determine this.", "Firstly, the length of the scan chain reveals how many register bits are present.", "Secondly, by observing the scan chain after each clock tick, it is possible to determine when encryption has commenced.", "This is because cryptographic algorithms such as DES display an `avalanche' effect [45], where small differences (i.e.", "1-bit) will translate into larger changes in the subsequent rounds.", "By observing for this, it is possible to determine when the encryption has started [45].", "Our emulated hardware simplified some aspects of the reverse engineering by excluding control registers from the scan chain.", "If these were present, there 
would be additional and varying data bits in each cycle.", "However, these are control-driven rather than data-driven.", "As such, we can identify them by loading multiple different inputs to the hardware module and running the complete encryption/decryption cycle.", "Any flip-flops in the control unit would have the same pattern for each input trace, no matter the input, and could thus be identified and discarded.", "Further, we assume that a reset actually sets all internal register bits to zero.", "This may not be the case, as some circuits have implementations that reset to unknown or random garbage values.", "If this is the case, then instead of resetting the circuit using the random reset, the attacker should `reset' the circuit by loading a constant input and running the complete encryption procedure.", "This would set all internal registers to a constant value, which can be the new fixed state of the chip rather than all bits being zero." ], [ "Potential pedagogical exercises", "This case study presents a walk-through of an attack on a DES implementation, providing a sound basis for academic exercises.", "Consider modifying the design (e.g.", "using the suggestions in Section REF ) prior to attempting the attack.", "Also consider attacking a different encryption algorithm that is vulnerable to scan attacks, such as AES [2], [46]." ], [ "Overview", "In this section, we will work towards building an intuition about digital hardware intellectual property (IP) protection, specifically through logic locking,This is also seen in literature as logic obfuscation, logic encryption, redaction, and so on.", "a family of techniques that continues to evolve since first appearing in 2008 [26].", "As one can intuit, hardware IP is extremely valuable.", "Creators of IP (or IP vendors) want to manage access of the IP within the context of licensing agreements and to avoid threats such as reverse-engineering, where malicious parties want to understand an IP to the level where they might be able to illicitly modify the design (say, to insert a Hardware Trojan) or piracy (where designs are stolen, cloned, and passed off as genuine or as a competing product).", "Ultimately, the designer (acting as a defender of the IP) wants to ensure that only legitimate users (or licensees) can use the IP as intended while other entities cannot." 
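, "As a concrete preview of the kind of transformation described in the following sections, the short sketch below uses a toy Boolean function of our own choosing (it is not the circuit of fig:original-netlist) to show how inserting a single key-controlled XOR changes a design's input/output behavior unless the correct key value is supplied.", "from itertools import product

#Toy original design: a 3-input, 1-output Boolean function (our own example)
def original(a, b, c):
    return (a & b) | c

#\"Locked\" version: a key-controlled XOR inserted on an internal wire.
#With the correct key (0) the behavior matches the original; with key = 1 it does not.
def locked(a, b, c, key):
    internal = (a & b) ^ key
    return internal | c

for key in (0, 1):
    mismatches = sum(original(a, b, c) != locked(a, b, c, key)
                     for a, b, c in product((0, 1), repeat=3))
    print(\"key = \" + str(key) + \": \" + str(mismatches) + \" of 8 truth-table rows differ\")

\"\"\" Output:
key = 0: 0 of 8 truth-table rows differ
key = 1: 4 of 8 truth-table rows differ
\"\"\"", "Real logic locking operates on gate-level netlists rather than Python functions, but the principle is the same: without the correct key value, an adversary holding the locked design obtains a function that disagrees with the original on some inputs."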
], [ "Threat Models ", "In general terms, we can consider the following entities (and their employees) as being involved in the chip design flow: the IP vendor, the system-on-chip integrator, the foundry, the test facility, and the end user.", "Usually, we want to protect IP from potential adversaries: untrusted foundries, test facilities, or compromised/illegitimate end users, assuming that they have been compromised in some way (see survey papers like [5], [4] for more discussion about the threat model).", "Their aim is to do whatever they can to recover the IP; usually defined as gaining enough knowledge of the design so that they can get the IP to produce functionally correct outputs.", "We will see more precisely what this means in the context of logic locking in the next section.", "We usually assume that the adversary can recover the IP's gate-level netlist.", "Without any protection, an adversary can work out what different parts of the gate-level netlist do, work out what parts to modify if they desire, and re-use the IP.", "Given this setting, there are two typical variations of the threat model.", "The Oracle-based model assumes that the adversary has in their possession an instance of the IP that produces functionally correct outputs—perhaps (legitimately) acquired on the open market.", "Within the Oracle-based approach, there are additional possible assumptions, including whether scan-chain access is available or if the Oracle's internal behavior can be observed (e.g., tamper-proof memory used or packaging-level protections to mitigate imaging).", "The Oracle-less model assumes that the adversary only has the gate-level netlist and is unable to gain information regarding correct functional input/output behavior.", "Insight: As with any security problem, it is up to each practitioner to decide for themselves what assumptions are reasonable in terms of threat modeling.", "Take a moment to consider what adversary capabilities you think are realistic.", "In this tutorial, we will focus our attention on the basic locking of combinational logic circuits and Oracle-based analysis.", "From this foundation you should be well-equipped to engage with the recent advances in logic locking research.", "Given the aforementioned threat model, logic locking seeks to transform a design in such a way that an adversary cannot simply copy or reverse-engineer a designRecent work by Beerel et al.", "[4] attempts to formalize the aims of logic locking, but we will take the intuitive understanding as sufficient for the purposes of an introduction to the topic in this tutorial paper..", "In other words, we want to change the design by inserting or replacing some of its logic so that the IP is associated with some secret that can be shared between the IP vendor and trusted legitimate end-user that will “unlock” the IP's true functionality.", "The secret is typically in the form of some sort of “key” input.", "Figure: An example of logic lockingTable: Truth Table for the circuit in fig:original-netlist.", "Notice the differences in the output `o' when the key is set to 0 or 1.For example, consider the circuit in fig:original-netlist and its associated truth table in tab:original-table.", "Imagine that this is a valuable IP and we want to transform the design (i.e., “lock” it) such that the adversary cannot get the benefit of correct input/output behavior.", "The simplest modification we could do the circuit is to insert an XOR gate somewhere in the design and connecting it to a “key input” as in 
lockedcirc.", "Consider the truth table in tab:original-table when key is set to 0 and when it is set to 1—notice that some of the rows are different from the original when $key=1$ .", "If the correct value of the key input is kept secret, an adversary that makes an incorrect guess of the key input will end up with a circuit that behaves incorrectly for some input combinations.", "This is the basic premise of the first logic locking approach, proposed by Roy et al. in 2008 [26]: randomly add XOR and XNOR gates, attached to new key inputs, at various points of the design.", "As an exercise, you can create your own 3-input, 1-output circuit with 4-5 logic gates, construct its truth table, and then examine what happens when you “lock” the design with XOR and XNOR insertion.", "Insight: One way to think about the general premise of logic locking is that it adds “additional” or “surplus” functionality to the design, of which only a subset is actually useful.", "In our basic example, we added a new input and a gate, which actually expands the Boolean function from a 3-input to a 4-input function.", "Added parts used in logic locking include new (key) inputs (and thus, new parts of the truth table) or new states and transitions (such as by adding memory elements like flip-flops).", "The useful (protected) part of the design should be hard to discover or use, unless, of course, you are authorized to do so by receiving the requisite secret material.", "Since 2008, there have been several back-and-forth exchanges in the academic community with proposed attacks and countermeasures [41], [5].", "Check out the references for a selection of related work.", "For this tutorial, we will keep ourselves occupied with random logic locking (RLL), i.e., basic XOR/XNOR locking." ], [ "Early analyses", "Early analyses of logic locking have had a close link with VLSI testing ideas, so to get a flavor of the type of analysis that is possible, let us consider the idea of the sensitization attack [24], where an adversary has access to an activated chip (i.e., a locked design which has had the secret material loaded to unlock the functionality).", "Consider again the simple design of lockedcirc.", "We know the structure of the design, but we do not know the key input's intended value—the idea here is to sensitize the path so that the key input value in the activated chip can be observed directly on the output, which we can do by choosing appropriate test input vectors.", "For example, as we can see in fig:sens, one can set the inputs “abc” to “000” so that the actual value of key can be seen on the output.", "By setting $c = 0$ , we ensure the output of gate g3 reflects the output of gate g2.", "By setting either a or b to 0, we can guarantee that the output of gate g1 is 0, thus making the output of gate g2 the value of the key input.", "Typical test generation algorithms can help adversaries find the test vectors needed to sensitize the design [24], [11].", "Figure: Sensitization attack", "As one might expect, a defender that knows that this attack is possible can try to respond by adding multiple key inputs and gates such that it is not possible to sensitize an individual key input [24], i.e., the choice of where to insert locking logic needs to be carefully considered.", "Recent work has shown that sensitization and other attacks that use automatic test pattern generation (ATPG) succeed against various logic locking flavors [11], [10].", "This kind of adversarial back-and-forth or “cat-and-mouse” perspective has driven 
"Next, we will take a closer look at one of the most influential attacks in the logic locking literature: the “SAT attack” [39].", "Understanding the SAT attack provides a good foundation for understanding the motivations for and intuitions around many proposed logic locking approaches, such as cyclic locking [48], stripped functionality logic locking [47], its descendants [29], and attacks (e.g., [32], [33], [34], [30])." ], [ "The SAT attack", "In this section, we will do a step-by-step walk-through of the concepts and application of the attack by Subramanyan et al.", "[39]." ], [ "The building blocks", "The SAT attack builds on a few key ideas, including the Boolean satisfiability problem, the construction of miter circuits, and the identification of distinguishing input patterns.", "The first idea is the Boolean satisfiability problem, often referred to as SAT.", "Given a Boolean formula, SAT is the problem of determining if variables in the formula can be set to TRUE and FALSE such that the formula as a whole evaluates to TRUE.", "If so, the formula is satisfiable (or SAT), otherwise, it is unsatisfiable (or UNSAT).", "While SAT is an NP-complete problem, several heuristic algorithms exist that can be used to solve SAT instances [12].", "So, if we can produce a Boolean formula, we can give it to a SAT solver to try to find the variable assignments that make the formula evaluate to TRUE, if the formula is satisfiable.", "For example, the formula $a \wedge b$ is satisfiable, as the formula evaluates to TRUE when $a = 1$ and $b = 1$ .", "(The notation $\wedge $ means logical conjunction (AND); $\vee $ means logical disjunction (OR).)", "The next thing we need to understand is the idea of a miter circuit.", "A miter circuit is a circuit comprising two circuits that are fed the same input, as shown in fig:miter; we can check to see if the outputs of the circuits match for any given input.", "Intuitively, the two circuits are equivalent if they produce the same output as each other for every input.", "The miter circuit is especially useful if we use a SAT solver because we can write a Boolean formula representing the miter circuit (including the output comparison) to feed into the SAT solver.", "The SAT solver then tries to answer the question: \"Is there any input where the circuit outputs are different?\"", "A SAT solver that returns SAT has found an input where the outputs of the two circuits disagree.", "We can use the miter circuit to perform formal equivalence checking of two combinational circuits: if we have two circuits that we think have the same behavior, we can connect them together as a miter (same inputs, XOR'd output) and use a SAT solver to see if the miter is satisfiable.", "The circuits are equivalent if the solver returns UNSAT, i.e., the solver cannot find any input that causes the two circuits to produce different outputs (the XOR at the end will always be 0, i.e., FALSE).", "This leads us to the third idea: the idea of a distinguishing input pattern.", "Let us now consider a miter circuit that is built with two copies of a logic locked design, as in fig:miter-attack.", "As before, the same input is fed into each circuit copy.", "When we feed this miter circuit into a SAT solver, we are essentially asking the same question as before: \"Is there any input where the circuit outputs are different?\"", "– except, in this case, we have separate key and key' inputs to each copy of the locked circuit.", "Given that the two circuits 
in the miter are exactly the same, the only way in which a SAT solver will be able to make the miter circuit formula satisfiable (if it is satisfiable) is to find an input and different values of key and key' that will make the different copies present different outputs.", "In other words, the solver will find a distinguishing input pattern (DIP), which means that the values (variable assignments) of key and key' that are found belong to two different classes of keys that make the locked circuit behave differently for that DIP.", "Figure: (a) A general miter circuit and (b) the starting miter for the SAT attack." ], [ "The algorithm", "We can now put together the aforementioned ideas into an attack on logic locked designs when the adversary has access to an Oracle.", "Let us assume that we created the miter circuit using two copies of the locked design and that we have fed this to a SAT solver that returns SAT.", "The solver has given us a DIP and two possible outputs for that DIP.", "Which output is correct?", "With access to an Oracle, we can simply ask it which is the correct output for that DIP.", "Crucially, because we know which output is correct for that DIP, we can revise the miter circuit Boolean formula that we give to the SAT solver to include the new information – i.e., that for the DIP, the output must take that value.", "The formula can be fed into the SAT solver again, which will attempt to find another DIP.", "Each time we find a DIP, we can query the Oracle, modify the formula to take into account what the correct behavior should be for that DIP, and repeat; the solver prunes the key space iteratively.", "In each iteration, we effectively ask the solver to answer the question: \"Is there any input where the circuit outputs are different, given that when the inputs are {previous DIPs} the corresponding outputs must be {the correct outputs from the Oracle}\"?", "Eventually, the solver will return UNSAT when it can no longer find any DIPs; what is left will let us identify a key in the equivalence class of correct keys.", "Taken all together, alg:satattack presents Subramanyan et al.", "'s “Logic Decryption Algorithm” [39], where $C(I,K,O)$ is the Boolean input-key-output relation (i.e., logic locked circuit represented as a Boolean formula), $I$ is a vector of inputs, $K$ is a vector of key inputs, $O$ is a vector of outputs, and $oracle()$ is a function that represents querying the Oracle.", "Subscripts represent each iteration of the algorithm.", "Output: the value of $K$ .", "1: $i = 1$ 2: $F_1 = C(I, K_1, O_1) \wedge C(I, K_2, O_2)$ 3: while $sat[F_i \wedge (O_1 \ne O_2)]$ do 4: $I^d_i =$ a DIP value that satisfies $[F_i \wedge (O_1 \ne O_2)]$ 5: $O^d_i = oracle(I^d_i)$ 6: $F_{i+1} = F_i \wedge C(I^d_i, K_1, O^d_i) \wedge C(I^d_i, K_2, O^d_i)$ 7: $i = i + 1$ 8: end while 9: $K =$ the value of $K_1$ in the sat assignment of $F_i \wedge (O_1 \equiv O_2)$", "Algorithm: SAT Attack Algorithm from [39] (variable names changed)." ], [ "Worked example", "To see more clearly how the SAT attack algorithm works, let us try a worked example on the locked design shown in fig:lock-example.", "Figure: Logic locked example design with its \"Oracle\" as a truth table", "Figure: The web browser-based logictools SAT solver from [40]", "For this example, we will use a simple, Web browser-based SAT solver called logictools [40] with its corresponding syntax for representing propositional formulas (+ is XOR, <=> is equivalence, & is AND, $\sim $ or - is negation).",
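"Before stepping through the solver-based iterations by hand, the overall loop of alg:satattack can be sketched in a few lines of Python; in this sketch the SAT solver is replaced by an exhaustive search over the 3-bit input and key spaces, the accumulated constraints are represented explicitly as a shrinking set of candidate keys, the locked circuit is transcribed from the propositional formula in the Appendix, and the CORRECT_KEY used by the stand-in oracle() function is taken from the result of the worked example below.", "
from itertools import product

def locked(x, k):
    # Locked example circuit, transcribed from the propositional formula in the Appendix.
    a, b, c = x
    k1, k2, k3 = k
    u0 = 1 - (a & b)
    u1 = 1 - (b & c)
    u2 = c & a
    u5 = 1 - (k1 ^ u0)
    u4 = k2 ^ u1
    u3 = 1 - (k3 ^ u2)
    u6 = 1 - (u5 & u4)
    return u3 | u6

CORRECT_KEY = (1, 0, 1)                  # the key recovered at the end of the worked example
def oracle(x):                           # stand-in for querying the activated chip
    return locked(x, CORRECT_KEY)

inputs = list(product((0, 1), repeat=3))
candidates = set(product((0, 1), repeat=3))      # all 2^3 candidate keys

while True:
    # "SAT call": look for a DIP, i.e., an input on which surviving candidate keys disagree.
    dip = next((x for x in inputs if len({locked(x, k) for k in candidates}) > 1), None)
    if dip is None:                      # "UNSAT": no DIP is left, so the loop terminates
        break
    good = oracle(dip)                              # ask the Oracle for the correct output
    candidates = {k for k in candidates if locked(dip, k) == good}    # prune the key space

print(candidates)   # prints {(1, 0, 1)}, the equivalence class of correct keys
",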
"We will use the dpll:old engine in this walk-through, as shown in fig:logictools.", "Typically, SAT solver tools ingest formulas in conjunctive normal form (CNF), which can be produced from propositional formulas using the Tseytin transformation [42]; logictools can ingest propositional formulas directly (and performs the transformation internally), so we will stick with propositional formulas for clarity in this example.", "To begin, let us construct a miter circuit using two copies of the locked design.", "For readability, we will label the internal nets of the first copy as shown in fig:lock-example and represent the second copy's nets with “w”.", "We distinguish between copy 1's keys and copy 2's keys by appending `a' and `b', respectively.", "The miter circuit is shown as a formula in fig:sat1.", "Figure: Miter circuit as a formula: SAT Attack Iteration #1", "When we solve using dpll:old, we receive the following feedback: Clause set is true if we assign values to variables as: u0 a -b u1 c u2 u5 k1a -u4 k2a u3 k3a u6 y w0 w1 w2 w5 k1b w4 -k2b -w3 -k3b -w6 -yx", "In other words, the miter circuit formula is satisfiable!", "Of particular interest are the values of the inputs that the solver has found: $a=1$ , $b=0$ (see -b in the feedback), and $c=1$ .", "This is a distinguishing input pattern.", "In this case, circuit copy 1 produces an output of 1, while circuit copy 2 produces an output of 0.", "We can check the Oracle to see which is the intended output; from fig:lock-example, the correct output should be 1.", "As we can see in line 6 of alg:satattack, we should make a new formula by adding more copies of the circuit but with the constraint that the output produced with the DIP of $a=1,b=0,c=1$ should be 1.", "The formula for the next iteration of the SAT attack thus looks like fig:sat2.", "Figure: SAT Attack Iteration #2", "We add the miter circuit formula from iteration #1 to new copies of the circuit.", "Note the new internal node names “uuX” and “wwX” to represent the next copy, as well as “aa”, “bb”, “cc”, and “yy” for the DIP and corresponding output from the oracle.", "Note also that the variables representing the key (e.g., k1a, k1b) are the same as those in the initial miter circuit formula; this forces keys found in future iterations to make the circuit produce the correct output for the DIP.", "Once again, using the SAT solver produces the following feedback: Clause set is true if we assign values to variables as: u0 -a b -u1 c -u2 u5 k1a u4 k2a -u3 k3a -u6 -y w0 -w1 -w2 w5 k1b w4 k2b w3 -k3b -w6 yx uu0 aa -bb uu1 cc uu2 uu5 -uu4 uu3 uu6 yy ww0 ww1 ww2 ww5 -ww4 -ww3 ww6", "This time, the DIP that is found is $a=0, b=1, c=1$ , which the Oracle tells us should produce the output 1.", "We can do yet another iteration of the SAT attack, adding fig:sat3 to our growing formula.", "Figure: SAT Attack Iteration #3", "Running the SAT solver, we find that the formula is still satisfiable, with the following feedback: Clause set is true if we assign values to variables as: u0 -a b u1 -c -u2 u5 k1a -u4 k2a u3 -k3a u6 y w0 w1 -w2 w5 k1b w4 -k2b -w3 k3b -w6 -yx uu0 aa -bb uu1 cc uu2 uu5 -uu4 -uu3 uu6 yy ww0 ww1 ww2 ww5 ww4 ww3 -ww6 uuu0 -aaa bbb -uuu1 ccc -uuu2 uuu5 uuu4 uuu3 -uuu6 yyy www0 -www1 -www2 www5 -www4 -www3 www6", "If we construct the new formula for another iteration and run that through the solver (see the Appendix for the full formula), we finally receive the following feedback: Clause set is false for all possible assignments to variables.", "The formula is 
unsatisfiable!", "This means that the solver can no longer find any DIPs; in other words, all the remaining inputs and keys will make the two copies of the circuit behave equivalently.", "Because the formula also adds constraints that any remaining keys make the circuit produce the correct input/output behavior (from the previous DIPs), any key that satisfies the formula as a whole, without the y + yx constraint, should be a correct key.", "Thus, running the formula with that constraint missing, gives us: Clause set is true if we assign values to variables as: -u0 a b -u1 c u2 -u5 k1a -u4 -k2a u3 k3a u6 y -w0 -w1 w2 -w5 k1b -w4 -k2b w3 k3b w6 yx uu0 aa -bb uu1 cc uu2 uu5 uu4 uu3 -uu6 yy ww0 ww1 ww2 ww5 ww4 ww3 -ww6 uuu0 -aaa bbb -uuu1 ccc -uuu2 uuu5 -uuu4 -uuu3 uuu6 yyy www0 -www1 -www2 www5 -www4 -www3 www6 uuuu0 -aaaa bbbb uuuu1 -cccc -uuuu2 uuuu5 uuuu4 -uuuu3 -uuuu6 -yyyy wwww0 wwww1 -wwww2 wwww5 wwww4 -wwww3 -wwww6 If we hone in on the variables representing the key bits, we can see that $k1a = k1b = 1, k2a = k2b = 0, k3a = k3b = 1$ ." ], [ "Discussion", "The arrival of the “SAT attack” marked a turning point in the logic locking domain.", "Recent survey works such as that by Chakraborty et al.", "[5] provide a good overview of the various techniques which you should now be able to better appreciate.", "Several post-SAT techniques try to reduce the number of keys that are pruned with each iteration of the SAT attack (e.g., [47]), while others try to introduce circuit structures that cause issues for SAT solvers (e.g., [31]).", "New defense techniques are proposed and countered, even now, as the “cat-and-mouse” game continues.", "Other recent work includes the proposal of FPGA-based redaction [19] or universal circuits [4], among other strategies.", "While this tutorial covered the first formulation of the SAT attack from Subramanyan et al.", "[39], there are a few more things we should note.", "As we mentioned in sec:lock-tut, this tutorial focused on a combinational design, so you are probably interested to know what happens in more realistic systems, where we have memory elements and sequential logic.", "In the early days of logic locking, the notion of Oracle access is often paired with the idea of a fully-scanned design with an adversary-accessible scan chain (as you explored in the previous case study in sec:scan).", "With design-for-test structures like the scan-chain, an adversary can treat the different parts of the design as combinational by scanning in input sequences and scanning out the register contents (assuming, of course, that the logic locking key is itself not trivially connected to the scan chain!).", "Thus, advances in scan chain protection, such as recently proposed DisOrc [16], provide an orthogonal means to protect against the SAT attack and other Oracle-based attacks on logic locking.", "Other approaches, like the KC2 attack [32], incorporate the idea of sequential unrolling, where the adversary makes several copies of the circuit in the Boolean formula to represent the inputs and outputs of the design across several clock cycles.", "However, as one can see, even with our simple tutorial walk-through, the Boolean formula grows very quickly!", "Setting aside Oracle-based attacks, there are several Oracle-less attacks as well.", "These include analyzing the circuit in terms of logic redundancy with incorrect keys [15] as well as efforts focused on structural analysis and machine learning [6], [36], [1].", "We direct interested readers to a useful survey paper on 
machine learning and logic locking by Sisejkovic et al. [37].", "In this paper, we provided a tutorial introduction to issues in hardware security.", "In particular, we presented two detailed case studies of problems in hardware security: attacking cryptography via the scan chain side channel and attacks on logic locking for hardware intellectual property protection.", "Through these pedagogical examples, we provide a foundation which readers can use to engage with more recent research in these domains.", "The tutorial examples are supported by an open access online resource hosted at https://github.com/learn-hardware-security.", "This research was supported in part by NSF Grant #2039607.", "Any opinions, findings, and conclusions, or recommendations expressed are those of the author(s) and do not necessarily reflect the views of the National Science Foundation." ], [ "DES Tables", "This appendix provides the various permutation tables used within DES [21].", "The general process for reading a permutation table is as follows.", "The output bits are generated in-order using the input, with the input indexed using the appropriate value from the table.", "For example, the first (left-most / MSB) bit of the output of $IP$ will be the value of the 58th bit of the input (refer to tbl:des-IP).", "Then, the second bit will be the 50th bit of the input, and so on.", "The IP, FP, E, P, PC1, and PC2 tables are presented as tables only for ease of presentation.", "They are vectors, not tables." ], [ "Formula for the SAT Attack Worked Example", "The formula below should return UNSAT when used with a SAT solver.", "Removing the (y + yx) part should make the formula SAT and reveal the key.", "((~(a&b) <=> u0) & (~(b&c) <=> u1) & ((c&a) <=> u2) & (~(k1a+u0) <=> u5) & ((k2a + u1) <=> u4) & (~(k3a+u2) <=> u3) & (~(u5&u4) <=> u6) & (y <=> (u6 | u3))) & ((~(a&b) <=> w0) & (~(b&c) <=> w1) & ((c&a) <=> w2) & (~(k1b+w0) <=> w5) & ((k2b + w1) <=> w4) & (~(k3b+w2) <=> w3) & (~(u5&w4) <=> w6) & (yx <=> (w6 | w3))) & (y + yx) &   ( ((~(aa&bb) <=> uu0) & (~(bb&cc) <=> uu1) & ((cc&aa) <=> uu2) & (~(k1a+uu0) <=> uu5) & ((k2a + uu1) <=> uu4) & (~(k3a+uu2) <=> uu3) & (~(uu5&uu4) <=> uu6) & (yy <=> (uu6 | uu3))) & ((~(aa&bb) <=> ww0) & (~(bb&cc) <=> ww1) & ((cc&aa) <=> ww2) & (~(k1b+ww0) <=> ww5) & ((k2b + ww1) <=> ww4) & (~(k3b+ww2) <=> ww3) & (~(uu5&ww4) <=> ww6) & (yy <=> (ww6 | ww3))) & (aa & ~bb & cc & yy ) )   &   ( ((~(aaa&bbb) <=> uuu0) & (~(bbb&ccc) <=> uuu1) & ((ccc&aaa) <=> uuu2) & (~(k1a+uuu0) <=> uuu5) & ((k2a + uuu1) <=> uuu4) & (~(k3a+uuu2) <=> uuu3) & (~(uuu5&uuu4) <=> uuu6) & (yyy <=> (uuu6 | uuu3))) & ((~(aaa&bbb) <=> www0) & (~(bbb&ccc) <=> www1) & ((ccc&aaa) <=> www2) & (~(k1b+www0) <=> www5) & ((k2b + www1) <=> www4) & (~(k3b+www2) <=> www3) & (~(uuu5&www4) <=> www6) & (yyy <=> (www6 | www3))) & (~aaa & bbb & ccc & yyy ) )   &   ( ((~(aaaa&bbbb) <=> uuuu0) & (~(bbbb&cccc) <=> uuuu1) & ((cccc&aaaa) <=> uuuu2) & (~(k1a+uuuu0) <=> uuuu5) & ((k2a + uuuu1) <=> uuuu4) & (~(k3a+uuuu2) <=> uuuu3) & (~(uuuu5&uuuu4) <=> uuuu6) & (yyyy <=> (uuuu6 | uuuu3))) & ((~(aaaa&bbbb) <=> wwww0) & (~(bbbb&cccc) <=> wwww1) & ((cccc&aaaa) <=> wwww2) & (~(k1b+wwww0) <=> wwww5) & ((k2b + wwww1) <=> wwww4) & (~(k3b+wwww2) <=> wwww3) & (~(uuuu5&wwww4) <=> wwww6) & (yyyy <=> (wwww6 | wwww3))) & (~aaaa & bbbb & ~cccc & ~yyyy ) )" ] ]
2207.10466
[ [ "Bichromatic four-wave mixing and quadrature-squeezing from biexcitons in\n atomically thin semiconductor microcavities" ], [ "Abstract Nonlinear optical effects such as four-wave mixing and generation of squeezed light are ubiquitous in optical devices and light sources.", "For new devices operating at low optical power, the resonant nonlinearity arising from the two-photon sensitive bound biexciton in a semiconductor microcavity is an interesting prospective platform.", "Due to the particularly strong Coulomb interaction in atomically thin semiconductors, these materials have strongly bound biexcitons and operate in the visible frequency range of the electromagnetic spectrum.", "To remove the strong pump laser from the generated light in optical devices or to simultaneously excite non-degenerate polaritons, a bichromatic-pump configuration with two spectrally separated pump lasers is desirable.", "In this paper, we theoretically investigate spontanous four-wave mixing and quadrature-squeezing in a bichromatically pumped atomically thin semiconductor microcavity.", "We explore two different configurations that support degenerate and non-degenerate scattering from polaritons into bound biexcitons, respectively.", "We find that these configurations lead to the generation strongly single- and two-mode quadrature-squeezed light." ], [ "Introduction", "Strongly bound biexcitons in atomically thin semiconductor microcavities provide an avenue for low-power nonlinear optical devices, because the resonant scattering of unbound exciton-polaritons into a bound biexciton yields a powerful enhancement of the nonlinear optical response [1], [2], [3].", "Previous analyses of degenerate four-wave mixing have shown that strong parametric gain from biexcitons [4], [5] can provide quadrature-squeezing with significantly lower power than when using conventional third-order nonlinear materials [6].", "Thereby, atomically-thin semiconductors have a significant potential as a platform for nonlinear optics.", "In the interest of spectrally separating the pump photons from the generated signal, it is highly convenient to use a bichromatic-pump scheme, where two non-degenerate pumps create a degenerate signal at the mean of the two pump photon energies [7], [8], [9], [10], [11], [12].", "In addition, such a bichromatic-pump setup allows to excite non-degenerate polaritons, which opens up a non-degenerate nonlinear scattering channel [13].", "Four-wave mixing in pump-probe experiments with semiconductors [14], [15], [16], [17], [18], [19], [20] and semiconductor microcavities [21], [22] is a heavily studied theoretical and experimental topic, and it was early recognized that the bound biexciton plays a paramount role in four-wave mixing.", "However, spontaneous four-wave mixing, also known as parametric flourescence or hyper-raman scattering, in semiconductors is far less studied [23], [24], despite being a central topic for quantum light sources with conventional nonlinear materials [25], [26], [27], [28].", "Four-wave mixing is a third-order nonlinear process that transforms two pump photons with frequencies $\\omega _1,\\;\\omega _2$ to a pair of signal/idler photons with frequencies $\\omega _3,\\;\\omega _4$ .", "Stimulated four-wave mixing is facilitated by applying an idler laser field with frequency $\\omega _3$ , which will produce a signal at $\\omega _4=\\omega _1+\\omega _2-\\omega _3$ due to energy conservation.", "In contrast, in the absence of a stimulating idler field, spontaneous four-wave 
mixing produces a broad and continuous spectrum of photon pairs.", "This is the regime that is studied in this paper.", "Because the photons are created in pairs, the field properties of the generated light can exhibit strong non-classical correlation signatures in the form of quadrature squeezing, two-mode squeezing and photon-number correlations.", "In semiconductors, particularly in semiconductor microcavities with a strong light-matter interaction, spontaneous four-wave mixing can be mediated by the biexciton, which can break into a correlated pair of polaritons [see Fig.", "REF (a)–(b)].", "Figure: (a) Microcavity containing an atomically-thin semiconductor.", "The cavity is driven with bichromatic coherent laser light at frequencies $\omega _1$ and $\omega _2$ .", "Four-wave mixing due to nonlinearities in the optical response of the semiconductor creates an output photon pair with frequencies $\omega _3$ and $\omega _4$ .", "(b) Diagrammatic illustration of biexcitonic four-wave mixing.", "The input photons (red and yellow wiggly arrows) create a pair of uncorrelated exciton-polaritons (single black lines).", "The two polaritons can scatter via the Coulomb interaction $\overline{W}_{\rm b}^{-}$ and create a bound biexciton (double line).", "The biexciton spontaneously breaks into a pair of polaritons via the Coulomb interaction $W^{-}_{\rm b}$ , which are outcoupled from the cavity as photons (blue and green wiggly arrows).", "(c) Polariton energy bands ($E^\pm _q$ , black lines) in Configuration A, where the cavity frequency is tuned in order to bring the lower polariton branch into Feshbach resonance with the bound biexciton (blue dotted line), $2E_0^-=E^{\rm xx}_{\rm b,-}$ .", "The driving frequencies are symmetrically centered around the lower polariton (orange and red arrows).", "The uncoupled cavity and exciton energy bands are shown with grey lines.", "(d) Similar to (c) for Configuration B, where the cavity frequency is tuned such that an upper-lower polariton pair matches the energy of a bound biexciton, $E^-_0 + E^+_0=E^{\rm xx}_{\rm b,-}$ , forming a non-degenerate Feshbach resonance.", "The driving frequencies are symmetrically centered around $\frac{1}{2}E^{\rm xx}_{\rm b,-}$ with a resonance occurring when $\omega _1=E^+_0,\;\omega _2=E^-_0$ .", "In this paper, we theoretically investigate spontaneous four-wave mixing in bichromatically pumped semiconductor microcavities with atomically-thin semiconductors.", "We investigate the spectrum and quadrature squeezing of the generated light and discuss how the cavity resonance and pump frequencies can be tuned in two configurations to facilitate efficient light generation and squeezing.", "Specifically, we investigate two configurations close to resonances, where four-wave mixing and squeezing are efficient.", "In the first configuration [Configuration A, cf.", "Fig.", "REF (c)], the cavity frequency is tuned such that the lower polariton (LP) energy $E^-_0$ matches half the bound biexciton energy $\frac{1}{2}E^{\rm xx}_{\rm b,-}$ .", "Here, the pump lasers should be nearly degenerate around the lower-polariton energy in order to efficiently excite LPs, which scatter via the Coulomb interaction into bound biexcitons through a degenerate polaritonic Feshbach resonance [1], [29].", "A Feshbach resonance occurs when the energy of an open scattering channel (in this case an LP pair) matches the energy of a bound multiparticle complex 
(here a bound biexciton) [30].", "The biexciton decays spontaneously into two degenerate LPs, which leads to single-mode squeezed light emission.", "In the second configuration [Configuration B, cf.", "Fig.", "REF (d)], the cavity frequency is tuned such that the sum of the LP and upper polariton (UP) energies $E^-_0 + E^+_0$ matches the bound biexciton energy $E^{\rm xx}_{\rm b,-}$ , thereby giving rise to a non-degenerate polaritonic Feshbach resonance, also known as a polaritonic cross Feshbach resonance [13].", "The driving field is resonant when the two pump laser energies match the two polariton energies.", "An uncorrelated UP-LP pair can then resonantly scatter into a bound biexciton, which decays into a non-degenerate UP-LP pair that leads to the emission of two-mode squeezed light.", "In this context, two-mode quadrature-squeezing refers to a reduction of the quadrature noise in the cross-correlation between spectrally distinct frequency bands [31].", "For the investigation, we employ a rigorous perturbative expansion of the electronic and photonic correlations up to third order in the driving field through the dynamics-controlled truncation (DCT) scheme [32], [33].", "In order to calculate not only intracavity dynamics, but also the properties of the outcoupled and thus detectable field, we combine DCT with a Heisenberg-Langevin approach [34], [6].", "Due to the presence of two non-degenerate pumps, the steady state of the system is not constant [35], but can be expressed as a discrete Fourier series.", "This series expansion complicates the solution of the Heisenberg-Langevin equations of motion in frequency space compared to the case of a constant steady state; we handle this complication through a discrete Fourier-index formalism.", "The paper is organized as follows: In Sec.", ", we describe the semiconductor model that is used for the electronic states in the semiconductor, the photonic states in the cavity and the external driving field.", "In Sec.", ", the DCT and Heisenberg-Langevin methods are applied to calculate the dynamics, spontaneous four-wave mixing spectrum and squeezing.", "In Sec.", ", we present and discuss the results for Configurations A and B.", "Finally, we conclude in Sec.", "." ], [ "Model", "Here, we describe the two-band semiconductor model used for electrons and holes, as well as the coupling to cavity photons and the external laser drive.",
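"For later reference, the resonance conditions that define the two configurations (cf.", "Fig.", "REF ) can be summarized as $2E^-_0 = E^{\rm xx}_{\rm b,-}$ (Configuration A) and $E^-_0 + E^+_0 = E^{\rm xx}_{\rm b,-}$ (Configuration B), with the two pump lasers nearly degenerate around the LP energy in Configuration A, and tuned to $\omega _1=E^+_0,\;\omega _2=E^-_0$ , i.e., centered symmetrically around $\frac{1}{2}E^{\rm xx}_{\rm b,-}$ , in Configuration B."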
], [ "Hamiltonian", "We consider an atomically thin semiconductor placed in a planar cavity, which is driven with two coherent pump beams with frequencies $\\omega _1$ and $\\omega _2$ at normal incidence.", "The electromagnetic field in the cavity is quantized through the bosonic annihilation and creation operators $a_{\\sigma ,\\mathbf {k}}$ and $a_{\\sigma ,\\mathbf {k}}^\\dagger $ , which describe cavity photons with polarizaton $\\sigma $ and in-plane momentum $\\mathbf {k}$ .", "We assume the cavity to be rotationally symmetric in the plane, such that the cavity mode is polarization-degenerate.", "The semiconductor is described by electronic states in the conduction and valence bands with fermionic annihilation and creation operators $c_{\\zeta \\mathbf {k}}, \\; c_{\\zeta \\mathbf {k}}^\\dagger $ for the conduction band and $v_{\\zeta \\mathbf {k}}, \\; v_{\\zeta \\mathbf {k}}^\\dagger $ for the valence band.", "The index $\\zeta =(\\xi ,s)$ labels valley ($\\xi $ ) and spin ($s$ ).", "We shall restrict our analysis to the lowest-energy optical transitions in transition-metal dichalcogenide monolayers, which are located at the valleys $\\xi =K$ and $\\xi =K^{\\prime }$ , respectively.", "Due to spin-orbit coupling, the lowest-energy transitions allow photons with right-hand ($\\sigma =\\mathrm {R}$ ) or left-hand ($\\sigma =\\mathrm {L}$ ) circular polarization to excite electron-hole pairs with spin-valley combinations $\\zeta =(K,\\uparrow )$ and $\\zeta =(K^{\\prime },\\downarrow )$ .", "Due to this unambiguous relation between photon polarization and electron spin and valley, we can absorb photon polarization into the index $\\zeta $ and use the shorthand notation $\\zeta \\in \\lbrace K,K^{\\prime }\\rbrace $ to denote all three degrees of freedom.", "The total Hamiltonian $H$ is divided into a noninteracting term $H_0$ and a Coulomb interaction term $H_{\\rm C}$ , such that $H=H_0+H_{\\rm C}$ .", "The noninteracting Hamiltonian describing electrons, photons and their coupling is given by $\\begin{split}H_0 &= \\sum _{\\zeta \\mathbf {k}} [E^{\\rm c}_\\mathbf {k}c_{\\zeta ,\\mathbf {k}}^\\dagger c_{\\zeta ,\\mathbf {k}}+E^{\\rm v}_\\mathbf {k}v_{\\zeta ,\\mathbf {k}}^\\dagger v_{\\zeta ,\\mathbf {k}}+ E^{\\rm p}_\\mathbf {k}a_{\\zeta ,\\mathbf {k}}^\\dagger a_{\\zeta ,\\mathbf {k}}]\\\\& + \\sum _{\\zeta \\mathbf {k}\\mathbf {q}} [A_\\mathbf {q}c_{\\zeta ,\\mathbf {k}+\\mathbf {q}}^\\dagger v_{\\zeta ,\\mathbf {k}} a_{\\zeta ,\\mathbf {q}}+ A_\\mathbf {q}^* a_{\\zeta ,\\mathbf {q}}^\\dagger v_{\\zeta ,\\mathbf {k}}^\\dagger c_{\\zeta ,\\mathbf {k}+\\mathbf {q}}],\\end{split}$ where the first two terms account for the energy of electrons in the conduction and valence band, $E^{\\rm c}_\\mathbf {k}=E^{\\rm c}_{0} + \\hbar ^2k^2/(2m_{\\rm e})$ and $E^{\\rm v}_\\mathbf {k}=E^{\\rm v}_{0} - \\hbar ^2k^2/(2m_{\\rm h})$ , with $E^{\\rm c}_0-E^{\\rm v}_0$ the quasiparticle bandgap and $m_{\\rm e}, \\:m_{\\rm h}$ the effective electron and hole masses.", "The third term accounts for the energy of cavity photons, $E^{\\rm p}_\\mathbf {k}= \\hbar [\\omega _{\\rm p,0}^2 + (ck/\\bar{n})^2]^{1/2}$ , where $\\omega _{\\rm p,0}$ is the resonance frequency at $\\mathbf {k}=0$ , $c$ is the speed of light and $\\bar{n}$ is the effective refractive index of the cavity mode [36], [37].", "The last two terms describe electron-hole-photon coupling with strength $A_\\mathbf {q}=\\sqrt{E^{\\rm p}_0/E^{\\rm p}_\\mathbf {q}}A_0$ , which depends on the out-of-plane confinement of the cavity mode and the 
Bloch momentum matrix element of the semiconductor [38], [39].", "The Coulomb interaction is given by [40] $\\begin{split}&H_{\\rm C} = \\frac{1}{2}\\sum _{\\mathbf {k}_1\\mathbf {k}_2\\mathbf {q}} \\sum _{\\zeta _1\\zeta _2} V_\\mathbf {q}\\Big (c^\\dagger _{\\zeta _1,\\mathbf {k}_1+\\mathbf {q}}c^\\dagger _{\\zeta _2,\\mathbf {k}_2-\\mathbf {q}}c_{\\zeta _2,\\mathbf {k}_2}c_{\\zeta _1,\\mathbf {k}_1}\\\\&+ v^\\dagger _{\\zeta _1,\\mathbf {k}_1+\\mathbf {q}}v^\\dagger _{\\zeta _2,\\mathbf {k}_2-\\mathbf {q}}v_{\\zeta _2,\\mathbf {k}_2}v_{\\zeta _1,\\mathbf {k}_1}+ 2c^\\dagger _{\\zeta _1,\\mathbf {k}_1+\\mathbf {q}}v^\\dagger _{\\zeta _2,\\mathbf {k}_2-\\mathbf {q}}v_{\\zeta _2,\\mathbf {k}_2}c_{\\zeta _1,\\mathbf {k}_1}\\Big ).\\end{split}$ Here, $V_\\mathbf {q}= e_0^2[2S\\epsilon _0\\epsilon _\\mathbf {q}q]^{-1}$ is the screened 2D Coulomb potential, where $e_0$ is the elementary charge, $S$ is the quantization surface area, $\\epsilon _0$ is the vacuum permittivity, and $\\epsilon _\\mathbf {q}$ is the dielectric function for 2D semiconductors which is described in Appendix .", "The inter- and intra-valley Coulomb exchange interaction has been neglected here, because in transition-metal dichalcogenides it is significantly weaker [40], [20] than the direct interaction in Eq.", "(REF ).", "We note that the exchange interaction leads to effects such as biexciton fine structure [41] and a splitting of the exciton into branches with linear and quadratic dispersion [42] that coincide at zero center-of-mass momentum.", "While these effects are rich and interesting, we consider here only the physics of the dominating nonlinear response from the direct Coulomb interaction and leave the additional inclusion of exchange effects to more detailed future analyses." ], [ "Driving field", "The two pump laser drives are introduced via the input-output formalism [43], [44], which is based on the microscopic interaction between the internal cavity mode with the quantized continuum of external modes.", "By formally solving the equation of motion of the external field operators, the external input field is linked to the equation of motion for the cavity field operator as $-i\\hbar \\partial _t a_{\\zeta ,0}^\\dagger = [H,a_{\\zeta ,0}^\\dagger ] +i\\hbar \\gamma ^{\\rm p}a_{\\zeta ,0}^\\dagger + i\\hbar \\sqrt{2\\gamma ^{\\rm p}} a^{\\rm in\\dagger }_{\\zeta },$ where the last two terms describe outcoupling from the cavity mode and incoupling of the driving field, respectively.", "We take the expectation value of the driving field to be in a coherent state and to have the bichromatic form $*{a^{\\rm in}_\\zeta (t)} = *{a^{\\rm in}_\\zeta }_1 e^{i\\omega _1 t} + *{a^{\\rm in}_\\zeta }_2 e^{i\\omega _2 t},$ where $\\omega _1$ and $\\omega _2$ are the frequencies of the two driving lasers and $*{a^{\\rm in}_\\zeta }_1$ and $*{a^{\\rm in}_\\zeta }_2$ are the corresponding amplitudes.", "We shall take the total power in each of the driving lasers to be equal and denote by $\\mathbf {\\lambda }^{\\rm in}_{i}, \\; i=1,2$ the polarization vector of the two drives in the circular basis.", "We then express the input field components as polarization vectors as ${\\mathbf {a}^{\\rm in}}_i = [{a^{\\rm in}_K}_i, {a^{\\rm in}_{K^{\\prime }}}_i]^{\\rm T}$ , where the superscript T denotes transposition.", "The input-field vectors are related to the total driving power $\\mathcal {P}_{\\rm in}$ as [43] ${\\mathbf {a}^{\\rm in}}_i = [\\mathcal {P}/(2E^{\\rm p}_0)]^{1/2} \\mathbf {\\lambda }^{\\rm in}_i$ ." 
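, "As a small consistency check, introducing the mean and difference frequencies $\omega _{\rm r}=(\omega _1+\omega _2)/2$ and $\omega _{12}=(\omega _2-\omega _1)/2$ that are defined again in the next section, the bichromatic drive above can be rewritten identically as $*{a^{\rm in}_\zeta (t)} = e^{i\omega _{\rm r}t}\left[*{a^{\rm in}_\zeta }_1 e^{-i\omega _{12}t} + *{a^{\rm in}_\zeta }_2 e^{+i\omega _{12}t}\right]$ , which shows that, in a frame rotating at the mean drive frequency, the two pump contributions oscillate only at $\pm \omega _{12}$ ."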
], [ "Methods", "Here, we introduce the DCT scheme for time evolution of the expectation values and the Heisenberg-Langevin equations for the fluctuation operators.", "The model and techniques are a generalization of the work presented in Ref.", "denning2022efficient to bichromatic driving fields." ], [ "Time evolution", "To calculate the time evolution of the system, we apply the DCT scheme [32], [33] to perturbatively expand the equations of motion for the coherent expectation values to third order in the driving field $a^{\\rm in}_\\zeta $ .", "The first step in this procedure is the Heisenberg equation of motion for a general operator $Q$ , $-i\\hbar \\partial _t *{Q} = *{[H,Q]}$ (see Ref.", "denning2022efficient for the explicit derivation).", "The Coulomb interaction and the fermionic commutation relations generate expectation values with an unequal number of conduction and valence band operators $c_{\\zeta ,\\mathbf {k}}$ and $v_{\\zeta ,\\mathbf {k}}$ , such as $*{c^\\dagger _{\\zeta ,\\mathbf {k}+\\mathbf {q}} c^\\dagger _{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }-\\mathbf {q}} c_{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }} v_{\\zeta ,\\mathbf {k}}}$ .", "Here, conduction-band electron densities $c_{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }-\\mathbf {q}}^\\dagger c_{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }}$ are expressed perturbatively in terms of electron-hole pair operators using a unit-operator expansion [45], [40] $c_{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }-\\mathbf {q}}^\\dagger c_{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }} = \\sum _{\\zeta _1\\mathbf {k}_1} c_{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }-\\mathbf {q}}^\\dagger v_{\\zeta _1,\\mathbf {k}_1} v_{\\zeta _1,\\mathbf {k}_1}^\\dagger c_{\\zeta ^{\\prime },\\mathbf {k}^{\\prime }} + \\mathcal {O}[(a^{\\rm in })^4]$ , and similarly for valence-band hole densities.", "In this expansion, the Hilbert space has been restricted to pairs of conduction band electrons and valence band holes [40], which means that the effects of unpaired electrons or holes have not been accounted for.", "In this paper, this assumption is valid, because the only source of carriers is optical excitation through the driving field, which creates electrons and holes in pairs.", "However, in the presence of unpaired electrons or holes through e.g.", "n- or p-doping, which we do not consider in this paper, the pair expansion is no longer valid.", "Within the DCT scheme, the third-order perturbative expansion corresponds to only keeping terms with up to three normal-ordered electron-hole pair or cavity photon operators [46], since the number of pair or photon operators corresponds to the order in the driving field.", "Such a perturbative expansion is possible because all correlations have their origin in the action of the external driving field [32], [33].", "Furthermore, in the coherent-response limit, which is considered here, the third-order expectation values are systematically factorized as ${c^\\dagger v c^\\dagger v v^\\dagger c} = {c^\\dagger v c^\\dagger v}*{v^\\dagger c}$ and ${a^\\dagger c^\\dagger v v^\\dagger c} = {a^\\dagger c^\\dagger v}*{v^\\dagger c}$ .", "This third-order factorization is exact in the coherent regime [46], i.e.", "when the only source of electrons and holes is excitation with coherent light near the resonances of the system, and when incoherent scattering processes e.g.", "via phonons can be neglected.", "Phonon scattering is later included phenomenologically through an exciton dephasing rate, which is 
obtained from a separate self-consistent microscopic calculation [47].", "This means that the validity of our approach is limited to the regime where cavity outcoupling ($\\gamma ^{\\rm p}$ ) dominates over exciton dephasing ($\\gamma ^{\\rm x}$ ), such that polaritons will be outcoupled before significant dephasing and scattering takes place.", "In practise, this limits the theory to the low-temperature limit, where phonon dephasing is slow.", "For the calculations presented in this paper, the temperature is 30 K, which yields a phonon-induced dephasing rate below 1 meV, while the cavity outcoupling is 9 meV, consistent with fabricated dielectric microcavities with transition-metal dichalcogenide monolayers [48].", "Thus, there is a separation between these two time scales by an order of magnitude.", "The electron-hole are expanded on the complete set of excitonic eigenstates as $*{c^\\dagger _{\\zeta \\mathbf {k}}v_{\\zeta \\mathbf {k}^{\\prime }}} = \\sum _{i} \\phi ^{i*}_{\\beta \\mathbf {k}+\\alpha \\mathbf {k}^{\\prime }}*{P^{i\\dagger }_{\\zeta ,\\mathbf {k}-\\mathbf {k}^{\\prime }}}$ , where $\\phi ^{i}_\\mathbf {k}$ is the $i$ th exciton wavefunction (solution to the Wannier equation) in momentum space with zero-momentum energy $E^{\\mathrm {x}}_{i,0}$ and $P^{i\\dagger }_{\\zeta ,\\mathbf {q}}$ is the creation operator of the corresponding exciton with center-of-mass momentum $\\mathbf {q}$ , and $\\alpha =m_{\\rm e}/(m_{\\rm e} + m_{\\rm h})$ and $\\beta =m_{\\rm h}/(m_{\\rm e} + m_{\\rm h})$ .", "We restrict the analysis to the lowest-energy ($i=\\mathrm {1s}$ ) exciton, which is energetically separated from the next excitonic state by hundreds of meV for atomically thin transition-metal dichalcogenides [49], [50] and we shall thus drop the index $i$ from here onwards, meaning that the electronic states are projected onto the lowest exciton wavefunction.", "This is valid, when the excitation energy is far away from any of the neighbouring exciton states.", "In practise, the projection of electronic states onto the lowest-energy exciton wavefunction ensures an efficient numerical evaluation of the dynamics and exciton-exciton scattering coefficients.", "Two-photon expectation values are partitioned into factorized parts and correlations, defined as $\\mathcal {D}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q}} := *{a^\\dagger _{\\zeta ,\\mathbf {q}}a^\\dagger _{\\zeta ^{\\prime },-\\mathbf {q}}} - *{a^\\dagger _{\\zeta ,\\mathbf {q}}}*{a^\\dagger _{\\zeta ^{\\prime },-\\mathbf {q}}}$ .", "Three-particle photon-electron-hole correlations are defined as $*{a_{\\zeta ,\\mathbf {q}}^\\dagger c^\\dagger _{\\zeta ^{\\prime },\\mathbf {k}-\\alpha \\mathbf {q}}v_{\\zeta ^{\\prime },\\mathbf {k}+\\beta \\mathbf {q}}}^{\\rm c} = *{a_{\\zeta ,\\mathbf {q}}^\\dagger c^\\dagger _{\\zeta ^{\\prime },\\mathbf {k}-\\alpha \\mathbf {q}}v_{\\zeta ^{\\prime },\\mathbf {k}+\\beta \\mathbf {q}}} - *{a_{\\zeta ,\\mathbf {q}}^\\dagger }\\!\\!", "*{ c^\\dagger _{\\zeta ^{\\prime },\\mathbf {k}-\\alpha \\mathbf {q}}v_{\\zeta ^{\\prime },\\mathbf {k}+\\beta \\mathbf {q}}}$ , i.e.", "with the contribution that can be factorized in electron-hole pairs and photons subtracted.", "These correlations are projected onto the exciton wavefunction as $\\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}:= \\sum _\\mathbf {k}\\phi _{\\mathbf {k}}*{a_{\\zeta ,\\mathbf {q}}^\\dagger c^\\dagger _{\\zeta ^{\\prime },\\mathbf {k}-\\alpha \\mathbf {q}}v_{\\zeta ^{\\prime },\\mathbf {k}+\\beta \\mathbf {q}}}^{\\rm c}$ .", 
"The four-particle correlations of two electron-hole pairs have a more complicated structure due to the two possible electron-hole pairings: $*{c^\\dagger _{\\zeta \\mathbf {k}+\\mathbf {q}}v_{\\zeta \\mathbf {k}}c^\\dagger _{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }-\\mathbf {q}}v_{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }}}^{\\rm c} := *{c^\\dagger _{\\zeta \\mathbf {k}+\\mathbf {q}}v_{\\zeta \\mathbf {k}}c^\\dagger _{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }-\\mathbf {q}}v_{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }}} - *{c^\\dagger _{\\zeta \\mathbf {k}+\\mathbf {q}}v_{\\zeta \\mathbf {k}}}\\!\\!", "*{c^\\dagger _{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }-\\mathbf {q}}v_{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }}} + *{c^\\dagger _{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }-\\mathbf {q}}v_{\\zeta \\mathbf {k}}}\\!\\!", "*{c^\\dagger _{\\zeta \\mathbf {k}+\\mathbf {q}}v_{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }}}$ .", "These correlations are projected on the 1s exciton wavefunction in terms of the biexcitonic correlations $\\tilde{\\mathcal {B}}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q},\\pm }$ in the triplet ($+$ ) and singlet ($-$ ) linear combinations through the relation [51] $&\\frac{1}{2}(*{c_{\\zeta \\mathbf {k}+\\mathbf {q}}^\\dagger v_{\\zeta \\mathbf {k}}c_{\\zeta \\mathbf {k}^{\\prime }-\\mathbf {q}}^\\dagger v_{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }}}^{\\rm c}\\pm *{c_{\\zeta ^{\\prime }\\mathbf {k}+\\mathbf {q}}^\\dagger v_{\\zeta \\mathbf {k}}c_{\\zeta \\mathbf {k}^{\\prime }-\\mathbf {q}}^\\dagger v_{\\zeta ^{\\prime }\\mathbf {k}^{\\prime }}}^{\\rm c})\\\\ &=:\\phi _{\\mathbf {k}+ \\beta \\mathbf {q}}^{*}\\phi _{\\mathbf {k}^{\\prime } - \\beta \\mathbf {q}}^{*}\\tilde{\\mathcal {B}}_{\\mathbf {q},\\pm }^{\\zeta \\zeta ^{\\prime }}\\mp \\phi _{\\alpha \\mathbf {k}+ \\beta (\\mathbf {k}^{\\prime }-\\mathbf {q})}^{*}\\phi _{\\beta (\\mathbf {k}+\\mathbf {q}) + \\alpha \\mathbf {k}^{\\prime }}^{*}\\tilde{\\mathcal {B}}_{\\mathbf {k}^{\\prime }-\\mathbf {k}-\\mathbf {q},\\pm }^{\\zeta \\zeta ^{\\prime }}.$ Due to the Coulomb interaction, the equation of motion for the biexcitonic correlation $\\tilde{\\mathcal {B}}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q},\\pm }$ is coupled to correlations with different momenta, $\\tilde{\\mathcal {B}}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q}^{\\prime },\\pm }$ .", "To alleviate this momentum off-diagonal coupling and thereby simplify the structure of the equations of motion, the biexcitonic correlations are expanded on the biexcitonic wavefunctions $\\Phi ^{\\pm }_{\\mu ,\\mathbf {q}}$ , which are the solutions to an effective two-exciton Schrödinger equation [20] with corresponding energies $E^{\\rm xx}_{\\mu ,\\pm }$ (see Appendix  for details).", "Bound ($\\mu =\\mathrm {b},\\:E_{\\rm b,-}^{\\mathrm {xx}}<2E_{0}^{\\rm x}$ ) and unbound ($E_{\\mu ,-}^{{\\rm xx}}>2E_{0}^{\\rm x}$ ) solutions exist in the singlet channel, where the effective exciton-exciton Coulomb interaction is attractive.", "The triplet channel supports only unbound solutions [52], because the effective interaction is repulsive.", "The unbound solutions constitute a two-exciton scattering continuum.", "In this biexcitonic eigenbasis, the correlations are expressed as $\\tilde{\\mathcal {B}}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q},\\pm }=\\sum _\\mu \\Phi ^{\\pm }_{\\mu \\mathbf {q}} \\mathcal {B}^{\\zeta \\zeta ^{\\prime }}_{\\mu ,\\pm }$ .", "At the level of the equations of motion, we phenomenologically include phonon-induced broadening of the exciton and 
biexciton by introducing complex-valued energies in the equations of motion: $\\tilde{E}^{\\rm x}_\\mathbf {q}=E^{\\rm x}_\\mathbf {q}+ i\\hbar \\gamma ^{\\rm x}, \\: \\tilde{E}^{\\rm xx}_{\\mu ,\\pm }=E^{\\rm xx}_{\\mu ,\\pm } + 2i\\hbar \\gamma ^{\\rm x}$ .", "The broadening $\\gamma ^{\\rm x}$ is calculated through a microscopic, self-consistent approach for the phonon interaction as in Refs. selig2016excitonic,christiansen2017phonon,khatibi2018impact,brem2019intrinsic,lengers2020theory.", "We approximate the biexcitonic broadening as twice the exciton broadening [57], [58], [59].", "Similarly, photon outcoupling from the cavity is included in the complex cavity frequency $\\tilde{E}^{\\rm p}_\\mathbf {q}= E^{\\rm p}_\\mathbf {q}+ i\\hbar \\gamma ^{\\rm p}$ .", "The equations of motion of the photon and exciton amplitudes and the three types of correlations as described above form a closed set within the third-order DCT scheme [46].", "In a rotating reference frame with respect to the mean drive frequency $\\omega _{\\rm r}=(\\omega _1+\\omega _2)/2$ the equations of motion are $\\begin{split}-i\\hbar \\partial _t *{a_{\\zeta ,0}^\\dagger } &= (\\tilde{E}^{\\rm p}_0-\\hbar \\omega _{\\rm r})*{a_{\\zeta ,0}^\\dagger } + \\Omega _0 *{P^\\dagger _{\\zeta ,0}}\\\\&+i\\hbar \\sqrt{2\\gamma ^{\\rm p}}[*{a^{\\rm in\\dagger }_{\\zeta }}_1e^{-i\\omega _{12} t} + *{a^{\\rm in\\dagger }_{\\zeta }}_{2}e^{+i\\omega _{12} t}]\\\\-i\\hbar \\partial _t*{P_{\\zeta ,0}^\\dagger } &= (\\tilde{E}^{\\rm x}_0-\\hbar \\omega _{\\rm r})*{P^{\\dagger }_{\\zeta ,0}} + \\Omega _0*{a_{\\zeta ,0}^\\dagger }\\\\ &-\\sum _\\mathbf {q}\\tilde{\\Omega }_\\mathbf {q}(\\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}+ \\delta _{\\mathbf {q},0}*{a_{\\zeta ,0}^\\dagger }*{P_{\\zeta ,0}^\\dagger })*{P_{\\zeta ,0}}\\\\ &+ W^0 *{*{P_{\\zeta ,0}^\\dagger }}^2*{P_{\\zeta ,0}^\\dagger }+\\sum _{\\mu \\zeta ^{\\prime }\\pm } W_{\\mu }^{\\pm }\\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }} *{P_{\\zeta ^{\\prime },0}}.\\\\-i\\hbar \\partial _t \\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }} &= (\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm r})\\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }}+\\frac{1}{2}(1\\pm \\delta _{\\zeta \\zeta ^{\\prime }})\\\\ &\\hspace{-14.22636pt}\\times \\lbrace \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}W\\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu }*{P^\\dagger _{\\zeta ,0}}*{P^\\dagger _{\\zeta ^{\\prime },0}}+\\sum _{\\mathbf {q}}[\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}_{\\mu ,-\\mathbf {q}}^\\pm \\mathcal {C}^{\\zeta ^{\\prime }\\zeta }_{-\\mathbf {q}}+\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}_{\\mu ,\\mathbf {q}}^\\pm \\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}]\\rbrace \\\\-i\\hbar \\partial _t \\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}&=(\\tilde{E}^{\\rm p}_{\\mathbf {q}}+\\tilde{E}^{\\rm x}_{\\mathbf {q}}- 2\\hbar \\omega _{\\rm r})\\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}\\\\&+ \\Omega _{\\mathbf {q}} \\mathcal {D}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q}}- \\frac{1}{2}\\delta _{\\zeta \\zeta ^{\\prime }}\\tilde{\\Omega }_\\mathbf {q}*{P^\\dagger _{\\zeta ,0}}^2+ \\sum _{\\mu \\pm } \\Omega _{\\mu ,\\mathbf {q}}^{\\pm }\\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }}\\\\-i\\hbar \\partial _t \\mathcal {D}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}&=2(\\tilde{E}^{\\rm p}_\\mathbf {q}- 
\\hbar \\omega _{\\rm r})\\mathcal {D}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}+ \\Omega _\\mathbf {q}\\mathcal {C}^{\\zeta ^{\\prime }\\zeta }_{-\\mathbf {q}}+ \\Omega _{-\\mathbf {q}}\\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}.\\end{split}$ The full details of the derivation can be found in Ref.", "denning2022efficient, but for completeness, all quantities are defined in Appendix .", "In each equation, the first term on the right-hand side describes free evolution, and the remaining terms describe couplings or driving.", "For the photon amplitude $*{a^\\dagger _{\\zeta ,0}}$ , the second term is the linear vacuum-Rabi coupling to the exciton with coupling strength $\\Omega _0$ (where $2\\Omega _0$ is the vacuum Rabi splitting).", "The last term is input-field driving, where in the rotating frame, the two pumps rotate with $\\pm \\omega _{12}$ , with the difference frequency $\\omega _{12}:=(\\omega _2-\\omega _1)/2$ .", "For $*{P^\\dagger _{\\zeta ,0}}$ , the second term describes vacuum Rabi coupling.", "The third term arises from the fermionic substructure of excitons and generates nonlinear saturation of the light-matter interaction due to Pauli blocking $\\tilde{\\Omega }_\\mathbf {q}$ .", "The last two terms describe Coulomb exciton-exciton interactions at the mean-field level ($W^0$ ) and Coulomb-induced interactions with the biexcitonic correlations ($W^\\pm _\\mu $ ) beyond mean-field.", "For the biexcitonic correlations $\\mathcal {B}^{\\zeta \\zeta ^{\\prime }}_{\\mu ,\\pm }$ , the second term contains Coulomb-scattering of uncorrelated excitons ($\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}W\\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _\\mu $ ) and coupling to exciton-photon correlations through the light-matter interaction ($\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}_{\\mu ,\\mathbf {q}}^\\pm $ ).", "For the exciton-photon correlations $\\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}$ , the second term describes linear coupling to two-photon correlations by exchanging an exciton with a photon ($\\Omega _0$ ).", "The third term describes a nonlinear scattering of two uncorrelated excitons ($\\tilde{\\Omega }_\\mathbf {q}$ ), and the last term describes coupling to biexcitonic correlations via optical fields ($\\Omega ^\\pm _{\\mu ,\\mathbf {q}}$ ).", "The second and third terms in the equation of motion for the two-photon correlations $\\mathcal {D}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}$ describe coupling to exciton-photon correlations by exchanging a photon with an exciton through the light-matter coupling $\\Omega _0$ ." 
], [ "Steady-state discrete Fourier decomposition", "As noted, the driving terms oscillate with the frequency $\\pm \\omega _{12}$ in the rotating frame.", "In contrast to the single-pump case [6] where the driving term is constant in the rotating frame, this means that the steady state evolution of the expectation values is not constant, but contains terms oscillating with integer multiples of $\\omega _{12}$ .", "This means that the steady-state dynamical variables $Y(t)$ , where $Y$ is any of the expectation values in Eq.", "(REF ), are periodic with period $T=2\\pi /*{\\omega _{12}}$ and can thus be represented as a discrete Fourier series $\\begin{split}Y(t)\\big \\vert _{t\\rightarrow \\infty } &= \\sum _{m=-\\infty }^\\infty Y_m e^{-in\\omega _{12}t} \\\\Y_m &= \\lim _{t\\rightarrow \\infty }\\frac{1}{T}\\int _{t}^{t+T}{t^{\\prime }} Y(t^{\\prime })e^{im\\omega _{12}t^{\\prime }}.\\end{split}$ In order to find the steady-state discrete Fourier series, Eq.", "(REF ) is propagated numerically in time until the coefficients $Y_m$ have converged sufficiently." ], [ "Polaritons", "When photons and excitons are strongly coupled, as considered in this paper, the relevant linear-response eigenstates are polaritonic states.", "These states emerge naturally out of the equation of motion by diagonalization in the linear limit.", "Taking the first two lines of Eq.", "(REF ), removing the nonlinear terms and the driving, and casting them in the general nonzero momentum form in the laboratory (i.e.", "non-rotating) reference frame, we have $\\begin{split}-i\\hbar \\partial _t *{a_{\\zeta ,\\mathbf {q}}^\\dagger } &= \\tilde{E}^{\\rm p}_\\mathbf {q}*{a_{\\zeta ,\\mathbf {q}}^\\dagger } + \\Omega _\\mathbf {q}*{P^\\dagger _{\\zeta ,\\mathbf {q}}}\\\\-i\\hbar \\partial _t*{P_{\\zeta ,\\mathbf {q}}^\\dagger } &= \\tilde{E}^{\\rm x}_\\mathbf {q}*{P^{\\dagger }_{\\zeta ,\\mathbf {q}}} + \\Omega _\\mathbf {q}*{a_{\\zeta ,\\mathbf {q}}^\\dagger }.\\end{split}$ In the limit where photon outcoupling and exciton dephasing are neglected, these equations of motion can be diagonalized through a unitary Hopfield transformation [60] by introducing the polariton operators $\\Xi ^{\\pm \\dagger }_{\\zeta ,\\mathbf {q}} = u^{\\rm p\\pm }_\\mathbf {q}a^\\dagger _{\\zeta ,\\mathbf {q}} + u^{\\rm x\\pm }_\\mathbf {q}P^\\dagger _{\\zeta ,\\mathbf {q}}$ where the expansion coefficients are given by $\\begin{split}u^{\\rm p\\pm }_\\mathbf {q}&= \\frac{E^{\\rm p}_\\mathbf {q}- E^{\\rm x}_\\mathbf {q}\\pm \\eta _\\mathbf {q}}{\\sqrt{(E^{\\rm p}_\\mathbf {q}- E^{\\rm x}_\\mathbf {q}\\pm \\eta _\\mathbf {q})^2 + 4\\Omega _\\mathbf {q}^2}}\\\\u^{\\rm x\\pm }_\\mathbf {q}&= \\frac{2\\Omega _\\mathbf {q}}{\\sqrt{(E^{\\rm p}_\\mathbf {q}- E^{\\rm x}_\\mathbf {q}\\pm \\eta _\\mathbf {q})^2 + 4\\Omega _\\mathbf {q}^2}},\\end{split}$ with $\\eta _\\mathbf {q}= \\sqrt{(E^{\\rm p}_\\mathbf {q}- E^{\\rm x}_\\mathbf {q})^2 + 4\\Omega _\\mathbf {q}^2}$ .", "In this basis, the linear evolution of Eq.", "(REF ) becomes $-i\\hbar \\partial _t *{\\Xi ^{\\pm \\dagger }_{\\zeta ,\\mathbf {q}}} &= E^\\pm _\\mathbf {q}*{\\Xi ^{\\pm \\dagger }_{\\zeta ,\\mathbf {q}}},$ with the polariton energies $E^\\pm _\\mathbf {q}= \\frac{1}{2}[E^{\\rm x}_\\mathbf {q}+ E^{\\rm p}_\\mathbf {q}\\pm \\eta _\\mathbf {q}]$ .", "The commutation relations of the polariton operators are not bosonic, but have correction terms due to the fermionic substructure of the exciton operator $P^\\dagger _{\\zeta ,\\mathbf {q}}$  [40].", "In principle, the full equations of 
motion Eq.", "(REF ) can be expressed in the polaritonic basis.", "However, this is not necessary and will not change the dynamics itself, only the basis that it is expressed in.", "In this work, we only use the polaritonic energies to identify the linear resonances of the system and to understand the nonlinear scattering between zero-momentum polaritons and the bound biexciton as depicted in Fig.", "REF ." ], [ "Fluctuations", "With the equations of motion of the single-time expectation values as in Eq.", "(REF ), one can access the instantaneous quantum statistics in the steady state or in the transient evolution.", "This can in principle be used to calculate the intracavity squeezing or photon number.", "However, the relevant detectable quantity is not the intracavity field, but the outcoupled field.", "Since the cavity field at zero-momentum couples out of the cavity into a continuum of external radiation modes with different frequencies, outcoupled quantities are always described and often measured by a spectrum rather than a single number, in contrast to intracavity quantities, which can be expressed in terms of a single mode [44], [61].", "To calculate the spontaneous four-wave mixing spectrum and quadrature squeezing of the generated light, we need not only the single-time expectation values as in Sec.", "REF , but also multitime averages.", "To calculate these, we employ a Heisenberg-Langevin approach for the time evolution of the fluctuation operator of the cavity field, $\\delta a^{\\dagger }_{\\zeta ,0}(t) = \\lim _{t\\rightarrow \\infty }[a^{\\dagger }_{\\zeta ,0}(t) - *{a^{\\dagger }_{\\zeta ,0}(t)}]$ and similarly for the exciton fluctuation operator $\\delta P^\\dagger _{\\zeta ,0}$ .", "Importantly, the cavity in/out-coupling and the phonon-induced dephasing give rise to Langevin noise sources [62].", "For the cavity field, the Langevin noise enters directly from microscopic theory through the input-output formalism, as seen in Eq.", "(REF ), where $\\delta a^{\\rm in\\dagger }_{\\zeta }(t):=\\lim _{t\\rightarrow \\infty }[a^{\\rm in\\dagger }_{\\zeta }(t) - *{a^{\\rm in\\dagger }_{\\zeta }(t)}]$ is a Langevin noise term with the properties [44], [43] $*{\\delta a^{\\rm in\\dagger }_\\zeta } = *{\\delta a^{\\rm in\\dagger }_\\zeta (t)\\delta a^{\\rm in\\dagger }_{\\zeta ^{\\prime }}(t^{\\prime })}=*{\\delta a^{\\rm in\\dagger }_\\zeta (t)\\delta a^{\\rm in}_{\\zeta ^{\\prime }}(t^{\\prime })}=0,\\;\\;*{\\delta a^{\\rm in}_\\zeta (t)\\delta a^{\\rm in\\dagger }_{\\zeta ^{\\prime }}(t^{\\prime })} = \\delta _{\\zeta \\zeta ^{\\prime }}\\delta (t-t^{\\prime }).$ For the phenomenologically introduced phonon-broadening, we introduce a similar Langevin noise term to accompany the decay process, $-i\\hbar \\partial _t P^\\dagger _{\\zeta ,0} = [H,P^\\dagger _{\\zeta ,0}] + i\\hbar \\gamma ^{\\rm x}P^\\dagger _{\\zeta ,0}+ i\\hbar \\sqrt{2\\gamma ^{\\rm x}}\\delta P^{\\rm in\\dagger }_{\\zeta },$ where the noise correlation properties are derived from the rules in Ref.", "lax1966quantum $*{\\delta P^{\\rm in\\dagger }_\\zeta } = *{\\delta P^{\\rm in\\dagger }_\\zeta (t)\\delta P^{\\rm in\\dagger }_{\\zeta ^{\\prime }}(t^{\\prime })}=*{\\delta P^{\\rm in\\dagger }_\\zeta (t)\\delta P^{\\rm in}_{\\zeta ^{\\prime }}(t^{\\prime })}=0,*{\\delta P^{\\rm in}_\\zeta (t)\\delta P^{\\rm in\\dagger }_{\\zeta ^{\\prime }}(t^{\\prime })} = \\delta _{\\zeta \\zeta ^{\\prime }}\\delta (t-t^{\\prime }).$ Calculating the commutator in Eq.", "(REF ) in a third-order expansion in the input field, it 
is found that the nonlinear terms couple $\\delta P^\\dagger _{\\zeta ,0}$ to exciton-photon pair fluctuation operators $\\delta \\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q}}$ and biexcitonic fluctuations $\\delta \\mathcal {B}^{\\zeta \\zeta ^{\\prime }}_{\\mu ,\\pm }$ (see Ref.", "denning2022efficient for details).", "For sufficiently small fluctuations compared to the mean values, the nonlinear terms involving products of fluctuations of the forms $\\delta P\\delta \\mathcal {C}$ , $\\delta P\\delta \\mathcal {B}, \\;\\delta P^{\\rm in\\dagger }\\delta P^\\dagger , \\; \\delta a^{\\rm in\\dagger }\\delta a^\\dagger , \\; \\delta P^{\\rm in\\dagger }\\delta a^\\dagger $ and $\\delta a^{\\rm in\\dagger }\\delta P^\\dagger $ can be neglected and a set of equations that are linear in the fluctuation operators is obtained, $\\begin{split}-i\\hbar \\partial _t\\delta a^\\dagger _{\\zeta ,0} &=(\\tilde{E}^{\\rm p}_0 - \\hbar \\omega _{\\rm r}) \\delta a^\\dagger _{\\zeta ,0}+ \\Omega _0\\delta P^\\dagger _{\\zeta ,0} + i\\hbar \\sqrt{2\\gamma ^{\\rm p}}\\delta a^{\\rm in\\dagger }_{\\zeta },\\\\-i\\hbar \\partial _t \\delta P_{\\zeta ,0}^\\dagger &= (\\tilde{E}_0^{\\rm x}-\\hbar \\omega _{\\rm r}) \\delta P_{\\zeta ,0}^\\dagger + \\Omega _0\\delta a_{\\zeta ,0}^\\dagger + i\\hbar \\sqrt{2\\gamma ^{\\rm x}} \\delta P_{\\zeta ,0}^{\\rm in\\dagger }\\\\&\\hspace{-28.45274pt}-\\sum _\\mathbf {q}\\tilde{\\Omega }_\\mathbf {q}[\\delta \\mathcal {C}^{\\zeta \\zeta }_\\mathbf {q}*{P^\\dagger _{\\zeta , 0}}+(\\delta _{\\mathbf {q},0}*{a_{\\zeta ,0}^\\dagger }*{P_{\\zeta ,0}^\\dagger }+ \\mathcal {C}^{\\zeta \\zeta }_\\mathbf {q})\\delta P_{\\zeta , 0}]\\\\&\\hspace{-28.45274pt}+W^0*{P^\\dagger _{\\zeta ,0}}^2\\delta P_{\\zeta ,0}+\\sum _{\\zeta ^{\\prime }\\mu \\pm }W^\\pm _\\mu [\\delta \\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }}*{P_{\\zeta ^{\\prime },0}}+\\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }} \\delta P_{\\zeta ^{\\prime },0}]\\\\-i\\hbar \\partial _t \\delta \\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }} &= (\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm r})\\delta \\mathcal {B}_{\\mu ,\\pm }^{\\zeta \\zeta ^{\\prime }}\\\\&+\\frac{1}{2}(1\\pm \\delta _{\\zeta \\zeta ^{\\prime }})[\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}_{\\mu ,-\\mathbf {q}}^\\pm \\delta \\mathcal {C}^{\\zeta ^{\\prime }\\zeta }_{-\\mathbf {q}}+\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}_{\\mu ,\\mathbf {q}}^\\pm \\delta \\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}]\\\\&\\hspace{-28.45274pt}+\\frac{i\\hbar }{4}\\sqrt{2\\gamma ^{\\rm x}}(1\\pm \\delta _{\\zeta \\zeta ^{\\prime }})\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Phi \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,0}[*{P^\\dagger _{\\zeta ,0}}\\delta P^{\\rm in\\dagger }_{\\zeta ^{\\prime }}+ *{P^\\dagger _{\\zeta ^{\\prime },0}}\\delta P^{\\rm in\\dagger }_{\\zeta }]\\\\-i\\hbar \\partial _t\\delta {\\mathcal {C}}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q}} &=(\\tilde{E}^{\\rm x}_\\mathbf {q}+ \\tilde{E}^{\\rm p}_\\mathbf {q}-2\\hbar \\omega _{\\rm r})\\delta {\\mathcal {C}}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q}}+ \\Omega _\\mathbf {q}\\delta \\mathcal {D}^{\\zeta \\zeta ^{\\prime }}_\\mathbf {q}\\\\&+ \\sum _{\\mu \\pm } \\Omega ^\\pm _{\\mu ,\\mathbf {q}} \\delta \\mathcal {B}^{\\zeta \\zeta ^{\\prime }}_{\\mu ,\\pm }+i\\hbar \\delta _{\\mathbf {q},0}\\sqrt{2\\gamma ^{\\rm 
x}}*{a^\dagger _{\zeta ,0}}\delta P^{\rm in\dagger }_{\zeta ^{\prime }}\\&+i\hbar \delta _{\mathbf {q},0}\sqrt{2\gamma ^{\rm p}}[*{a^{\rm in\dagger }_{\zeta }}\delta P^\dagger _{\zeta ^{\prime },0}+*{P^\dagger _{\zeta ^{\prime },0}}\delta a^{\rm in\dagger }_{\zeta }]\\-i\hbar \partial _t\delta \mathcal {D}^{\zeta \zeta ^{\prime }}_\mathbf {q}&=2(\tilde{E}^{\rm p}-\hbar \omega _{\rm r})\delta \mathcal {D}^{\zeta \zeta ^{\prime }}_\mathbf {q}+ \Omega _\mathbf {q}\delta \mathcal {C}^{\zeta ^{\prime }\zeta }_{-\mathbf {q}}+ \Omega _{-\mathbf {q}}\delta \mathcal {C}^{\zeta \zeta ^{\prime }}_{\mathbf {q}}\\ &+ i\hbar \sqrt{2\gamma ^{\rm p}}\delta _{\mathbf {q},0}\big [*{a^{\rm in\dagger }_{\zeta }}\delta a^\dagger _{\zeta ^{\prime },0}+ *{a^{\rm in\dagger }_{\zeta ^{\prime }}}\delta a^\dagger _{\zeta ,0}\\&\hspace{56.9055pt}+ *{a^\dagger _{\zeta ,0}}\delta a^{\rm in\dagger }_{\zeta ^{\prime }}+ *{a^\dagger _{\zeta ^{\prime },0}}\delta a^{\rm in\dagger }_{\zeta }\big ].\end{split}$ Being interested in the steady-state emission properties, we consider the limit of $t\rightarrow \infty $ and write all expectation values using their discrete Fourier series (see Eq.", "(REF )).", "We then transform the fluctuation equations to Fourier space as $\delta Q(\omega ) = \int _{-\infty }^{\infty }{t}e^{i\omega t}\delta Q(t)$ , where $\delta Q$ is any of the fluctuation operators.", "Terms on the right-hand side of Eq.", "(REF ) of the form $\delta Q(t)Y(t)$ transform as $\sum _m\delta Q(\omega -m\omega _{12})Y_m$ , where $Y$ is a steady-state expectation value and $Y_m$ is its discrete Fourier decomposition from Eq.", "(REF ).", "Thus, the fluctuation equations are not diagonal in frequency space, since $\delta Q(\omega )$ is coupled to $\delta Q(\omega +n\omega _{12})$ for $n\in \mathbb {Z}$ .", "Figure: (a) Partitioning of the continuous frequency axis for the fluctuation operators $\delta Q(\omega )$ into zones of width $\omega _{12}$ .", "The continuous frequency $\nu $ is defined to be in the interval $\nu \in [-\omega _{12}/2,\omega _{12}/2]$ , and the Fourier index $m$ indicates the zone on the frequency axis, such that $\delta Q_m(\nu ):=\delta Q(\nu -m\omega _{12})$ .", "(b) The frequency-axis partitioning can be illustrated as a stacking of the frequency zones, such that the continuous frequency $\nu $ is on the horizontal axis and the discrete Fourier index $m$ is on the vertical axis.", "(c) In this stacked representation, the dynamics of the Heisenberg-Langevin equation Eq.", "() only couples fluctuations that are on the same vertical line, i.e.", "the equations of motion are diagonal in $\nu $ .", "The couplings of Eq.", "() are indicated with blue arrows.", "To handle this challenge, we partition the frequency axis into zones of size $\omega _{12}$ and define the frequency $\nu $ to be in the interval $[-\omega _{12}/2,\omega _{12}/2]$ .", "We then introduce the discrete Fourier-index notation $\delta Q_m(\nu ) := \delta Q(\nu -m\omega _{12})$ [see Fig.", "REF (a)–(b)].", "Now the Heisenberg-Langevin equations are diagonal in $\nu $ , i.e.", "$\delta Q_m(\nu )$ is not coupled to $\delta Q_{m^{\prime }}(\nu ^{\prime })$ with $\nu ^{\prime }\ne \nu $ , but only to $\delta Q_{m^{\prime }}(\nu )$ .", "Instead, the frequency off-diagonal coupling in the fluctuation equations appears as a coupling
between different Fourier indices [see Fig.", "REF (c)].", "At the same time, we suppress the momentum-index 0 on $\delta P$ and $\delta a$ and use only the Fourier index, such that $\delta P^\dagger _{\zeta ,m}(\nu )$ and $\delta a^\dagger _{\zeta ,m}(\nu )$ implicitly refer to the zero-momentum exciton and cavity fluctuation operators.", "By solving the frequency-space equations for $\delta \mathcal {B}^{\zeta \zeta ^{\prime }}_{\mu ,\pm },\;\delta \mathcal {C}^{\zeta \zeta ^{\prime }}_\mathbf {q}$ and $\delta \mathcal {D}^{\zeta \zeta ^{\prime }}_\mathbf {q}$ formally (see Appendix ), they are eliminated and replaced by a renormalization of the equation for $\delta P^\dagger _{\zeta ,m}(\nu )$ .", "For a compact notation, we introduce the combined fluctuation vector $\delta \psi _{\zeta ,m}(\nu ) = [\delta a^\dagger _{\zeta ,m}(\nu ), \delta P^\dagger _{\zeta ,m}(\nu ),\delta a_{\zeta ,m}(\nu ), \delta P_{\zeta ,m}(\nu )]^{\rm T}.$ The frequency-space Heisenberg-Langevin equation for $\delta \psi $ reads $\sum _{\zeta ^{\prime } m^{\prime }} [G^{-1}(\nu )]^{mm^{\prime }}_{\zeta \zeta ^{\prime }} \delta \psi _{\zeta ^{\prime },m^{\prime }}(\nu )= T^{mm^{\prime }}_{\zeta \zeta ^{\prime }}(\nu )\delta \psi ^{\rm in}_{\zeta ^{\prime },m^{\prime }}(\nu ),$ where the inverse Green's function is given by $\begin{split}&[G^{-1}(\nu )]^{mm^{\prime }}_{\zeta \zeta ^{\prime }}= \delta _{mm^{\prime }}\delta _{\zeta \zeta ^{\prime }}\mathbb {I}_{4}\hbar \omega _{\rm r}\\&+\delta _{mm^{\prime }}\delta _{\zeta \zeta ^{\prime }}\!\!", "[-\hbar \nu _m - \tilde{E}^{\rm p}_0&0&0&0\\0&-\hbar \nu _m - \tilde{E}^{\rm x}_0&0&0\\0&0&\hbar \nu _m - \tilde{E}^{\rm p*}_0&0\\0&0&0&\hbar \nu _m - \tilde{E}^{\rm x*}_0]\\&- [0 & \delta _{mm^{\prime }}\delta _{\zeta \zeta ^{\prime }}\Omega _0 & 0 & 0\\\hat{\Omega }^{mm^{\prime }}_{\zeta \zeta ^{\prime },0}(\nu ) & \Sigma ^{mm^{\prime }}_{\zeta \zeta ^{\prime }}(\nu ) & 0 & \Delta ^{m^{\prime }-m}_{\zeta \zeta ^{\prime }}\\0&0&0& \Omega _0\delta _{mm^{\prime }}\delta _{\zeta \zeta ^{\prime }}\\0& \Delta ^{m-m^{\prime }*}_{\zeta \zeta ^{\prime }}&\hat{\Omega }^{-m,-m^{\prime }}_{\zeta \zeta ^{\prime },0}(-\nu )&\Sigma ^{-m,-m^{\prime }}_{\zeta \zeta ^{\prime }}(-\nu )],\end{split}$ where $\mathbb {I}_4$ is the $4\times 4$ identity matrix, $\nu _m:=\nu -m\omega _{12}$ , $\Sigma ^{mm^{\prime }}_{\zeta \zeta ^{\prime }}(\nu )$ is the exciton self-energy, and $\hat{\Omega }_{\zeta \zeta ^{\prime },0}^{mm^{\prime }}$ is a renormalized exciton-photon coupling strength.", "The latter two stem from renormalizations due to the formal elimination of the multiparticle fluctuations and are given in Appendix .", "These renormalizations give small quantitative corrections to the spontaneous four-wave mixing spectra.", "Most importantly, $\Delta ^{m}_{\zeta \zeta ^{\prime }}$ is the parametric gain connecting different Fourier components $m_1$ and $m_1+m$ , which represents the core mechanism behind spontaneous four-wave mixing.", "It is given by $\begin{split}\Delta _{\zeta \zeta ^{\prime }}^m &= \sum _{n}\delta _{\zeta ,\zeta ^{\prime }}W^0 *{P_{\zeta ,0}^\dagger }_{n}*{P_{\zeta ,0}^\dagger }_{m-n}+ \sum _{\mu \pm } W_{\mu }^\pm \mathcal {B}^{\zeta \zeta ^{\prime }}_{\mu \pm ,m}\\&-\delta _{\zeta ,\zeta ^{\prime }}\sum _\mathbf {q}\tilde{\Omega
}_\\mathbf {q}\\big [ \\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q},m}+\\delta _{\\mathbf {q},0}\\sum _{n}*{a_{\\zeta ,0}^\\dagger }_{n}*{P_{\\zeta ,0}^\\dagger }_{m-n} \\big ].\\end{split}$ The parametric gain describes a coherent pairwise driving of the excitons, and is a purely a result of the nonlinearities in the system, specifically the Coulomb interaction and the Pauli-blocking from the fermionic substructure of excitons.", "Moreover, without parametric gain, the generated field would have the same spectral and coherence properties as the classical input field.", "The parametric gain contains terms from the instantaneous Coulomb interaction between excitons ($W^0$ ), biexcitonic correlations ($W^\\pm _\\mu $ ) from the bound biexciton ($\\mu =\\mathrm {b}$ ) and two-exciton continuum ($\\mu \\ne \\mathrm {b}$ ), and Pauli-blocking $\\tilde{\\Omega }_\\mathbf {q}$ .", "All matrix elements and coefficients are given in Appendices  and .", "The matrix $T^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu )$ describes the coupling to the input field, $\\delta \\psi ^{\\rm in}_{\\zeta ,m}(\\nu )=[\\delta a^{\\rm in\\dagger }_{\\zeta ,m}(\\nu ), \\delta P^{\\rm in\\dagger }_{\\zeta ,m}(\\nu ),\\delta a^{\\rm in}_{\\zeta ,m}(\\nu ), \\delta P^{\\rm in}_{\\zeta ,m}(\\nu )]$ .", "The contribution to $T^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu )$ in zeroth order of the input field is given by $T^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu ) = i\\hbar \\delta _{mm^{\\prime }}\\delta _{\\zeta \\zeta ^{\\prime }}\\:{\\rm diag}[\\sqrt{2\\gamma ^{\\rm p}},\\sqrt{2\\gamma ^{\\rm x}},-\\sqrt{2\\gamma ^{\\rm p}},-\\sqrt{2\\gamma ^{\\rm p}}]$ .", "Due to elimination of the multiparticle fluctuations, additional second-order contributions are also present, which are given in Appendix .", "The formal solution of Eq.", "(REF ) is given by $\\delta \\psi _{m,\\zeta }(\\nu ) = \\sum _{m^{\\prime }\\zeta ^{\\prime }}\\mathcal {G}^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu )\\delta \\psi ^{\\rm in}_{\\zeta ^{\\prime }m^{\\prime }}(\\nu ),$ where $\\mathcal {G}^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu ) = \\sum _{m^{\\prime \\prime }\\zeta ^{\\prime \\prime }}G^{mm^{\\prime \\prime }}_{\\zeta \\zeta ^{\\prime \\prime }}(\\nu )T^{m^{\\prime \\prime }m^{\\prime }}_{\\zeta ^{\\prime \\prime }\\zeta ^{\\prime }}(\\nu )$ .", "We note that in practise, the renormalizations from formal elimination of the multiparticle fluctuations $\\delta \\mathcal {B},\\;\\delta \\mathcal {C}$ and $\\delta \\mathcal {D}$ influence the numerical calculations presented in this paper only with small quantitative corrections.", "This means that in many cases, one can neglect $\\delta \\mathcal {B}$ and $\\delta \\mathcal {C}$ in Eq.", "(REF ), leading to the simplifications $\\begin{split}\\Sigma ^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu )&=0, \\\\\\hat{\\Omega }^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }0}(\\nu ) &= \\Omega _0\\delta _{\\zeta \\zeta ^{\\prime }}\\delta _{mm^{\\prime }}\\\\T^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu ) &= i\\hbar \\delta _{mm^{\\prime }}\\delta _{\\zeta \\zeta ^{\\prime }}\\:{\\rm diag}[\\sqrt{2\\gamma ^{\\rm p}},\\sqrt{2\\gamma ^{\\rm x}},-\\sqrt{2\\gamma ^{\\rm p}},-\\sqrt{2\\gamma ^{\\rm p}}].\\end{split}$ This is demonstrated explicitly in Appendix , where we compare the full calculation of fluctuation spectra with the calculation under the simplifications in Eq.", "(REF ).", "All calculations presented in the main text have been performed using the 
full contributions of Eq.", "(REF ).", "However, the negligible influence of multiparticle fluctuations supports the linearization of the Heisenberg-Langevin equations, since products such as $\\delta \\mathcal {C}\\delta P$ can be expected to have an even smaller effect than the terms linear in $\\delta \\mathcal {C}$ .", "This fact does not mean that many-body correlation effects are unimportant: indeed, the dominating contribution to the parametric gain stems from the bound biexciton, $\\mathcal {B}_{\\rm b,-}^{KK^{\\prime }}$ ." ], [ "Emission spectrum and squeezing", "The detected field is the output from the cavity perpendicular to the surface, i.e.", "at zero in-plane momentum $\\mathbf {k}=0$ .", "We take the field to be polarisation-filtered before detection and denote the polarisation-projected cavity operator by $a = \\mathbf {\\lambda }_{\\rm out}^\\mathrm {T}\\mathbf {a}_0$  [44], where $\\mathbf {a}_0 = (a_{K,0}, a_{K^{\\prime },0})^{\\rm T}$ and $\\mathbf {\\lambda }_{\\rm out}$ is the polarisation of the detection channel in the circular basis.", "The corresponding fluctuation operator is $\\delta a(t) = \\lim _{t\\rightarrow \\infty }[a(t)-*{a(t)}]$ ." ], [ "Spectral correlation functions", "The emission properties of the generated field from the cavity are characterized by two spectral correlation functions $\\begin{split}*{\\delta a_m^\\dagger (\\nu )\\delta a_{m^{\\prime }}(\\nu ^{\\prime })} &=\\sum _{\\zeta \\zeta ^{\\prime }}\\sum _{\\zeta _1\\zeta _2}\\sum _{m_1m_2}\\sum _{j_1j_2}\\lambda _{\\rm out}^\\zeta \\lambda _{\\rm out}^{\\zeta ^{\\prime }}[\\mathcal {G}_{\\zeta \\zeta _1}^{mm_1}(\\nu )]_{1,j_1}\\\\ &\\times [\\mathcal {G}_{\\zeta ^{\\prime }\\zeta _2}^{m^{\\prime }m_2}(\\nu ^{\\prime })]_{3,j_2}*{\\delta \\psi ^{{\\rm in},j_1}_{\\zeta _1 m_1}(\\nu )\\delta \\psi ^{{\\rm in},j_2}_{\\zeta _2 m_2}(\\nu ^{\\prime })}\\end{split}$ $\\begin{split}*{\\delta a_m(\\nu )\\delta a_{m^{\\prime }}(\\nu ^{\\prime })} &=\\sum _{\\zeta \\zeta ^{\\prime }}\\sum _{\\zeta _1\\zeta _2}\\sum _{m_1m_2}\\sum _{j_1j_2}\\lambda _{\\rm out}^\\zeta \\lambda _{\\rm out}^{\\zeta ^{\\prime }}[\\mathcal {G}_{\\zeta \\zeta _1}^{mm_1}(\\nu )]_{3,j_1}\\\\ &\\times [\\mathcal {G}_{\\zeta ^{\\prime }\\zeta _2}^{m^{\\prime }m_2}(\\nu ^{\\prime })]_{3,j_2}*{\\delta \\psi ^{{\\rm in},j_1}_{\\zeta _1 m_1}(\\nu )\\delta \\psi ^{{\\rm in},j_2}_{\\zeta _2 m_2}(\\nu ^{\\prime })}\\end{split}$ where $j\\in [1,4]$ is the vector index of $\\delta \\psi $ as defined in Eq.", "(REF ).", "As shown in Sections REF and REF , the emission spectrum is described through the correlation function in Eq.", "(REF ), whereas the squeezing spectrum is described through both correlation functions in Eqs.", "(REF ) and (REF ).", "The spectral correlation of the input field is derived from the temporal correlations from Sec.", "REF as $*{\\delta \\psi ^{{\\rm in},j}_{\\zeta m}(\\nu )\\delta \\psi ^{{\\rm in},j^{\\prime }}_{\\zeta ^{\\prime } m^{\\prime }}(\\nu ^{\\prime })} = 2\\pi \\delta _{\\zeta \\zeta ^{\\prime }}\\delta _{m,-m^{\\prime }}\\eta _{jj^{\\prime }}\\delta (\\nu +\\nu ^{\\prime }),$ where $\\eta _{jj^{\\prime }} = \\delta _{j^{\\prime },1}\\delta _{j,3} + \\delta _{j^{\\prime },2}\\delta _{j,4}$ .", "Although no correlations are present between photon and exciton input fields, such correlations will generally be nonzero in the internal field $\\delta \\psi $ .", "With this, we obtain $*{\\delta a_{-m}^\\dagger (-\\nu )\\delta a_{m^{\\prime }}(\\nu ^{\\prime })} &= 2\\pi S_1^{mm^{\\prime }}(\\nu )\\delta 
(\\nu -\\nu ^{\\prime })\\\\*{\\delta a_{-m}(-\\nu )\\delta a_{m^{\\prime }}(\\nu ^{\\prime })} &= 2\\pi S_2^{mm^{\\prime }}(\\nu )\\delta (\\nu -\\nu ^{\\prime }),$ where $\\begin{split}S_1^{mm^{\\prime }}(\\nu ) &= \\sum _{\\zeta \\zeta ^{\\prime }}\\sum _{\\zeta _1 m_1}\\sum _{j_1j_2}\\lambda _{\\rm out}^\\zeta \\lambda _{\\rm out}^{\\zeta ^{\\prime }}[\\mathcal {G}_{\\zeta \\zeta _1}^{-m,-m_1}(-\\nu )]_{1,j_1}\\\\ &\\hspace{71.13188pt}\\times [\\mathcal {G}_{\\zeta ^{\\prime }\\zeta _1}^{m^{\\prime },m_1}(\\nu )]_{3,j_2}\\eta _{j_1 j_2}\\\\S_2^{mm^{\\prime }}(\\nu ) &= \\sum _{\\zeta \\zeta ^{\\prime }}\\sum _{\\zeta _1 m_1}\\sum _{j_1j_2}\\lambda _{\\rm out}^\\zeta \\lambda _{\\rm out}^{\\zeta ^{\\prime }}[\\mathcal {G}_{\\zeta \\zeta _1}^{-m,-m_1}(-\\nu )]_{3,j_1}\\\\ &\\hspace{71.13188pt}\\times [\\mathcal {G}_{\\zeta ^{\\prime }\\zeta _1}^{m^{\\prime },m_1}(\\nu )]_{3,j_2}\\eta _{j_1 j_2}.\\end{split}$ For convenience, we shall use the shorthand notation $S_i(\\nu -m\\omega _{12}) := S^{mm}_i(\\nu ),$ in order to label the fluctuation spectra on the continuous frequency axis rather than in the Fourier-zone partitioning.", "This relation can alternatively be cast as $S_i(\\omega ) := \\sum _m\\int {\\nu } S_i^{mm}(\\nu )\\delta (\\omega -(\\nu -m\\omega _{12}))$ ." ], [ "Emission spectrum", "The emission spectrum of the outcoupled field $S_{\\rm tot}(\\omega ) = S_{\\rm coh}(\\omega ) + S_{\\rm FWM}(\\omega )$ contains a coherent contribution $S_{\\rm coh}=S_{\\rm coh,1}\\delta (\\omega -\\omega _1) + S_{\\rm coh,2}\\delta (\\omega -\\omega _2)$ , which shares the spectral distribution and temporal coherence of the driving field [43], [44], and a spontaneously generated field $S_{\\rm FWM}$ , which is created by spontaneous four-wave mixing.", "The former is of minor interest in the present investigation, because it simply shares the quantum statistics with the classical input field.", "The primary emission spectrum of interest is the spontaneous four-wave mixing spectrum, which is given by $S_{\\rm FWM}(\\omega ) = 2\\gamma ^{\\rm p}S_1(\\omega ),$ where $S_1(\\omega )$ is defined in Eq.", "(REF )." 
], [ "Homodyne noise spectrum and quadrature squeezing", "Quadrature squeezing is described through homodyne detection of the output field from the cavity (see Fig.", "REF ).", "While the quadrature squeezing of the internal cavity mode can be evaluated simply as the variance of the quadrature operator $\\delta X(\\theta ) := e^{i\\theta }\\delta a^\\dagger + e^{-i\\theta }\\delta a$ with respect to the quadrature phase $\\theta $ , the outcoupled and thus measurable field quadrature fluctuations are more complicated [44], [63].", "In homodyne detection, the source field from the cavity $E_{\\rm s}(t) = \\sqrt{2\\gamma ^{\\rm p}} a(t)$ is mixed with a strong local oscillator $E_{\\rm lo}(t)$ on a beamsplitter, such that the fields leaving the beamsplitter $E_1$ and $E_2$ are given by $[E_1(t) \\\\ E_2(t)] = \\frac{1}{\\sqrt{2}}[1&i\\\\i&1][E_{\\rm s}(t) \\\\ E_{\\rm lo}(t)].$ The local oscillator has a frequency equal to the rotating frame frequency $\\frac{1}{2}(\\omega _1+\\omega _2)$ .", "Thus, when working in the rotating frame, the local oscillator expectation values are time-independent: $*{E_{\\rm lo}(t)}=e^{i\\varphi }\\sqrt{F_{\\rm lo}}$ , where $F_{\\rm lo}$ and $\\varphi $ are the photon flux and reference phase of the local oscillator, respectively, and the expectation value is evaluated in the rotating frame.", "Taking the local oscillator to be in a coherent state, we have $*{E_{\\rm lo}^\\dagger (t) E_{\\rm lo}(t)} = F_{\\rm lo}$ .", "In the homodyne measurement process, the two output fields $E_1$ and $E_2$ are measured on separate photodetectors (which are labeled with corresponding indices 1 and 2) and the difference of these photocurrents are recorded as the signal.", "The statistical properties of this photocurrent difference are derived using normal-ordered detection theory [64], [65], [66], [67], [63].", "Here, we extend the derivation of Ref.", "carmichael1987spectrum to the case of balanced homodyne detection and to bichromatic driving, where no time-independent stationary source field exists.", "The detection model is based on the assumption that a single photoelectric detection event produces a current pulse of duration $\\tau _{\\rm d}$ and amplitude $ge/\\tau _{\\rm d}$ , where $e$ and $g$ denote the electronic charge and photodetector gain, respectively.", "The photocurrents of detector $\\mu =1,2$ is then given by [65], [63] ${I}_\\mu (t)=\\frac{ge}{\\tau _{\\rm d}}n_\\mu ,$ where $n_{\\mu }$ is a classical stochastic variable representing the number of overlapping pulses in the detection electronics of detector $\\mu $ , i.e.", "the number of pulses initiated in the time interval $t-\\tau _{\\rm d}$ to $t$ .", "For practical purposes, we shall work with a scaled current, $I_\\mu (t):={I}_\\mu (t)/(ge)$ , which has units of inverse time and represents the flux of detected photons.", "The joint probability of detecting $n_{1}$ photons in detector 1 and $n_{2}$ in detector 2 within the time interval $[t-\\tau _{\\rm d},t]$ is given by [65], [63] $\\begin{split}p^{(1)}(n_1, &t-\\tau _{\\rm d}, t; n_2, t-\\tau _{\\rm d}, t) \\\\ &={:\\prod _{\\mu =1,2}\\frac{ [{G}_\\mu (t-\\tau _{\\rm d},t)]^{n_{\\mu }}}{n_{\\mu }!", "}e^{-{G}_\\mu (t-\\tau _{\\rm d},t)}:},\\end{split}$ where the generator ${G}$ of the distribution is given by ${G}_\\mu (t_1,t_2)=\\eta \\int _{t_1}^{t_2}{t} E_\\mu ^\\dagger (t)E_\\mu (t),$ and $\\eta $ is the detection efficiency.", "The symbol $::$ denotes normal- and time-ordering at the level of the electric field operators ($E^\\dagger _\\mu $ 
arranged to the left of $E_\\mu $ and time arguments increasing to the right (left) in products of $E_\\mu ^\\dagger $ ($E_\\mu $ )).", "The probability distribution Eq.", "(REF ) is essentially Mandel's counting formula, as presented in e.g.", "Ref. [44].", "From $p^{(1)}$ , the mean photocurrent is derived as [63] $\\begin{split}\\overline{I_\\mu (t)} &=\\frac{1}{\\tau _{\\rm d}} {{G}_\\mu (t-\\tau _{\\rm d},t)},\\\\&=\\frac{\\eta }{\\tau _{\\rm d}}\\int _{t-\\tau _{\\rm d}}^t{t^{\\prime }} *{E_\\mu ^\\dagger (t^{\\prime }) E_\\mu (t^{\\prime })}\\end{split}$ where we note that we use overlines to denote the mean value of the classical stochastic variable $I_\\mu $ and angular brackets to denote quantum-mechanical expectation values.", "Similarly, the two-time probability of detecting $n$ counts in detector $\\mu $ in the time interval $[t-\\tau _{\\rm d},t]$ and $m$ counts in detector $\\mu ^{\\prime }$ in the time interval $[t+\\tau -\\tau _{\\rm d},t+\\tau ]$ is given by [63] $\\begin{split}&p^{(2)}_{\\mu \\mu ^{\\prime }}(n,t-\\tau _{\\rm d},t;m,t+\\tau -\\tau _{\\rm d},t+\\tau )\\\\ &= \\Bigg \\langle :\\frac{{G}_\\mu (t-\\tau _{\\rm d},t)^n}{n!", "}e^{-{G}_\\mu (t-\\tau _{\\rm d},t)}\\\\ &\\hspace{28.45274pt}\\times \\frac{{G}_{\\mu ^{\\prime }}(t+\\tau -\\tau _{\\rm d},t+\\tau )^m}{m!", "}e^{-{G}_{\\mu ^{\\prime }}(t+\\tau -\\tau _{\\rm d},t+\\tau )}:\\Bigg \\rangle .\\end{split}$ From this probability, the detector current correlation function is found to be (for $\\tau >0$ ) $\\begin{split}\\overline{I_\\mu (t)I_{\\mu ^{\\prime }}(t+\\tau )}&= \\\\ (\\frac{1}{\\tau _{\\rm d}})^2&\\Big [{:{G}_\\mu (t-\\tau _{\\rm d},t){G}_{\\mu ^{\\prime }}(t+\\tau -\\tau _{\\rm d},t+\\tau ):} \\\\&+ \\delta _{\\mu \\mu ^{\\prime }}\\Theta (\\tau _{\\rm d}-\\tau ){{G}_\\mu (t+\\tau -\\tau _{\\rm d},t)}\\Big ],\\end{split}$ where $\\Theta $ is the Heaviside function.", "The measured spectral noise function $N(\\omega )$ of the homodyne signal $I_-:=I_1-I_2$ is then the Fourier transformation of the photocurrent fluctuation correlation function, averaged over one period of the signal oscillation $T=2\\pi /{\\omega _{12}}=4\\pi /{\\omega _2-\\omega _1}$ : $\\begin{split}N(\\omega ,\\theta ) = &\\lim _{t_0\\rightarrow \\infty }\\frac{1}{T}\\int _{t_0}^{t_0+T}{t}\\int _0^\\infty {\\tau } \\cos (\\omega \\tau ) \\\\ &\\hspace{56.9055pt}\\times [\\overline{I_-(t)I_-(t+\\tau )}- \\overline{I_-(t)}\\;\\;\\overline{I_-(t+\\tau )}].\\end{split}$ From Eqs.", "(REF ), (REF ) and (REF ), the following expression for the photocurrent fluctuation correlation function is derived (see Appendix ) $\\begin{split}&\\overline{I_-(t)I_-(t+\\tau )} -\\overline{I_-(t)}\\;\\;\\overline{I_-(t+\\tau )}\\\\&= N_0(\\tau ) + \\frac{\\eta ^2F_{\\rm lo}}{\\tau _{\\rm d}^2}\\int _{t-\\tau _{\\rm d}}^t\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int _{t-\\tau _{\\rm d}}^{t}\\!\\!\\!\\!\\!\\!", "{t^{\\prime \\prime }}*{:\\delta X_{\\rm s}(\\theta ,t^{\\prime })\\delta X_{\\rm s}(\\theta ,t^{\\prime \\prime }+\\tau ):},\\end{split}$ where $X_{\\rm s}(\\theta ,t) = e^{i\\theta } E_{\\rm s}^\\dagger (t) + e^{-i\\theta }E_{\\rm s}(t)$ is the source-field quadrature operator with angle $\\theta =\\varphi +\\pi /2$ and its fluctuation operator is $\\delta X_{\\rm s}(\\theta ,t) := X_{\\rm s}(\\theta ,t)-*{X_{\\rm s}(\\theta ,t)}$ .", "The measured quadrature angle $\\theta $ stems from the local oscillator phase $\\varphi $ and a phase displacement $\\pi /2$ from the beamsplitter as seen in Eq.", "(REF ).", "The first term in Eq.", "(REF ) is 
the shot noise correlation function, which in the limit where the local oscillator is much stronger than the source field is given by $N_0(\tau ) = \frac{\eta F_{\rm lo}}{\tau _{\rm d}^2}(\tau _{\rm d}-\tau )\Theta (\tau _{\rm d}-\tau ).$ In order to characterize the intrinsic noise properties of the source field and to disentangle these properties from the detector properties, we take the limit of infinite detection bandwidth ($\tau _{\rm d}\rightarrow 0$ ) and unity detection efficiency ($\eta =1$ ).", "In this limit, inserting Eq.", "(REF ) into Eq.", "(REF ) yields the noise spectrum relative to the shot-noise level (see Appendix ) $\frac{N(\omega ,\theta )}{N_0(\omega )} = 1+\Lambda (\omega ,\theta ),$ where $N_0(\omega )=\int _0^\infty {\tau }\cos (\omega \tau )N_0(\tau )$ is the shot noise spectrum and $\Lambda (\omega ,\theta ) = {\rm Re}[\Lambda _1(\omega ) + e^{-2i\theta }\Lambda _2(\omega )],$ where $\Lambda _i(\omega ) = 4\int _0^\infty {\tau }\cos (\omega \tau )C_i(\tau )$ and $C_i(\tau ) = \frac{1}{2\pi }\int _{-\infty }^\infty {\omega }e^{i\omega \tau }2\gamma ^{\rm p}S_i(\omega )$ .", "All details of the derivation are given in Appendix .", "The optimal quadrature angle, where the noise is minimized, is given by $e^{-2i\theta }=-\Lambda _2^{*}(\omega )/|\Lambda _2(\omega )|$ .", "At this angle, the spectrum of squeezing is $\Lambda (\omega ) = {\rm Re}[\Lambda _1(\omega )] - |\Lambda _2(\omega )|,$ where the homodyne phase $\theta $ has been suppressed in $\Lambda (\omega ,\theta )$ for notational brevity.", "In the analysis of squeezing, we shall use the normalized optimal-angle spectrum $1+\Lambda (\omega )$ to characterize the amount of squeezing in the generated light.", "This spectrum is normalized such that a value of $1+\Lambda (\omega )=1$ corresponds to the shot noise level, i.e.", "no squeezing at all, whereas a value of $1+\Lambda (\omega )=0$ corresponds to complete elimination of the shot noise and thus perfect squeezing.", "We note that the spontaneous four-wave mixing spectrum $S_{\rm FWM}(\omega )$ and the squeezing spectrum $\Lambda (\omega )$ are of a fundamentally different nature and are measured with very different techniques.", "The spontaneous four-wave mixing spectrum $S_{\rm FWM}(\omega )$ can be detected by filtering out the bichromatic pump from the generated light and sending the filtered signal into an optical spectrum analyser.", "The frequency argument $\omega $ is connected to the emitted photon energy on the order of 2 eV.", "In contrast, the squeezing spectrum $\Lambda (\omega )$ is measured through homodyne detection, i.e.", "by beating the generated signal with a local oscillator at frequency $\omega _{\rm r}:=(\omega _1+\omega _2)/2$ .", "Here, the frequency argument $\omega $ is the beat frequency between the signal and the local oscillator, and is thus much smaller, on the order of a few meV."
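To make the above chain of definitions concrete, a minimal numerical sketch of the post-processing from the fluctuation spectra $S_1(\omega )$ and $S_2(\omega )$ to the optimal-angle squeezing spectrum $1+\Lambda (\omega )$ is given below. It is illustrative only: the input arrays are assumed to come from the Green's-function calculation, and the discretization choices are ours.

```python
import numpy as np

# Sketch: from S_1(omega), S_2(omega) to the optimal-angle squeezing spectrum
# 1 + Lambda(omega), following C_i, Lambda_i and Lambda(omega) = Re[Lambda_1] - |Lambda_2|.
# Assumes S1, S2 are sampled on the grid `omega` and that C_i(tau) has decayed by tau_max.
def squeezing_spectrum(omega, S1, S2, gamma_p, tau_max, n_tau=2000):
    domega = omega[1] - omega[0]
    tau = np.linspace(0.0, tau_max, n_tau)
    dtau = tau[1] - tau[0]

    def C(S):   # C_i(tau) = (1/2pi) * Int domega' exp(i omega' tau) 2 gamma_p S_i(omega')
        return (2.0 * gamma_p / (2.0 * np.pi)) * (np.exp(1j * np.outer(tau, omega)) @ S) * domega

    def Lam(Ct):  # Lambda_i(omega) = 4 * Int_0^inf dtau cos(omega tau) C_i(tau)
        return 4.0 * (np.cos(np.outer(omega, tau)) @ Ct) * dtau

    L1, L2 = Lam(C(S1)), Lam(C(S2))
    return 1.0 + L1.real - np.abs(L2)   # optimal quadrature angle already taken analytically
```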
], [ "Results", "By numerically solving the equations of motion Eq.", "(REF ) explicitly to reach the periodic steady state, the corresponding Fourier components are calculated from the resulting time series over a steady-state period from Eq.", "(REF ).", "The steady-state Fourier components are then used to construct the fluctuation Green's function from Eq.", "(REF ), and calculate the spontaneous four-wave mixing spectrum $S_{\\rm FWM}(\\omega )$ and the squeezing spectrum $\\Lambda (\\omega )$ as described in Sec.", "REF .", "In this section, we present these results for Configurations A and B, respectively, as shown in Fig REF (c)-(d).", "The numerical calculations become increasingly challenging as $\\hbar \\omega _{12}:=(\\omega _2-\\omega _1)/2$ approaches 0, because the oscillation period becomes longer and longer.", "However, in the case where $\\hbar \\omega _{12}=0$ , the spectra $S_{\\rm FWM}(\\omega )$ and $\\Lambda (\\omega )$ can be calculated using the simpler calculation methods for a monochromatic driving field described in Ref. denning2022efficient.", "For completeness, we include the results for $\\hbar \\omega _{12}=0$ in this way.", "The calculations are performed for atomically thin $\\mathrm {MoS_2}$ encapsulated by hexagonal BN on both sides.", "The coupling strength of the cavity is taken to be $\\Omega _0 = 20\\:{\\rm meV}$ and the cavity outcoupling rate is taken to be $\\hbar \\gamma ^{\\rm p} = 9\\;{\\rm meV}$ , consistent with fabricated devices [48], [68].", "The temperature is taken to be 30 K, leading to a phonon-induced exciton dephasing of $\\hbar \\gamma ^{\\rm x}=0.8\\;{\\rm meV}$ (see Appendix ).", "The pump polarization is taken to be linear, with the two drives having the same linear polarization, and the pump power is 5 mW with a pump spot size of 9 $\\mathrm {\\; \\mu m^2}$ .", "The detected polarization is taken to be co-linear with the drive, and we note that very similar results are seen for cross-polarized detection.", "Further details about the numerical calculations and all parameters are given in Appendix ." 
], [ "Configuration A", "Configuration A is defined by setting the cavity frequency such that the lower polariton energy $E^-_0$ matches half the bound biexciton energy $\\frac{1}{2}E^{\\rm xx}_{\\rm b,-}$ , and setting the mean value of the driving fields to $\\frac{1}{2}(\\hbar \\omega _1+\\hbar \\omega _2)=E^-_0=\\frac{1}{2}E^{\\rm xx}_{\\rm b,-}$ [cf.", "Fig.", "REF (c)].", "When the drive energy difference $\\hbar \\omega _{12}$ is zero, the degenerate driving field resonantly creates lower polaritons, which resonantly scatter into bound biexcitons via the Coulomb interaction.", "The latter process is also known as a polaritonic Feshbach resonance [1], [2], [29] and gives a strong enhancement of the nonlinear response, which in this case leads to strong spontaneous four-wave mixing and single-mode squeezing.", "Since the in-scattering particles in this configuration (lower polaritons) are identical, this is a degenerate Feshbach resonance.", "In Fig.", "REF (a), the spontaneous four-wave mixing spectrum is shown for four values of the drive frequency difference $\\hbar \\omega _{12}$ .", "The spectra feature multiple peaks, which we can identify by considering the resonant four-wave mixing channels.", "Due to energy conservation the energy of a generated photon pair with frequencies $\\omega _3$ and $\\omega _4$ must match the sum of two pump photon energies.", "With two different pump frequencies $\\omega _1$ and $\\omega _2$ , there are three possible combinations of pump photon pairs, meaning that the output photon pair frequencies must fulfill one of the relations $\\omega _3+\\omega _4 &= \\omega _1+\\omega _2 \\\\\\omega _3+\\omega _4 &= 2\\omega _1\\\\\\omega _3+\\omega _4 &= 2\\omega _2.$ The resonant output channels appear where one of the output photon energies equals a polariton energy.", "Thus, by setting $\\hbar \\omega _3=E^\\pm _0$ , the resonance conditions Eq.", "(REF ) become $\\omega _4 &= \\omega _1+\\omega _2-E^\\pm _0/\\hbar \\\\\\omega _4 &= 2\\omega _1 - E^\\pm _0/\\hbar \\\\\\omega _4 &= 2\\omega _2 - E^\\pm _0/\\hbar .$ These four-wave mixing resonance conditions give rise to a total of eight possible output photon frequencies (two values of $\\omega _3$ and three values of $\\omega _4$ for each of these).", "In Fig.", "REF (a), the resonances $E_0^\\pm $ and $\\hbar \\omega _1+\\hbar \\omega _2-E^\\pm _0$ are indicated with vertical grey lines.", "These resonance frequencies are independent of $\\hbar \\omega _{12}$ .", "The strongest (middle) resonance peak is the lower polariton energy $E^-_0$ , which in Configuration A matches the mean drive energy, $\\frac{1}{2}(\\hbar \\omega _1+\\hbar \\omega _2)$ .", "The corresponding photon pair partner from Eq.", "(REF ) has the same energy.", "The upper polariton resonance $E^+_0$ and its partner $\\hbar \\omega _1+\\hbar \\omega _2-E^+_0$ are positioned symmetrically around the lower polariton.", "Since the excitation of the upper polariton is much further from resonance, these peaks appear with a much weaker amplitude than the central lower polariton peak.", "In addition, the resonance condition in Eq.", "() for the lower polariton, $2\\hbar \\omega _2-E^-_0$ is also shown with vertical dotted lines.", "This resonance depends on $\\hbar \\omega _{12}$ and becomes degenerate with $E^-_0$ in the limit $\\hbar \\omega _{12}=0$ .", "For the largest value of $\\hbar \\omega _{12}$ , the resonance in Eq.", "() is also shown for the upper polariton, i.e.", "$2\\hbar \\omega _2-E^+_0$ , with a dashed vertical line.", 
"This resonance cannot be seen for the other values of $\\hbar \\omega _{12}$ : when $\\hbar \\omega _{12}$ is too small, the drive energies $\\hbar \\omega _1$ and $\\hbar \\omega _2$ are close to resonance with $E^-_0$ , but very far away from the upper polariton $E^+_0$ .", "When $\\hbar \\omega _{12}$ matches the polariton splitting $E^+_0-E^-_0$ , as is approximately the case in the panel with $\\hbar \\omega _{12}=35\\;\\mathrm {meV}$ , the emission line $\\hbar \\omega _2-E^+_0$ is degenerate with the dominating lower polariton $E^-_0$ .", "When $\\hbar \\omega _{12}$ is larger, the drives are far away from resonance with the lower polariton, meaning that the dominating polaritonic features (as indicated with vertical gray lines) are diminished, thereby revealing the much weaker peak at $\\hbar \\omega _2-E^+_0$ .", "Fig.", "REF (b) shows the homodyne squeezing spectrum as $1+\\Lambda (\\omega )$ .", "As discussed in Section REF , this spectrum is normalized such that a value of $1+\\Lambda (\\omega )=1$ corresponds to no squeezing at all, whereas a value of $1+\\Lambda (\\omega )=0$ corresponds perfect squeezing.", "Since the dominant emission channel in Configuration A is degenerate photon pairs at the lower polariton energy $E^-_0$ , the output field is single-mode squeezed, which is observed as a squeezing spectrum that is minimal (providing the strongest squeezing) at zero frequency.", "The squeezing bandwidth is several meV, which stems from the typical scale of the resonance linewidths of the cavity photons, excitons and biexcitons in the system [69].", "Figure: Results for Configuration B.", "(a) Spontaneous four-wave mixing spectrum for four values of the drive frequency difference ℏω 12 \\hbar \\omega _{12}.", "The vertical lines indicate four-wave mixing resonances as described in the main text.", "(b) Homodyne squeezing spectra as 1+Λ(ω)1+\\Lambda (\\omega ) with colours corresponding to the values of ℏω 12 \\hbar \\omega _{12} in panel (a).", "The vertical line indicates the sideband frequency ℏω=1 2(E 0 + -E 0 - )\\hbar \\omega =\\frac{1}{2}(E^+_0-E^-_0), where two-mode squeezing is observed.", "(c) Zero-frequency squeezing 1+Λ(0)1+\\Lambda (0) (solid line) and squeezing at the optimal homodyne frequency 1+Λ(ω opt )1+\\Lambda (\\omega _{\\rm opt}) (dotted line) as a function of the drive frequency difference ℏω 12 \\hbar \\omega _{12}.", "The vertical line indicates ℏω 12 =1 2(E 0 + -E 0 - )\\hbar \\omega _{12} = \\frac{1}{2}(E^+_0-E^-_0).", "(d) Fourier components m=0m=0 and m=2m=2 of parametric gain Δ m \\Delta ^m, shown as the sum of the norms of spin-diagonal and spin off-diagonal matrix elements.", "The full lines shows the total parametric gain, and the dashed lines show the contributions from the bound biexciton alone.", "The vertical line indicates ℏω 12 =1 2(E 0 + -E 0 - )\\hbar \\omega _{12}=\\frac{1}{2}(E^+_0-E^-_0).Fig.", "REF (c) shows the dependence of the squeezing at zero homodyne frequency as a function of the drive frequency difference $\\hbar \\omega _{12}$ .", "As expected for Configuration A, the squeezing is strongest in the degenerate limit $\\hbar \\omega _{12}=0$ , where both drives are resonant with the lower polariton.", "In addition, there is a weak resonance appearing at $\\hbar \\omega _{12}=E^+_0-E^-_0$ , where the drive frequency $\\omega _2$ is resonant with the upper polariton.", "The resonance is weak, because the energy of two upper polaritons overshoot the bound biexciton energy, $2E^+_0 \\gg E^{\\rm xx}_{\\rm b,-}$ .", "We note 
that the bichromatic-pump scheme can be utilised to generate strong single-mode squeezing.", "Since the width of the squeezing dip around $\hbar \omega _{12}=0$ in Fig.", "REF (c) is several meV, one can choose a pump frequency difference of e.g.", "$\hbar \omega _{12} = 100\;{\rm \mu eV}$ , which makes it possible to filter out the pump lasers spectrally and still have strong single-mode squeezing over a bandwidth that exceeds the resolution of any standard photodetector.", "In Fig.", "REF (d), the leading Fourier components $m=0$ and $m=2$ of the parametric gain $\mathbf {\Delta }^m$ are shown as the sum of the absolute values of the spin-diagonal and spin off-diagonal components.", "The $m=-2$ component is simply the complex conjugate of $m=2$ , and the remaining components are negligible in comparison to the ones shown.", "The reason behind this is twofold: first, the contributions to the parametric gain are products of $*{a^\dagger }$ and $*{P^\dagger }$ or quantities that are driven by such products, and therefore only contain even Fourier orders.", "Second, the Fourier components with $|m|$ larger than 2 are only driven by higher-order processes and are therefore strongly suppressed in comparison with the leading components.", "This observation corroborates the validity of the DCT perturbative expansion up to third order in the pump field.", "However, we cannot entirely rule out that the contributions at the next (fifth) order are of a different nature and could introduce new effects.", "In addition to the total parametric gain, the contribution from the bound biexciton alone is also shown with dashed lines.", "As can be seen, this constitutes the dominating contribution to the parametric gain, and thus the other contributions in Eq.", "(REF ) from Pauli blocking and from the two-exciton scattering continuum are small corrections."
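As an aside, the resonance bookkeeping used above to label the emission peaks, which is reused for Configuration B below, reduces to elementary energy conservation. A small illustrative sketch follows; the variable names and the choice of the polariton energies $E^\pm _0$ as the resonant output channels are assumptions on our part.

```python
# Sketch of the four-wave mixing resonance bookkeeping: for each polariton
# branch at which one output photon can land, the partner photon energy is
# fixed by energy conservation with one of the three pump-pair combinations.
def fwm_resonances(hw1, hw2, E_lp, E_up):
    """hw1, hw2: pump photon energies; E_lp, E_up: lower/upper polariton energies."""
    pump_pairs = [hw1 + hw2, 2 * hw1, 2 * hw2]
    lines = []
    for hw3 in (E_lp, E_up):
        for total in pump_pairs:
            lines.append((hw3, total - hw3))
    return lines  # the distinct energies in these pairs are the eight possible output lines
```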
], [ "Configuration B", "Configuration B is defined by setting the cavity frequency such that $E^+_0+E^-_0=E^{\\rm xx}_{\\rm b,-}$ (corresponding to $E^{\\rm p}_0=E^{\\rm xx}_{\\rm b,-}-E^{\\rm x}_0$ ) and the mean driving frequency such that $\\hbar \\omega _1+\\hbar \\omega _2=E^+_0+E^-_0=E^{\\rm xx}_{\\rm b,-}$ [see Fig.", "REF (d)].", "Thus, we expect the driving to be resonant, when the drive frequency difference $\\hbar \\omega _{12}=\\frac{1}{2}(E^+_0-E^-_0)$ , such that $\\hbar \\omega _{1}=E^-_0$ and $\\hbar \\omega _2=E^+_0$ .", "In this case, the drive resonantly excites upper and lower polaritons, which scatter resonantly into bound biexcitons via the Coulomb interaction.", "The latter process is a non-degenerate polaritonic Feshbach resonance, because the in-scattering particles (upper and lower polaritons) are different with non-degenerate energies.", "Fig.", "REF (a) shows the spontaneous four-wave mixing spectra for Configuration B at different drive-frequency differences $\\hbar \\omega _{12}$ .", "In contrast to Configuration A, the emission spectrum features two equally bright peaks from the upper and lower polariton, respectively.", "In terms of the four-wave mixing resonances introduced in Sec.", "REF , these peaks correspond to Eq.", "(REF ) with $\\hbar \\omega _3=E^+_0$ and thus $\\hbar \\omega _4 = \\hbar \\omega _1+\\hbar \\omega _2-E^+_0=E^-_0$ .", "These two main emission energies are indicated with grey vertical lines in Fig.", "REF (a).", "In addition, the weaker four-wave mixing resonance from Eq.", "() $2\\hbar \\omega _2-E^-_0$ is indicated with vertical dotted lines.", "When the drive energy difference becomes larger, the power of the main emission peaks is weakened, whereby the additional four-wave mixing resonances $2\\hbar \\omega _2-E^+_0$ and $2\\hbar \\omega _1-E^+_0$ become visible as well (indicated with vertical dashed and dash-dotted lines, respectively).", "In Fig.", "REF (b), the homodyne squeezing spectrum is shown.", "Here, the most remarkable difference from Configuration A is the appearance of maximal squeezing at the sideband frequency $\\frac{1}{2}(E^+_0-E^-_0)$ rather than at zero homodyne frequency.", "This is because the non-degenerate Feshbach resonance creates two-mode squeezing, i.e.", "squeezing from the strong correlations between two different frequency bands.", "This is strongly related to the two equally strong peaks in the spontaneous four-wave mixing spectrum from Fig.", "REF (a) in contrast to Fig.", "REF (a), which features a single strong central peak.", "Although the detection of such high-frequency two-mode squeezing is challenging in standard homodyne detection due to finite detection bandwidth, succesful detection can be carried out with bichromatic heterodyne detection, where the high-frequency sideband squeezing is mixed down to a low-frequency beat signal [70], [71].", "Fig.", "REF (c) shows the squeezing at zero homodyne frequency (solid line) and at the optimal homodyne sideband frequency (dotted line) as a function of the drive energy difference $\\hbar \\omega _{12}$ .", "The optimal sideband frequency is close to $\\frac{1}{2}(E^+_0-E^-_0)$ , but due to small nonlinear shifts, we have taken the numerically optimal homodyne frequency.", "The behaviour shows a resonance around $\\hbar \\omega _{12}=\\frac{1}{2}(E^+_0-E^-_0)$ (indicated with vertical line), where $\\hbar \\omega _1=E^-_0$ and $\\hbar \\omega _2=E^+_0$ and polaritons are efficiently excited by the drive fields.", "This resonance is also seen in 
Fig.", "REF (d), where the parametric gain is shown as a function of the drive energy difference $\\hbar \\omega _{12}$ .", "Here, it is seen that the parametric gain is strongest around $\\hbar \\omega _{12}=\\frac{1}{2}(E^+_0-E^-_0)$ .", "Furthermore, as is the case for Configuration A, the bound biexciton (dashed lines) dominates the parametric gain in this configuration." ], [ "Conclusion", "In conclusion, we have presented a theoretical investigation of spontaneous four-wave mixing and squeezing in bichromatically pumped atomically-thin semiconductor cavity polaritonic systems, in particular focusing on the strong nonlinear response from the bound biexciton.", "By applying a rigorous truncation scheme of the many-body state, we have derived a tractable set of equations of motions for exciton and photon fields, as well as the correlated multiparticle fields.", "In addition, we have employed a Heisenberg-Langevin approach to calculate the fluctuation spectra in the presence of two non-degenerate pump laser fields.", "The combination of these two methods gives access to the spontaneous four-wave mixing spectra and the squeezing properties of the outcoupled field from the cavity in the presence of strong Coulomb-generated correlations.", "We have focused on two resonant configurations, corresponding to a degenerate and non-degenerate polaritonic Feshbach resonance, respectively.", "In the degenerate configuration, a pair of lower polaritons is resonant with the bound biexciton, thereby giving rise to a single dominating peak in the spontaneous four-wave mixing spectrum and strong single-mode squeezing.", "In the non-degenerate configuration, a upper and lower polariton pair is resonant with the bound biexciton, thereby giving rise to two balanced peaks in the spontaneous four-wave mixing spectrum and strong two-mode squeezing.", "We believe that these results will open new opportunities in the cross-field between semiconductor physics and nonlinear optics.", "Although the numerical calculations in this paper have been performed for an atomically-thin semiconductor, the overall features of the presented phenomena can be expected to be seen in other semiconductor materials with spectrally resolved biexcitons as well.", "We note that a previous experimental investigation [4] measured parametric gain from biexcitons in a bulk CuCl microcavity in the near-UV spectral range, although no homodyne detection was performed in the experiment.", "Furthermore, ZnO quantum wells with biexciton binding energies around 15 meV [72] are another interesting platform to potentially observe the predicted squeezing mechanism in the near-UV spectrum.", "Polariton Feshbach resonance has been observed in pump-probe experiments with InGaAs quantum wells [29], [73], [13].", "E.V.D.", "acknowledges support from Independent Research Fund Denmark through an International Postdoc Fellowship (Grant No.", "0164-00014B).", "A.K.", "gratefully acknowledges support from the Deutsche Forschungsgemeinschaft through Projects No.", "420760124 (KN 427/11-1) and No.", "163436311—SFB 910 (Project B1)." ], [ "Matrix elements", "The equation of motion Eq.", "(REF ) is derived as in Ref.", "denning2022efficient, the only difference being the bichromatic input field as described in Sec.", "REF .", "Here, we shall simply state the results from Ref. 
denning2022efficient.", "The matrix elements in Eq.", "(REF ) are given by $\\begin{split}\\tilde{\\Omega }_\\mathbf {q}&= \\sum _{\\mathbf {k}_1} A_\\mathbf {q}(\\phi _{\\mathbf {k}_1}^*\\phi _{\\mathbf {k}_1+\\alpha \\mathbf {q}}\\phi _{\\mathbf {k}_1 + \\mathbf {q}}+\\phi _{\\mathbf {k}_1+\\mathbf {q}}^*\\phi _{\\mathbf {k}_1+\\alpha \\mathbf {q}}\\phi _{\\mathbf {k}_1}\\Big )\\\\W^0 &= \\sum _{\\mathbf {k}_1\\mathbf {k}_2} V_{\\mathbf {k}_2-\\mathbf {k}_1}\\phi _{\\mathbf {k}_1}\\phi _{\\mathbf {k}_1}(\\phi _{\\mathbf {k}_1}^*-\\phi _{\\mathbf {k}_2}^*)(\\phi _{\\mathbf {k}_1}^*-\\phi _{\\mathbf {k}_2}^*)\\\\W^\\pm _\\mu &= \\sum _\\mathbf {q}\\Phi ^\\pm _{\\mu ,\\mathbf {q}} \\tilde{W}^{\\pm *}_{\\mathbf {q},0}\\\\\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}W\\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _\\mu &= \\sum _{\\mathbf {q}\\mathbf {q}^{\\prime }} \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Phi \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,\\mathbf {q}}(\\mathcal {S}^\\pm )^{-1}_{\\mathbf {q},\\mathbf {q}^{\\prime }}\\tilde{W}^\\pm _{\\mathbf {q}^{\\prime },0}\\\\\\Omega ^\\pm _{\\mu ,\\mathbf {q}} &= \\sum _{\\mathbf {q}^{\\prime }} \\Phi ^\\pm _{\\mu ,-\\mathbf {q}^{\\prime }}\\tilde{A}^\\pm _{\\mathbf {q}^{\\prime },\\mathbf {q}}\\\\\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,\\mathbf {q}} &= \\sum _{\\mathbf {q}^{\\prime }\\mathbf {q}^{\\prime \\prime }}\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Phi \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,\\mathbf {q}^{\\prime }}(\\mathcal {S}^{\\pm })^{-1}_{\\mathbf {q}^{\\prime },\\mathbf {q}^{\\prime \\prime }} \\tilde{A}^{\\pm *}_{\\mathbf {q}^{\\prime \\prime },\\mathbf {q}},\\end{split}$ with $\\begin{split}\\tilde{W}_\\mathbf {q}^\\pm =V_\\mathbf {q}&\\sum _{\\mathbf {k}_1\\mathbf {k}_2}\\phi _{\\mathbf {k}_1}\\phi _{\\mathbf {k}_2}(\\phi ^*_{\\mathbf {k}_1-\\beta \\mathbf {q}} - \\phi ^*_{\\nu _1,\\mathbf {k}_1+\\alpha \\mathbf {q}})(\\phi ^*_{\\mathbf {k}_2+\\beta \\mathbf {q}} - \\phi ^*_{\\mathbf {k}_2-\\alpha \\mathbf {q}})\\\\\\pm &\\sum _{\\mathbf {k}_1\\mathbf {k}_2}V_{\\mathbf {k}_1-\\mathbf {k}_2+(\\alpha -\\beta )\\mathbf {q}}\\phi _{\\mathbf {k}_1}\\phi _{\\mathbf {k}_2}(\\phi ^*_{\\mathbf {k}_1-\\beta \\mathbf {q}} - \\phi ^*_{\\mathbf {k}_2-\\alpha \\mathbf {q}})\\\\&\\hspace{113.81102pt}\\times (\\phi ^*_{\\mathbf {k}_1+\\alpha \\mathbf {q}} - \\phi ^*_{\\mathbf {k}_2+\\beta \\mathbf {q}})\\\\\\tilde{A}_{\\mathbf {q}^{\\prime },\\mathbf {q}}^\\pm &= \\tilde{\\Omega }_{\\mathbf {q}}\\delta _{\\mathbf {q}\\mathbf {q}^{\\prime }}\\mp A_\\mathbf {q}\\sum _{\\mathbf {k}}\\phi _{\\mathbf {k}+ \\alpha \\mathbf {q}}\\phi _{\\mathbf {k}+\\mathbf {q}-\\beta \\mathbf {q}^{\\prime }}^*\\phi _{\\mathbf {k}-\\alpha \\mathbf {q}^{\\prime }}^*\\\\\\mathcal {S}_{\\mathbf {q},\\mathbf {q}^{\\prime }}^\\pm &=\\delta _{\\mathbf {q}\\mathbf {q}^{\\prime }}\\mp \\sum _{\\mathbf {k}}\\phi _{\\mathbf {k}-\\alpha \\mathbf {q}}\\phi _{\\mathbf {k}+\\mathbf {q}^{\\prime }-\\beta \\mathbf {q}}\\phi _{\\mathbf {k}-\\mathbf {q}+\\beta \\mathbf {q}^{\\prime }}^*\\phi _{\\mathbf {k}+\\alpha \\mathbf {q}^{\\prime }}^*.\\end{split}$ The biexcitonic wavefunctions $\\Phi ^\\pm _{\\mu ,\\mathbf {q}}$ are the solutions to the eigenvalue equation $(2E^{\\rm x}_0 + \\frac{\\hbar ^2 q^2}{M})\\Phi ^\\pm _{\\mu ,\\mathbf {q}} + \\sum _{\\mathbf {q}^{\\prime }\\mathbf {q}^{\\prime \\prime }}(\\mathcal {S}^\\pm )^{-1}_{\\mathbf {q},\\mathbf 
{q}^{\\prime }}\\tilde{W}^\\pm _{\\mathbf {q}^{\\prime },\\mathbf {q}^{\\prime \\prime }} \\Phi ^\\pm _{\\mu ,\\mathbf {q}^{\\prime \\prime }} = E^{\\rm xx}_{\\mu ,\\pm }\\Phi ^\\pm _{\\mu ,\\mathbf {q}},$ where $M = m_{\\rm e}+m_{\\rm h}$ is the total exciton mass." ], [ "Formal solutions of multiparticle fluctuations", "The first thing to notice is that the coupling coefficients between the correlated fluctuations $\\delta \\mathcal {B}$ , $\\delta \\mathcal {C}$ , and $\\delta \\mathcal {D}$ are linear and do not involve any steady-state expectation values.", "This means that the formal solution of these fluctuations follows the procedure in Ref.", "denning2022efficient without any changes to the many-body quantities $\\Pi $ and $K$ .", "The only difference is the source terms involving $\\delta P^\\dagger $ , $\\delta a^\\dagger $ , $\\delta a^{\\rm in\\dagger }$ , $\\delta P^{\\rm in\\dagger }$ , which occur together with time-varying amplitudes.", "To incorporate this into the derivation, we first consider a general equation of motion involving two arbitrary fluctuation operators $\\delta Q$ and $\\delta R$ as $-i\\hbar \\partial _t \\delta Q(t) = A(t)\\delta R(t)$ where $A(t)$ is periodic with the period $T=\\frac{2\\pi }{\\omega _{12}}$ .", "We can then express $A(t)$ in terms of its discrete Fourier series as $A(t) = \\sum _n A_n e^{-in\\omega _{12}t}$ and write $\\delta Q(t)=\\frac{1}{2\\pi }\\int _{-\\infty }^{\\infty }{\\omega } e^{-i\\omega t}\\delta Q(\\omega )$ and similarly for $\\delta R$ , such that $\\begin{split}-i\\hbar \\partial _t \\frac{1}{2\\pi }\\int _{-\\infty }^{\\infty }{\\omega ^{\\prime }} &e^{-i\\omega ^{\\prime } t}\\delta Q(\\omega ^{\\prime })\\\\&= \\sum _n A_n e^{-in\\omega _{12}t}\\int _{-\\infty }^{\\infty }{\\omega ^{\\prime }} e^{-i\\omega ^{\\prime } t}\\delta R(\\omega ^{\\prime }).\\end{split}$ Multiplying by $e^{i\\omega t}$ and integrating over $t$ , we find $-\\hbar \\omega \\delta Q(\\omega ) = \\sum _n A_n \\delta R(\\omega -n\\omega _{12}).$ Similarly, if we have a product of two amplitudes occurring on the right-hand side, we can use the properties of the Fourier series of a product, $C(t) = A(t)B(t) \\Rightarrow C_m = \\sum _n A_n B_{m-n}.$ Thus a time-domain equation of motion of the form $-i\\hbar \\partial _t \\delta Q(t) = A(t)B(t)\\delta R(t),$ transforms to $-\\hbar \\omega \\delta Q(\\omega ) = \\sum _m\\sum _{n} A_n B_{m-n} \\delta R(\\omega -m\\omega _{12}).$ .", "With this, we can recycle the results from Ref.", "[6], where the formal solution of the equation of motion for $\\delta C$ [Eq.", "(S36) in the Supplementary Material of Ref.", "denning2022efficient] becomes $\\begin{split}\\delta \\mathcal {C}^{\\zeta \\zeta ^{\\prime }}_{\\mathbf {q}}\\!\\!", "(\\omega ) &=i\\hbar \\sqrt{2\\gamma ^{\\rm p}}\\sum _{\\zeta _1\\zeta _1^{\\prime }}K^{\\zeta \\zeta ^{\\prime }\\mathbf {q}}_{\\zeta _1\\zeta _1^{\\prime }0}(\\omega )\\sum _n*{a^{\\rm in\\dagger }_{\\zeta _1}}_n\\delta P^\\dagger _{\\zeta _1^{\\prime },0}(\\omega -n\\omega _{12})\\\\&\\hspace{-39.83368pt}-i\\hbar \\sqrt{2\\gamma ^{\\rm p}}\\sum _{\\zeta _1\\zeta _1^{\\prime }}K^{\\zeta \\zeta ^{\\prime }\\mathbf {q}}_{\\zeta _1\\zeta _1^{\\prime }0}(\\omega )\\frac{\\Omega _0}{\\hbar \\omega +2(\\tilde{E}^{\\rm p}_0-\\hbar \\omega _{\\rm d})}\\\\&\\hspace{-39.83368pt}\\times \\sum _n\\Big [*{a^{\\rm in\\dagger }_{\\zeta _1}}_n\\delta a^\\dagger _{\\zeta _1^{\\prime },0}(\\omega -n\\omega _{12})+*{a^{\\rm in\\dagger }_{\\zeta _1^{\\prime }}}_n\\delta a^\\dagger _{\\zeta 
_1,0}(\\omega -n\\omega _{12})\\\\ &\\hspace{-14.22636pt}+*{a^\\dagger _{\\zeta _1,0}}_n\\delta a^{\\rm in\\dagger }_{\\zeta _1^{\\prime }}(\\omega -n\\omega _{12})+ *{a^\\dagger _{\\zeta _1^{\\prime },0}}_n\\delta a^{\\rm in\\dagger }_{\\zeta _1}(\\omega -n\\omega _{12})\\Big ]\\\\ &\\hspace{-39.83368pt}+ i\\hbar \\sqrt{2\\gamma ^{\\rm x}}\\sum _{\\zeta _1\\zeta _1^{\\prime }\\mathbf {q}_1}K^{\\zeta \\zeta ^{\\prime }\\mathbf {q}}_{\\zeta _1\\zeta _1^{\\prime }\\mathbf {q}_1}\\!\\!", "(\\omega )\\sum _n\\Bigg \\lbrace \\delta _{\\mathbf {q}_1,0}*{a^\\dagger _{\\zeta _1,0}}_n\\delta P^{\\rm in\\dagger }_{\\zeta _1^{\\prime },0}(\\omega -n\\omega _{12})\\\\&\\hspace{-39.83368pt}-\\sum _{\\mu \\pm }\\frac{\\frac{1}{4}(1\\pm \\delta _{\\zeta _1\\zeta _1^{\\prime }})\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Phi \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,0}\\Omega ^\\pm _{\\mu ,\\mathbf {q}_1}}{\\hbar \\omega +\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm d}}\\Big [*{P^\\dagger _{\\zeta _1,0}}_n\\delta P^{\\rm in\\dagger }_{\\zeta _1^{\\prime },0}(\\omega -n\\omega _{12})\\\\&\\hspace{71.13188pt}+ *{P^\\dagger _{\\zeta _1^{\\prime },0}}_n\\delta P^{\\rm in\\dagger }_{\\zeta _1,0}(\\omega -n\\omega _{12})\\Big ]\\Bigg \\rbrace ,\\end{split}$ where $\\begin{split}K(\\omega ) = -\\Big [&\\delta _{\\mathbf {q},\\mathbf {q}_1}\\delta _{\\zeta \\zeta _1}\\delta _{\\zeta ^{\\prime },\\zeta _1^{\\prime }}(\\hbar \\omega + \\tilde{E}^{\\rm x}_\\mathbf {q}+ \\tilde{E}^{\\rm p}_\\mathbf {q}-2\\hbar \\omega _{\\rm d})\\\\&\\hspace{85.35826pt}+ \\Pi ^{\\zeta \\zeta ^{\\prime }\\mathbf {q}}_{\\zeta _1\\zeta _1^{\\prime }\\mathbf {q}_1}(\\omega )\\Big ]^{-1}.\\end{split}$ is the Green's function for $\\delta \\mathcal {C}(\\omega )$ with self-energy $\\begin{split}\\Pi ^{\\zeta \\zeta ^{\\prime }\\mathbf {q}}_{\\zeta _1\\zeta _1^{\\prime }\\mathbf {q}_1}\\!\\!", "(\\omega ) &=-\\frac{\\Omega _\\mathbf {q}\\Omega _{-\\mathbf {q}_1}}{\\hbar \\omega +2(\\tilde{E}^{\\rm p}_\\mathbf {q}-\\hbar \\omega _{\\rm d})}\\\\&\\times [\\delta _{\\zeta ^{\\prime },\\zeta _1}\\delta _{\\zeta ,\\zeta _1^{\\prime }}\\delta _{-\\mathbf {q},\\mathbf {q}_1}+ \\delta _{\\zeta ,\\zeta _1}\\delta _{\\zeta ^{\\prime },\\zeta _1^{\\prime }}\\delta _{\\mathbf {q},\\mathbf {q}_1}]\\\\ &\\hspace{-28.45274pt}-\\sum _{\\mu \\pm }\\frac{\\frac{1}{2}(1\\pm \\delta _{\\zeta \\zeta ^{\\prime }})\\Omega ^\\pm _{\\mu ,\\mathbf {q}} \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}_{\\mu ,\\mathbf {q}_1}^\\pm }{\\hbar \\omega +\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm d}}[\\delta _{\\zeta ^{\\prime }\\zeta _1}\\delta _{\\zeta ,\\zeta _1^{\\prime }}+ \\delta _{\\zeta \\zeta _1}\\delta _{\\zeta ^{\\prime }\\zeta _1^{\\prime }}].\\end{split}$ The Fourier-transformation of the equation of motion for $\\delta P^\\dagger $ , Eq.", "(REF ), becomes $\\begin{split}-\\hbar \\omega &\\delta P_{\\zeta ,0}^\\dagger (\\omega )= \\tilde{E}_0^{\\rm x} \\delta P_{\\zeta ,0}^\\dagger (\\omega )+ \\Omega _0\\delta a_{\\zeta ,0}^\\dagger (\\omega )\\\\&+\\sum _{\\zeta ^{\\prime }}\\sum _n\\Delta _{\\zeta \\zeta ^{\\prime }}^n\\delta P_{\\zeta ^{\\prime },0}(\\omega -n\\omega _{12})+ i\\hbar \\sqrt{2\\gamma ^{\\rm x}} \\delta P_{\\zeta ,0}^{\\rm in\\dagger }(\\omega ),\\\\&+ \\sum _{\\zeta ^{\\prime }\\zeta _1\\zeta _2\\mathbf {q}}\\sum _n *{P_{\\zeta ^{\\prime },0}}_n Q^{\\zeta \\zeta ^{\\prime }}_{\\zeta _1\\zeta _2\\mathbf {q}}(\\omega -n\\omega _{12})\\delta 
\\mathcal {C}^{\\zeta _1\\zeta _2}_\\mathbf {q}(\\omega -n\\omega _{12})\\\\&- i\\hbar \\sqrt{2\\gamma ^{\\rm x}}\\frac{1}{2}\\sum _{\\zeta ^{\\prime }\\mu \\pm }\\sum _{nn^{\\prime }}*{P_{\\zeta ^{\\prime },0}}_{n}\\frac{\\frac{1}{2}(1\\pm \\delta _{\\zeta \\zeta ^{\\prime }})W^\\pm _\\mu \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Phi \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,0}}{\\hbar (\\omega -n\\omega _{12})+\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm d}}\\\\ &\\hspace{56.9055pt}\\times \\Big [*{P^\\dagger _{\\zeta ,0}}_{n^{\\prime }}\\delta P^{\\rm in\\dagger }_{\\zeta ^{\\prime },0}(\\omega -[n+n^{\\prime }]\\omega _{12})\\\\ &\\hspace{71.13188pt}+ *{P^\\dagger _{\\zeta ^{\\prime },0}}_{n^{\\prime }}\\delta P^{\\rm in\\dagger }_{\\zeta ,0}(\\omega -[n+n^{\\prime }]\\omega _{12})\\Big ],\\end{split}$ where $\\Delta ^m_{\\zeta \\zeta ^{\\prime }}$ is defined in Eq.", "(REF ), and where the $Q$ -matrix is defined slightly different than in Ref.", "denning2022efficient, $\\begin{split}Q^{\\zeta \\zeta ^{\\prime }}_{\\zeta _1\\zeta _2\\mathbf {q}}\\!", "(\\omega ) &= -\\delta _{\\zeta \\zeta ^{\\prime }}\\delta _{\\zeta _1\\zeta }\\delta _{\\zeta _2\\zeta }\\tilde{\\Omega }_\\mathbf {q}\\\\ &-\\sum _{\\mu \\pm }\\frac{\\frac{1}{2}(1\\pm \\delta _{\\zeta _1\\zeta _2})W^\\pm _\\mu \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Omega \\hspace{-0.83328pt}}\\hspace{0.83328pt}_{\\mu ,\\mathbf {q}}^\\pm (\\delta _{\\zeta _2\\zeta }\\delta _{\\zeta _1\\zeta ^{\\prime }} + \\delta _{\\zeta _1\\zeta }\\delta _{\\zeta _2\\zeta ^{\\prime }})}{\\hbar \\omega +\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm d}}.\\end{split}$ Inserting the formal solution for $\\delta \\mathcal {C}$ into Eq.", "(REF ) and writing all frequency-dependent quantities using the Fourier-index form, we find $\\begin{split}-\\sum _{\\zeta ^{\\prime }m^{\\prime }}&{[\\hbar (\\nu -m\\omega _{12}) + \\tilde{E}^{\\rm x}_0]\\delta _{\\zeta \\zeta ^{\\prime }}\\delta _{mm^{\\prime }} + \\Sigma _{\\zeta \\zeta ^{\\prime }}^{mm^{\\prime }}(\\nu )}\\delta P^\\dagger _{\\zeta ^{\\prime }m^{\\prime }}(\\nu )\\\\&= \\sum _{\\zeta ^{\\prime }m^{\\prime }} \\Omega _{\\zeta \\zeta ^{\\prime },0}^{mm^{\\prime }}(\\nu ) \\delta a_{\\zeta ^{\\prime },m^{\\prime }}^\\dagger (\\nu )+\\sum _{\\zeta ^{\\prime }m^{\\prime }}\\Delta _{\\zeta \\zeta ^{\\prime }}^{m^{\\prime }-m}\\delta P_{\\zeta ^{\\prime },m^{\\prime }}(\\nu )\\\\ &+ \\sum _{\\zeta ^{\\prime }m^{\\prime }} T^{{\\rm x},mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu ) \\delta P^{\\rm in\\dagger }_{\\zeta ^{\\prime }m^{\\prime }}(\\nu )+ \\sum _{\\zeta ^{\\prime }m^{\\prime }} T^{{\\rm p},mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu ) \\delta a^{\\rm in\\dagger }_{\\zeta ^{\\prime }m^{\\prime }}(\\nu ),\\end{split}$ where the self-energy and renormalized coupling matrices are given by $\\begin{split}\\Sigma _{\\zeta \\zeta ^{\\prime }}^{mm^{\\prime }}(\\nu ) &= i\\hbar \\sqrt{2\\gamma ^{\\rm p}}\\sum _{nn^{\\prime }}\\sum _{\\zeta _1\\zeta _2\\mathbf {q}}\\sum _{\\zeta _1^{\\prime }\\zeta _2^{\\prime }}*{P_{\\zeta _2^{\\prime },0}}_n Q^{\\zeta \\zeta _2^{\\prime }}_{\\zeta _1\\zeta _2\\mathbf {q},m+n}(\\nu )\\\\ &\\times K^{\\zeta _1\\zeta _2\\mathbf {q}}_{\\zeta _1^{\\prime }\\zeta ^{\\prime }0,m+n}(\\nu )*{a^{\\rm in\\dagger }_{\\zeta _1^{\\prime }}}_{n^{\\prime }}\\delta _{m^{\\prime },m+n+n^{\\prime }}\\\\\\hat{\\Omega }^{mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }0}(\\nu ) &=\\Omega _0 \\delta _{mm^{\\prime }}\\delta _{\\zeta 
\\zeta ^{\\prime }}- i\\hbar \\sqrt{2\\gamma ^{\\rm p}}\\sum _{nn^{\\prime }}\\sum _{\\zeta _1\\zeta _2\\zeta _3\\mathbf {q}}\\sum _{\\zeta _1^{\\prime }\\zeta _2^{\\prime }}\\\\ &\\times *{P_{\\zeta _3,0}}_n\\frac{\\Omega _0 Q^{\\zeta \\zeta _3}_{\\zeta _1\\zeta _2\\mathbf {q},m+n}(\\nu )K^{\\zeta _1\\zeta _2\\mathbf {q}}_{\\zeta _1^{\\prime }\\zeta _2^{\\prime }0,m+n}(\\nu )}{\\hbar (\\nu -[m+n]\\omega _{12}) + 2(\\tilde{E}^{\\rm p}_0-\\hbar \\omega _{\\rm d})}\\\\ &\\times [*{a^{\\rm in\\dagger }_{\\zeta _1^{\\prime }}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _2^{\\prime }}+ *{a^{\\rm in\\dagger }_{\\zeta _2^{\\prime }}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _1^{\\prime }}]\\delta _{m^{\\prime },m+n+n^{\\prime }}\\end{split}$ and the renormalized incoupling matrices are given by $\\begin{split}T^{{\\rm p},mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu ) &=i\\hbar \\sqrt{2\\gamma ^{\\rm p}}\\sum _{nn^{\\prime }}\\sum _{\\zeta _1\\zeta _2\\zeta _3\\mathbf {q}}\\sum _{\\zeta _1^{\\prime }\\zeta _2^{\\prime }}*{P_{\\zeta _3,0}}_n Q^{\\zeta \\zeta _3}_{\\zeta _1\\zeta _2\\mathbf {q},m+n}(\\nu )\\\\&\\times K^{\\zeta _1\\zeta _2\\mathbf {q}}_{\\zeta _1^{\\prime }\\zeta _2^{\\prime }0,m+n}(\\nu )\\Bigg \\lbrace *{P_{\\zeta _2^{\\prime },0}^\\dagger }_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _1^{\\prime }}\\\\&-\\frac{\\Omega _0[*{a^{\\dagger }_{\\zeta _1^{\\prime },0}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _2^{\\prime }}+ *{a^{\\dagger }_{\\zeta _2^{\\prime },0}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _1^{\\prime }}]}{\\hbar (\\nu -[m+n]\\omega _{12}) + 2(\\tilde{E}^{\\rm p}_0-\\hbar \\omega _{\\rm d})}\\Bigg \\rbrace \\delta _{m^{\\prime },m+n+n^{\\prime }}\\\\T^{{\\rm x},mm^{\\prime }}_{\\zeta \\zeta ^{\\prime }}(\\nu ) &= i\\hbar \\sqrt{2\\gamma ^{\\rm x}}\\Bigg \\lbrace \\delta _{\\zeta \\zeta ^{\\prime }}\\delta _{mm^{\\prime }}-\\frac{1}{2}\\sum _{\\zeta _1\\mu \\pm }\\sum _{nn^{\\prime }}*{P_{\\zeta ^{\\prime },0}}_{n}\\\\ &\\times \\frac{\\frac{1}{2}(1\\pm \\delta _{\\zeta \\zeta _1})W^\\pm _\\mu \\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Phi \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,0}}{\\hbar (\\nu -[m+n]\\omega _{12})+\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm d}}\\\\ &\\times [*{P^\\dagger _{\\zeta ,0}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _1}+ *{P^\\dagger _{\\zeta _1,0}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta }]\\delta _{m^{\\prime },m+n+n^{\\prime }}\\Bigg \\rbrace \\\\ &+i\\hbar \\sqrt{2\\gamma ^{\\rm x}}\\sum _{nn^{\\prime }}\\sum _{\\zeta _1\\zeta _2\\zeta _3\\mathbf {q}}\\sum _{\\zeta _1^{\\prime }\\zeta _2^{\\prime }}*{P_{\\zeta _3,0}}_n Q^{\\zeta \\zeta _3}_{\\zeta _1\\zeta _2\\mathbf {q},m+n}(\\nu )\\\\ &\\times K^{\\zeta _1\\zeta _2\\mathbf {q}}_{\\zeta _1^{\\prime }\\zeta _2^{\\prime }\\mathbf {q}_1,m+n}(\\nu )\\Bigg \\lbrace \\delta _{\\mathbf {q}_1,0}*{a^\\dagger _{\\zeta _1^{\\prime },0}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _2^{\\prime }}\\\\&-\\sum _{\\mu \\pm }\\frac{\\frac{1}{4}(1\\pm \\delta _{\\zeta _1^{\\prime }\\zeta _2^{\\prime }})\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}\\Phi \\hspace{-0.83328pt}}\\hspace{0.83328pt}^\\pm _{\\mu ,0}\\Omega ^\\pm _{\\mu ,\\mathbf {q}_1}}{\\hbar (\\nu -[m+n]\\omega _{12})+\\tilde{E}^{\\rm xx}_{\\mu ,\\pm }-2\\hbar \\omega _{\\rm d}}\\\\ &\\times [*{P^\\dagger _{\\zeta _1^{\\prime },0}}_{n^{\\prime }}\\delta _{\\zeta ^{\\prime }\\zeta _2^{\\prime }}+ *{P^\\dagger _{\\zeta _2^{\\prime },0}}_{n^{\\prime 
}} \\delta _{\\zeta ^{\\prime }\\zeta _1^{\\prime }}]\\Bigg \\rbrace \\delta _{m^{\\prime },m+n+n^{\\prime }},\\end{split}$ with $K^{\\zeta _1\\zeta _2\\mathbf {q}}_{\\zeta _1^{\\prime }\\zeta ^{\\prime }0,m}(\\nu ) := K^{\\zeta _1\\zeta _2\\mathbf {q}}_{\\zeta _1^{\\prime }\\zeta ^{\\prime }0}(\\nu -m\\omega _{12})$ and similarly for $Q^{\\zeta \\zeta _2^{\\prime }}_{\\zeta _1\\zeta _2\\mathbf {q},m}(\\nu )$ ." ], [ "Homodyne noise spectrum", "In this appendix, we provide additional details of the derivation of the homodyne noise spectrum.", "In terms of the fields $E_{\\rm lo}$ and $E_{\\rm s}$ given in Eq.", "(REF ), the mean photocurrent difference from Eq.", "(REF ) takes the form $\\begin{split}\\overline{I_-(t)}&:=\\overline{I_1(t)}-\\overline{I_2(t)} \\\\ &=i\\frac{\\eta }{\\tau _{\\rm d}}\\int _{t-\\tau _{\\rm d}}^t{t^{\\prime }} \\Big [*{E_{\\rm lo}(t^{\\prime })}*{E_{\\rm s}^\\dagger (t^{\\prime })}- *{E_{\\rm lo}^\\dagger (t^{\\prime })}*{E_{\\rm s}(t^{\\prime })}\\Big ]\\\\&=\\frac{\\eta \\sqrt{F_{\\rm lo}}}{\\tau _{\\rm d}}\\int _{t-\\tau _{\\rm d}}^t{t^{\\prime }} *{X_{\\rm s}(\\theta ,t^{\\prime })},\\end{split}$ where $X_{\\rm s}(\\theta ,t) = e^{i\\theta } E_{\\rm s}^\\dagger (t) + e^{-i\\theta }E_{\\rm s}(t)$ is the source-field quadrature operator with angle $\\theta =\\varphi +\\pi /2$ .", "The two terms in $\\theta $ stem from the local oscillator phase $\\varphi $ and a phase displacement $\\pi /2$ from the beamsplitter.", "Similarly, the correlation function of the photocurrent difference from Eq.", "(REF ) becomes $\\begin{split}&\\overline{I_-(t)I_-(t+\\tau )} =\\overline{I_1(t)I_1(t+\\tau )} + \\overline{I_2(t)I_2(t+\\tau )} \\\\&\\hspace{113.81102pt}- \\overline{I_1(t)I_2(t+\\tau )} - \\overline{I_2(t)I_1(t+\\tau )} \\\\&=\\frac{1}{\\tau _{\\rm d}^2}\\Bigg \\lbrace \\eta ^2\\int _{t-\\tau _{\\rm d}}^t{t^{\\prime }}\\int _{t+\\tau -\\tau _{\\rm d}}^{t+\\tau }{t^{\\prime \\prime }} \\Big [*{E_{\\rm lo}^{\\dagger }(t^{\\prime })E_{\\rm lo}(t^{\\prime \\prime })}*{E_{\\rm s}^\\dagger (t^{\\prime \\prime })E_{\\rm s}(t^{\\prime })}\\\\&\\hspace{85.35826pt}+*{E_{\\rm lo}^\\dagger (t^{\\prime \\prime }) E_{\\rm lo}(t^{\\prime })}*{E_{\\rm s}^\\dagger (t^{\\prime })E_{\\rm s}(t^{\\prime \\prime })}\\\\&\\hspace{85.35826pt}-*{:E_{\\rm lo}^\\dagger (t^{\\prime }) E_{\\rm lo}^\\dagger (t^{\\prime \\prime }):}*{:E_{\\rm s}(t^{\\prime }) E_{\\rm s}(t^{\\prime \\prime }):}\\\\&\\hspace{85.35826pt}-*{:E_{\\rm lo}(t^{\\prime }) E_{\\rm lo}(t^{\\prime \\prime }):}*{:E_{\\rm s}^\\dagger (t^{\\prime }) E_{\\rm s}^\\dagger (t^{\\prime \\prime }):}\\Big ] \\\\&+\\eta \\Theta (\\tau _{\\rm d}-\\tau )\\int _{t+\\tau -\\tau _{\\rm d}}^t{t^{\\prime }} *{E_{\\rm s}^\\dagger (t^{\\prime })E_{\\rm s}(t^{\\prime })} + *{E_{\\rm lo}^\\dagger (t^{\\prime })E_{\\rm lo}(t^{\\prime })}.\\Bigg \\rbrace \\end{split}$ The last term is the shot noise correlation function, $N_0(t,\\tau )=(\\eta /\\tau _{\\rm d}^2)\\Theta (\\tau _{\\rm d}-\\tau )\\int _{t+\\tau -\\tau _{\\rm d}}^t{t^{\\prime }} [*{E_{\\rm s}^\\dagger (t^{\\prime })E_{\\rm s}(t^{\\prime })} + *{E_{\\rm lo}^\\dagger (t^{\\prime })E_{\\rm lo}(t^{\\prime })}]$ .", "In typical homodyne detection setups, the local oscillator is significantly stronger than the signal, which means that the shot noise will be dominated by the local oscillator.", "In this limit, we can neglect the signal contribution to $N_0(t,\\tau )$ which then becomes $N_0(t,\\tau ) \\simeq N_0(\\tau ) = \\frac{\\eta F_{\\rm lo}}{\\tau _{\\rm d}^2}(\\tau _{\\rm d}-\\tau 
)\\Theta (\\tau _{\\rm d}-\\tau ).$ Combining Eqs.", "(REF ) and (REF ), we find $\\begin{split}&\\overline{I_-(t)I_-(t+\\tau )} -\\overline{I_-(t)}\\;\\;\\overline{I_-(t+\\tau )}\\\\&= N_0(\\tau ) + \\frac{\\eta ^2F_{\\rm lo}}{\\tau _{\\rm d}^2}\\int _{t-\\tau _{\\rm d}}^t{t^{\\prime }}\\int _{t+\\tau -\\tau _{\\rm d}}^{t+\\tau }{t^{\\prime \\prime }} *{:\\delta X_{\\rm s}(\\theta ,t^{\\prime })\\delta X_{\\rm s}(\\theta ,t^{\\prime \\prime }):},\\end{split}$ where $\\delta X_{\\rm s}(\\theta ,t) := X_{\\rm s}(\\theta ,t)-*{X_{\\rm s}(\\theta ,t)}$ .", "In the second integral, we make the substitution $t^{\\prime \\prime \\prime }= t^{\\prime \\prime }-\\tau $ in order to make the integration limits equal.", "Thereby, we obtain after relabelling $t^{\\prime \\prime \\prime }$ to $t^{\\prime \\prime }$ $\\begin{split}&\\overline{I_-(t)I_-(t+\\tau )} -\\overline{I_-(t)}\\;\\;\\overline{I_-(t+\\tau )}\\\\&= N_0(\\tau ) + \\frac{\\eta ^2F_{\\rm lo}}{\\tau _{\\rm d}^2}\\int _{t-\\tau _{\\rm d}}^t\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int _{t-\\tau _{\\rm d}}^{t}\\!\\!\\!\\!\\!\\!", "{t^{\\prime \\prime }}*{:\\delta X_{\\rm s}(\\theta ,t^{\\prime })\\delta X_{\\rm s}(\\theta ,t^{\\prime \\prime }+\\tau ):}.\\end{split}$ Inserting this expression into Eq.", "(REF ), we obtain the noise spectrum $\\begin{split}N(\\omega )&= N_0(\\omega ) + \\frac{\\eta ^2F_{\\rm lo}}{\\tau _{\\rm d}^2}\\lim _{t_0\\rightarrow \\infty }\\frac{1}{T}\\int _{t_0}^{t_0+T}{t}\\int _0^\\infty {\\tau } \\cos (\\omega \\tau )\\\\ &\\times \\int _{t-\\tau _{\\rm d}}^t\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int _{t-\\tau _{\\rm d}}^{t}\\!\\!\\!\\!\\!\\!", "{t^{\\prime \\prime }}\\\\ &\\times \\Big \\lbrace *{\\delta E_{\\rm s}^\\dagger (t^{\\prime })\\delta E_{\\rm s}(t^{\\prime \\prime }+\\tau )}+ *{\\delta E_{\\rm s}^\\dagger (t^{\\prime \\prime }+\\tau ) \\delta E_{\\rm s}(t^{\\prime })}\\\\&\\hspace{28.45274pt}+ e^{-2i\\theta }[\\Theta (t^{\\prime }-t^{\\prime \\prime }-\\tau )*{\\delta E_{\\rm s}(t^{\\prime })\\delta E_{\\rm s}(t^{\\prime \\prime }+\\tau )}\\\\&\\hspace{56.9055pt}+ \\Theta (t^{\\prime \\prime }+\\tau -t^{\\prime })*{\\delta E_{\\rm s}(t^{\\prime \\prime }+\\tau )\\delta E_{\\rm s}(t^{\\prime })}]\\\\ &\\hspace{28.45274pt}+ e^{2i\\theta }[\\Theta (t^{\\prime \\prime }+\\tau -t^{\\prime })*{\\delta E_{\\rm s}^\\dagger (t^{\\prime })\\delta E_{\\rm s}^\\dagger (t^{\\prime \\prime }+\\tau )}\\\\&\\hspace{56.9055pt}+ \\Theta (t^{\\prime }-t^{\\prime \\prime }-\\tau )*{\\delta E_{\\rm s}^\\dagger (t^{\\prime \\prime }+\\tau )\\delta E_{\\rm s}^\\dagger (t^{\\prime })}]\\Big \\rbrace \\end{split}$ where $N_0(\\omega ) = \\lim _{t_0\\rightarrow \\infty }\\frac{1}{T}\\int _{t_0}^{t_0+T}{t}\\int _0^\\infty {\\tau } \\cos (\\omega \\tau ) N_0(\\tau )= \\frac{1}{2}\\eta F_{\\rm lo} \\frac{\\sin ^2(\\omega \\tau _{\\rm d}/2)}{(\\omega \\tau _{\\rm d}/2)^2}$ is the contribution from shot noise to the homodyne noise spectrum.", "The quantity $\\sin ^2(\\omega \\tau _{\\rm d}/2)/(\\omega \\tau _{\\rm d}/2)^2$ is a filter factor that arises from the $\\tau $ -integral and describes the bandwidth of the detector as the inverse response time $\\tau _{\\rm d}^{-1}$ .", "The first term in Eq.", "(REF ) can be rewritten using the spectral correlation function $S_1^{mm^{\\prime }}(\\nu )$ from Eq.", "(REF ) as $\\begin{split}&\\mathcal {N}_1 := \\lim _{t_0\\rightarrow \\infty }\\frac{1}{T}\\int _{t_0}^{t_0+T}{t}\\int _{t-\\tau _{\\rm d}}^t\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int _{t-\\tau _{\\rm d}}^{t}\\!\\!\\!\\!\\!\\!", 
"{t^{\\prime \\prime }}*{\\delta E_{\\rm s}^\\dagger (t^{\\prime })\\delta E_{\\rm s}(t^{\\prime \\prime }+\\tau )}\\\\&= \\frac{1}{T}\\int _{t_0}^{t_0+T}{t}\\int _{t-\\tau _{\\rm d}}^t\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int _{t-\\tau _{\\rm d}}^{t}\\!\\!\\!\\!\\!\\!", "{t^{\\prime \\prime }}\\frac{1}{(2\\pi )^2}\\int _{-\\infty }^\\infty {\\omega ^{\\prime }}\\int _{-\\infty }^\\infty {\\omega ^{\\prime \\prime }}\\\\ &\\times e^{i\\omega ^{\\prime }t^{\\prime }}e^{-i\\omega ^{\\prime \\prime }(t^{\\prime \\prime }+\\tau )}*{\\delta E_{\\rm s}^\\dagger (-\\omega ^{\\prime })\\delta E_{\\rm s}(\\omega ^{\\prime \\prime })}\\\\ &=\\frac{1}{T}\\int _{t_0}^{t_0+T}{t}\\int _{t-\\tau _{\\rm d}}^t\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int _{t-\\tau _{\\rm d}}^{t}\\!\\!\\!\\!\\!\\!", "{t^{\\prime \\prime }}\\frac{1}{2\\pi }\\int _{-\\omega _{12}/2}^{\\omega _{12}/2}\\!\\!\\!\\!\\!\\!", "{\\nu ^{\\prime }}\\int _{-\\omega _{12}/2}^{\\omega _{12}/2}\\!\\!\\!\\!\\!\\!", "{\\nu ^{\\prime \\prime }}\\sum _{m^{\\prime }m^{\\prime \\prime }}\\\\ &\\times e^{i(\\nu ^{\\prime }-m^{\\prime }\\omega _{12})t^{\\prime }}e^{-i(\\nu ^{\\prime \\prime }-m^{\\prime \\prime }\\omega _{12})(t^{\\prime \\prime }+\\tau )}\\delta (\\nu ^{\\prime }-\\nu ^{\\prime \\prime })2\\gamma ^{\\rm p}S^{m^{\\prime },m^{\\prime \\prime }}_1(\\nu ^{\\prime }),\\end{split}$ where we used the identity $\\int _{-\\infty }^\\infty {\\omega }=\\sum _m\\int _{-\\omega _{12}/2}^{\\omega _{12}/2}{\\nu }$ .", "By shifting the $t^{\\prime }$ and $t^{\\prime \\prime }$ integration variables to $t^{\\prime }-t$ and $t^{\\prime \\prime }-t$ , the integration over $t$ can be carried out, which gives the kronecker delta $\\frac{1}{T}\\int _{t_0}^{t_0+T}{t} e^{i(m^{\\prime \\prime }-m^{\\prime })\\omega _{12}t} = \\delta _{m^{\\prime }m^{\\prime \\prime }}$ .", "Upon resolving this Kronecker delta as well as the delta function $\\delta (\\nu ^{\\prime }-\\nu ^{\\prime \\prime })$ and subsequently rewriting the summation over $\\nu ^{\\prime }$ and $m^{\\prime }$ to an integral over $\\omega ^{\\prime }$ , we can resolve the integrals over $t^{\\prime }$ and $t^{\\prime \\prime }$ , leading to $\\begin{split}\\mathcal {N}_1 &=\\frac{1}{2\\pi }\\int _{-\\infty }^\\infty {\\omega ^{\\prime }} \\frac{\\sin ^2(\\omega ^{\\prime }\\tau _{\\rm d}/2)}{(\\omega ^{\\prime }/2)^2}e^{-i\\omega ^{\\prime }\\tau } 2\\gamma ^{\\rm p}S_1(\\omega ^{\\prime }).\\end{split}$ The second term in Eq.", "(REF ), where the time arguments are reversed, yields the complex conjugate $\\mathcal {N}_1^*$ .", "For the term in Eq.", "(REF ) proportional to $e^{-2i\\theta }$ , the situation is slightly more complicated because of the Heaviside functions.", "Similarly to Eq.", "(REF ), we can express the temporal correlation function in terms of $S_2^{mm^{\\prime }}(\\nu )$ and resolve the integral over $t$ , such that $\\begin{split}&\\mathcal {N}_2 = \\lim _{t_0\\rightarrow \\infty }\\frac{1}{T}\\int _{t_0}^{t_0+T}\\!\\!\\!\\!\\!\\!", "{t}\\int _{t-\\tau _{\\rm d}}^t\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int _{t-\\tau _{\\rm d}}^{t}\\!\\!\\!\\!\\!\\!", "{t^{\\prime \\prime }}\\\\ &\\hspace{56.9055pt}\\times [\\Theta (t^{\\prime }-t^{\\prime \\prime }-\\tau )*{\\delta E_{\\rm s}(t^{\\prime })\\delta E_{\\rm s}(t^{\\prime \\prime }+\\tau )}\\\\&\\hspace{85.35826pt}+ \\Theta (t^{\\prime \\prime }+\\tau -t^{\\prime })*{\\delta E_{\\rm s}(t^{\\prime \\prime }+\\tau )\\delta E_{\\rm s}(t^{\\prime })}]\\\\&= \\int _{-\\tau _{\\rm d}}^0\\!\\!\\!\\!\\!\\!", "{t^{\\prime }}\\int 
_{-\\tau _{\\rm d}}^0\\!\\!\\!\\!\\!\\!", "{t^{\\prime \\prime }}\\frac{1}{2\\pi }\\int _{-\\infty }^\\infty {\\omega ^{\\prime }}[\\Theta (t^{\\prime }-t^{\\prime \\prime }-\\tau )e^{i\\omega ^{\\prime }t^{\\prime }}e^{-i\\omega ^{\\prime \\prime }(t^{\\prime \\prime }+\\tau )}\\\\ &\\hspace{56.9055pt}+\\Theta (t^{\\prime \\prime }+\\tau -t^{\\prime })e^{-i\\omega ^{\\prime \\prime }t^{\\prime }}e^{i\\omega ^{\\prime }(t^{\\prime \\prime }+\\tau )}]2\\gamma ^{\\rm p}S_2(\\omega ^{\\prime }).\\end{split}$ To resolve the integrals over $t^{\\prime }$ and $t^{\\prime \\prime }$ , we now make the assumption that the detector response time $\\tau _{\\rm d}$ is much faster than any time scale in the dynamics of the source field.", "This corresponds to taking the limit of infinite detection bandwidth.", "Naturally, the detection bandwidth will restrict the observable homodyne spectrum, but in the interest of characterizing the intrinsic properties of the source field, $\\tau _{\\rm d}\\rightarrow 0$ is the correct limit to consider.", "Since the integral over $\\tau $ in Eq.", "(REF ) only runs over positive values, this means that only the second Heaviside function in Eq.", "(REF ) can be nonzero.", "Thus, we end up with (for $\\tau >0$ ) $\\mathcal {N}_2 &= \\frac{1}{2\\pi }\\int _{-\\infty }^\\infty {\\omega ^{\\prime }} \\frac{\\sin ^2(\\omega ^{\\prime }\\tau _{\\rm d}/2)}{(\\omega ^{\\prime }/2)^2}e^{i\\omega ^{\\prime }\\tau }2\\gamma ^{\\rm p}S_2(\\omega ^{\\prime }).$ The term in Eq.", "(REF ) proportional to $e^{2i\\theta }$ can be connected to $S_2^*(\\omega )$ by following an analogous derivation.", "The final expression for the quadrature noise spectrum becomes $N(\\omega ) = \\frac{1}{2}\\eta F_{\\rm lo}\\frac{\\sin ^2(\\omega \\tau _{\\rm d}/2)}{(\\omega \\tau _{\\rm d}/2)^2}\\lbrace 1 + \\eta \\Re [\\Lambda _1(\\omega ) + e^{-2i\\theta }\\Lambda _2(\\omega )]\\rbrace ,$ where the functions $\\Lambda _i(\\omega )$ are defined below Eq.", "(REF ) in the main text.", "In order to characterize the intrinsic noise properties of the source field, we set the detection efficiency $\\eta $ to unity.", "Thereby, when normalizing the quadrature noise spectrum to the shot-noise level, we obtain $\\frac{N(\\omega )}{N_0(\\omega )} = 1 +\\Re [\\Lambda _1(\\omega ) + e^{-2i\\theta }\\Lambda _2(\\omega )].$" ], [ "Effect of multiparticle fluctuation renormalization", "As mentioned in the main text, the multiparticle renormalizations of the Heisenberg-Langevin equations of motion only give small quantitative corrections to the spontaneous four-wave mixing and squeezing spectra.", "This is explicitly demonstrated in Fig.", "REF , which compares the full and simplified calculations for the same parameters as in Figs.", "REF and REF .", "Specifically, Fig.", "REF (a)–(b) show the spontaneous four-wave mixing spectrum and squeezing spectrum calculated from the full theory (solid lines) and using the simplifications in Eq.", "(REF ) (dots).", "In Fig.", "REF (c)–(d), the relative deviation between the two calculations are shown.", "This relative deviation $\\delta _x$ is calculated as $\\delta _x(\\omega )={x_{\\rm full}(\\omega ) - x_{\\rm simplified}(\\omega )}/x_{\\rm full}(\\omega )$ , where $x(\\omega )$ is either $S_{\\rm FWM}(\\omega )$ or $1+\\Lambda (\\omega )$ .", "Evidently, the relative deviation is below 1% for the spontaneous four-wave mixing spectrum, and below $10^{-5}$ for the squeezing spectrum." 
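As a concrete illustration of the shot-noise-normalized quadrature spectrum obtained at the end of this appendix, $N(\\omega )/N_0(\\omega ) = 1 +\\Re [\\Lambda _1(\\omega ) + e^{-2i\\theta }\\Lambda _2(\\omega )]$, the NumPy sketch below evaluates the normalized spectrum and the most-squeezed quadrature angle. The $\\Lambda _1$ and $\\Lambda _2$ profiles used here are arbitrary placeholders rather than solutions of the fluctuation equations; only the normalization formula itself and the elementary bound $\\min _\\theta \\Re [e^{-2i\\theta }\\Lambda _2] = -|\\Lambda _2|$ are taken from the text.

```python
import numpy as np

# Placeholder spectral functions Lambda_1(w), Lambda_2(w); in the actual calculation they
# follow from the formal solutions of the fluctuation equations of motion.
omega = np.linspace(-5.0, 5.0, 2001)
Lambda1 = 0.30 / (1.0 + omega**2)
Lambda2 = (0.25 + 0.10j) / (1.0 + omega**2)

def normalized_noise(theta):
    """Shot-noise-normalized quadrature spectrum N(w)/N0(w) for quadrature angle theta."""
    return 1.0 + np.real(Lambda1 + np.exp(-2j * theta) * Lambda2)

# Minimizing over theta uses Re[e^{-2i theta} L2] >= -|L2|, attained at theta = (arg L2 + pi)/2.
n_min = 1.0 + np.real(Lambda1) - np.abs(Lambda2)
theta_opt = 0.5 * (np.angle(Lambda2) + np.pi)

squeezing_dB = 10.0 * np.log10(np.clip(n_min, 1e-12, None))
print("most squeezed quadrature: %.3f dB relative to shot noise" % squeezing_dB.min())
print("optimal angle at zero detuning: %.3f rad" % theta_opt[omega.size // 2])
print("theta = 0 spectrum range: [%.3f, %.3f]" % (normalized_noise(0.0).min(),
                                                  normalized_noise(0.0).max()))
```

The detector filter factor $\\sin ^2(\\omega \\tau _{\\rm d}/2)/(\\omega \\tau _{\\rm d}/2)^2$ multiplies the absolute spectrum $N(\\omega )$ but cancels in the ratio used above.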
], [ "Numerical calculations", "The numerical calculations in the paper have been performed for atomically thin $\\mathrm {MoS_2}$ encapsulated with hexagonal BN on both sides, which is often used in experiments [see, e.g., [74]].", "We use the screened Coulomb potential obtained from solving Poisson's equation for the van der Waals heterostructure: dielectric environment/air gap/atomically thin semiconductor/air gap/dielectric environment [75], [41].", "The small interlayer air gaps $h_{\\rm int}$ account for the naturally occurring air gaps between the atomically thin semiconductor and its dielectric environment [76], which is described by the dielectric constant $\\varepsilon _{\\rm e}$.", "The parameters used for the calculations in this paper are summarized in Table REF .", "The phonon-induced dephasing rate $\\gamma ^\\text{x}$ is calculated according to the methods given in Ref. [47], without self-consistent inclusion of radiative broadening, because this is contained in the interaction with the quantized electromagnetic field.", "In the numerical representation of the equations of motion, the dynamical variables such as $\\langle a^\\dagger \\rangle $ and $\\langle P^\\dagger \\rangle $ are expressed in surface-density units $\\langle a^\\dagger \\rangle /\\sqrt{S}$ and $\\langle P^\\dagger \\rangle /\\sqrt{S}$.", "This ensures that all coupling coefficients and scattering matrices in the equations of motion are independent of the quantization surface area $S$.", "Only the input-field driving term contains an explicit reference to $S$ , through the conversion of the driving power $\\mathcal {P}_{\\rm in}$ to surface-density units, $\\mathcal {P}_{\\rm in}\\rightarrow \\mathcal {P}_{\\rm in}/S$.", "For the numerical calculations, $S$ is taken as the laser spot area (see Table REF ).", "Table: Parameters used for numerical calculations." ] ]
2207.10420
[ [ "Temporal and Spatial Online Integrated Calibration for Camera and LiDAR" ], [ "Abstract While camera and LiDAR are widely used in most of the assisted and autonomous driving systems, only a few works have been proposed to associate the temporal synchronization and extrinsic calibration for camera and LiDAR which are dedicated to online sensors data fusion.", "The temporal and spatial calibration technologies are facing the challenges of lack of relevance and real-time.", "In this paper, we introduce the pose estimation model and environmental robust line features extraction to improve the relevance of data fusion and instant online ability of correction.", "Dynamic targets eliminating aims to seek optimal policy considering the correspondence of point cloud matching between adjacent moments.", "The searching optimization process aims to provide accurate parameters with both computation accuracy and efficiency.", "To demonstrate the benefits of this method, we evaluate it on the KITTI benchmark with ground truth value.", "In online experiments, our approach improves the accuracy by 38.5\\% than the soft synchronization method in temporal calibration.", "While in spatial calibration, our approach automatically corrects disturbance errors within 0.4 second and achieves an accuracy of 0.3-degree.", "This work can promote the research and application of sensor fusion." ], [ "INTRODUCTION", "There have been many studies on multi-sensor fusion in autonomous vehicles in recent years, benefiting from the multi-sensors equipped in autonomous vehicles.", "Sensor fusion is an essential and critical strategy to enhance the performance and ensure the reliability of perception modules in ADAS, such as in target classification[1], segmentation[2], SLAM[3], etc.", "Both temporal and spatial calibrations are the most critical procedures for sensor fusion.", "There have been a number of methods and systems for temporal calibrations and spatial calibrations.", "For temporal calibration, hardware-based approaches are widely adopted for its accuracy.", "However, a unified clock signal and global timestamp are required and sensor hardware trigger interface must be reserved, which is not universally applicable to all devices; software-based approaches such as soft-synchronous interpolation[4], values are aligned by curve fitting, which brings accuracy problems because the data are theoretical values obtained by fitting to the theoretical value.", "Figure: The architecture of the proposed calibration system.For spatial calibration, specially designed objects are required for traditional manual calibration approaches, such as manually selected points[5], which brings redundant manual operations.", "Long-time operation and varying loads can lead to slight drifts and deviations in extrinsic parameters.", "Current artificially designed targets[6] are utilized to calibrate the extrinsic parameters in automatic calibration works, but it's only at the stage of the laboratory.", "Some feature-based calibration method[7] utilizes edge features to compute the extrinsic parameters.", "However, these features are not well corresponded to each other under some scenarios.", "The current temporal and spatial calibration modules are not well integrated and associated, and only a few works were proposed to solve the temporal and spatial calibration integration for camera and LiDAR through calibration boards while ignoring the online performance.", "Therefore, in this paper we aim to combine the temporal-spatial calibration 
into online integrated calibration.", "For this purpose, sensors' pose estimation model between adjacent moments is introduced for online integrated calibration, and line features are used to correct the error of the pose estimation model.", "The main contributions of this paper are as follows: 1) A novel pose estimation model is proposed to decrease time delay and extrinsic offset in temporal and spatial online integrated calibration procedures.", "2) We introduce a method for eliminating dynamic point clouds which only used the association of adjacent two frames instead of detection boxes based on prior information 3) We introduced a searching optimization method to improve the optimization efficiency and proposed a new set of accuracy assessment metrics to evaluate the accuracy of the calibration results." ], [ "RELATED WORKS", "In this section, we discuss the development of temporal and spatial calibrations.", "Temporal calibration and spatial calibration are poorly coupled and generally separated.", "For the temporal calibration part, hardware-based methods have been widely applied, due to their exceptional accuracy.", "The proposal of TriggerSync[8] represented the high accuracy of the hardware triggered method, however, it was low robust to the false correlation of trigger pulses due to unexpected delays, and more interfaces were required.", "To solve the problem of an unexpected delay, the Global Position System(GPS)[9] signal was proposed as the timing signal.", "However, in the GPS rejected region, timing accuracy was seriously affected by the signal problem.", "To overcome this drawback, Marsel Faizullin et al.", "[10] proposed a microcontroller-based platform for GPS signal emulation that can be extended to arbitrary multi-sensors beyond LiDAR-imu.", "Hannes Sommer et al.", "[11] used simple external devices, including LED (for camera) or photodiodes (for LiDAR), to achieve temporal calibration.", "This was done at the cost of additional equipment in exchange for a high rate and high precision estimate of time offset.", "Hardware-triggered interfaces or support from other hardware were required by all of the above approaches, which lead to their limited use in all scenarios.", "Instead, software-based methods were gradually popular because of its low hardware requirements.", "The message_filter method was based on Robot Operating System(ROS), however, it only adopted a simple strategy to match timestamps when dealing with sensor data of different frequencies, which performed poorly in high-precision data fusion scenarios.", "bidirectional synchronization methods such as TICSync[12], PTP or NTP, performed well but require sensor firmware support, which was rare and expensive.", "For the spatial calibration part, checkerboard and selected points were first chosen to solve the calibration problem between LiDAR and camera[13].", "The main limitation of the above approaches was manual and time-consuming.", "To overcome the above limitations, several ways were proposed by some researchers to calibrate the extrinsic parameters more intelligently.", "Special artificial calibration targets, such as spheres [14], were introduced to avoid manually selecting.", "However, these methods still shared the limitation of specially designed targets and man-made target errors.", "To solve this problem, targetless methods were proposed.", "[15] was based on the maximization of mutual information obtained between surface intensities measured.", "In [16], they presented automatic targetless 
calibration methods based on hand-eye calibration, which only required three-dimensional point clouds and camera images to compute motion information, with a rough calibration result.", "In contrast to the above separate approaches for temporal and spatial calibration, some studies focused on integrated calibration.", "Chia-Le Lee et al.", "[17] proposed a method to achieve temporal and extrinsic calibration between radar and LiDAR based on a specified triangular plate target.", "And Z. Taylor et al.", "[18] proposed a targetless self-calibration method that relied on the same motion observed in different sensors to find their relative offsets.", "All these methods prompted us to wonder whether there is an approach to solve the above problems by online integrated temporal and spatial calibration based on environmental features.", "Therefore, we propose a method based on the line-based extrinsic calibration method [19], extending the applicability of the extrinsic calibration to temporal and spatial online integrated calibrations." ], [ "METHODOLOGY", "In this paper, a sensors pose estimation model is proposed to synchronize sensors data from adjacent moments to the same moment.", "For the error term equation of the pose estimation model, we adopted the correction based on the environmental line feature to eliminate the influence of the error term on the accuracy, due to the environmental line features are widely existed in indoor and outdoor scenes and has a better correspondence between the camera and LiDAR.", "Fig.REF illustrates the architecture of our method.", "Our proposed method consists of four steps.", "First, the data between adjacent moments of the camera and LiDAR are projected back to the same moment by means of pose estimation; Second, the line features are extracted in a series of ways for both the camera and LiDAR and are filtered and optimized; Then, a method for eliminating dynamic point clouds is proposed, which only used the association of adjacent two frames instead of detection boxed based on prior information.", "Finally, the point cloud line features are projected on the pixel frames, and the score is calculated and optimized during searching optimization periods.", "Optimizing the score for calibrating the error of the previous stage of the projection of the adjacent frames.", "The detailed steps are described in the following sections." 
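Before the detailed subsections, the fourth step above (projecting LiDAR line points into the image and maximizing a gray-value score over candidate corrections, detailed later under Searching Optimization) can be illustrated with a self-contained toy example. Everything numeric here is an assumption made for illustration: the intrinsic matrix, the Gaussian-smeared line map standing in for the inverse distance transform, and the one-dimensional yaw search standing in for the full search over extrinsic corrections.

```python
import numpy as np

def project(points_cam, K):
    """Pinhole projection of 3-D points in the camera frame to float pixel coordinates."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def score(gray_map, points_cam, K):
    """Sum of bilinearly sampled gray values at the projected LiDAR line points."""
    uv = project(points_cam, K)
    h, w = gray_map.shape
    u, v = uv[:, 0], uv[:, 1]
    ok = (u >= 0) & (u <= w - 2) & (v >= 0) & (v <= h - 2)
    u, v = u[ok], v[ok]
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    g = (gray_map[v0, u0] * (1 - du) * (1 - dv) + gray_map[v0, u0 + 1] * du * (1 - dv) +
         gray_map[v0 + 1, u0] * (1 - du) * dv + gray_map[v0 + 1, u0 + 1] * du * dv)
    return g.sum()

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy scene: a vertical image "line" smeared like an inverse distance transform, and LiDAR
# points lying on that line; the intrinsics K are arbitrary illustrative values.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
gray_map = np.tile(255.0 * np.exp(-((np.arange(640.0) - 320.0) / 3.0) ** 2), (480, 1))
pts = np.stack([np.zeros(50), np.linspace(-1.0, 1.0, 50), np.full(50, 8.0)], axis=1)

# One-parameter stand-in for the search over extrinsic corrections with adaptive step sizes.
best_score, best_deg = max(
    (score(gray_map, (rot_z(np.deg2rad(d)) @ pts.T).T, K), d)
    for d in np.arange(-2.0, 2.01, 0.1))
print("best score %.1f at a yaw correction of %.1f degrees" % (best_score, best_deg))
```

The same scoring idea extends directly to a joint search over rotation and translation with the adaptive step sizes described in Algorithm 1 below.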
], [ "Pose estimation model", "The temporal and spatial integrated calibration of camera and LiDAR lies in the conversion matrix for temporal and spatial superposition.", "In this paper, we assume that the camera acquires the image at moment $ t $ and the LiDAR acquires the point cloud at moment $ t+e_{t} $ , the solution of the conversion matrix between the adjacent moment is solved by the following method: Estimating sensor motion within $ e_{t} $ period and then calculating the positional coordinate change between the vehicle body system and sensors system onboard during $ e_{t} $ period by means of coordinate system projection.", "According to the pose estimation model: $\\begin{split}T&=T_{e}\\times T_{v} \\\\T_{e}&=T_{e}^{\\prime }+\\epsilon _{e} , T_{v}=T_{v}^{\\prime }+\\epsilon _{v}\\end{split}$ where $T$ is the transform matrix from LiDAR at moment $ t + e_{t} $ to camera at moment $t$ , $ T_{e}$ is the measurement model of LiDAR-camera extrinsic parameters, $ T_{v}$ is the model of LiDAR-camera coordinate system motion estimation as shown in Fig.REF (a).", "Since there is an error between measurement and ground truth, disassembly of $ T_{e}$ and $ T_{v}$ are shown, then Equation REF is introduced in the next derivation: $\\begin{split}T=\\left(T_{e}^{\\prime } \\cdot T_{v}^{\\prime }\\right)+\\left(T_{e}^{\\prime } \\epsilon _{v}+T_{v}^{\\prime } \\epsilon _{e}\\right)+\\epsilon _{e} \\epsilon _{v}\\end{split}$ where $T_{e}^{\\prime } \\cdot T_{v}^{\\prime }$ is measured, $\\epsilon _{e} \\epsilon _{v} $ is too small to be taken into account, $ T_{e}^{\\prime } \\epsilon _{v}+T_{v}^{\\prime } \\epsilon _{e} $ is the error.", "Figure: A illustration of pose estimation model.", "(a) shows the pose estimation model of the camera at time tt to the LiDAR at time t+e t t+e_{t}.", "(b) shows the motion estimation via the inertial measurement unit navigation conversion.$ T_{e}^{\\prime } \\epsilon _{v}$ represents the measurement error of the extrinsic parameters, affected by initial values and prolonged vehicle driving.", "$T_{v}^{\\prime } \\epsilon _{e} $ represents the error in the estimation of the LiDAR-camera coordinate system motion.", "The motion of the inertial coordinate system $T_{I_{-} I}$ during $e_{t} $ instant period is calculated by inertial pre-integration as shown in Fig.REF (b).", "Although the uniform linear motion estimation model and the pre-integrated model is quite approximate to the ground truth of the motion of the vehicle body coordinate system , the error term has not to be eliminated since the motion of the LiDAR-camera coordinate system is projected and estimated from the motion of the vehicle body coordinate system, which reduces the accuracy of the projection between adjacent moments $t$ and $t+e_{t}$ .", "In order to correct the effect of the error term online, line features are introduced in integrated calibration." 
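To make the coordinate bookkeeping of the pose estimation model concrete, the minimal sketch below composes a measured extrinsic matrix $T_{e}$ with a motion estimate $T_{v}$ as $4\\times 4$ homogeneous transforms and maps a LiDAR point captured at $t+e_{t}$ into the camera frame at time $t$ . The numerical rotations and translations are illustrative placeholders; only the composition $T=T_{e}\\cdot T_{v}$ follows the model above, and the residual term $T_{e}^{\\prime }\\epsilon _{v}+T_{v}^{\\prime }\\epsilon _{e}$ is exactly what the line-feature correction of the following subsections absorbs.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Illustrative values (not from the paper): measured LiDAR-to-camera extrinsics T_e and the
# estimated motion T_v of the sensor pair during the latency e_t (e.g. from IMU pre-integration).
T_e = homogeneous(rot_y(np.deg2rad(1.0)), np.array([0.05, -0.30, 0.00]))
T_v = homogeneous(rot_y(np.deg2rad(0.2)), np.array([0.55, 0.00, 0.00]))  # ~0.1 s at ~20 km/h

# Transform from the LiDAR frame at t+e_t to the camera frame at t.
T = T_e @ T_v

p_lidar = np.array([10.0, 1.0, 0.5, 1.0])   # homogeneous LiDAR point observed at time t+e_t
p_cam_t = T @ p_lidar                        # the same point expressed in the camera at time t
print(p_cam_t[:3])
```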
], [ "Line features extraction", "The line features are extracted to correct the error term in the previous stage of optimization.", "As shown in Fig., the process of line feature extraction is carried out in both image and point cloud.", "For image feature extraction, grayscale processing and canny marginalization are adopted, then line feature extraction algorithm [20] is applied.", "After that, the model of inverse distance transformation is introduced to deal with the consistent correlation between LiDAR points and image points, which leads to adopting a larger search step to prevent falling into local optimum during searching optimization.", "For LiDAR features extraction, the points are divided into different lines, boundary line features are obtained by distance discontinuity.In order to extract sufficient line features on low beam LiDAR, a local mapping method is applied to combine three frames of point cloud into one, which can present more points in one frame.", "Depending on the strength of the GPS signal and the accuracy requirement of the autonomous driving scenario, we propose two local mapping methods: the GPS-based approach and the Normal Distribution Transform (NDT)-based approach[21].", "The former method is less computationally intensive with less accuracy, while the latter method is more computationally intensive with more accuracy." ], [ "Projection and feature filtering", "For matching the line features extracted from the point cloud and the image, pointcloud points set $P^{L}=\\left\\lbrace p_{1}^{L}, p_{2}^{L}, p_{3}^{L}, \\cdots \\cdots , p_{n}^{L}\\right\\rbrace $ and the image points set $P^{C}=\\left\\lbrace p_{1}^{C}, p_{2}^{C}, p_{3}^{C}, \\cdots \\cdots , p_{m}^{C}\\right\\rbrace $ are created, where $n$ is the number of point cloud points, $m$ is the number of image points, $L$ is the LiDAR coordinate system, and $C$ is the camera coordinate system.", "We define the rotation matrix $R_{L \\rightarrow C}$ and the translation vector $t_{L \\rightarrow C}$ , which represent the coordinate transformation process of the point cloud $P^{L}$ projected to $P^{C}$ in the LiDAR coordinate system.", "$P_{i}^{C}=R_{L \\rightarrow C} \\cdot P_{i}^{L}+t_{L \\rightarrow C}$ Further filtering is done for outliers in line feature extraction.", "The above-mentioned LiDAR point cloud has been converted into image form, so a convolution kernel with an 8-pixel boundary is designed to filter out the outliers on the image and get more organized line features from images.", "In addition, for the points on the ground, are eliminated because their lateral line features do not match well." 
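The image branch described above (grayscale conversion, Canny edges, line extraction, and an inverse-distance weighting that smears the line map so a coarse search step does not get stuck in a local optimum) can be sketched roughly with OpenCV as follows. The probabilistic Hough transform is used here only as a stand-in for the line extractor of [20], and the thresholds and exponential decay constant are assumptions made for illustration rather than the paper's settings.

```python
import cv2
import numpy as np

def image_line_weight_map(bgr, decay=10.0):
    """Return detected line segments and a [0, 255] weight map that is high on and near lines."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough transform as a stand-in for the cited line-feature extractor.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)

    line_mask = np.zeros_like(gray)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(line_mask, (int(x1), int(y1)), (int(x2), int(y2)), 255, 1)

    # "Inverse distance transform": distance to the nearest line pixel, mapped to a weight that
    # decays away from the lines, so projected LiDAR points still receive a useful score when
    # the current extrinsics are a few pixels off.
    dist = cv2.distanceTransform(cv2.bitwise_not(line_mask), cv2.DIST_L2, 3)
    weight = 255.0 * np.exp(-dist / decay)
    return lines, weight.astype(np.float32)

if __name__ == "__main__":
    img = np.full((240, 320, 3), 30, np.uint8)
    cv2.rectangle(img, (80, 60), (240, 180), (200, 200, 200), -1)  # synthetic object with edges
    _, w = image_line_weight_map(img)
    print(w.shape, w.max())
```

The resulting weight map is what the projected LiDAR line points are scored against in the searching optimization step.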
], [ "Dynamic point cloud targets elimination", "Eliminating dynamic point cloud targets is essential for feature matching.", "In the registration of point clouds at adjacent moments $t$ and $t+e_{t}$ , stationary targets dominate the observed environment, yet even a few dynamic targets seriously degrade the registration accuracy, especially when registration is based on point cloud line features rather than on the original point cloud.", "The currently prevalent way to remove dynamic targets is deep-learning-based object detection.", "The original point cloud and prior information are fed into a detection network, which identifies possible dynamic objects such as vehicles and pedestrians and returns detection boxes.", "The points inside each detection box are then clustered and deleted to eliminate the dynamic target.", "However, this method relies on prior information and an additional detection network, which poses a real-time problem for online integrated calibration.", "To solve this problem, we propose a lightweight method with excellent accuracy.", "A k-d tree (K-dimensional binary tree) is used to quickly find nearest neighbors in the multidimensional space by exploiting the dimensional information of the partitioning.", "The nearest-neighbor matching distance is small for stationary objects and large for dynamic objects, so a threshold is set to filter out dynamic targets.", "Because the point cloud at time $t$ is inferred rather than the ground truth at $t+e_{t}$ , an angular error remains.", "By the triangle similarity principle, distant dynamic targets require a larger threshold to be filtered out, so we scale the fixed threshold with a range-dependent factor, turning it into a linear dynamic threshold.", "Dynamic targets are filtered by this linear dynamic threshold and clustered for subsequent analysis.", "Figure: An illustration of how dynamic targets impact line feature projection.", "(a) is the result of projection without dynamic targets eliminated, and (b) is the result of projection with dynamic targets eliminated.", "Algorithm 1: Optimization process.", "Input: image line features $I_t$ at frame $t$ ; point cloud horizontal line features $F_{h}^{t+e_{t}}$ and vertical line features $F_{v}^{t+e_{t}}$ at frame $t+e_{t}$ ; initial extrinsic matrix $T_e$ ; motion estimation matrix $T_v$ ; last-frame gray rate $gray\\_rate$ .", "Output: calibrated extrinsic matrix and motion estimation.", "Initialization: $score \\leftarrow 0$ , $max\\_score \\leftarrow 0$ .", "if $gray\\_rate > \\gamma $ then $step\\_size\\_larger = \\alpha _1$ , $step\\_size\\_smaller = \\alpha _2$ ; else $step\\_size\\_larger = \\alpha _3$ , $step\\_size\\_smaller = \\alpha _4$ .", "for each LiDAR point $p_t$ in $F_{h}^{t+e_{t}}$ : $gray\\_value = weight * T_e * T_v * p_t$ ; $score = score + gray\\_value$ .", "for each LiDAR point $p_t$ in $F_{v}^{t+e_{t}}$ : $gray\\_value = (1-weight) * T_e * T_v * p_t$ ; $score = score + gray\\_value$ .", "if $score > max\\_score$ then $max\\_score = score$ and update the current parameters.", "$gray\\_rate = score / 255 / points\\_num$ .", "return the current parameters." ], [ "Searching optimization", "In the searching optimization process, an approach with both computational accuracy and efficiency is introduced.", "In the previous stage, the LiDAR line features have been extracted and projected onto the line features of the image, and the proportion of LiDAR points falling on the gray (line-feature) regions of the image has been calculated.", "Figure: Different gray 
rate determines different step size strategies.", "(a) shows a larger gray rate change and (b) shows a smaller gray rate change.For computation accuracy, as is shown in Fig.REF , four searching steps are proposed according to different gray rates.", "For computation efficiency, a searching method is applied to optimize the cost function.", "In [19], they compare the current function score with adjacent 728 scores, if the searching program finds parameters that have a higher score, it will stop the current searching process and begin a new searching process at the position providing a higher score.", "This searching process will stop when reaching the set iteration count or finding the best score, thus being able to increase the computation efficiency.", "We improved this process so that when the direction of the fastest gradient descent is found, we follow that direction to get the fastest optimization.", "As shown in Algorithm 1, this is the process of searching optimization." ], [ "EXPERIMENT", "In this section, the experiment setups and results of temporal and spatial calibration are described in detail.", "We validate our proposed approach on KITTI dataset[22], from which we used a high-resolution camera and a Velodyne HDL-64E LiDAR.", "The LiDAR was scanned at 10 Hz and the data used in the experiment is synced and rectified.", "The ground truth temporal and spatial parameters can be obtained from calibration files." ], [ "Temporal calibration", "Our method is compared to the approximate strategy of message filter in ROS.", "Several evaluation metrics are designed in order to validate the accuracy, which is based on Euclidean distance and ICP and NDT registration.", "Euclidean distance-based evaluation metrics: The average Euclidean distance of nearest point in the filter_cloud is calculated and compared.", "NDT-based evaluation metrics: With the voxel downsampling grid size and other parameters being the same, the final iteration number is calculated and compared.", "ICP-based evaluation metrics: with the maximum number of iterations, the maximum distance and other parameters are the same, the score of Euclidean distance is calculated and compared.", "Table: CALIBRATION RESULTS OF METRICSTABLE REF shows the score three metrics: $infer\\_cloud\\_avg$ refers to the average score of a point cloud treated by InferCloud at only 5Hz.", "$infer\\_cloud\\_interpolation$ refers to the average score of a point cloud interpolated upward at 10Hz.", "$filter\\_cloud\\_avg$ is the average score processed using the ROS soft synchronization method.", "In NDT evaluation, $stepsize1$ is 0.01m and $stepsize2$ is 0.05m.", "In ICP evaluation, $iterationNum1$ is once while $iterationNum2$ is five.", "The whole results are in Fig.REF .", "Figure: An illustration of three designed metrics performance.", "(a) shows the Euclidean distance score of Euclidean distance-based evaluation metric; (b) shows the final iteration number of NDT-based evaluation metrics; (c) shows the Euclidean distance score of ICP-based evaluation metrics.The data of Fig.REF tested based on the scenario where the point cloud frequency is 5Hz, the image and inertial navigation frequency is 10Hz, and the point cloud frame is 0.1s slower than the image.", "It reflects the advantages of our method and the traditional ROS method in three evaluation metrics.", "In Fig.REF , the red lines ($filter\\_cloud$ ) represent the ROS soft synchronization score curve, and the green and blue lines ($infer\\_cloud$ ) represent the score curve after soft 
synchronization using our method.", "Meanwhile, in the NDT/ICP evaluation metrics, we use different step sizes/different iterations to reflect the universality of our method optimization.", "The green line represents the smaller step length/fewer iterations, and the blue line represents the larger step length/larger iterations.", "Since the signal frequency processed by traditional ROS method is determined by the data signal with the lowest frequency, it can be found that the frequency of the $filter\\_cloud$ is 5Hz.", "If only the 5Hz point cloud is processed by our method, and the solid green or blue line is compared with the red line on the image, it can be found that our method is generally superior to the ROS method in all aspects.", "In addition, our method can also interpolate upward.", "The shallower line ($infer\\_cloud\\_interpolation$ ) below the turquoise is the interpolation score curve.", "It can be seen that the extra points processed by interpolation are still with the same level of $filter\\_cloud$ .", "Compared with the method of ROS, our approach improves 38.5% in the Euclidean distance-based metrics and with excellent performance in NDT-based evaluation metrics and ICP-based evaluation metrics." ], [ "Spatial calibration", "We initially added a 2.0-degrees rotation disturbance on the X, Y, and Z axes to the ground truth parameters.", "It must be clarified that whether the 2.0-degrees rotation disturbance is positive or negative is stochastic.", "During the experiment, we compared the calibration error with the ground truth.", "In addition, we tested the speed of correcting disturbance.", "Figure: An illustration of instant correction of roll, pitch and yaw disturbances.", "The red, green and blue lines in (a) represent the correction of disturbance by our method after adding 2.0-degrees disturbance angle in roll, pitch and yaw directions respectively.", "(b) shows the calibration result of our method when the perturbation angle added 2.0-degrees in the roll direction.", "(c) shows the calibration result of our method when the perturbation angle added 2.0-degrees in the pitch direction.", "(d) shows the calibration result of our method when the perturbation angle added 2.0-degrees in the yaw direction.It can be seen from Fig.REF that our method can basically correct the extrinsic parameter drift phenomenon caused by some reasons within 5 frames, and the error generally shows a downward trend and can begin to converge at 1-3 frames.", "In Fig.REF (b),(c) and (d), the direction in which the deviation is added is indicated by a darker color.", "It can be found that from the perspective of the metric of angle correction, our method can also reduce the error and basically complete convergence within 1-3 frames.", "But there is one small problem that appeared on the direction of pitch angle correction, speculated that for most of us choose picture perspective the middle section of the line on the vertical direction features, and on the pitch direction, if the deviation is also a small angle along with the vertical direction deviation, such as the trunk of a tree after the middle period of line feature extracting, even moving up and down will match the tree trunk well.", "The overall impact of this oscillation problem is not very big, about $\\pm 1.0$ degree.", "And it can be quickly corrected when there are obvious transverse line features.", "The overall calibration results in more scenarios on the KITTI dataset can be seen in Fig.REF , which demonstrates that the proposed 
method is applicable to different scenarios.", "Figure: An illustration of spatial calibration of several scenarios.", "Green points represent the projected LiDAR line points." ], [ "CONCLUSION", "An approach for online integrated temporal and spatial calibration of camera and LiDAR is proposed.", "Based on the pose estimation model between adjacent moments, line features are selected to correct the error of the model.", "Neither a hardware trigger interface nor a chessboard is required by this integrated calibration method.", "Besides, it is more accurate than soft-synchronization interpolation, and upward interpolation of point clouds between adjacent moments is supported.", "The results also demonstrate that the line features of point clouds and images are robust features for matching and for calibrating the error of the pose estimation model.", "In addition, we show that the score of the current integrated calibration result can be calculated and further exploited to improve computational efficiency and accuracy.", "In future work, we plan to extend temporal and spatial online integrated calibration to multiple cameras and multiple LiDARs." ], [ "Acknowledgment", "This paper and the research behind it would not have been possible without the exceptional support of Datatang Inc. (www.datatang.ai), which kindly provides high-quality, on-demand professional services specialized in data collection and annotation, powering our model." ] ]
2207.10454
[ [ "Weakly Supervised Object Localization via Transformer with Implicit\n Spatial Calibration" ], [ "Abstract Weakly Supervised Object Localization (WSOL), which aims to localize objects by only using image-level labels, has attracted much attention because of its low annotation cost in real applications.", "Recent studies leverage the advantage of self-attention in visual Transformer for long-range dependency to re-active semantic regions, aiming to avoid partial activation in traditional class activation mapping (CAM).", "However, the long-range modeling in Transformer neglects the inherent spatial coherence of the object, and it usually diffuses the semantic-aware regions far from the object boundary, making localization results significantly larger or far smaller.", "To address such an issue, we introduce a simple yet effective Spatial Calibration Module (SCM) for accurate WSOL, incorporating semantic similarities of patch tokens and their spatial relationships into a unified diffusion model.", "Specifically, we introduce a learnable parameter to dynamically adjust the semantic correlations and spatial context intensities for effective information propagation.", "In practice, SCM is designed as an external module of Transformer, and can be removed during inference to reduce the computation cost.", "The object-sensitive localization ability is implicitly embedded into the Transformer encoder through optimization in the training phase.", "It enables the generated attention maps to capture the sharper object boundaries and filter the object-irrelevant background area.", "Extensive experimental results demonstrate the effectiveness of the proposed method, which significantly outperforms its counterpart TS-CAM on both CUB-200 and ImageNet-1K benchmarks.", "The code is available at https://github.com/164140757/SCM." 
], [ "Introduction", "Weakly supervised object localization (WSOL), which learns to localize objects by only using image-level labels, has attracted much attention recently for its low annotation cost.", "The representative study of WSOL, Class Activation Map (CAM) generates localization results using features from the last convolutional layer.", "However, the model trained for classification usually focuses on the discriminative regions, resulting insufficient activation for object localization.", "To solve such an issue, there are many CNN-based methods have been proposed in the literature, including regularization , , , , adversarial training , , , and divergent activation , , , but the CNN’s inherent limitation of local activation dampens their performance.", "Although discriminative activation is optimal for minimizing image classification loss, it suffers from the inability to capture object boundaries precisely.", "Figure: Transformer-based localization pipelines in WSOL.", "The dashed arrows indicate the module parameters update during backpropagation.", "(a) TS-CAM : the training pipeline encodes the feature maps into semantic maps (SM) through a convolution head, then applies a GAP to receive gradients from the image-label supervision.", "(b) SCM(Ours): our training pipeline incorporates external SCM to produce new semantic maps SM refined with the learned spatial and semantic correlation.", "Then it updates the Transformer backbone through backpropagation to obtain better attention maps and semantic representations for WOLS.", "(c) Inference: SCM is dropped out, and we couple attention maps (AM) and SM just like TS-CAM for final localization prediction.", "(d) Comparison of AM, SM, and final activation maps of TS-CAM and proposed SCM.Recently, visual Transformer has succeeded in computer vision due to its superior ability to capture long-range feature dependency.", "Vision Transformer splits an input image into patches with the positional embedding, then constructs a sequence of tokens as its visual representation.", "The self-attention mechanism enables Transformer to learn long-range semantic correlations, which is pivotal for object localization.", "A representative study is Token Semantic Coupled Attention Map (TS-CAM) which replaces traditional CNN with Transformer and takes full advantage of long-range dependencies to solve the partial activation problem.", "It localizes objects by semantic-awarded attention maps from patch tokens.", "However, we argue that only using a Transformer is not an optimal choice in practice.", "Firstly, Transformer attends to long-range global dependency while inevitably it cannot capture local structure well, which is critical in describing the boundaries of objects.", "In addition, Transformer splits images into discrete patches.", "Thus it may not attend to the inherent spatial coherence of objects, which makes it unable to predict the complete activation.", "As shown in Fig.REF (d), the activation map obtained from TS-CAM captures the global structure.", "Still, it concentrates in a small semantic-rich region like the bird's upper body, failing to solve partial activation completely.", "Furthermore, we observe that the fur has no abrupt change in neighboring space, and its semantic context may favor propagating the activated regions to provide a more accurate result covering the whole body.", "Inspired by this potential continuity, we propose a novel external module named Spatial Calibration Module (SCM), tailored for Transformers to produce 
activation maps with sharper boundaries.", "As shown in Fig.REF (a)-(b), instead of directly applying Global Average Pooling (GAP) on semantic maps to calculate loss as TS-CAM , we insert an external SCM to refine both semantic and attention maps and then use the calibrated features to calculate the semantic loss.", "Precisely, it implicitly calibrates attention representation of Transformer and produces more meaningful activation maps to cover functional areas based on spatial and contextual coherence.", "Our core design, a unified diffusion model, is introduced to incorporate semantic similarities of patch tokens and their local spatial relations during training.", "While in the inference phase, SCM can be dropped out to maintain the model's simplicity, as shown in Fig.REF (c).", "Then, we use the calibrated Transformer backbone to predict the localization results by coupling SM and AM.", "The main contributions of this paper are as follows: We propose a novel spatial calibration module (SCM) as an external Transformer module to solve the partial activation problem in WSOL by leveraging the spatial correlation.", "Specifically, SCM is designed to optimize Transformers implicitly and will be dropped out during inference.", "We propose a novel information propagation methodology that provides a flexible way to integrate spatial and semantic relationships to enlarge the semantic-rich regions and cover objects completely.", "In practice, we introduce learnable parameters to adjust the diffusion range and filter the noise dynamically for flexible control and better adaptability.", "Extensive experiments demonstrate that the proposed framework outperforms its counterparts in the two challenging WSOL benchmarks.", "The weakly supervised object localization aims to localize objects by solely image-level labels.", "The seminar work CAM demonstrates the effectiveness of localizing objects using feature maps from CNNs trained initially for classification.", "Despite its simplicity, CAM-based methods suffer from limited discriminative regions, which cannot cover objects completely.", "The field has focused on how to expand the activation with various attempts.", "Firstly, the dropout strategy is proposed to guide the model to attend to more significant regions.", "For instance, HaS hides patches in training images randomly to force the network to seek other relevant parts; CutMix adopts the same way to drop out patches but further augment the area of the patches with ground-truth labels to reduce information loss.", "Similarly, ADL adopts an importance map to maintain the informative regions' classification power.", "Instead of dropping out patches, people leverage the pixels correlations to fulfill objects as they often share similar patterns.", "SPG learns to sense more areas with similar distribution and expand the attention scope.", "I$^2$ C exploits inter-and-cross images' pixel-level consistency to improve the quality of localization maps.", "Furthermore, the predicted masks can be enhanced to become complete.", "GC-Net highlights tight geometric shapes to fit the masks.", "SPOL fuses shallow features and deep features from CNN that filter the background noise and generates sharp boundaries.", "Instead of applying only CNN as the backbone for WSOL, Transformer can be another candidate to alleviate the problem of partial activation as it captures long-range feature dependency.", "A recent study TS-CAM utilizes attention maps from patches coupled with reallocated semantics to predict 
localization maps, surpassing most of its CNN counterparts in WSOL.", "The recent work LCTR adopts a similar Transformer-based framework while inserting a tailored module into each Transformer block to strengthen the global features.", "However, we observe that using the Transformer alone cannot solve partial activation completely, as it fails to capture the local structure and ignores spatial coherence.", "What is more, it is cumbersome to insert a module into each Transformer block as LCTR does.", "To address this issue, we propose a simple external module, termed the spatial calibration module (SCM), that calibrates the Transformer by incorporating spatial and semantic relations to provide more complete feature maps and erase background noise." ], [ "Graph Diffusion.", "Pixels in natural images generally exhibit strong correlations, and constructing graph structures to capture such relationships has attracted much attention.", "In semantic segmentation, several studies build graphs on images to obtain contextual information and long-range dependencies in order to model the label distribution jointly.", "In image processing, Gene et al. analyze graphs constructed from 2D images in the spectral domain and succeed in many traditional processing areas, including image compression, restoration, filtering, and segmentation.", "The graph structure enables many classic graph algorithms and leads to new insights and understanding of image properties.", "Similarly, in WSOL, the limited activation regions share semantic coherence with neighboring locations, making it possible to expand the area by information flow to cover objects precisely.", "In our study, we revise the classic Graph Diffusion Kernel (GDK) algorithm to infer complete pseudo masks based on partial activation results.", "GDK was initially adopted in graph analysis for social networks, search engines, and biology, e.g., to infer pathway membership in genetic interaction networks.", "GDK's strategy of exploring graphs via random walks inspires us to modify it to incorporate information from the image context, enabling dynamic adjustment by semantic similarity." ], [ "Methodology", "This section describes the Spatial Calibration Module (SCM), which is built by stacking multiple activation diffusion blocks (ADBs).", "Each ADB consists of several submodules, including semantic similarity estimation, activation diffusion, diffuse matrix approximation, and dynamic filtering.", "At the end of the section, we show how to predict the final localization results using the proposed framework during inference.", "Figure: The overall framework consists of two parts.", "(Left) The Vision Transformer provides the original attention map $F_0$ and semantic map $S_0$; (Right) they are dynamically adjusted by stacked activation diffusion blocks (ADBs).", "The detail of the layer design is shown at the bottom-right corner (the residual connections for $F_l$ and $S_l$ are omitted for simplicity).", "Once the model is optimized, $F_0$ and $S_0$ are directly element-wise multiplied for the final prediction."
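To make the graph-diffusion background above concrete, the following is a toy numpy sketch (not code from the paper): activation placed on a few seed patches of a 4-connected patch grid spreads to neighboring patches through the graph Laplacian, which is the behavior that GDK-style propagation exploits and that SCM later reweights with semantic similarity.

```python
# Toy sketch: heat diffusion on a 4-connected patch grid, the behaviour that
# GDK-style propagation builds on (grid size and seeds are arbitrary).
import numpy as np

H, W = 8, 8                      # toy patch-grid resolution
N = H * W

# 4-neighbour adjacency matrix of the grid graph
A = np.zeros((N, N))
for i in range(H):
    for j in range(W):
        n = i * W + j
        if j + 1 < W:            # right neighbour
            A[n, n + 1] = A[n + 1, n] = 1
        if i + 1 < H:            # bottom neighbour
            A[n, n + W] = A[n + W, n] = 1

D = np.diag(A.sum(axis=1))       # degree matrix
L = D - A                        # combinatorial graph Laplacian

# start with activation concentrated on a few "discriminative" patches
u = np.zeros(N)
u[[27, 28, 35]] = 1.0

# explicit heat-diffusion steps: activation leaks to neighbouring patches
eps, steps = 0.1, 50
for _ in range(steps):
    u = u - eps * (L @ u)

print(u.reshape(H, W).round(3))  # activation has spread over the grid
```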
], [ "Overall Architecture", "In WSOL, the attention maps from models trained on image-level labels mainly concentrate on discriminative parts, which fail to cover the whole objects.", "Our proposed SCM aims to diffuse activation at small areas outwards to alleviate the partial activation problem in WSOL.", "In a broad view, the whole framework is supervised by image-level labels during training.", "As shown in Fig.REF (b), Transformer learns to calibrate both attention maps and semantic maps through the semantic loss from SCM implicitly.", "To infer the prediction, as described in Fig.REF (c), we drop SCM and use the element-wise product of revised maps to localize objects.", "As shown in Fig.REF , an input image is split into $N=H\\times W$ patches with each represented as a token, where $(H, W)$ is the patch resolution.", "After grouping these patch tokens and CLS token into a sequence, we send it into $I$ cascaded Transformer blocks for further representation learning.", "Similar as TS-CAM , to build the initial attention map $F^{0} \\in \\mathbb {R}^{H \\times W}$ , the self-attention matrix $W_i \\in \\mathbb {R}^{(N+1)\\times (N+1)}$ at $i^{th}$ layer is averaged over the multiple self-attention heads.", "Denote $M_i \\in \\mathbb {R}^{H\\times W}$ as attention weights that corresponds to the class token in $W_i$ , we average $\\lbrace M_i\\rbrace _{i=1}^I$ across all intermediate layers to get the attention map $F^{0}$ of Transformer.", "${F^0 = \\frac{1}{I} \\sum _{i=1}^I {M_i}}$ To obtain the semantic map $S^{0} \\in \\mathbb {R}^{H \\times W \\times C}$ , where $C$ denotes the number of categories, we extract all spatial tokens $ \\lbrace t_{n} \\rbrace _{n=1}^N$ from the last Transformer layer and then encode them by a convolution head, $S^0 = \\text{reshape} (t_{1}... t_{N}) * k$ where $*$ is the convolution operation, $k$ is a $3\\times 3$ convolution kernel, and $\\text{reshape}(\\cdot )$ is an operation that converts a sequence of tokens into 2D feature maps.", "Then we send both $F^{0}$ and $S^{0}$ into SCM to refine them.", "As illustrated in Fig.REF , for the $l^{th}$ ADB, denote ${S}^{l}$ and $F^{l}$ as the inputs, and $S^{l+1}$ and $F^{l+1}$ as the outputs.", "Firstly, to guide the propagation, we estimate embedding similarity $E$ between pairs of patches in ${S}^{l}$ .", "To enlarge activation $F^{l}$ , we apply $E$ to diffuse $F^{l}$ towards the equilibrium status indicated by the inverse of Laplacian matrix $L^{l}$ .", "In practice, we re-activate $F^{l}$ by approximating $(L^{l})^{-1}$ with Newton Shulz Iteration.", "Afterward, a dynamic filtering module is applied to remove over-diffused parts.", "Finally, the refined $F^{l}$ updates $S^{l}$ via an element-wise multiplication.", "In general, by stacking multiple ADBs, the intensity of both maps is dynamically adjusted to balance semantic and spatial features.", "In the training phase, we apply GAP to $S^{L}$ to get classification logits and calculate semantic loss with the ground truth.", "During inference, SCM will be dropped out, and the element-wise product of newly extracted ${F}^{0}$ and ${S}^{0}$ is used to obtain the localization result." ], [ "Activation Diffusion Block", "In this subsection, we dive into Activation Diffusion Block (ADB).", "Under the assumption of continuity of visual content, we calculate the semantic and spatial relationships of patches in $S^{L}$ , then diffuse it outwards dynamically to alleviate the partial activation problem in WSOL." 
], [ "Semantic Similarity Estimation.", "Within the $l^{th}$ activation diffusion block, $l \\in \\lbrace 1, 2, ..., L\\rbrace $ , we need semantic and spatial relationships between any pair of patches for propagation.", "To achieve it, we construct an undirected graph with each $v_i^l$ connected with its first-order neighbors.", "Please refer to Fig.5 at the Appendix for details.", "Given token representation of $S^l$ , we build an $N$ -node graph $G^l$ .", "Denote the $i^{th}$ node as $v_i^l\\in \\mathbb {R}^{Q}$ .", "Then, we can infer the semantic similarity $E^l$ , where the specific element $E^l_{i, j}$ is defined as the cosine distance between $v_i^l$ and $v_j^l$ : $E^l_{i, j} = \\frac{{v}_i^l({v}_j^l)^{\\intercal }}{|| {v_i}^l || ||{v_j}^l||}$ where $v_i^l$ and $v_j^l$ are flattened vectors, and the larger value $E^l_{i, j}$ denotes the higher similarity shared by $v_i^l$ and $v_j^l$ .", "Figure: Illustration of activation diffusion pipeline with a hand-crafted example.", "(a) Input image.", "(b) Original Transformer's attention map.", "(c) Diffused attention map.", "(d) Filtered attention map.", "As the spatial coherence is embedded into the attention map via our SCM, the obtained attention map by using proposed method captures a complete object boundary with less noise." ], [ "Activation Diffusion.", "To present spatial relationship, we define a binary adjacency matrix $A^l \\in \\mathbb {R}^{N \\times N}$ , whose element $A^l_{i, j}$ indicates whether $v_i^l$ and $v_j^l$ are connected.", "We further introduce a diagonal degree matrix $D^l \\in \\mathbb {R}^{N \\times N}$ , where $D^l_{i, i}$ corresponds to the summation of all the degrees related to $v_i^l$ .", "Then, we obtain Laplacian matrix $\\hat{L^l} = D^l - A^l$ , with each element $(L^l)^{-1}_{i, j}$ describes the correlation of $v_i^l$ and $v_j^l$ at the equilibrium status.", "Recent studies , , on graph representation inspire us that the inverse of the Laplacian matrix leads to the global diffusion, which allows each unit to communicate with the rest.", "To enhance the diffusion with semantic relationships, we incorporate $\\hat{L^l}$ with node contextual information $E^l$ .", "Intuitively, we take advantage of the spatial connectivity and semantic coherence to split the tokens into the semantic-awarded foreground objects and the background environment.", "In practice, we use a learnable parameter $\\lambda $ to dynamically adjust the semantic intensity, which makes the diffusion process more flexible and easier to fit various situations.", "The Laplacian matrix $L^l$ with semantics is defined as, ${L^l} = (D^l - A^l) \\odot (\\lambda E^l-1)$ where $\\odot $ represents element-wise multiplication, and 1 denotes the information flow exchange with neighboring vertexes.", "$(D^l - A^l)$ denotes the spatial connectivity, ($\\lambda E^l-1$ ) represents the semantic coherence, and $\\odot $ incorporates them for diffusion.", "Please refer to Appendix for full details of Eqn.", "(REF ).", "After the global propagation, the reallocated activation score map can be calculated as follows, $F^{l+1} = ({L^l})^{-1} \\Gamma (F^{l})$ where $F^{l+1}$ is the output re-allocated attention map and $\\Gamma $ is a flattening operation that reshapes $F^{l}$ into a patch sequence." 
], [ "Diffuse Matrix Approximation.", "In practice, directly using $({L^l})^{-1}$ may be impractical since ${L^l}$ is not guaranteed to be positive-definite and its inverse may not exist.", "Meanwhile, as observed in our initial experiments, directly applying the inverse produced unwanted artifacts.", "To deal with the problems, we exploit Newton Schulz Iteration , to solve $({L^l})^{-1}$ to approximate the global diffusion result, $\\begin{split}X_0 & = \\alpha (L^l)^{\\intercal }\\\\X_{p+1} & = X_{p}(2I-L^lX_p), \\end{split}$ where $X_0$ is initialized as $(L^l)^{\\intercal }$ multiplied by a small constant value $\\alpha $ .", "The subscript $p$ denotes the number of iterations, and $I$ is the identity matrix.", "As discussed above, we only need $({L^l})^{-1}$ to thrust propagation instead of obtaining the equilibrium result, so we just iterate the Eqn.", "(REF ) for $p$ times then take the approximated $({L^l})^{-1}$ back to Eqn.", "(REF ).", "Then we obtain the diffused activation of $F^{l}$ , which is visualized in Fig.REF (c).", "We can see that diffusion has redistributed the averaged attention map with more boundary details, such as the ear and the mouth, which are beneficial for final object localization." ], [ "Dynamic Filtering.", "As depicted in Fig.REF (c), we found that the reallocated score map $F^{l+1}$ provides a sharper boundary, but there is a side-effect that it diffuses the activation out of object boundaries, which may make the unnecessary background context back into $S^{l+1}$ or result in over-estimation of bounding box.", "Therefore, we propose a soft-threshold filter, depicted as Eqn.", "(REF ), to increase density contrast between the objects and the surrounding background to depress the outside noise.", "$\\mathcal {T}(F^{l},\\beta ) = \\beta \\cdot \\text{tanhShrink}(\\frac{F^{l}}{\\beta })$ where $\\beta \\in (0, 1)$ is a threshold parameter for more flexible control.", "$\\mathcal {T}$ denotes a soft-threshold function, and $\\text{tanhShrink}(x) = x - \\text{tanh}(x)$ is used to depress activation under $\\beta $ .", "Then $S^{l+1}=S^{l}\\odot \\mathcal {T}(F^{l},\\beta )$ .", "As shown in Fig.REF (d), the filter operation removes noise and provides sharper contrast." ], [ "Prediction", "After optimizing the model through backpropagation, the calibrated Transformer can generate the object-boundary-aware activation maps.", "Thus, we drop SCM during inference to obtain the final bounding box.", "Specifically, the bounding box prediction is generated by coupling $S^0$ and $F^0$ as depicted in Fig.REF .", "As $S^0\\in \\mathbb {R}^{H \\times W \\times C}$ is a $C$ -channel 2D semantic map, each channel represents an activation map for a specific class $c$ .", "To obtain the prediction from score maps, we carry out the following procedures: (1) Pass $S^0$ through a GAP to calculate classification scores.", "(2) Select $i^{th}$ map $S_i^0\\in \\mathbb {R}^{H \\times W}$ corresponding to the highest classification score from $S^0$ .", "(3) Calculate the element-wise product $F^0 \\odot S_i^0$ .", "The coupled result is then up-sampled to the same size as the input for bounding box prediction.", "Figure: Visual comparison of TS-CAM and SCM on 4 samples from CUB-200-2011 and ISVRC2012.Here we use three rows for each method to show activation maps, binary map predictions, and bounding box predictions, respectively.", "The threshold value γ\\gamma is set to be the optimal values proposed in TS-CAM and SCM." 
], [ "Datasets.", "We evaluate SCM on two commonly used benchmarks, CUB-200-2011 and ILSVRC2012 .", "CUB-200-2011 is an image dataset with photos of 200 bird species, containing a training set of 5,994 images and a test set of 5,794 images.", "ILSVRC contains about 1.2 million images with 1,000 categories for training and 50,000 images for validation.", "Our SCM is trained on the training set and evaluated on the validation set from which we only use the bounding box annotations for evaluation." ], [ "Evaluation Metrics.", "We evaluate the performance by the commonly used metric GT-Known and save models with the best performance.", "For GT-Known, a bounding box prediction is positive if its Intersection-over-Union (IoU) $\\delta $ with at least one of the ground truth boxes is over 50% .", "Furthermore, for a fair comparison with previous works, we apply the commonly reported Top1/5 Localization Accuracy(Loc Acc) and Classification Accuracy(Cls Acc).", "Compared with GT-Known, Loc Acc requires the correct classification result besides the condition of GT-Known.", "Please refer to the appendix for more strict measures like MaxboxAccV1 and MaxboxAccV2 as recommended by to evaluate localization performance only." ], [ "Implementation details.", "The Transformer module is built upon the Deit pretrained on ILSVRC.", "In detail, we initialize $\\lambda $ , $\\beta $ in ABDs to constant values (1 and 0.5 respectively), and choose $p=4$ and $\\alpha =0.002$ in Eqn.", "(REF ).", "For input images, each sample is re-scaled to a size of 256$\\times $ 256, then randomly cropped to 224$\\times $ 224.", "The MLP head in the pretrained Transformer is replaced by a 2D convolution head with kernel size of 3, stride of 1, and padding of 1 to encode feature maps into semantic maps $S^0$ (200 output units for CUB-200-2011, and 1000 for ILSVRC).", "The new head is initialized with He's approach .", "During training, we use AdamW with $\\epsilon =1e^{-8}$ , $\\beta _{1}=0.9$ , $\\beta _{2}=0.99$ and weight decay of 5e-4.", "On CUB-200-2011, the training lasts 30 epochs with an initial learning rate of 5e-5 and batch size of 256.", "On ILSVRC, the training procedure carries out 20 epochs with a learning rate of 1e-6 and batch size of 512.", "We measure model performance on the validation set after every epoch.", "At last, we save the parameters with the best GT-Known performance on the validation set." 
], [ "Performance", "To demonstrate the effectiveness of the proposed SCM, we compare it against previous methods on CUB-200-2011 and ILSVRC2012 in Table.REF .", "From GT-Known in CUB, SCM outperforms baseline method TS-CAM with a large margin, yielding GT-known 96.6$\\%$ with a performance gain of 8.9$\\%$ .", "Compared with other CNN counterparts, SCM is competitive and outperforms the state-of-the-art SPOL using only about 24$\\%$ parameters.", "As for ILSVRC, SCM surpasses TS-CAM by 1.2$\\%$ on GT-Known and 5.1$\\%$ on Top-1 Loc Acc and is competitive against SPOL built on the multi-stage CNN models.", "Compared with SPOL, SCM has the following advantages, (1)Simple: SPOL produces semantic maps and attention maps on two different modules separately, while SCM is only finetuned on a single backbone.", "(2) Light-weighted: SPOL is built on a multi-stage model with huge parameters, while SCM is built on a small Transformer with only about 24$\\%$ parameters of the former.", "(3) Convenient: SPOL has to infer the prediction with the complex network design, but SCM is dropped out during the inference stage.", "Furthermore, compared with the recent Transformer-based works like LCTR , with the same backbone Deit-S, we surpass it by a large margin $4.2\\%$ in terms of GT-Known in CUB and obtain comparable performance on Loc Acc for both CUB and ISVRC.", "We achieve this without additional parameters during inference, while other recent proposed methods add carefully designed modules or processes to improve the performance.", "The models are saved with the best GT-Known performance and achieve satisfactory Loc Acc and Cls Acc.", "Please refer to Sec.REF for more details.", "The visual comparison of SCM and TS-CAM is shown in Fig.REF .", "We observe that TS-CAM preserves the global structure but still suffers from the partial activation problem that degrades its localization ability.", "Specifically, it cannot predict a complete component from the activation map.", "We notice that minor and sporadic artifacts appear on the binary threshold maps, and most of them include half parts of the objects.", "After adding SCM as a simple external adaptor, the masks become integral and accurate, so we believe that SCM is necessary for Transformers to find their niche in WSOL.", "Table: Illustration of diffusion for v i v_i and its first-order neighbors, where (H,WH, W) is the reshaped 2D graph resolution, where HH denotes the number of nodes per column, and WW denotes the number of nodes per row.", "Each circle represents a patch in this graph, and we denote the patch sequence indexes on top of them.", "The arrows represent flow change with the horizontal direction that denotes exchange with neighbor vertexes, and the vertical represents input and output for GG.", "We further specify types of exchange by different colors, where (Green) F i u(t)F_iu(t) is the initial input rate; (Blue) The communication rate with neighbors; (Red) The rate of semantic flow which is related to the embedding similarity and the amount of flow." ] ]
2207.10447
[ [ "Integrating a fiber cavity into a wheel trap for strong ion-cavity\n coupling" ], [ "Abstract We present an ion trap with an integrated fiber cavity, designed for strong coupling at the level of single ions and photons.", "The cavity is aligned to the axis of a miniature linear Paul trap, enabling simultaneous coupling of multiple ions to the cavity field.", "We simulate how charges on the fiber mirrors affect the trap potential, and we test these predictions with an ion trapped in the cavity.", "Furthermore, we measure micromotion and heating rates in the setup." ], [ "Introduction", "In a quantum network, a coherent interface consists of a unitary interaction between light and matter that is much stronger than decay channels to the environment [1].", "Recent experiments with a single trapped ion coupled to a fiber-based optical resonator have demonstrated a coherent coupling rate $g_0$ exceeding the atomic spontaneous-emission rate $\\gamma $  [2], [3], [4], [5].", "Due to this coherent ion–photon interaction, fiber-based cavities integrated with ion traps offer a promising platform for a quantum network node.", "Additional strengths of the platform include both direct emission into optical fibers, for transmission in long-distance networks, and the small fiber footprint, for miniaturizing quantum nodes and scaling up their complexity.", "Many key applications of quantum networks, including distributed quantum computation, require the network nodes to host more than a single qubit [6], [7].", "While a single trapped ion has been coupled to a fiber cavity [2], [8], coupling of multiple ions has not yet been achieved.", "The ion-trap geometries used in Refs.", "[2], [8] give rise to residual radiofrequency fields that vanish only at a single point within the trapping region and introduce excess micromotion away from this point.", "It is not possible to compensate for this micromotion [9], which compromises both the ion–cavity coupling [8] and the fidelity of gate operations between multiple ions.", "Here, we present an ion–cavity system designed for strong coupling of multiple ions to a fiber cavity.", "The fiber mirrors are integrated along the axis of a linear Paul trap known as a wheel trap.", "Ions can be positioned along this axis without introducing excess micromotion.", "The paper is structured as follows: Section  describes the ion-cavity system.", "The influence of surface charges and the influence of the fiber-mirror positions on the ion are studied with simulations in Sec. .", "We then turn to experiments in Sec.", ", first with a test setup without integrated fibers, and subsequently with the integrated ion–cavity system.", "We present measurements of micromotion in the test setup and heating rates of both the test setup and the ion–cavity system.", "As a final test, we show that the effects of surface charges, commonly found on dielectric surfaces [10], [11], can be counteracted by means of the trap electrodes." 
], [ "Experimental setup", "We first focus on the details of the wheel trap, a linear Paul trap developed for quantum metrology experiments [12], [13], [14].", "The trap consists of a diamond wafer, ${300}{}$ thick, on which gold electrodes are sputtered, ${5}{}$ thick, using titanium as an adhesive layer.", "The thinness of this wafer makes the wheel trap uniquely suited for integration with an optical microcavity along the trap axis.", "A photo of our adaptation of the wheel trap is shown in Fig.", "REF , and a schematic of the trap center is shown in Fig.", "REF .", "We define the x-y plane as the plane of the wafer and the z axis as the trap axis.", "The ion–electrode distance is ${250}{}$ .", "It is important to match the capacitances of the four RF electrodes.", "In the trap-design phase, we calculate these capacitances using finite-element-analysis software and adjust the electrode geometry accordingly.", "Compensation electrodes are used to minimize micromotion in the x-y plane [9], [15].", "In contrast to earlier wheel-trap designs [12], [13], [14], the DC electrodes (also known as endcaps) that confine ions along the z axis are hollow, allowing us to integrate fiber mirrors within them.", "The electrodes consist of stainless-steel tubes with an inner diameter of ${250(50)}{}$ and an outer diameter of ${500(20)}{}$ .", "Figure: a) Image of a wheel trap.", "b) Schematic of the ion-cavity system.The two fiber mirrors inside the opposing endcaps form a fiber Fabry-Pérot cavity (FFPC) [16].", "To prevent charging of the fibers due to scattered laser light [10], [11], we recess the fibers by ${10(2)}{}$ into the tubes.", "We use one multimode (MM) fiber IVG-Fiber Cu200/220 with a diameter of ${220(3)}{}$ and one photonic-crystal (PC) fiber NKT Photonics PCF - LMA 20 with a diameter of ${230(5)}{}$ .", "A CO$_2$ laser-ablation process generates concave, near-spherical profiles on each fiber facet [16], [19].", "The radii of curvature are ${318(5)}{}$ for the MM profile and ${312(5)}{}$ for the PC profile.", "The mirrors consist of alternating layers of SiO$_2$ and Ta$_2$ O$_5$ , applied to the fiber facets via ion-beam sputtering Advanced Thin Films, Boulder, CO 80301, USA.", "At a wavelength of ${854}{}$ , the MM mirror has a transmission of ${2(1)}{}$ , whereas the transmission of the PC mirror is ${16(1)}{}$ .", "The cavity finesse is ${9.2(2)e4}{}$ for a length of ${507(8)}{}$ , corresponding to a linewidth of $\\kappa = 2\\pi \\cdot {1.61(3)}{}$ (half-width at half maximum).", "We calculate an ion-cavity coupling strength of $g_0 = 2\\pi \\cdot {20.3(3)}{}$ for the $|3^2\\mathrm {D}_{5/2}\\rangle $ to $|4^2\\mathrm {P}_{3/2}\\rangle $ transition of a $^{40}$ Ca$^+$ ion [21].", "The largest Clebsch-Gordon coefficient for transitions between Zeeman states of these manifolds is $\\alpha = \\sqrt{2/3}$ , so that the largest possible coupling strength is $g = \\alpha g_0 = 2\\pi \\cdot {16.6(3)}{}$ .", "The two relevant spontaneous emission channels are from $|4^2\\mathrm {P}_{3/2}\\rangle $ to $|3^2\\mathrm {D}_{5/2}\\rangle $ , with a decay rate of $\\gamma _\\textrm {PD} = 2\\pi \\cdot {0.67}{}$ , and from $|4^2\\mathrm {P}_{3/2}\\rangle $ to $|4^2\\mathrm {S}_{1/2}\\rangle $ , with a decay rate of $\\gamma _\\textrm {PS} = 2\\pi \\cdot {10.74}{}$ ; both rates are half widths.", "Based on the values of $g$ , $\\kappa $ , $\\gamma _\\textrm {PD}$ , and $\\gamma _\\textrm {PS}$ , we expect our system to operate in the strong coupling regime [21].", "The ion-cavity system is located 
inside an ultra-high vacuum chamber at a pressure below ${1E-10}{\\bar{}}$ , the lowest pressure value that can be determined from the ion-pump current.", "The DC electrodes with integrated fiber mirrors are glued on quartz v-grooves Epoxy EPO-TEK 353ND, each of which is glued on a shear-mode piezo Meggit PZ27, as shown in Fig.", "REF .", "The length of the cavity is stabilized by applying a Pound–Drever–Hall feedback signal to one of the piezos [24].", "Each piezo is glued on a stainless-steel tilt-adjuster, with which the angle of each fiber mirror is aligned during the construction of the setup.", "Each tilt-adjuster is mounted on a 3D nanopositioning assembly The nanopositioning assembly consists of three SLC-1720-W-S -UHVT-NM modules., which allows positioning of each fiber mirror along three axes over a range of ${12}{}$ with a resolution of ${1}{}$ for relative movements.", "In practice, we translate the fiber mirrors over a range of at most ${1}{}$ along each axis.", "Figure: Rendered image of the ion-cavity system.", "a) Side view: To the left and right of the wheel trap, we see the DC electrodes with fiber mirrors, the quartz v-grooves, the shear-mode piezos, and the tilt-adjusters, all mounted on two 3D nanopositioning assemblies.", "b) Top view: An ablation target, along the line of sight of the wheel trap, is indicated in green.", "In this image, one of two inverted viewports is shown.For loading ions, we use single laser pulses with pulse energies around ${150}{}$ at a wavelength of ${515}{}$ to ablate neutral $^{40}$ Ca atoms from a target, which is mounted ${2}{}$ from the trap along the line of sight (Fig.", "REF ) [26].", "Two objectives, each with a numerical aperture of $0.18$ , are mounted inside inverted viewports.", "The objectives collect the ion fluorescence, which is guided to an electron-multiplying CCD camera and to a photomultiplier tube." ], [ "Simulating the ion-cavity system", "We now consider two means by which the harmonic potential of the Paul trap may be distorted.", "First, if surface charges are present on the fiber mirrors, their electric fields will shift the potential [10], [11].", "Second, if the nanopositioning assemblies are used to displace the DC electrodes and integrated fiber mirrors, the potential minimum may be displaced, or the confinement strength may change [27], [28].", "In this section, ion trap simulations are used to study both surface charges and electrode displacements." ], [ "Ion-trap potentials", "The trapping potential at a position $\\mathbf {r} = (x,y,z)$ has three components: the pseudopotential $\\phi _\\mathrm {RF}$ generated by the RF electrodes, the potential $\\phi _\\mathrm {DC}$ generated by the DC electrodes, and the potential $\\phi _\\sigma $ due to any charges on the fiber facets [29]: $\\phi _\\mathrm {trap}(\\mathbf {r}) = \\phi _\\mathrm {RF}(\\mathbf {r}) + \\phi _\\mathrm {DC}(\\mathbf {r}) + \\phi _\\sigma (\\mathbf {r}) \\mathrm {.", "}$ To simulate the trapping potential, we follow the steps outlined in Ref. 
[11].", "We start by importing the geometry of the ion-cavity system as depicted in Fig.", "REF into finite-element analysis (FEA) software COMSOL Multiphysics 5.6.", "We define our coordinate system to match the system indicated in Fig.", "REF , with the origin in the center of both the ion trap and the FFPC.", "The fiber-cavity length is set to ${500}{}$ in this geometry.", "Unless otherwise mentioned, the trap electrodes are grounded.", "We now determine the contributions $\\phi _\\mathrm {RF}$ , $\\phi _\\mathrm {DC}$ and $\\phi _\\sigma $ separately for a $^{40}$ Ca$^+$ ion (mass $m = 40\\;\\mathrm {u}$ , charge $e$ ), starting with $\\phi _\\mathrm {RF}$ .", "In experiments, $\\phi _\\mathrm {RF}$ is generated by driving the wheel trap in one of two possible configurations.", "In the first configuration (RF-GND), we ground one pair of opposing RF electrodes and apply a driving signal with amplitude $V_\\mathrm {RF}$ and frequency $\\Omega _{\\mathrm {rf}}$ to the other pair.", "In the second configuration (symmetric), we drive both RF electrode pairs such that the phase of the RF signal on one electrode pair is shifted by ${180}{}$ relative to the signal on the other pair.", "To simulate the RF-GND configuration, we set a voltage $V_0 = {1}{}$ on one pair of RF electrodes.", "To simulate the symmetric configuration, we set a voltage $V_0 = {0.5}{}$ on one pair of RF electrodes and a voltage $-V_0$ on the other pair.", "For both configurations, the FEA software simulates the electric field $\\mathbf {E}(\\mathbf {r})$ , with which we calculate $\\phi _\\mathrm {RF}$ from the expression [31] $\\phi _\\mathrm {RF}(\\mathbf {r})=\\frac{V_\\mathrm {RF}}{V_0}\\frac{e^2|\\mathbf {E}(\\mathbf {r})|^2}{4m\\Omega _{\\mathrm {rf}}^2} \\mathrm {.", "}$ Next, we set a voltage $V_\\mathrm {DC}$ on the DC electrodes and simulate $\\phi _\\mathrm {DC}(\\mathbf {r})$ .", "Finally, to simulate $\\phi _{\\sigma }$ , we set a homogeneous surface-charge density $\\sigma $ on the facets of the fibers and assume the charges to be static.", "Since the fiber mirrors are located inside the DC electrodes, we do not consider charges on the sides of the fiber mirrors, in contrast to Ref. 
[11].", "Following Eq.", "REF , we sum the three potentials to determine $\\phi _\\mathrm {trap}$ over the trapping region.", "In Fig.", "REF , $\\phi _\\mathrm {trap}$ is plotted as a function of position along all three axes.", "Here, the symmetric drive configuration is used with $V_\\mathrm {RF}={160}{}$ , $V_\\mathrm {DC} = {1}{}$ , and no charges present on the fiber mirrors.", "We fit a harmonic oscillator potential $\\phi (\\mathrm {r})=\\frac{1}{2}m\\omega _r(r-r_0)^2+\\phi _0 $ with offset $\\phi _0$ to $\\phi _\\mathrm {trap}$ in order to extract the ion position $r_0 \\in \\lbrace x_0,y_0,z_0\\rbrace $ and the trap frequency $\\omega _{r}$ along an axis $r\\in \\lbrace x,y,z\\rbrace $ .", "This fit yields $x_0={0(1)}{}$ , $y_0 = {1(1)}{}$ , $z_0={0(1)}{}$ , $\\omega _x = 2\\pi \\cdot {{3.134(2)}{}}$ , $\\omega _y = 2\\pi \\cdot {3.174(2)}{}$ and $\\omega _z =2\\pi \\cdot {1.041(1)}{}$ , where the uncertainties on the positions are set by the mesh resolution in the simulation.", "Figure: a) Simulated potential φ trap \\phi _\\mathrm {trap} for the symmetric drive configuration with V RF =160V_\\mathrm {RF} = {160}{} and V DC =1V_{\\textrm {DC}} = {1}{}, plotted for all three axes.", "Fits of the data with Eq.", "are also plotted.", "b) Simulated RF potential φ RF \\phi _\\mathrm {RF} for the symmetric and the RF-GND drive configurations, plotted for the z axis.In Fig.", "REF , we compare $\\phi _\\mathrm {RF}$ along the z axis for the two drive configurations.", "Over a range of ${200}{}$ around the trap center, the potential of the symmetric drive is constant at a value of approximately ${1.22(8)}{}$ .", "The potential of the RF-GND drive is harmonic with a minimum at $z={1(1)}{}$ and reaches a maximum value of ${89.0(1)}{}$ .", "In the symmetric case, the ${180}{}$ phase shift of the two RF signals causes the electric field to vanish along the z axis [32], while the asymmetric RF-GND configuration generates a vanishing field at only one point [9], meaning that ions displaced from this minimum will be subject to micromotion.", "This statement holds true for all linear Paul traps, but for typical centimeter-scale trap lengths, the curvature of $\\phi _\\mathrm {RF}$ in the RF-GND configuration is negligible.", "However, for the ${300}{}$ -long wheel trap, the curvature of $\\phi _\\mathrm {RF}$ becomes significant, as we will see in Sec.", "REF ." 
], [ "Influence of surface charges", "We now include surface charges on the fiber facets in our simulations.", "The trapping potential is simulated for surface-charge densities $\\sigma $ ranging from $0.1$ to ${50}{^2}$ , with $V_\\mathrm {RF}={160}{}$ and $V_\\mathrm {DC} = {0}{}$ .", "In each simulation, the same value of $\\sigma $ is used for both fiber facets.", "In this first simulation of charge densities, values less than or equal to zero are not considered, since they lead to unstable trapping without a voltage on the DC electrodes.", "As in Sec.", "REF , we determine the trap frequencies, with uncertainties given by the standard deviations of the fit parameters.", "Figure: a) Motional frequencies ω x \\omega _x,ω y \\omega _y, and ω z \\omega _z plotted for surface-charge densities up to 50 2 {50}{^2} on the fiber mirrors.", "Error bars correspond to the standard deviation of the fit parameters and are too small to be visible.", "b) Voltage V DC V_\\mathrm {DC} corresponding to an axial trap frequency ω z =2π·1\\omega _z = 2\\pi \\cdot {1}{} as a function of the surface-charge density.Figure REF shows the trap frequencies along all three axes as a function of the surface-charge density.", "We observe that $\\omega _z$ increases with increasing charge density while $\\omega _x$ and $\\omega _y$ decrease.", "This is the same effect that one observes when increasing $V_\\textrm {DC}$ for a linear Paul trap, since the surface charges and the applied voltage play the same role.", "Here we highlight another advantage of using the wheel trap for an integrated fiber cavity: charges on the fiber mirrors are equivalent to DC voltages on the endcaps.", "In practice, it is difficult to add or remove surface charges in a controlled fashion [10], [11], but $V_\\textrm {DC}$ provides a knob with which we can achieve the equivalent result.", "As a proof of principle for this approach, we determine the value of $V_\\mathrm {DC}$ that results in an axial trap frequency of $\\omega _z =2\\pi \\cdot {1}{}$ for surface charge densities between $-10$ and ${50}{}$ .", "These values are plotted in Fig.", "REF .", "This range of densities corresponds to the range from experiments reported in Ref. [11].", "The compensation voltage decreases linearly with increasing surface charge density.", "Note that the approach works for any value of $\\omega _z$ ; $\\omega _z =2\\pi \\cdot {1}{}$ was simply chosen as a round number." 
], [ "Influence of the cavity position", "As a final consideration in our simulations, we vary the positions of the DC electrodes with integrated fiber mirrors in order to understand the effect on the trapping potential.", "In the experimental setup, it is necessary to adjust the relative positions of the electrodes so that the mirrors form a cavity.", "The laser-ablation process results in mirror profiles that are centered with respect to the fiber facets with an uncertainty of ${0.9}{}$  [19].", "Gluing the fibers into the DC electrodes results in a centering uncertainty of ${30}{}$ .", "Thus, we require a positioning range of ${31}{}$ .", "In addition, in order to position the cavity mode with respect to an ion, we will need to translate both mirrors and thus both electrodes.", "We displace both fiber mirrors along the x axis for values $\\delta _f$ between ${-50}{}$ and ${50}{}$ and determine the position of the ion $\\mathbf {r}_0 = (x_0,y_0,z_0)$ , which is plotted in Fig.", "REF for the x axis.", "Again, we estimate a ${1}{}$ uncertainty for the ion position from the resolution of the simulation mesh.", "Within the uncertainty, we observe no displacement of the ion, from which we conclude that it will be possible to position the fiber mirrors within our setup without affecting the ion position.", "Figure: Ion position x 0 x_0 as function of the position of the fiber mirrors along the x axis, indicated in Fig.", "." ], [ "Experimental tests", "Prior to assembly of the ion-cavity system, we built an ion-trap test setup without integrated fiber mirrors.", "The DC electrodes described in Sec.", "are replaced by electrodes with ${260(50)}{}$ inner diameter and ${410(20)}{}$ outer diameter, separated by ${3}{}$ .", "We perform two experimental tests with this setup: first, a measurement of the micromotion, and second, measurements of the motional heating rates.", "The heating rates quantify how much electric-field noise couples from the environment to the motion of the ion [33].", "After assembling the ion-cavity system, we repeated the heating rate measurements.", "The earlier measurements without fiber mirrors allow us to distinguish between noise observed with the bare ion trap and noise due to the dielectric fiber mirrors, which are known sources of electric-field noise [34], [35].", "Subsequently, we tested the predictions of Sec.", "REF regarding the influence of surface charges present on the fiber mirrors." 
], [ "Comparison of micromotion in RF-GND and symmetric configurations", "A symmetric configuration of the ion-trap drive leads to a vanishing electric field along the trap axis, as discussed in Sec.", "REF .", "Using the ion-trap test setup, we quantify the micromotion of a single $^{40}$ Ca$^+$ ion for both symmetric and RF-GND configurations over a ${20}{}$ range along the z axis.", "We measure Rabi oscillations on the $|4^2\\mathrm {S}_{1/2},m_j = +1/2\\rangle $ to $|3^2\\mathrm {D}_{5/2},m_j = +1/2\\rangle $ transition (qubit transition) and on the micromotion sideband of this transition.", "We determine the Rabi frequency $\\Omega _Q$ of the qubit transition using a fitting model that assumes the Debye-Wallner coupling as the damping source [36].", "For the micromotion sideband, this approach is not applicable, since the period of Rabi oscillations is longer than the ${259(11)}{}$ coherence time of the qubit.", "We instead extract the Rabi frequency $\\Omega _M$ by fitting a damped sinusoidal oscillation.", "Error bars correspond to one standard deviation of the fit parameters.", "For $\\Omega _M < {1}{}$ , we are unable to resolve multiple oscillations of the ion's state due to limitations in the experimental control hardware.", "For these Rabi frequencies, we estimate $\\Omega _M$ from the first data point at which the excitation on the micromotion sideband overlaps with $0.5$ , and the error of $\\Omega _M$ is given by the quantum projection noise.", "When displacing the ion along the z axis, the mean voltage on both electrodes is held constant in order to keep $\\omega _\\mathrm {z}$ constant.", "At each ion position, before determining $\\Omega _Q$ and $\\Omega _M$ , we minimize the micromotion using the compensation electrodes.", "In Fig.", "REF , the modulation index $\\beta \\approx 2\\frac{\\Omega _M}{\\Omega _Q}$ is plotted as a function of the ion position.", "The modulation index is proportional to the residual RF electric field and is thus a measure of micromotion [9].", "The micromotion vanishes at a single point for the RF-GND configuration, as expected from the simulations in Sec.", "REF .", "In contrast, the micromotion vanishes over the full measurement range of ${20}{}$ for the symmetric configuration.", "The setup has been designed to couple multiple ions to the fiber-cavity mode.", "As a typical ion–ion distance is around ${5}{}$ , we see from Fig.", "REF , that it is not possible to confine two ions in the RF-GND configuration without excess micromotion.", "In contrast, in the symmetric configuration, it should be possible to confine at least five ions without excess micromotion.", "The ion position in Fig.", "REF is calibrated as follows: we take an image of the ion at $z={0}{}$ , displace the ion with the DC electrodes, and take an image of the displaced ion.", "From the two images, we calculate the ion displacement in units of pixels per volt.", "After measuring the micromotion, we load two ions into the trap and take a third image, from which we determine the distance $\\delta z$ between the two ions in pixels.", "We also calculate $\\delta z$ in meters with the relation [37] $\\delta z = \\Big ( \\frac{e^2}{2\\pi \\epsilon _0 m \\omega _z^2} \\Big )^{1/3} \\mathrm {,}$ thereby obtaining a conversion from pixels to meters.", "Combining both conversions yields the ion displacement in units of meters per volt.", "We obtain ${1.28(16)}{}$ for the RF-GND drive with $\\omega _{z} = 2\\pi \\cdot {1.517(1)}{}$ and ${1.23(18)}{}$ for the symmetric drive with 
$\\omega _{z} = 2\\pi \\cdot {1.650(1)}{}$ .", "Figure: Modulation index β\\beta as function of the ion position for the RF-GND and symmetric drive configurations." ], [ "Heating rate measurements without fiber mirrors", "We measure the ion heating rate using sideband thermometry [38]: the ion is Doppler-cooled for ${5}{}$ , followed by optical pumping and ${10}{}$ of sideband cooling.", "After a waiting time $t_\\mathrm {w}$ , we drive the red motional sideband of the qubit transition for ${2}{}$ .", "This sequence is repeated for the blue sideband.", "The phonon number is extracted from 100 of these measurements, and this process is then repeated 20 times for each waiting time in order to calculate the mean value and the sample standard deviation.", "For all measurements presented, we set the trap frequencies to $\\omega _\\mathrm {x}=2\\pi \\cdot {3.399(1)}{}$ , $\\omega _\\mathrm {y} =2\\pi \\cdot {3.229(1)}{}$ , and $\\omega _\\mathrm {z} =2\\pi \\cdot {1.517(1)}{}$ .", "The phonon numbers of the axial mode and one radial mode are plotted in Fig.", "REF for waiting times up to ${50}{}$ .", "Figure: Mean phonon number extracted via sideband thermometry as a function of the waiting time between sideband cooling and interrogation pulse, without fiber mirrors integrated in the setup.", "The solid lines represent weighted fits corresponding to heating rates of n ¯ ˙ z =13(3)\\dot{\\bar{n}}_z = {13(3)}{} and n ¯ ˙ x =32(8)\\dot{\\bar{n}}_x={32(8)}{}.", "Error bars represent the standard deviation calculated from 20 samples.From a weighted least-squares linear fit, we extract the heating rates $\\dot{\\bar{n}}_{x} = {32(8)}{}$ , $\\dot{\\bar{n}}_{y} = {26(6)}{}$ and $\\dot{\\bar{n}}_{z} = {13(3)}{}$ .", "Similar rates have been measured with a $^{25}$ Mg$^+$ ion in the original wheel trap [13], [14]." ], [ "Heating rate measurements with fiber mirrors", "After integrating the fiber mirrors into the experimental setup, we measure the heating rates again.", "The distance between the fiber mirrors is set to ${550}{}$ and the axial trap frequency to $2\\pi \\cdot {1.636(5)}{}$ .", "We find that the red motional sideband is no longer suppressed by sideband cooling, so instead, the ion's temperature is determined from fits to Rabi oscillations [39], [40], [41], [42], [35].", "The laser beam driving Rabi oscillations overlaps with all three motional modes, so we cannot determine the phonon number of each mode separately.", "As the contribution of the axial mode dominates by more than one order of magnitude, we express the mean phonon number $\\bar{n}$ as a projection onto the z axis, as described in Ref. 
[35].", "Note that $\\bar{n}$ approximates the mean phonon number $\\bar{n}_z$ of the axial mode, but $\\bar{n}_z$ is smaller than $\\bar{n}$ .", "Figure: Mean phonon number n ¯\\bar{n} as a function of the waiting time between Doppler cooling and interrogation pulse, with fiber mirrors integrated in the setup.", "The solid line represents a weighted fit corresponding to a heating rate of n ¯ ˙=14(2)\\dot{\\bar{n}} = {14(2)}{}.In Fig.", "REF , $\\bar{n}$ is plotted for a variable waiting time $t_\\textrm {w}$ before the interrogation pulse.", "From a linear fit, we determine a heating rate of $\\dot{\\bar{n}} = {14(2)}{}$ , three orders of magnitude larger than the rates in Sec.", "REF .", "We attribute this higher rate to the presence of the fiber mirrors.", "We have recently developed a model for ion heating based on dielectric losses in the fibers, which is supported by further experiments that we have conducted with this setup [35]." ], [ "Counteracting surface charges", "As a final test, we return to the predictions of Sec.", "REF and adjust the trap electrode voltages to counteract the effects of surface charges on the fibers.", "Starting from $L = {507(8)}{}$ , which is determined from a measurement of the cavity free-spectral range, we increase the length via the nanopositioning assemblies.", "For each value of $L$ , we adjust the voltages on the DC electrodes such that the axial trap frequency is $\\omega _z = 2\\pi \\cdot {1.6(1)}{}$ and the ion position remains fixed within ${25}{}$ .", "Here, $V_\\mathrm {PC}$ and $V_\\mathrm {MM}$ are the voltages on the DC electrodes containing the PC fiber and the MM fiber, respectively.", "Figure: Voltages V PC V_\\mathrm {PC} and V MM V_\\mathrm {MM} applied to the DC electrodes in order to keep both the ion position and trap frequency fixed over a range of fiber–fiber distances LL.", "Error bars are too small to be visible.The uncertainties of $V_\\mathrm {PC}$ and $V_\\mathrm {MM}$ correspond to the precision of the voltage source.", "In Fig.", "REF , we plot $V_\\mathrm {MM}$ and $V_\\mathrm {PC}$ for fiber–fiber distances up to ${1200}{}$ .", "Both voltages increase for increasing distance, but $V_\\mathrm {PC}$ is always positive, while $V_\\mathrm {MM}$ takes on both negative and positive values over a range that is three times higher.", "We attribute this difference to the presence of surface charges on the fiber facets.", "This measurement shows that despite surface charges, a trapped ion can be confined at a fixed position over a range of cavity lengths." 
], [ "Conclusion", "We have designed and constructed an ion–cavity system with integrated fiber mirrors.", "The system is designed such that strong coupling of multiple ions to the fiber cavity will be possible without excess micromotion.", "Simulations show that voltages on the DC electrodes compensate for surface charges on the fiber mirrors, and that translation of the fiber mirrors does not affect the ion position.", "Prior to the assembly of the system, we built an ion-trap test setup without the fiber mirrors and measured micromotion and heating rates, both of which are consistent with values in state-of-the-art ion traps for quantum information processing.", "After integration of the fiber mirrors, we observed a heating rate that was three orders of magnitude higher, which we attribute to the presence of the fibers.", "This observation led to a recent study on the role of dielectrics in ion traps [35].", "We have trapped an ion within the cavity for cavity lengths as short as ${507(8)}{}$ , and we have confirmed that voltages on the DC electrodes compensate for surface charges.", "A next step will be to measure the coupling strength of single and multiple ions to the cavity field, followed by demonstrations of multi-ion protocols that have been previously implemented in macroscopic ion–cavity platforms, including quantum state-transfer [43] and optimized collective coupling [44].", "Due to high coherent coupling rates and short cavity lifetimes, fiber-based systems may replace those bulkier setups, enabling scalable links between distributed ion-trap quantum computers.", "All data presented and discussed in this article are available at Ref. [45].", "The authors have no conflict of interest to disclose.", "We thank David Leibrandt, Sam Brewer and Jwo-Sy Chen for discussions and insights on the design of the wheel trap.", "We thank Da An and Hartmut Häffner for help with the design of the radio-frequency resonator used for the symmetric drive configuration of the ion trap.", "This work was supported by the European Union's Horizon 2020 research and innovation program under Grant Agreement No.", "820445 (Quantum Internet Alliance), by the U.S. Army Research Laboratory under Cooperative Agreement No.", "W911NF-15-2-0060, and by the Austrian Science Fund (FWF) under Project No.", "F 7109.", "P. H. acknowledges support from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No.", "801285 (PIEDMONS).", "K.S.", "acknowledges support from the ESQ Discovery grant \"Ion Trap Technology\" of the Austrian Academy of Science.", "M.T.", "acknowledges support for the OptiTrap project under the Early Stage Funding Programme provided by the Vice-Rectorate for Research of the University of Innsbruck." ] ]
2207.10500